<![CDATA[Chris DeCairos]]>https://chrisdecairos.ca/https://chrisdecairos.ca/favicon.pngChris DeCairoshttps://chrisdecairos.ca/Ghost 5.62Tue, 09 Jan 2024 21:21:44 GMT60

<![CDATA[How To: Mass S3 Object Creation with Terraform]]>https://chrisdecairos.ca/s3-objects-terraform/6213d91efa1a311a1f5d255dThu, 30 Jul 2020 20:18:41 GMT

I've been working a bit with Terraform recently to support managing some of our infrastructure at the Mozilla Foundation. In doing so, I came across a problem I couldn't find a documented solution for, so today I'm publishing a bit of a how-to in the hopes that folks facing similar problems might find it helpful.

Let's say you have one hundred files in the "src" directory, and you need to upload them to S3. You could add a resource entry for each file, but that would be tedious and repetitive:


locals {
  src_dir      = "${path.module}/src"
}


resource "aws_s3_bucket_object" "index" {
  bucket        = local.my_bucket_id
  key           = "index.html"
  source        = "${local.src_dir}/index.html"
  content_type  = "text/html"l
}

resource "aws_s3_bucket_object" "about" {
  bucket        = local.my_bucket_id
  key           = "about.html"
  source        = "${local.src_dir}/about.html"
  content_type  = "text/html"
}

resource "aws_s3_bucket_object" "main_css" {
  bucket        = local.my_bucket_id
  key           = "main.css"
  source        = "${local.src_dir}/main.css"
  content_type  = "text/css"
}

resource "aws_s3_bucket_object" "javascript" {
  bucket        = local.my_bucket_id
  key           = "main.js"
  source        = "${local.src_dir}/main.js"
  content_type  = "application/javascript"
}

resource "aws_s3_bucket_object" "favicon" {
  bucket        = local.my_bucket_id
  key           = "favicon.ico"
  source        = "${local.src_dir}/favicon.ico"
  content_type  = "image/x-icon"
}

resource "aws_s3_bucket_object" "header_image" {
  bucket        = local.my_bucket_id
  key           = "header.png"
  source        = "${local.src_dir}/header.png"
  content_type  = "image/png"
}

# and so on, 90+ more times

Thankfully, Terraform has the for_each meta-argument, which accepts a map or set of strings and creates an instance of the resource for each element. Pairing this with the fileset built-in function lets us generate all these objects in far fewer lines of configuration:


locals {
  src_dir      = "${path.module}/src"
}

resource "aws_s3_bucket_object" "site_files" {
  # Enumerate all the files in ./src
  for_each = fileset(local.src_dir, "**")

  # Create an object from each
  bucket        = aws_s3_bucket.bucket.id
  key           = each.value
  source        = "${local.src_dir}/${each.value}"
  
  # Uh oh, what should we do here?
  # content_type  = ???
}

There is one small problem though, as indicated above: content_type. How can we set the content type correctly for each file? Well, there are a few built-ins to help us out here. First, there's the lookup built-in, which returns the value from a map for a given key, and falls back to a default if the key isn't found. So, if we define a content type map like so:

locals {
  content_type_map = {
    html        = "text/html",
    js          = "application/javascript",
    css         = "text/css",
    svg         = "image/svg+xml",
    jpg         = "image/jpeg",
    ico         = "image/x-icon",
    png         = "image/png",
    gif         = "image/gif",
    pdf         = "application/pdf"
  }
}

We can use lookup to get the content type by extracting the file extension from the filename. To accomplish that, we use the regex built-in function:

regex("\\.(?P<extension>[A-Za-z0-9]+)$", filename).extension

So, if we put it all together, we get:

locals {
  src_dir      = "${path.module}/src",
  content_type_map = {
    html        = "text/html",
    js          = "application/javascript",
    css         = "text/css",
    svg         = "image/svg+xml",
    jpg         = "image/jpeg",
    ico         = "image/x-icon",
    png         = "image/png",
    gif         = "image/gif",
    pdf         = "application/pdf"
  }
}

resource "aws_s3_bucket_object" "site_files" {
  # Enumerate all the files in ./src
  for_each = fileset(local.src_dir, "**")

  # Create an object from each
  bucket        = aws_s3_bucket.bucket.id
  key           = each.value
  source        = "${local.src_dir}/${each.value}"
  
  content_type  = lookup(local.content_type_map, regex("\\.(?P<extension>[A-Za-z0-9]+)$", each.value).extension, "application/octet-stream")
}

And there you have it! All the files in your "src" directory now have an associated S3 object resource managed with Terraform, and each has the appropriate content type, so you can serve the files up via S3's static site functionality or via a CloudFront CDN.

]]>
<![CDATA[TabSweeper For Firefox]]>https://chrisdecairos.ca/tabsweeper/6213d91efa1a311a1f5d255aSat, 22 Jul 2017 16:56:12 GMTTabSweeper For Firefox

TLDR;

You can download TabSweeper here

What is TabSweeper

I can't stand it when I've got more tabs open than I can see. Firefox (and basically all browsers for that matter) just doesn't have a useful way to manage tabs that is also compatible with its new multiprocess architecture. I used to use one-tab, but it's a legacy add-on and as far as I can see it will cease to work this November with the release of Firefox 57.

I took a look at the documentation for Firefox's web extension APIs and figured I'd give writing my first extension a try. This extension would re-implement the core functionality of one-tab - closing all your open tabs and storing the URLs for later viewing, where you can either restore them or delete them.
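
For illustration only - this isn't the actual TabSweeper source - the core "sweep" could look something like this with the WebExtensions tabs and storage APIs:

// Hypothetical sketch: collect the open tabs, persist their URLs, then close them.
async function sweepTabs() {
  const tabs = await browser.tabs.query({ currentWindow: true, pinned: false });

  const session = {
    created: Date.now(),
    urls: tabs.map(tab => ({ title: tab.title, url: tab.url }))
  };

  // storage.local survives browser restarts, unlike the tabs themselves
  const { sessions = [] } = await browser.storage.local.get("sessions");
  await browser.storage.local.set({ sessions: [session, ...sessions] });

  // open a fresh tab first so the window isn't left empty, then close the rest
  await browser.tabs.create({});
  await browser.tabs.remove(tabs.map(tab => tab.id));
}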

One-tab uses a special tab interface to manage your saved sessions, but I decided to use Firefox's new sidebar feature instead. It's easy to toggle its visibility on and off, and you can still see what you've got open in your active tab. The sidebar can be toggled using a keyboard shortcut or through the sidebar button on the browser toolbar. The tab cleaning function is also bound to a keyboard shortcut, making cleaning up tabs really easy.

The sidebar is very basic at the moment, but it's functional. I plan to optimize the interface and how it's updated. I also plan on removing bootstrap and using some custom CSS, since bootstrap was only used for MVP purposes.

If you're interested in trying out TabSweeper, the signed add-on can be downloaded from the GitHub releases page. I hope to get it up on the Mozilla add-ons site, but that's blocked for now while the add-on awaits a code and security review.

If you have any feedback, suggestions, or bugs to report, file an issue on GitHub or send me an email.

]]>
<![CDATA[Introducing Autoku]]>https://chrisdecairos.ca/introducing-autoku/6213d91efa1a311a1f5d2559Tue, 21 Feb 2017 23:26:47 GMT

I've just made my latest project public: Autoku

Autoku Demo
(Watch closely above, you might notice a little guest!)

Autoku is a command line tool for creating and configuring Heroku applications using configuration files i.e. "Infrastructure as Code". It allows you to specify exactly how your application should be configured, and modifies it to reflect that.

The tool has reached a point where I'm comfortable sharing it, but be forewarned that it's still in need of plenty of work. There are some quirks that need to be worked out and plenty of automated testing to get done. Seeing as this tool is open source, I welcome any and all contributions.

Install

yarn global add autoku or npm install -g autoku

Usage

Given this yaml file sample.yaml:

name: sample-app

region: us

maintenance: false

stack: cedar-14

configVars:
  SOME_VAR: foo 

addons:
  heroku-postgresql: hobby-dev

collaborators:
  - person@example.com

features:
  - log-runtime-metrics
  - http-session-affinity

formation:
  web:
    quantity: 1
    size: hobby
  worker:
    quantity: 1
    size: hobby

logDrains:
  - https://example.com:7000

domains:
  - mydomain.example.com

sni:
  - certificate-chain: "-----BEGIN CERTIFICATE----- ..."
    private-key: "-----BEGIN RSA PRIVATE KEY----- ..."

buildpacks:
  - heroku/python

Execute autoku deploy ./sample.yaml -k $HEROKU_KEY to have Autoku create (if needed) and configure your application.

Subsequent calls will update the application if sample.yaml changes.

Using Autoku you can maintain the following parts of your Heroku app using configuration files:

  • maintenance status
  • Configuration Variables
  • addons
  • collaborators
  • platform features
  • formations
  • log drains
  • domains
  • sni endpoints
  • buildpacks

Again, this is a very new piece of software, and so it has flaws. There are plenty of problems to solve, like how to store and retrieve sensitive variables. On that note: don't use an Autoku config file to store secrets in a public repository.

]]>
<![CDATA[Deploying Mattermost To Heroku]]>https://chrisdecairos.ca/deploying-mattermost-to-heroku/6213d91efa1a311a1f5d2558Fri, 16 Sep 2016 03:13:36 GMTDeploying Mattermost To Heroku

At Mozilla, we've been using Mattermost to facilitate communication between employees and contributors for the better part of this year. Mattermost is an open source (more like open core - but the team edition is still awesome), self hosted Slack alternative. It's been really great so far, and Mattermost has really improved its stability and features over this time period.

I'd like to take the time today to share with you how I've deployed a reliable Mattermost instance to Heroku (200+ users and going strong!), so your teams can benefit from Mattermost just like we have!

TLDR?

If you have a Heroku account, click the button below and you'll have a custom Mattermost instance in less than one minute

Deploying Mattermost To Heroku

You're welcome. Read the docs to customize further!

How it works

The Heroku app is configured to use two custom buildpacks:

  1. cadecairos/nginx-buildpack - a fork of beanieboi/nginx-buildpack that installs NGINX and configures it as a reverse proxy to Mattermost. The fork modifies the NGINX config to expect a connection on a port and not on a socket. The NGINX proxy will automatically 301 http connections to https. Hooray!

  2. mozilla/mattermost-heroku - a fork of tommyvn/mattermost-heroku which improves on the build process and adds in a large variety of configuration options over the original.

Both of these forks are interested in having their changes upstreamed.

Getting Mattermost to run on Heroku

The mechanics of the buildpack are what allow Mattermost to run on Heroku. Mattermost uses file-based configuration, which is a big issue when you try to run it on Heroku - there's no way to tell Mattermost how it's configured using only environment variables. Enter the inline buildpack: a repository that you deploy to Heroku and that uses itself as its own buildpack.

This affords one major capability: We can provide a template config, and render in environment variables before we run the Mattermost executable, pointing it to the fresh configuration file. This means that no matter what, it's always got the latest settings from your environment.

Unfortunately, that means you can't use the Mattermost admin console to change environment variables, because they'll be written to disk, and if the dyno reboots (which it will every 24 hours) - you lose that config setting due to the ephemeral nature of the dyno's filesystem.

Another awesome feature of the buildpack is the ability to run enterprise Mattermost on Heroku. Just set MATTERMOST_TYPE="enterprise" in your environment and build/rebuild the app. You can then install your license for Mattermost e10/e20 as per the instructions here

One more thing

The easiest way to deploy new versions of Mattermost using the above setup is to go to the "Deploy" tab of your Heroku app and link the app to mozilla/mattermost-heroku. You'll then be able to deploy specific branches from that repo. I've set up branches that point to the same commits as their tag counterparts (minus the 'v').

Another strategy is to use the build API:

 curl -n -X POST https://api.heroku.com/apps/$APP_ID_OR_NAME/builds \
  -d '{
  "buildpacks": [
    {
      "url": "https://github.com/cadecairos/nginx-buildpack.git#v1.0.0"
    },
    {
      "url": "http://github.com/mozilla/mattermost-heroku.git#v1.0.3"
    }
  ],
  "source_blob": {
    "url": "https://github.com/mozilla/mattermost-heroku/archive/v1.0.3.tar.gz",
    "version": "v1.0.3"
  }
}' \
  -H "Content-Type: application/json" \
  -H "Accept: application/vnd.heroku+json; version=3"

Next up

I'll be following up this post sometime in the future with details on how to get your Mattermost instance a kick-ass Chat bot that can do whatever your mind can make it do!

]]>
<![CDATA[Finding my Calling]]>https://chrisdecairos.ca/finding-my-calling/6213d91efa1a311a1f5d2557Wed, 20 Apr 2016 02:03:00 GMTFinding my Calling

Five or so years ago, I began an incredible journey. My professor at the time, David Humphrey, introduced me to the concept of open source software and the Mozilla Project, and I dove right in. Thanks to my explorations in open source software development (and to David!), I had the good fortune of getting involved in a FOSS research team at the Centre for Development of Open Technology (CDOT). Through this co-op placement, and a one year contract with CDOT after graduation, I helped develop two amazing open source projects: popcorn.js and Popcorn Maker.

Through this work, I got to collaborate with The Mozilla Foundation, a non-profit with a mission to keep the web open. When my time at CDOT was nearing its end, I reached out to the powers that be at the foundation to see if I could make their mission my career... And they said yes!

Thus began three years of hard work, learning, successes, failures, imposter syndrome and most importantly, making an impact in people's lives. This time involved me doing a considerable amount of client side (early on) and server side (last two years) application development. I enjoyed the latter, didn't care so much for the former (but I have improved my skills in that area!). Despite enjoying my work, I've always felt like I wanted more, but I wasn't sure what it was.

During the last few weeks, my role in the foundation has shifted to Operations Engineer (role name not final). This role shift will have me working with a much wider group of people inside and outside the organization to build, deploy, monitor and maintain our applications, services and networks. I will work to improve our development and deployment tools, and be tasked with sharing the knowledge of these processes and tools with our team and collaborators.

Had I been approached with this position a year ago, I don't think I'd have been ready to say yes, but I'm currently at a point in my life and career where I want to push myself to become a better engineer. From the beginning of my time at Mozilla, I've worked closely with my predecessors in ops, JP Schneider and Jon Buckley. They're both incredibly intelligent and resourceful people, and much of my knowledge was gained through constant IRC pings, impromptu white boarding sessions and coffee break chats. It was through these interactions I've built up a familiarity with our systems here at the foundation, giving me confidence in my ability to serve in this capacity to the highest degree.

I'm excited to continue being a MoFo, to evolve my skills and to share my knowledge, all while helping protect the world's largest public resource. This is my calling, and it's something I'm extremely humbled and proud to be a part of.

Here's to another five years of learning, challenges, successes, failures, and most importantly, sharing reactions using animated GIFs!

Finding my Calling

]]>
<![CDATA[Webmaker Services In a Box]]>https://chrisdecairos.ca/webmaker-services-in-a-box/6213d91efa1a311a1f5d2556Fri, 20 Nov 2015 19:11:17 GMT

Configuring and running services for Webmaker is a real pain in the ass.

(Webmaker services = Webmaker API, Webmaker ID, and Webmaker LoginAPI)

There are database and caching services to install and configure, npm dependencies to install, native NodeJS bindings to compile (sorry, Windows), and database scripts/migrations to run. This makes life difficult for many, particularly those who don't handle these things on a day-to-day basis, like front-end focused developers or designers. I wanted to make life easier for them, so I took some time to fix that problem.

I decided to fix this problem using Docker. Docker is a platform for building self contained applications that have their own filesystem, runtimes, system libraries, and more. What this basically means is that you can build a Docker image once, and expect it to run anywhere (...that can run Docker).

Webmaker-Docker is a new repo I've created to contain everything you need to run Webmaker Services, without needing to install anything except git, Docker, and Docker Compose. That's right, no need to install NodeJS, npm, Postgres, MySQL, SQLite, Redis. No need to run npm install or bower install. We're finally living the dream, folks.

This development will enable something we've never had before: the ability to give front end devs and designers simple and easy access to services they can use when prototyping and developing new features. It's even possible to create dockerized feature branches for teammates to use new service features to develop front end parts in parallel! (Follow up post on that another time)

How To Run

Okay, so you're super excited about this, but not sure exactly what to do next. Let's go over the set-up (subject to changes :P). I should note that Docker will automatically download container images from Docker Hub, so make sure you've got a decent internet connection or time (or both) - but don't worry, this will only happen the first time.

  1. Install Docker and Docker Compose (available for Linux, Mac and Windows) - Installation instructions
  2. Clone the git repository: git clone git@github.com:cadecairos/webmaker-docker
  3. Go to the data-services directory and run docker-compose up -d to start the database services Webmaker needs.
  • They're started in the background by passing the -d flag, omit it if you want to see container logs.
  4. Go to the services directory and run docker-compose up -d to start the Webmaker services.
  • omit -d to see logs from the Applications (helpful for debugging)
  5. To shut down the containers when they're detached, go to the two directories mentioned above and run docker-compose stop, otherwise, ctrl-c does the trick.
Now what?

That's up to you! You've got fully functional services running that you can use to do whatever you please. Wanna prototype a new feature or site that depends on one of these services? Go for it! Here's a rundown on what services are exposed, and where:

  • Postgres is listening on localhost:5432
    • username: 'webmaker'
    • password: 'webmaker'
    • database: 'webmaker'
  • MariaDB is listening on localhost:3306
    • username: 'wmlogin'
    • password: 'wmlogin'
    • database: 'wmlogin'
    • root password: 'root_wmlogin'
  • Redis is listening on localhost:6379
    • There aren't any credentials to worry about
  • Webmaker API is listening on localhost:2015
  • Webmaker ID is listening on localhost:1234
    • there's a client and secret already in the database for you
      • client_id: 'webmaker'
      • secret: 'webmaker'
      • all grant types and response types allowed.
      • redirect_url is 'example.com' - You can manually change it or insert a new one if you desire
  • Legacy Login is listening on localhost:3000, but don't touch it if you know what's good for you.

Let's get technical

Let's talk about how this all is put together. The base of all this magic is two files known as Compose files. Compose files use YAML to define a collection of Docker Images to build/download/execute. It defines what ports to expose, what kind of networking to use and lets you pull in env files to configure the applications running within.

Here's the data-services Compose file:

Here's the Webmaker services Compose file:

Each of the services is put together with a Dockerfile. Dockerfiles are text files that automate the creation of images. Here's the Dockerfile for Webmaker API:

There's plenty more to find in the repo, so go take a look!

]]>
<![CDATA[Intercepting HTTP traffic with Zaproxy]]>https://chrisdecairos.ca/intercepting-traffic-with-zaproxy/6213d91efa1a311a1f5d2555Fri, 28 Aug 2015 18:51:28 GMTIntercepting HTTP traffic with Zaproxy

Today I'm going to show you how to use the Zed Attack Proxy (ZAP) to debug and test the security of web applications. ZAP is an intercepting proxy that serves as a great tool for security beginners and veterans alike. It provides tools to intercept and modify HTTP/HTTPS and WebSocket traffic, as well as an assortment of other useful tools.

Before I continue, I feel obligated to warn you that you should never, ever, ever use this tool on an application you don't own (that would be illegal). Only use it with a program you're hosting yourself or one you've been given explicit permission to test.

Set-up

Local Proxy Settings

After installing ZAP, you're going to need to do a little configuration to get things running. Firstly, you need to figure out your requirements for using it as a proxy. If you want to direct your browser traffic at ZAP, you only need to change your browser's proxy settings to point at ZAP. In most cases, ZAP listens for connections on http://localhost:8080. If you want to intercept HTTPS traffic, it gets a bit more complicated: ZAP can generate a custom root CA certificate for you to install on your machine so that it can proxy secure connections.

To change your local proxy settings, go to tools -> options... in ZAP, and look for the Local Proxy sub-menu. It's also possible to point a device (e.g. an Android phone) connected to the same network as your computer to your ZAP proxy. Simply configure ZAP to listen for connections on your IP address, and proxy your device traffic through it.

Contexts

Now that you've got your proxy set up, let's create a new context. Contexts are a way to group relevant URLs, so that ZAP only shows you the traffic you care about. Create a new context by clicking on the "New Context" button and giving it a name.

Intercepting HTTP traffic with Zaproxy

For the purposes of this demonstration, I'll be proxying requests from the Webmaker app on my Android phone through a ZAP proxy on my computer. I built a debug version of Webmaker for Android that uses localhost:2015 for storing projects and localhost:6767 for authentication.

I added these addresses to my Context by navigating to file -> session properties and opening the Contexts sub-menu. From there, select the context you created and find the "Include in context" tab. Add the URLs you're filtering for in the menu there.

Intercepting HTTP traffic with Zaproxy

If the hosts are already listed in your Sites tab in ZAP, you can right click and select add to context.

Intercepting HTTP traffic with Zaproxy

Scope

To filter on a configured context, you want to mark it "in scope" and likely mark the "Default Context" as "not in scope". Do so by right clicking the contexts and selecting add to scope or remove from scope as required. You must also enable scope filtering in the various lists you see in the ZAP UI by clicking the little bulls-eye symbol:

Intercepting HTTP traffic with Zaproxy
Intercepting HTTP traffic with Zaproxy

Inspecting Requests

As your proxy intercepts and forwards traffic, it keeps a running log of all the request and response data it handles. You can view this information by selecting a request from the History tab:

Intercepting HTTP traffic with Zaproxy

This data is invaluable when you're debugging weird issues with external services.

The Fun Stuff

Now that we can record traffic between a client and a server, let's use the breakpoints feature of ZAP to stop a request in-flight and modify it!

Continuing with my Webmaker example, let's create a project and see what the initial payload looks like:

POST http://localhost:2015/users/1/bulk HTTP/1.1
Proxy-Connection: keep-alive
Content-Length: 169
Accept: application/json
Origin: file://
User-Agent: Mozilla/5.0 (Linux; Android 5.0.1; HTC One_M8 Build/LRX22C) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Mobile Crosswalk/13.42.319.12 Mobile Safari/537.36
Authorization: token 5712cf871106f07233cfaff7cfba157b3b647c57487151f9e6de19e274031095
Content-Type: application/json
Accept-Language: en-us,en
Cookie: crumb=bWwSsAfq8bajB0hts05UA_qfeWnqLm5Iny9mjfe3j3x
Host: localhost:2015

{"actions":[{"method":"create","type":"projects","data":{"title":"My project"}},{"method":"create","type":"pages","data":{"projectId":"$0.id","x":0,"y":0,"styles":{}}}]}

Here we can see that the app is using the bulk endpoint to create a new project and a starter page for it. Let's have some fun!

First we need to right click on the request in the history tab and locate the Break... option (oh how appropriate). In general, the default settings it shows you are fine, but there are ways to add conditional breakpoints that you should totally check out!

Once you set up the break point, use your client to make the request again. You'll notice that ZAP creates a new tab in the interface called "Break":

Intercepting HTTP traffic with Zaproxy

This tab lets you modify the request headers and body before sending it along to the destination server. Let's add a second page to the actions array:

{
  "actions": [{
    "method": "create",
    "type": "projects",
    "data": {
      "title": "My project"
    }
  }, {
    "method": "create",
    "type": "pages",
    "data": {
      "projectId": "$0.id",
      "x": 0,
      "y": 0,
      "styles": {}
    }
  }, {
    "method": "create",
    "type": "pages",
    "data": {
      "projectId": "$0.id",
      "x": 1,
      "y": 0,
      "styles": {}
    }
  }]
}

Send the request along by pressing the step or continue buttons in the top level toolbar:

Intercepting HTTP traffic with Zaproxy

ZAP will forward the request to its destination with your changes, and then (if configured to do so) intercept the response for modification as well. In the above example, I produce a new project with two pages instead of one. While this isn't very evil, you can probably see the power of an intercepting proxy by now:

Intercepting HTTP traffic with Zaproxy

Don't be too evil

Intercepting HTTP traffic with Zaproxy

So, there you have it. You now have the knowledge and skills to set an intercepting proxy up between your client and server applications (hopefully for the purpose of making them better!).

What I've covered today is just a very small part of what ZAP is capable of. If you want to learn more, I'd be glad to cover some more topics in the future. In the meantime, you can't go wrong reading the official ZAP wiki.

]]>
<![CDATA[Dark GTK Themes and Firefox]]>https://chrisdecairos.ca/dark-gtk-themes-and-firefox/6213d91efa1a311a1f5d2554Thu, 06 Aug 2015 03:04:44 GMTDark GTK Themes and Firefox

I've been using Linux for some time now, and I've always been partial to dark themes. They're easy on the eyes, especially when you spend most of your day working on a computer. The trouble is, Firefox doesn't play very nicely with these themes. It seems like it tries to borrow the theme's colour palette for its user-agent style sheets.

While that seems great and all, it becomes impossible to use your browser when input elements (which seem to be the elements affected by dark themes) get dark backgrounds with dark text.

For example, here's what my blog settings page looks like without a fix applied:
Dark GTK Themes and Firefox

To fix this problem, I installed a Firefox add-on named Stylish, which allows you to write custom stylesheets to apply to the browser chrome and the content of websites.

I added a small style sheet to Stylish:

and now I get:

Dark GTK Themes and Firefox

One problem I'm still running into is that select elements aren't reflecting the fixed background colour settings... If you have a fix or can figure one out, I would love to hear from you!

]]>
<![CDATA[Hapi: The Good Parts]]>https://chrisdecairos.ca/hapi-the-good-parts/6213d91efa1a311a1f5d2553Fri, 05 Jun 2015 16:15:56 GMT

Recently, I've been working with a new framework called Hapi to build an API for Webmaker. This is a bit of a departure from the past, where we traditionally would have used Express to build our server applications. The decision to use Hapi was based on several features that we found in our experimentation with the framework. I'd like to outline these features, and give examples of how we're using them.

Tests

We wanted our services to be highly testable. Hapi's Server API makes this a cinch. Its configuration-centric approach to building servers means you can split all of your configuration (like routes) into require-able modules that can be tested in isolation:
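
As a rough sketch of the idea - hypothetical file names and routes, not the actual Webmaker code - a require-able route module and a tape-style test of its configuration might look like this:

// routes/healthcheck.js - a route module that exports plain configuration
module.exports = [{
  method: 'GET',
  path: '/healthcheck',
  handler: function (request, reply) {
    reply({ status: 'ok' });
  }
}];

// test/healthcheck.js - assert on the configuration without starting a server
var test = require('tape');
var routes = require('../routes/healthcheck');

test('healthcheck route configuration', function (t) {
  t.equal(routes[0].method, 'GET', 'uses the GET method');
  t.equal(routes[0].path, '/healthcheck', 'exposes /healthcheck');
  t.equal(typeof routes[0].handler, 'function', 'has a handler');
  t.end();
});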

As you can see above, we can test the routes' configuration outside of actually building a running Hapi server. While the tests above don't cover situations where more configuration is added, you can use libraries like Joi to provide far stricter assertions on the configuration object.

One other key Hapi feature is its inject function, which lets you simulate receiving a request. It is invaluable when testing, because it enables you to do very cool things like providing credentials that step over the authentication of your routes.
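
Here's a small sketch of what that can look like (Hapi ~8 era syntax, hypothetical route):

var Hapi = require('hapi');

var server = new Hapi.Server();
server.connection();

server.route({
  method: 'GET',
  path: '/whoami',
  handler: function (request, reply) {
    reply({ user: request.auth.credentials });
  }
});

// inject() simulates a request without any network I/O; the credentials
// option pre-authenticates the request, stepping over any auth strategy.
server.inject({
  method: 'GET',
  url: '/whoami',
  credentials: { username: 'cade' }
}, function (response) {
  console.log(response.statusCode, response.result);
});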

Plugins

Hapi provides a plugin API, which makes separating your application into independent units very easy. This separation consequently makes testing really easy too. In your tests, you can register the plugin on a bare Hapi server, with whatever test-specific configuration you desire, and test its behaviour in isolation.
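
A sketch of the shape, with a made-up plugin and the Hapi ~8 registration API:

// plugins/greeting.js - a hypothetical plugin
exports.register = function (server, options, next) {
  server.route({
    method: 'GET',
    path: '/hello',
    handler: function (request, reply) {
      reply({ message: options.message || 'hello' });
    }
  });
  next();
};

exports.register.attributes = { name: 'greeting', version: '1.0.0' };

// In a test: register the plugin on a bare server with test-only options
var Hapi = require('hapi');
var server = new Hapi.Server();
server.connection();

server.register({
  register: require('./plugins/greeting'),
  options: { message: 'hi there' }
}, function (err) {
  if (err) { throw err; }
  server.inject('/hello', function (response) {
    console.log(response.result); // { message: 'hi there' }
  });
});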

Server Methods

In the last gist I embedded, I added something called a server method. Server methods are a way to expose functions on your server object, which removes the need to require a common module everywhere a function is needed. Basically, if you define your server methods in a plugin, you register the plugin once, and they're available everywhere!

Another really handy feature that server methods have is caching. Hapi is compatible with catbox, a multi-strategy key-value store, and Hapi leverages it for easy caching. This is extremely useful if the server method requests data from a database:
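
A rough sketch of a cached server method (hypothetical names and a fake database lookup, Hapi ~8 syntax):

var Hapi = require('hapi');

var server = new Hapi.Server();
server.connection();

// stand-in for a real database call
function fetchUserFromDb(id, callback) {
  callback(null, { id: id, username: 'user-' + id });
}

// register the function as a server method, with catbox-backed caching
server.method('getUser', fetchUserFromDb, {
  cache: {
    expiresIn: 60 * 1000,   // keep results for one minute
    generateTimeout: 2000   // give up if a lookup takes longer than 2 seconds
  }
});

server.route({
  method: 'GET',
  path: '/users/{id}',
  handler: function (request, reply) {
    // repeated hits within the minute are served from the cache
    server.methods.getUser(request.params.id, function (err, user) {
      reply(err || user);
    });
  }
});

server.start(function () {
  console.log('listening at', server.info.uri);
});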

Validation

Hapi provides an interface for enforcing strict rules on the data coming into your application. This validation functionality works perfectly with the Joi library. It can be applied to route params (/foo/{bar}), request payloads ({foo: 'bar'}), or query params (/foo/bar?fizz=buzz).
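
For example, a sketch of route-level validation (hypothetical route, Hapi ~8 syntax):

var Hapi = require('hapi');
var Joi = require('joi');

var server = new Hapi.Server();
server.connection();

server.route({
  method: 'GET',
  path: '/foo/{bar}',
  config: {
    validate: {
      params: { bar: Joi.string().min(1).max(30).required() },
      query: { fizz: Joi.string().valid('buzz') }
    }
  },
  handler: function (request, reply) {
    reply({ bar: request.params.bar, fizz: request.query.fizz });
  }
});

// a value Joi rejects never reaches the handler; Hapi answers with a 400
server.inject('/foo/hello?fizz=nope', function (response) {
  console.log(response.statusCode); // 400
});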

In summary, I'm very impressed with Hapi. With it (and with the help of a couple other great libraries called sinon and nock), I was able to achieve 100% test coverage on api.webmaker.org. All without having any external dependencies (other than PostgreSQL, but I can live with that, since the tests feel more real if they use an actual database)

Here's the part where I engage with you:

Do you use Hapi?

What do you think of it?

What are your favourite features or tricks when developing applications with Hapi?

edit: I didn't realize until this morning that a recent theme update disabled disqus. It's working now, should you wish to chat.

]]>
<![CDATA[One Time Passwords (Part Two)]]>https://chrisdecairos.ca/one-time-passwords-pt-2/6213d91efa1a311a1f5d2552Thu, 18 Sep 2014 03:22:15 GMT

In my previous post, I wrote about the new login system we're working on for Webmaker. In short, the new system facilitates the authentication of a user by generating a one time use password and sending it to the user's email account. The user can then click a link in the email to log them in right away (either for the session only or for one year).

After getting some great feedback about my last post, I'm going to try and outline the protocol with less noise and more of the important details.

What's new

The latest implementation of the system adds in several things that weren't in the previous iteration.

Most notably, we've chosen to provide users with the option to disable one time passwords and use a custom set password. You can read more about how these are implemented in the Protocol section below.

Secondly, I've updated the OTP generation process to create ten-character, pronounceable tokens. This new method helps users who need to type the token on one device while reading it from another.

Lastly, I have put in rate limiting middleware on the login server routes that handle the new protocol. It is backed by redis, and provides a simple way to control how often someone can use the routes.

Here's how it works (a rough sketch in code follows the list):

  1. A request comes in!
  2. Data about rate limits is stored in redis, keyed on "{API_route}:{IP_Address}:{uid}". The middleware queries redis to see if this IP and uid have been here recently; if so, it increments the hit count. Otherwise, it stores a hit count of 1 under that key, and the key is set to expire after some amount of time.
  3. Should further requests be made from the same source, the count stored in the key is incremented
  4. If the count in the key reaches some max value before it expires, any further requests get a 429 response from the API server (until the key expires).
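
In Express-style middleware backed by node_redis, that flow might look roughly like this (the window, limit, and key handling are illustrative, and a JSON body parser is assumed to be in place):

var redis = require('redis');
var client = redis.createClient();

var WINDOW_SECONDS = 60; // illustrative window
var MAX_HITS = 10;       // illustrative limit

function rateLimit(req, res, next) {
  var key = [req.path, req.ip, req.body.uid].join(':');

  client.incr(key, function (err, hits) {
    if (err) { return next(err); }

    if (hits === 1) {
      // first hit from this route/IP/uid combination - start the expiry clock
      client.expire(key, WINDOW_SECONDS);
    }

    if (hits > MAX_HITS) {
      return res.status(429).send('Too Many Requests');
    }

    next();
  });
}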

High Level Overview

I've put together four high level flow charts that show the new actions in the system. For more detail about the actions, read the Protocol section below.

  1. Create Account Flow
  2. Sign In Flow
  3. Enable and Disable Password Flow
  4. Password Reset Flow

Protocol

Creating a token
  1. A request is made to /api/v2/user/request with a JSON post body containing a uid (username or email)
  2. The server generates 4 random bytes using node's crypto.randomBytes function
  3. A module called proquint turns the random bytes into a pronounceable string, e.g. "joban-ladim"
  4. The string is stored in the database, and a login email is dispatched
Sign in with Token
  1. The user clicks a link in a login email. The link includes the following query parameters:
    • username=<username>
    • token=<OTP>
    • validFor=<'session' or 'one-year'>
  2. The page issues a post request to /api/v2/user/authenticateToken with the query params as JSON in the body
  3. The token is verified, and marked as used.
  4. The server sends a set-cookie header in the response to the client, that either expires when the browser session ends (public/shared computers), or one that will not expire for one year (for trusted computers)
Sign in with Password
  1. The user enters their uid (email or username) and their password, which are posted to /api/v2/user/verify-password
  2. The server looks up the user's salted and hashed password, and verifies that the provided password produces the same output from bcrypt
  3. The server sends a set-cookie header containing the user's session object, which can expire after the session or in one year.
Enable Passwords
  1. A logged in user provides a unique, secure password (ideally), which is sent to /api/v2/user/enable-passwords in the post body.
  2. The server uses bcrypt to salt and hash the password (12 rounds, to slow things down a bit)
  3. The salt and hash are stored in the database
Disable Passwords
  1. A logged in user causes the site to post to /api/v2/user/remove-password
  2. The user's password is removed from the database, enabling OTP for the user once again.
Reset Password Request
  1. A user provides a uid (email or username) which is posted to /api/v2/user/request-reset-code
  2. The server uses Hat to generate 256 random bits, which are base-16 encoded, creating a 64-character string.
  3. The string is saved to the database and an email is dispatched to the account owner.
Reset Password
  1. A user clicks the reset password link in a reset request email, which links to something like /reset-account?uid={username}&code={reset_code}
  2. The user enters a unique, secure password (every time, right?)
  3. The code, uid, and password are posted to /api/v2/user/reset-password
  4. The reset code is validated
  5. The same steps described above are followed to salt and hash the password, which is then stored in the database.
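
To make the token and password steps above concrete, here's a rough sketch (the proquint function name and the helper names are assumptions, not the Webmaker code):

var crypto = require('crypto');
var bcrypt = require('bcrypt');
var proquint = require('proquint'); // assumed API: encode(buffer) -> 'joban-ladim'

// "Creating a token", steps 2-3: four random bytes, made pronounceable
function generateLoginToken() {
  return proquint.encode(crypto.randomBytes(4));
}

// "Enable Passwords", step 2: salt and hash with 12 bcrypt rounds
function hashPassword(password, callback) {
  bcrypt.hash(password, 12, callback); // callback(err, hash)
}

// "Sign in with Password", step 2: compare a provided password to the stored hash
function verifyPassword(password, storedHash, callback) {
  bcrypt.compare(password, storedHash, callback); // callback(err, isMatch)
}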

Wrap-up

The system we've been working on is an effort to make the sign up and sign on experience on Webmaker easier for our users. We think we've nailed it with the dead-easy sign up flow. However, getting sign in to not suck isn't as easy. As we roll out the new system in the near future, we're going to be watching closely and gathering data about the various ways people interact with the new system. This data will help inform our decisions moving forward as we try to make the Webmaker experience better for everyone.

If you've made it this far, congratulations! Do you have an opinion on the new system? We'd love to hear from you! There are many ways to get in touch:

  1. Use the Disqus comments at the end of this post
  2. Tweet @ me
  3. Come talk to us in IRC - irc://irc.mozilla.org/#webmaker (I'm "cade")
]]>
<![CDATA[One Time Passwords]]>https://chrisdecairos.ca/one-time-passwords/6213d91efa1a311a1f5d2551Fri, 29 Aug 2014 18:46:30 GMT

Webmaker users currently sign in to their accounts using Persona, Mozilla's privacy respecting authentication system. It's fairly simple, and has worked really well since our rewrite this past march. You can read the details of the implementation in the blog post I've just linked.

Today, we're experimenting with alternative methods for users to log in and sign up. One such method, which Matthew Wilse has built prototypes for, has been called the "Passwordless" or "Handshake" method. I would argue that calling the system passwordless is a bit misleading and that handshake is not clear about what it really is, so I'll refer to it from here on out as the One Time Password (OTP) system.

For the past several weeks I've been building this system into existing Webmaker applications. We're hoping to deploy it to a limited number of people and gather some feedback and statistics, particularly whether or not it improves successful signup/signins (Persona is great, but a lot of users just give up trying to make it work).

Another benefit of the work I've been doing is a huge improvement on the account creation process. Here's some background on how it currently works:

1: A user clicks "Sign Up" somewhere on Webmaker
2: A persona pop-up appears
3: A user who does not have a Persona account must now create one. If they have one already, Go To 5
4: Verify email with Persona.org
5: Sign into Persona
6: A new user modal pops up, asking for a username, among other things.
7: Yay, a new user!

At every step before number 7, some significant percentage of users simply gives up. I don't blame them; it's incredibly confusing to have to create two accounts as well as verify an email address.

Matthew's prototypes provided a sign-up flow that looked a little bit like this:

  1. A user clicks "Sign Up" somewhere on Webmaker
  2. They provide an email and a username
  3. Yay, a new user!

Wow, that sounds really nice! So that's what I built: ![Sign up GIF](GHOST_URL/content/images/sign-up.gif)

I've also made the OTP sign-in flow: ![Sign In GIF](GHOST_URL/content/images/sign-in.gif)

During the early stages of this work, I build the front end bits as best I could, but my CSS-Fu has never been that strong. Recently, Ricardo Vazquez has come on board the login train to make the prototype modal dialogs beautiful to see and use. Stay tuned to my blog and Webmaker demos in the near future to see the project evolve!

How OTPs work

Here's a list of steps that describes the whole One Time Password protocol (shown above, second image), from start to end.

  1. The User-Agent POSTs: { email: "chris@example.com" } to /auth/v2/request on the app server (let's use webmaker.org)
  2. The Webmaker.org server should forward the post body to the Basic Auth protected route /auth/v2/request on login.webmaker.org
  3. The login server will look up the user account by the provided email address. If none is found, responds to the request with a 400: "User not found"
  4. A one time password is generated using Hat.
  5. The password is 24 random bits and is converted to base 36. This generates a five character string of letters and/or digits.
  6. The password is set to expire thirty minutes after creation, and is passed to Webmaker's event processing queue Sawmill. It's powered by Amazon's Simple Queue Service (SQS), and messages are sent using Hatchet, which uses Amazon's aws-sdk module.
  7. From Sawmill, The event is converted into an email using a template, and is forwarded to lumberyard
  8. Lumberyard sends the email to the one stored in User account the OTP was generated for. This uses the aws-sdk and Amazon's Simple Email Service (SES)
  9. Once the User gives the User-Agent the OTP it POSTs: { email: "chris@example.com", token: "s34xa" } to /auth/v2/authenticateToken on the app server
  10. The Server will forward the post body to /auth/v2/authenticateToken on login.webmaker.org
  11. The login server will attempt to fetch the user and login token from the database, ensuring that the fetched token was 1. created no more than thirty minutes before the current system time on the server and 2. has not been used to log in already.
  12. Should the criteria not be met, a 401 response is returned to the app server, which should not issue a session cookie.
  13. If all criteria are met, the token is marked used and saved, and the user account object is serialized into session format and returned to the app server.
  14. The app server should then send a SET-COOKIE header to the User-Agent. It should be set to https only and be encrypted with a session secret.
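
For reference, the token generation in steps 4-5 comes down to something like this with Hat:

var hat = require('hat');

// 24 random bits, base-36 encoded - a short token like "s34xa"
var otp = hat(24, 36);
console.log(otp);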

What Now?

With the code in a working state, it's just a matter of iteration. Code review, fixes + optimizations, repeat. We also have folks working on copy for the emails and modal dialogs. One thing that I think is going to be the most challenging for us when we roll out the new system is clearly communicating to users the new changes. Whether it be through email comms, banners on the homepage or a first-run log-in experience, I feel it's incredibly important to get this right.

We've been thinking about keeping Persona as an optional log-in method (+1) and implementing an opt-in, run-of-the-mill password login system (I have implemented this fully, but it was cut from this iteration D:). I've also experimented (successfully) with turning login.webmaker.org into an OAuth2 provider - but haven't actually gotten any resources or scopes set up to work with the oauth tokens I can generate.

I've also reached out to the security gurus in the organization, with the goal of going over the new protocol with them and getting feedback on the right and the wrong.

That said, what do you think? Have you had experience with a log-in flow like this? What challenges did you face, or what challenges do you foresee this flow facing? Feel free to use the comment section below, drop some comments in the Bugzilla bug or email me.

]]>
<![CDATA[Windows And Webmaker Events]]>https://chrisdecairos.ca/windows-and-webmaker-events/6213d91efa1a311a1f5d2550Wed, 30 Jul 2014 20:05:00 GMT

Webmaker is a very large project, with dozens of parts that all come together at https://webmaker.org. One of the hardest things for new contributors is getting everything set up properly. This problem multiplies ten-fold when the desired platform of the developer is Windows. Recently, I put together a guide for a new contributor, explaining the steps it takes to set up a Windows 7 computer for development of Webmaker projects (specifically, the events platform). It's a long, complicated, and frustrating process, which I'd like to share here, so that should anyone need to do this, they won't have to look very far.

Dependency Installation

Git

We use Git for version control, and Github for hosting our code repos publicly.

You can install Git from http://git-scm.com/download/win. That site should automatically prompt you to download the latest version of Git available for Windows. Once it is downloaded, run the executable to begin the installation. I used all the default settings.

Node

Basically all of Webmaker's servers are written in JavaScript, and use NodeJS.

Download the latest installation executable from http://nodejs.org/download and run the installer once the download completes. Node Package Manager (npm) will be installed as well.

Python

Some modules included as dependencies will require Python 2.7 to be installed.

You can download the installer from https://www.python.org/download/releases/2.7.8/. During the installation make sure to specify that you want python.exe added to your PATH.

Visual Studio 2010

This is where things get stupid. npm depends on something called node-gyp, a so-called "cross-platform" tool for compiling native addons. The problem is, it's a pain in the ass to get it running on a Windows machine. I'll share what I did to get it running, but in the end, you might have to do something different, depending on your machine.

Installing the four items above didn't quite cut it for me though. Here's the last few steps I took:

  • I ran git bash (you can find it in your start menu) and ran the command npm install -g node-gyp to install the node-gyp package globally.

  • I then opened the file C:\Users\cade\.node-gyp\0.10.29\common.gypi - obviously, substitute your username and the version of Node you have installed.

  • The file contains configuration settings in JSON format. Navigate to target_defaults.msvs_settings.VCLinkerTool and add 'AdditionalLibraryDirectories': 'c:\\Program Files\\Microsoft SDKs\\Windows\\v7.1\\Lib\\x64', to the object VCLinkerTool

If you're as lucky as I was, node-gyp might now work for you!

Grunt-cli and bower

Grunt is a task runner, used mainly for building resources. Bower is a front-end package manager. We're going to install these globally using npm, so that the grunt and bower commands will be added to your PATH, making life easier. To do this run: npm install -g grunt-cli bower

Setting up Webmaker Events

Login.webmaker.org

This is the Login server for webmaker. It manages Webmaker accounts through the Login API, and is required for log-in functionality to work. The code can be found at https://github.com/mozilla/login.webmaker.org

  1. In Git Bash, change into the directory you would like to clone the code into.

  2. Run git clone https://github.com/your_user_name/login.webmaker.org to clone your fork of the code to your machine. Git will set your Github fork up as the remote named "origin", so you can push code changes and branches back up to the website.

  3. Change into the login.webmaker.org folder and run cp env.sample .env. This will create a configuration file for the server from the default one provided in the repo. It's configured to work "out of the box" with other Webmaker sites and services you will run locally.

  4. Now we need to install npm and bower dependencies. Do this by running npm install; bower install

  5. You should now be able to start the server with node app. If Windows prompts you to grant permissions for Node, accept them.

Events Front End

This is the Front-End for The Webmaker Events platform. It's an Angular app that is served using Express. The source code is at https://github.com/mozilla/webmaker-events-2. Don't ask about webmaker-events, we don't talk about it anymore.

  1. In Git Bash, change into the directory you would like to clone the code into.

  2. Run git clone https://github.com/your_user_name/webmaker-events-2 to clone your fork of the code to your machine. Git will set your Github fork up as the remote named "origin", so you can push code changes and branches back up to the website.

  3. Change into the webmaker-events-2 folder and run cp .env-dist .env. This will create a configuration file for the server from the default one provided in the repo. It's configured to work "out of the box" with other Webmaker sites and services you will run locally.

  4. Now we need to install npm and bower dependencies. Do this by running npm install; bower install

  5. Before we can run the server we need to compile the CSS. Do so by running grunt build

  6. You should now be able to start the server with node server/server.js. If Windows prompts you to grant permissions for Node, accept them.

Events Service

This is the Events API, which provides RESTful read/write access to the events database. The source code can be found at https://github.com/mozilla/webmaker-events-service.

  1. In Git Bash, change into the directory you would like to clone the code into.

  2. Run git clone https://github.com/your_user_name/webmaker-events-service to clone your fork of the code to your machine. Git will set your Github fork up as the remote named "origin", so you can push code changes and branches back up to the website.

  3. Change into the webmaker-events-service folder and run cp .env-dist .env. This will create a configuration file for the server from the default one provided in the repo. It's configured to work "out of the box" with other Webmaker sites and services you will run locally.

  4. Now we need to install the npm dependencies. Do this by running npm install

  5. You should now be able to start the server with node server/server.js. If Windows prompts you to grant permissions for Node, accept them.

Summary

Developing on Windows with npm sucks, but it's not impossible. If the above helped, I'm glad. The steps for the above set-ups for login, events and the events-service can be applied to most other Webmaker sites and tools. Be sure to always check the README files for details about set-up and running a particular project.

]]>
<![CDATA[Oh noes.]]>https://chrisdecairos.ca/oh-noes/6213d91efa1a311a1f5d2508Mon, 30 Jun 2014 23:51:46 GMT

I somehow managed to break my blog...

Update: Blog online, data missing. I'm currently working on recovering it.

update: As of 10:15 PM EDT, the blog is back online!

What happened? Well, it all started when I decided it would be a great idea to update my Ghost installation to 0.4.2. I'd done an upgrade before without much trouble, so I thought everything would be okay. Unfortunately, Ghost wasn't on board with my plan.

I started out by checking out the latest version of Ghost on my server, which went smoothly. Next, I ran sudo sh -c "npm install; bower install; grunt; grunt prod" to get all the dependencies and build all the assets. This went off without a hitch. I restarted my Ghost service and verified it was running.

This is where things get strange. My blog's content seemed to be just fine. But when I attempted to log into the admin console, it would reject the username and password for my account. I was a little annoyed, but not too concerned. I figured I could just reset the password. Nope. Upon trying to send a reset email, I was given an error message claiming that the Gmail account I configured for reset emails couldn't be authenticated. That's when it hit me: the account was a dummy account I created with the sole purpose of sending me password reset emails. The very same dummy account that I had gotten an email from Google about one week prior stating it was being closed due to TOS violations. -.-

A simple fix really. Just have to update those settings! Since I have a nice new AWS account with a bunch of credit to burn through, I figured I'd hook it up to SES. I logged into my console, and generated the creds to send emails using SES. I plugged them in, rebooted the service, fired up the forgotten password page and.... Nope! More errors. This time something about bad credentials. AWS is great, but it can be incredibly picky about permissions. After some more fiddling and prodding though, I did manage to get password reset emails working.

I clicked the link in the email from my blog, and it brought me to the password reset page. I filled in a new password and hit submit. Navigating back to the login screen, I proceeded to enter in the new login and password. Upon hitting submit, I was greeted with a lovely error about it not being the right username/password ......WAT?!?

Something's not right, I thought. I'm going to run the service manually and take a peek at the output. What I saw was pretty disturbing (for me at least). Something along the lines of "Your database is incompatible with this version of Ghost - You will have to create a new one". No problem, I thought, I'll just switch back to the old version!

I checked out the previous version I was running, ran the scripts to set everything up and booted up the service manually. I loaded my home page, which displayed perfectly. I then tried to get to the login screen. This is when I really started to sweat... It wouldn't load! I would wait for a dozen seconds or so, and nginx (used for proxying) would 502 me! Looking in the console, I saw a message along the lines of "Unhandled exception: session table doesn't exist". I had no idea what was up.

Things didn't look good for chrisdecairos.ca... I decided my only option was to copy my contents directory (containing the sqlite db file, images, and themes directory) and my configuration file somewhere safe so that I could wipe the slate clean on my Ghost installation. I downloaded the official release zip of Ghost 0.4.2 and replaced my installation with its contents. I double checked that all the dependencies were in place and that all assets were built. I copied over my config file, and started up the service.

It worked. I was able to log into a new account and regain access to the admin tools. I re-instated all the themes settings to get the look and feel of the blog back. With everything back up and running, the only thing left for me was to migrate all my posts and tags into the new db file. With a bit of research, I found what I needed:

sqlite3 ghost.db .dump > ghost.sql

Which dumps the SQL to create your DB into a file. I opened up that file and deleted all the statements that weren't related to posts, tags and posts_tags. I also removed the create syntax for those tables. I should mention that I compared the schemas between the old db file and the new, and they were identical. This meant that once I ran the script, I should be back in business. Anyways, I scp'ed the script to my server, logged in and ran:

sqlite3 content/data/ghost.db < ~/ghost.sql

and crossed my fingers.....

When I reloaded my blog, everything was back to normal! I had succeeded! I had also lost most of my evening -.-

There are two takeaways from this:

  1. Back up your database, you dummy!
  2. Git checkout is a horrible way to update blog software.

But hey, at least I know how to recover my blog in situations like this.

]]>
<![CDATA[Make List API]]>https://chrisdecairos.ca/make-lists/6213d91efa1a311a1f5d254fThu, 01 May 2014 19:20:43 GMT

Since the creation of Webmaker.org and the MakeAPI, the recommended strategy for creating and curating galleries of makes was to use tagging.

For example, one can tag ten makes with gallery and surface that collection of makes using a tag search. If there is a need for order in the data, a second unique tag must be given to each make to indicate its order in the set - for example, gallery-1 all the way to gallery-10 - and the makes must then be sorted manually after fetching the set.

The downsides of this for the regular user are numerous:

  1. If someone else uses the same tag on an unrelated make, it will "pollute" the gallery
  2. The logic required to order the gallery is needlessly complex.
  3. It's a pain in the ass to apply tags to a set of existing makes
  4. It's a pain in the ass to update a gallery without screwing something up horribly (i.e. duplicate ordering tags)

Galleries and collections are a heavily used feature across the Webmaker universe, and the horrible system we force people to use to build these galleries is a crime.

In order to rectify this crime against the world, I identified some goals that would have to be met in order for this to be considered fixed.

  1. A gallery should not be defined by tags
  2. Gallery order should not be defined by tags, and should not require additional logic at the point of the consumer to sort the gallery data
  3. Fetching Make data from a gallery should be simple, and act as a drop-in replacement for the old style of gallery searches.
  4. Every Webmaker User should be able to create and maintain their own galleries of Makes

Enter the Make List API. It is a collection of server endpoints that allow Webmaker apps to manage a newly defined data model built right into the Make API server.

Essentially, a Make List ("List") is a Mongoose model that stores an Array of Make ID's.
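
A minimal sketch of what such a model could look like (the field names are illustrative, not the real schema):

var mongoose = require('mongoose');

var makeListSchema = new mongoose.Schema({
  userId: { type: Number, required: true }, // the owning Webmaker account
  title: { type: String, trim: true },
  makes: [String]                           // ordered array of Make IDs
});

module.exports = mongoose.model('List', makeListSchema);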

Goal 1 Completed!

Order in the array determines positioning wherever the List is consumed.

When a List is fetched, the Make API will automatically fetch the Makes from ElasticSearch, sort them, and return the Make JSON to the client. The data is identical to the data returned by make searches.

Goal 2 and 3 Completed!

Lastly, every List is associated with a Webmaker account, and only the owner of a List can modify it.

Goal 4 completed!

So the TL;DR of this is that any user can create and curate any number of Lists (my favourites, top tens, best of Maker Party, etc.). These lists can be built into any number of places - the front page, the Webmaker gallery, top Webmaker teaching kits, Webmaker profiles, Webmaker events, ALL THE THINGS!

If I had this running somewhere live I would link it, but I do not. For the time being, you can follow the development in Bug 997329 on Bugzilla.

Below is a YouTube video of the demo application I built to show off the API:

If you have cool ideas about what this can be used for, or have questions about if it could be used for something, drop me a line!

]]>
<![CDATA[Exciting News!]]>https://chrisdecairos.ca/exciting-news/6213d91efa1a311a1f5d254eThu, 27 Mar 2014 23:59:00 GMT

I'm going to be a dad!!!

I'm proud to announce that Sarah and I will be welcoming a new member into our family this November!!

]]>