
Think Cloud portable: Let Applications drive the Model

In our last intro to Modeling with Juju post we didn’t pay any attention to the hardware needed to run our workloads. We ran with Juju’s default values for the hardware characteristics of the cloud instances. Rather than define a bunch of YAML about each machine needed to run the infrastructure, Juju picks sane defaults that let you prototype and test things out with a lot less work.

The reason this is important is that Juju is a modeling tool. When it comes time to make decisions about your hardware infrastructure, it’s key to focus on the needs of the Applications in your Model. If we’re going to operate PostgreSQL in production then we need to look at the needs of PostgreSQL. How much disk space do we need to adequately store our data and leave room for backups and the like? How much CPU power does our PostgreSQL instance need? How much memory is required to give PostgreSQL the best chance of serving hits from data already in memory?

All of these questions fall under the characteristics required to run a specific Application, and the answers will be different for each Application we operate as part of our infrastructure. Juju shines here because it’s built around the Cloud idea of infrastructure as an API. There are no plugins required. Juju is natively put together so that whenever you deploy an Application, the underlying Cloud API calls are made to make sure you get the right type of hardware. Infrastructure is allocated as needed and there’s no requirement to pre-load software on the machines before you use them.

Juju provides a tool called “Constraints” to manage these details in the Model. In their simplest form, a Constraint is just an option you can pass during the deploy command. Let’s compare the outcome of these two commands:

$ juju deploy postgresql
$ juju deploy postgresql pgsql-constrained --constraints mem=32G

$ juju status
…
App                Version  Status  Scale  Charm       Store       Rev  OS      Notes
pgsql-constrained  9.5.9    active      1  postgresql  jujucharms  163  ubuntu
postgresql         9.5.9    active      1  postgresql  jujucharms  163  ubuntu

Unit                  Workload  Agent  Machine  Public address   Ports     Message
pgsql-constrained/0*  active    idle   1        35.190.190.119   5432/tcp  Live master (9.5.9)
postgresql/0*         active    idle   0        104.196.135.211  5432/tcp  Live master (9.5.9)

Machine  State    DNS              Inst id        Series  AZ          Message
0        started  104.196.135.211  juju-dd8186-0  xenial  us-east1-b  RUNNING
1        started  35.190.190.119   juju-dd8186-1  xenial  us-east1-c  RUNNING

From juju status these commands don’t look like they did anything very different. They’ve both introduced a PostgreSQL server into our Model, available to take part in Relations with other Applications where it will provide them with a database to use. Juju encourages focusing on what’s running at the Application level over the minutiae of hardware. However, if we compare the machine details we can see these commands definitely changed PostgreSQL’s ability to perform for us.

$ juju show-machine 0
machines:
  "0":
    ...
    hardware: arch=amd64 cores=1 cpu-power=138 mem=1700M root-disk=10240M availability-zone=us-east1-b

$ juju show-machine 1
machines:
  "1":
    ...
    constraints: mem=32768M
    hardware: arch=amd64 cores=8 cpu-power=2200 mem=52000M root-disk=10240M availability-zone=us-east1-c

Notice the different output? The constrained machine knows that the Model states there’s an active Constraint: we want 32G of memory for this PostgreSQL Application. If we were to run a juju add-unit command, the next machine would know there’s an active Constraint and respect it. In this way Constraints are part of the active Model. Even if we removed the Unit pgsql-constrained, the Constraint would still be there as part of the Model.
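As a quick sketch of that behavior (output abbreviated and illustrative), we can ask Juju what Constraints an Application carries and then grow it, trusting the new machine to honor them:

$ juju get-constraints pgsql-constrained
mem=32768M

$ juju add-unit pgsql-constrained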

The next thing to note is that we’ve got more memory than we asked for. We wanted to make sure our PostgreSQL had 32G of RAM vs the default instance type, which has 1.7G. In order to do that Juju had to request a larger instance, which also brought in 8 cores, and we ended up with 52G of RAM. Looking at the GCE instance types available, n1-standard-8 only has 30G of memory, so Juju went looking for a better match and pulled up the n1-highmem-8 instance type. In this way we receive the best matching instance that meets or exceeds the Constraints we’ve requested. Juju walks through all the various instance types to do this.

As users, we focus on what our Application needs to run at its best, and Juju takes care of finding the most cost-effective instance type that meets those goals. This is key to allowing our Model to remain cloud agnostic and maintain performance expectations, regardless of what Cloud we use this Model on. Let’s try this same deploy on AWS and see what happens.

$ juju add-model pgsql-aws aws
$ juju deploy postgresql pgsql-on-aws --constraints mem=32G
$ juju show-machine 0
machines:
  "0":
    ...
    constraints: mem=32768M
    hardware: arch=amd64 cores=8 cpu-power=2800 mem=62464M root-disk=8192M availability-zone=us-east-1a

Notice here we ended up with an instance that’s got 8 cores and 62G of RAM. That’s the best-fit instance type on AWS for the Constraints we need to make sure our system performs at its best.

Other Constraints

As you’d expect, we can set other Constraints to customize the characteristics of the hardware our Applications leverage. We can start with the number of CPU cores. This is vital in today’s world of applications that are able to put those extra cores to use. It can also be key when we’re going to run LXD machine containers or other VM technology on a machine; keeping a core per VM is a great way to leverage the hardware available.

$ juju deploy postgresql pgsql-4core --constraints cores=4
$ juju show-machine 2
machines:
  "2":
    ...
    constraints: cores=4
    hardware: arch=amd64 cores=4 cpu-power=1100 mem=3600M root-disk=10240M availability-zone=us-east1-d

Voila, Juju helps get the best instance for those requirements. The most cost-effective instance available with 4 cores provides 3.6G of memory.

root-disk sizing

$ juju deploy postgresql pgsql-bigdisk --constraints root-disk=1T
$ juju show-machine 3
machines:
  "3":
    ...
    constraints: root-disk=1048576M
    hardware: arch=amd64 cores=1 cpu-power=138 mem=1700M root-disk=1048576M availability-zone=us-east1-b

One thing that’s not set up as part of the instance type is the disk space allocated to the instance. Some Applications focus on CPU usage, but others will want to track state and store it on disk. We can specify the size of the disk the instances are allocated through the root-disk Constraint.

The root-disk Constraint only means that Juju will make sure the instance has a Volume of that size when it comes up. This is especially useful for Big Data Applications and Databases.

Remembering Constraints in the model

From here let’s combine our Constraints and produce a true production-grade PostgreSQL server. Then we’ll create the bundle that will deploy this in a repeatable fashion that we can reuse on any Cloud we want to take it to.

$ juju deploy postgresql pgsql-prod --constraints "cores=4 mem=24G root-disk=1T"

series: xenial
applications: 
  "pgsql-prod": 
    charm: "cs:postgresql-163"
    num_units: 1
    constraints: "cores=4 mem=24576 root-disk=1048576"
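Saved to a file (the filename here is arbitrary), that bundle redeploys the same constrained PostgreSQL in a fresh model on any Cloud:

$ juju add-model pgsql-prod-test
$ juju deploy ./pgsql-prod.yaml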

Constraints on Controllers

Once we understand how Constraints work, one area where they’re really valuable is in sizing your own Juju Controllers. If we’re operating a true multi-user Controller, providing many models to the folks in our business, we’ll want to move up from the default instance sizes. Juju is conservative so that folks testing out and experimenting with Juju don’t end up running much larger instances than expected; there are a lot more test or trial Controllers out there than Controllers that run true production. It’s much like any product really.

Controller Constraints are supplied during bootstrap using the --bootstrap-constraints flag. For example:

$ juju bootstrap --bootstrap-constraints="mem=16G cores=4 root-disk=400G" google

Some considerations when you’re operating a controller include (with a quick sanity check sketched after the list):

  • root-disk - Juju leverages a persistent database and tracks logs and such in that database. You want to make sure to provide space for running Juju backups, storage of Charms and Resources used in Models (they’re cached on the controller), logs that are rotated over time, etc.
  • mem - as with any database, the more it can fit into memory the better; it’ll respond quicker to events through the system. During events such as upgrades and migrations there can be a lot of memory consumption as entire models are processed as part of those actions.
  • cores - as you have more clients and units talking over the API, the more cores you have and the faster it can respond the better. Here core count is more important than raw CPU speed, as most of the work is fetching and processing documents vs doing pure computation.
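To sanity check what hardware your controller actually landed on, inspect machine 0 in the controller model; the same show-machine command works there too (output trimmed and illustrative):

$ juju show-machine -m controller 0
machines:
  "0":
    ...
    constraints: cores=4 mem=16384M root-disk=409600M
    hardware: arch=amd64 cores=4 mem=16384M root-disk=409600M availability-zone=us-east1-b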

Cloud specific Constraints

There are cloud-specific Constraints that are supported. MAAS, for instance, allows deploying via tags, because MAAS supports tags as a way of classifying bare metal machines where you might optimize a hardware purchase for network applications, storage applications, etc. Make sure to check out the documentation to see all available Constraints and which Clouds they’re supported on.

The great thing about the Constraint system is that it keeps the focus on the requirements of the Applications and what they actually need to get their jobs done. We’re not trying to force Applications onto existing hardware that might not fit their operating profile. It’s an idea that really excels in the IaaS Cloud world.

Give it a try and if you have any questions or hit any problems let us know. You can find us in the #juju IRC channel on Freenode or on the Juju mailing list. You can reach me on Twitter. Don’t hesitate to reach out. 

Learning to speak Juju

One of my favorite quotes is that there are only two hard problems in tech: cache invalidation and naming things. Naming things is fun, and you quickly realize that any tech community has a vocabulary you need to understand in order to participate. Programming languages, technical tools, and even just communities all build entire languages. Juju is no different here. When you look at Juju you’ll find there’s a small vocabulary that really helps to understand.

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

The Juju Client

$ juju --version

We start with Juju itself. Now honestly, there are a few layers of Juju you end up working with, but let’s start with the command line client. When you invoke Juju from the CLI you’re running a local client that communicates with APIs that perform the real work. Clients are available on all major systems, and you can even think of things like the Juju GUI as a web-based client. You might be asked what version of Juju you’re running, and the best place to start is with the version of your client.

Cloud

$ juju clouds

Cloud        Regions  Default        Type        Description
aws               14  us-east-1      ec2         Amazon Web Services
azure             26  centralus      azure       Microsoft Azure
google             8  us-east1       gce         Google Cloud Platform
localhost          1  localhost      lxd         LXD Container Hypervisor
guimaas            0                 maas        Metal As A Service
...

Just what is a Cloud? In our ecosystem a Cloud is any API that will provision machines for running software on. There are a number of public clouds, each with its own API, such as AWS, GCE, and Azure. There are private clouds that you can operate software on, such as OpenStack or MAAS. There’s even a local cloud using the LXD API to provide a cloud experience on your local system by controlling LXD containers. The key thing is that Juju is primarily intended to abstract away the details of the various cloud APIs so that the operations work you need to perform is repeatable and consistent regardless of which cloud you choose to use.

Controller

$ juju controllers

Controller   Model        User               Access     Cloud/Region   Models  Machines    HA  Version
guimaas      -            admin              superuser  guimaas             2         1  none  2.2.4
jaas*        k8-test      rharding@external  (unknown)                      -         -     -  2.2.2
jujuteamops  teamwebsite  admin              superuser  aws/us-east-1       2         1  none  2.2.4

A controller is the main brain of Juju. It holds the database of what is deployed, what configuration is expected, and the current status of everything that has been deployed and is running over time. The controller houses the users and permissions, and it’s the API endpoint that the Juju client communicates with. The controller runs on the same cloud as the workloads that will be operated and can be scaled for HA purposes, as it’s important infrastructure in the Juju system. You can see, in the above snippet, I’m running controllers on my local MAAS, on AWS, and using the JAAS hosted controller.

Model

$ juju models

Controller: jujuteamops

Model            Cloud/Region   Status     Machines  Cores  Access  Last connection
ci-everything    aws/us-east-1  available         0      -  admin   never connected
ricks-test-case  aws/us-east-1  available         3      3  admin   2017-09-28
teamwebsite*     aws/us-east-1  available         1      1  admin   2017-09-28

In Juju we talk a lot about modeling and models. Models are namespaces that work is done within. A single Controller can manage many, many Models. Each Model tracks the state of what’s going on inside of it, it can be ACL’d using the users known to the Juju Controller, and you can dump out the contents of a Model into a reusable format. In the above snippet you can see I have several Models running on a Controller in AWS right now. One is handling some CI work, another I’m using to test out a bug fix, and a third is the Model where we operate the software required to provide our website.

Users are able to switch between models, which really helps provide a great level of focus on what a user is working on at any given moment in time.
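A short sketch of that workflow, using the Model names from the snippet above (the user bob is hypothetical):

$ juju switch ci-everything
$ juju grant bob write ci-everything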

Charm

$ juju deploy mysql

Located charm "cs:mysql-57".
Deploying charm "cs:mysql-57".

A Charm is the component you use to deploy software. Each software component is deployed using a ZIP file that contains all of the scripts, configuration information, event handling, and more that might be required to operate the software over time. The URL of the Charm that’s been found breaks down like so:

  • cs: = “charm store” (vs local file on disk)
  • mysql = name of the charm
  • 57 = the revision of the charm we found (latest when not specified)

Charms are just chunks of software themselves. They are the set of YAML, scripts, and templates that makes it possible to operate things. In the above case the Charm provides all the software required to operate MySQL over time. This includes configuration information, backup scripts, and code that handles the provisioning of databases for other software in the Model.
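To give a feel for what’s inside that ZIP, here’s a trimmed, illustrative metadata.yaml in the shape a MySQL charm ships (values approximate, not the charm’s exact contents):

name: mysql
summary: MySQL is a fast, stable, multi-user, multi-threaded SQL database
description: |
  MySQL is a fully featured relational database server.
provides:
  db:
    interface: mysql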

Application, Unit, and Machine

$ juju status mysql

...
App    Version  Status   Scale  Charm  Store       Rev  OS      Notes
mysql           waiting    0/1  mysql  jujucharms   57  ubuntu

Unit     Workload  Agent       Machine  Public address  Ports  Message
mysql/0  waiting   allocating  1                               waiting for machine

Machine  State    DNS  Inst id              Series  AZ          Message
1        pending       i-0524bda18a2bb0cb9  xenial  us-east-1b  Start instance attempt 1
...

When you deploy MySQL you’re asking that a single Machine be provisioned by the Controller using the Cloud API, and that the Charm be placed on that machine and, once there, executed. However, once that Charm is added to the Model, we call MySQL an Application. The Application is an abstraction that is used to control how we operate our MySQL. This includes adding additional MySQL servers and building a MySQL cluster. If we state that the data for MySQL should be in /srv vs /opt, we want that to be set on the Application, and then each MySQL in the cluster will update and follow suit. Each member of the cluster is a Unit of the MySQL Application. We can talk to each one individually when necessary, but really we want to treat them as a single Application in our Model.

In this way we say that an Application consists of one or more Units, each located on a Machine. The Machine is the actual hardware or VM allocated for the Unit to run on. If we were to remove Machine #1 in the above example, we’d also lose our unit mysql/0, but the Application would still be in the Model. The state details are still in the Model, and any new Units added would pick up where the first one left off.
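Scaling happens in terms of Units; a quick sketch:

$ juju add-unit mysql -n 2     # two more Units, each on a fresh Machine
$ juju remove-unit mysql/0     # the Application and its state stay in the Model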

Relation

$ juju relate mysql:db gypsy-danger

Having MySQL servers running is great, but I have some Python software that would love to store some data in that MySQL cluster. In order to do this I need to write out the MySQL DSN to a config file that my application knows how to read. To facilitate this I’ll create a Charm for my gypsy-danger Application, and in my Charm I’ll declare that I know how to speak the “db” language. Funny enough, the MySQL charm already knows how to speak the “db” language as well. This means that I can create a Relation in the Model that states that MySQL and my Python project gypsy-danger can communicate using the “db” language.

This allows Juju to facilitate a conversation between Units of both Applications. gypsy-danger will ask MySQL for a database. MySQL will then create a fresh new database with a unique username and password and send those details back to the gypsy-danger Application. The scripts contained in the Charm will then make sure each Unit of the Application gypsy-danger updates their Python configuration files with those MySQL connection details. 

As an operator, I don’t want to get into the business of specifying things that are transient or different from one Model to another. That just reduces the reusability of what I’ve built. Relations allow Applications to communicate and self-coordinate to perform the actions needed to go from independent software components to a working collection of software that performs a useful function. What’s neat is that other Charms can claim to speak “db”, and so one MySQL cluster can be used to serve out databases to many other Applications in a Model. You can even have different servers provide those databases, such as Percona or MariaDB. By speaking a common protocol, Relations provide a great way for Models to stay agile.
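For example, the Mediawiki charm also speaks “db”, so the same cluster can back a wiki alongside gypsy-danger:

$ juju deploy mediawiki
$ juju relate mysql:db mediawiki:db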

 

Bundle

applications: 
  mysql: 
    charm: "cs:mysql-57"
    num_units: 1
  "gypsy-danger": 
    charm: "cs:~rharding/gypsy-danger-5"
    num_units: 1
relations: 
  - - "mysql:db"
    - "gypsy-danger:db"
$ juju add-model testbundle
$ juju deploy mybundle.yaml

A bundle is basically a static dump of a Model. It takes all the Applications that are running, how many Units there are, what configuration is specified, and what Relations exist in the Model, and dumps it into a clean reusable form. In this way you can replicate this Model in a new Model, or on another Cloud, or just save it for later. It deploys just like a single Charm would, except that it’s going to be performing a lot more work. We have bundles of all shapes and sizes, and when you see examples of easily deploying a Kubernetes cluster or an OpenStack installation it’s typically done using a reusable Bundle.

There you have our crash course in the language of Juju. Every project has a language, and we all get really creative naming things in the tech world. Juju is no exception. Hopefully this will help you better understand how the parts fit together as you dive into using Juju to operate your own software.

Terms to remember:

  • Juju (and the client)
  • Cloud
  • Controller
  • Model
  • Charm
  • Application
  • Unit
  • Machine
  • Relation
  • Bundle

Modeling the infrastructure while keeping security and flexibility in mind

Juju allows the user to model their infrastructure in a clean, simple, repeatable way. Often deployments are repeated across different clouds and regions. Sometimes they're repeated from dev to staging to production. Regardless of how they're repeated, there are some solid practices users need to follow when taking a model and reusing it. Some bits need to be unique to each deployment. Most of these are security details that need to differ from deployment to deployment. There might also be some specific bits of configuration that regularly vary: in staging the URL in the Apache config might be staging.jujucharm.com while in production it's jujucharms.com. You need to be able to reuse the model of how the applications, constraints, and common configuration work, but make sure there's a clean and simple method of providing the extra unique bits each time you bring up another model.

Let's walk through an example model I've created. I'm going to monitor a pair of Ubuntu machines with Telegraf feeding system details to Prometheus. We'll then use Grafana to visualize those metrics. Finally, we want to set up an HAProxy front end for Grafana so we can provide a proper SSL terminated web site. After it's up and running it looks a bit like this.

[Screenshot: the deployed bundle]

The first thing we need to do is use the Juju GUI to export a bundle that is a dump of our model. That's an EXACT dump of everything we've got, so we need to edit out the bits of the model that need to be unique from deployment to deployment. Once it's all cleaned up it looks a bit like this.

applications:
  ubuntu:
    charm: "cs:ubuntu"
    num_units: 2
  telegraf:
    charm: "cs:telegraf"
  prometheus:
    charm: "cs:prometheus"
  grafana:
    charm: "cs:grafana"
    options:
      admin_password: CHANGEME
  haproxy:
    charm: "cs:haproxy"
    expose: true
relations:
  - - "ubuntu:juju-info"
    - "telegraf:juju-info"
  - - "prometheus:target"
    - "telegraf:prometheus-client"
  - - "prometheus:grafana-source"
    - "grafana:grafana-source"
  - - "grafana:website"
    - "haproxy:reverseproxy"

One thing to note in there is the config value for the Grafana admin password. We want to make it clear that it should be changed. Other than that, though, it's a pretty plain model. Where it gets fun is when we leverage new bundle features in Juju 2.2.3.

Overriding config values at deploy time

Juju 2.2.3 provides a new argument to the deploy command, --bundle-config. This flag lets you pass a file that overrides config for the applications in the bundle you're deploying. You might use it like this:

juju deploy ./bundle.yaml --bundle-config=production.yaml

So what can we use this for? Well, let's set a unique password for our Grafana admin user. To provide a file with updated config we just mirror the bundle format and point at the application we're targeting like so. Let's edit the production.yaml file to look like this.

applications:
  grafana:
    options:
      admin_password: ImChanged

Note it looks just like the bundle file above, with the same keys, and we're just setting an admin password of "ImChanged" to prove it's set. We can then deploy the bundle with the --bundle-config argument, and once it's up we can check that the value was set.

$ juju config grafana admin_password
ImChanged

Reading complex data from a file

That's handy, but sometimes you don't want to just set a new string value; you want to read content from a file. Prometheus can be used to scrape custom jobs. We've used this in the past to scrape Prometheus data from Juju controllers themselves. To set this up we need to add a YAML declaration about the job that Prometheus will process. Let's find out the IP of our Juju controller and add that job using another new bundle feature: include-file://

Using include-file:// you can specify a path on disk that will be read and passed to the config value in your bundle. In this way you can easily send complicated multi-line data (like YAML) to a config value in a clean and easy way. First let's set up our new scrape job definition.

juju show-machine -m controller 0
...
ip-addresses:
    - 10.0.0.8
    
vim scrapejobs.yaml

- job_name: juju
  metrics_path: /introspection/metrics
  scheme: https
  static_configs:
    - targets: ['10.0.0.8:17070']
  basic_auth:
    username: user-prometheus
    password: testing

Now let's update our production.yaml file to also read this new scrapejobs.yaml file during deployment.

applications:
  grafana:
    options:
      admin_password: ImChanged
  prometheus:
    options:
      scrape-jobs: include-file://scrapejobs.yaml

For this to work, the file needs to be in the current working directory; if you want it elsewhere you'll need to give the full path to the file. Now when we run our deploy command we'll both set the Grafana password and read in the new job for Prometheus.

juju deploy ./bundle.yaml --bundle-config=production.yaml
...
juju config prometheus scrape-jobs
  - job_name: juju
    metrics_path: /introspection/metrics
    scheme: https
    static_configs:
      - targets: ['10.0.0.8:17070']
    basic_auth:
      username: user-prometheus
      password: testing
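If the scrape jobs file lives outside your working directory, point include-file:// at an absolute path instead (the path here is hypothetical):

applications:
  prometheus:
    options:
      scrape-jobs: include-file:///home/ubuntu/deploys/scrapejobs.yaml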

Awesome, now we can do some work templating out the file and reusing it, providing unique IP addresses for targets as well as custom usernames and passwords as needed from deployment to deployment, all while keeping the basics of the model intact and reusable.

base64 the included files

There's a third option for including content into this production.yaml, and that's include-base64://. This allows reading a local file and base64'ing the contents before they get set into the config. This is helpful for things like SSL keys and such that are unique to different deployments. In our demo case I want to pass in an SSL key to be used with HAProxy so that I can provide HTTPS for accessing the Grafana dashboard. To do this we need to set the ssl_key and ssl_cert config values in the HAProxy charm. Let's update the production.yaml file for this final bit of configuration overriding.

applications:
  grafana:
    options:
      admin_password: ImChanged
  prometheus:
    options:
      scrape-jobs: include-file://scrapejobs.yaml
  haproxy:
    options:
      ssl_key: include-base64://ssl.key
      ssl_cert: include-base64://ssl.crt
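If you don't have real certificates handy while testing this, a throwaway self-signed pair is enough to exercise the config (the CN is illustrative):

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ssl.key -out ssl.crt -subj "/CN=grafana.example.com"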

With this in place the next time we deploy we get the config values updated with base64'd values. 

juju deploy ./bundle.yaml --bundle-config=production.yaml
...wait a bit...
juju config haproxy ssl_key
LS0tLS1CRUdJTiBSU0EgUFJJV...

Now we've constructed a sharable model that can be reused while easily following the best practice of not putting our passwords and keys into the model, where they might leak out in some way. These tools provide the best ways of collaborating on the operations of software at scale, and I can't wait to hear how you're using this to build out the next level of your operations best practices.

Hit a question or want to share a story? Tell us about it in IRC, on the mailing list, or just bug me on Twitter @mitechie.

IRC: #juju on Freenode
Mailing list: https://lists.ubuntu.com/mailman/listinfo/juju

Testing the future of Juju with snaps

Juju 2.3 is under heavy development, and one thing we all want when we're working on the next big release of a software product is feedback from users. Are you solving the problems your users have? Are there bugs in the corner cases that a user can find before the release? Are the performance improvements you made working for everyone like you expect? The more folks that test the software before it's out, the better off your software will be!

With the recent calls for testing out the Cross Model Relations and Storage Improvements coming in Juju 2.3, I think it'd be good to point out how we can leverage the power of channels in snaps to test out the upcoming features in Juju.

To get Juju via snaps you can search the snap store and install it like so:

$ snap find juju
$ sudo snap install --classic juju

This then drops the Juju binary in the /snap/bin directory. 

$ /snap/bin/juju --version
2.2.2-zesty-amd64

That's great, we've got the latest stable version of Juju.

Let's try to use the new storage flag on the deploy command that Andrew points out in his blog post.

$ /snap/bin/juju deploy --attach-storage
ERROR flag provided but not defined: --attach-storage

Bummer! That isn't in the stable release of Juju yet. Note that it calls out the flag as not being defined. Let's see if we can get access to a more bleeding edge Juju.

$ snap info juju
name:      juju
summary:   "juju client"
publisher: canonical
contact:   http://jujucharms.com
description: |
  Through the use of charms, juju provides you with shareable, re-usable, and
  repeatable expressions of devops best practices.
commands:
  - juju
tracking:    stable
installed:   2.2.2 (2142) 25MB classic
refreshed:   2017-07-13 16:20:52 -0400 EDT
channels:                                      
  stable:    2.2.2                      (2142) 25MB classic
  candidate: 2.2.2                      (2142) 25MB classic
  beta:      2.2.3+2.2-9909aa4          (2180) 43MB classic
  edge:      2.3-alpha1+develop-1f3f66e (2187) 43MB classic

There we can see that the edge channel has an upcoming 2.3-alpha release. Let's switch to it and test out what's coming in Juju 2.3.

$ sudo snap refresh --edge juju
juju (edge) 2.3-alpha1+develop-1f3f66e from 'canonical' refreshed

$ /snap/bin/juju --version
2.3-alpha1-zesty-amd64

Now let's check out that command Andrew was talking about with the storage feature in Juju 2.3. 

$ /snap/bin/juju deploy --attach-storage
ERROR flag needs an argument: --attach-storage

There we go, now we've got access to the upcoming storage features in Juju 2.3 and we can provide great feedback to the dev team. 

After we're done testing and providing that feedback we can easily switch back to using the stable release for our normal work. 

$ sudo snap refresh --stable juju
juju 2.2.2 from 'canonical' refreshed

Give it a try: check out the latest in the upcoming 2.3 work, file bugs, send feedback, and be ready to leverage the great work that much faster.

Call for testing: Shared services with Juju

Juju has long provided the model as the best way to describe, in a cross-cloud and repeatable way, your infrastructure. Oftentimes your infrastructure includes shared resources that sit outside of the different models being operated. Examples might be a shared object storage service providing space for everyone to back up important data, or perhaps a shared Nagios resource providing the single pane of glass that operators need to make sure that all is well in the data center.

Juju 2.2 provides a new feature behind a feature flag that we’d like to ask folks to test. It’s called Cross Model Relations and builds upon the great unique feature of Juju known as relations. Relations allow components of your architecture to self-coordinate, passing the information required to operate. This could be just the IP addresses of each other, so that config files can be written such that a front end application can speak to the back end service correctly. It could also be as complicated as passing actual payloads of data back and forth.

Cross Model Relations allows these relations to take place beyond the boundary of the current model. The idea is that I might have a centrally operated service that is made available to other models. Let’s walk through an example of this by providing a centrally operated MySQL service to other folks in the company. As the MySQL expert in our hypothetical company I’m going to create a model that has a scaled out, monitored, and properly sized MySQL deployment. 

First, we need to enable the CMR (Cross Model Relations) feature flag. To use a feature flag in Juju we export an environment variable JUJU_DEV_FEATURE_FLAGS.

$ export JUJU_DEV_FEATURE_FLAGS=cross-model

Next we need to bootstrap the controller we’re going to test this out on. I’m going to use AWS for our company today.

$ juju bootstrap aws crossing-models

Once that’s done let’s setup our production grade MySQL service.

$ juju add-model prod-db
$ juju deploy mysql --constraints "mem=16G root-disk=1T"
$ juju deploy nrpe......and more to make this a scale out mysql model

Now that we’ve got a properly scaled MySQL service going, let’s look at offering that database to other models using a new Juju command, juju offer.

$ juju offer mysql:db mysqlservice
Application "mysql" endpoints [db] available at "admin/prod-db.mysqlservice"

We’ve offered the db endpoint that the MySQL application provides to other models. The only bit of our entire prod-db model that’s exposed to other folks is the endpoint we’ve selected to provide. You might provide a proxy or load balancer endpoint to other models in the case of a web application, or you might provide both a db and a Nagios web endpoint out to other models if you want them to be able to query the current status of your monitored MySQL instance. There’s nothing preventing multiple endpoints from one or more applications from being offered.
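For example, if our prod-db model also ran Nagios, we could offer its web endpoint alongside the database (the application and endpoint names here are hypothetical):

$ juju offer nagios:website nagios-dash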

Also note that there’s a URL generated to reference this endpoint. We can ask Juju to tell us about offers that are available for use. 

$ juju find-endpoints
URL                         Access  Interfaces
admin/prod-db.mysqlservice  admin   mysql:db

Now that we’ve got a database, let’s find some uses for it. We’ll set up a blog for the engineering team using Wordpress, which leverages a MySQL db back end. Let’s set up the blog model and give them a user account for managing it.

$ juju add-model engineering-blog
$ juju add-user engineering-folks
$ juju grant engineering-folks write engineering-blog

Now they’ve got their own model for managing their blog. If they’d like, they can set up caching, load balancing, etc. However, we’ll let them know to use our database, where we’ll manage db backups, scaling, and monitoring.

$ juju deploy wordpress
$ juju expose wordpress
$ juju relate wordpress:db admin/prod-db.mysqlservice

This now sets up some interesting things in the status output. 

$ juju status
Model              Controller       Cloud/Region   Version  SLA
engineering-blog   crossing-models  aws/us-east-1  2.2.1    unsupported

SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
wordpress           active      1  wordpress  jujucharms    5  ubuntu

Unit          Workload  Agent  Machine  Public address  Ports   Message
wordpress/0*  active    idle   0        54.237.120.126  80/tcp

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  54.237.120.126  i-0cd638e443cb8441b  trusty  us-east-1a  running

Relation      Provides      Consumes   Type
db            mysqlservice  wordpress  regular
loadbalancer  wordpress     wordpress  peer

Notice the new section above App called SAAS. What we’ve done is provide a SAAS-like offering of a MySQL service to users. The end users can see they’re leveraging the offered service. On top of that, the relation is noted down in the Relation section. With that, our blog is up and running.

We can repeat the same process for a team wiki using Mediawiki, which will also use a MySQL database backend. While setting it up, notice how the Mediawiki unit complains that a database is required in the first status output. Once we add the relation to the offered service it moves to ready status.

$ juju add-model wiki
$ juju deploy mediawiki
$ juju status
...
Unit          Workload  Agent  Machine  Public address  Ports  Message
mediawiki/0*  blocked   idle   0        54.160.86.216          Database required

$ juju relate mediawiki:db admin/prod-db.mysqlservice
$ juju status
...
SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
mediawiki  1.19.14  active      1  mediawiki  jujucharms    9  ubuntu
...

Relation  Provides   Consumes      Type
db        mediawiki  mysqlservice  regular

We can prove things are working by actually checking out the databases in our MySQL instance. Let’s just go peek and see they’re real. 

$ juju switch prod-db
$ juju ssh mysql/0
mysql> show databases;
+-----------------------------------------+
| Database                                |
+-----------------------------------------+
| information_schema                      |
| mysql                                   |
| performance_schema                      |
| remote-05bd1dca1bf54e7889b485a7b29c4dcd |
| remote-45dd0a769feb4ebb8d841adf359206c8 |
| sys                                     |
+-----------------------------------------+
6 rows in set (0.00 sec)

There we go: two remote-xxxx databases, one for each of the models using our shared service. This is going to make operating our infrastructure at scale so much better!

Please go out and test this. Let us know what use cases you find for it and what the next steps should be as we prepare this feature for general use. You can find us in the #juju irc channel on Freenode, the Juju mailing list, and you can find me at @mitechie

Current limitations

As this is a new feature, it’s limited to working within a single Juju Controller. It’s also a work in progress, so please watch out for bugs as they get fixed, expect UX that might get tweaked as we get feedback, and note that upgrading a controller with CMR to a newer version of Juju is not currently supported.

 

Upgrading Juju using model migrations

Since Juju 2.0 there's been a new feature, model migrations, intended to help provide a bulletproof upgrade process. The operator stays in control throughout and has numerous sanity checks to help provide confidence along the upgrade path. Model migrations allow an operator to bring up a new controller on a new version of Juju and to then migrate models from an older controller one at a time. These migrations go through the process of putting agents into a quiet state and queueing any changes that want to take place. The state is then dumped out into a standard format and shipped to the new controller. The new controller then loads the state and verifies it matches by checking it against the state from the older controller. Finally, the agents on each managed machine are checked to make sure they can communicate with the new controller and that any state matches expectations before those agents update themselves to report to the new controller for duty.

Once this is all complete the handoff is finished and the old controller can be taken down once the last model is migrated away. In order to show how this works I've got a controller running Juju 2.1.3 and we're going to upgrade my models running on that controller by migrating them to a brand new Juju 2.2 controller. 

One thing to remember is that Juju controllers are the kings of state. Juju is an event-based system, and on each managed machine or cloud instance an agent runs that communicates with the controller. Events from those agents are processed, and the controller updates the state of applications, triggers future events, or just takes note of messages in the system. When we talk about migrating a model, we're only moving where the state system lives. None of the workloads are moved. All instances and machines stay exactly where they are and there's no impact on the workloads themselves.

$ juju models -c juju2-1
Controller: juju2-1

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
gitlab      aws/us-east-1  available         2      2  admin   49 seconds ago
k8s*        aws/us-east-1  available         3      2  admin   39 seconds ago

This is our controller running Juju 2.1.3, and it has on it a pair of models running important workloads. One is a model running a Kubernetes workload and the other is running Gitlab, hosting our team's source code. Let's upgrade to the new Juju 2.2 release. The first thing we need to do is bootstrap a new controller to move the models to.

Gitlab running in the gitlab model to host my team's source code.

First we upgrade our local Juju client to Juju 2.2 by getting the updated snap from the stable channel. 

$ sudo snap refresh juju --classic

Now we can bootstrap a new controller making sure to match up the cloud and region of the models we want to migrate. They were in AWS in the us-east-1 region so we'll need to make sure to bootstrap there.

$ juju bootstrap aws/us-east-1 juju2-2

Looking at this controller we have the two out-of-the-box models a new controller always has.

$ juju models
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   2 seconds ago
default*    aws/us-east-1  available         0      -  admin   8 seconds ago

To migrate models to this new controller we need to be on the older controller. 

$ juju switch juju2-1

With that done we can now ask Juju to migrate the gitlab model to our new controller. 

$ juju migrate gitlab juju2-2
Migration started with ID "44d8626e-a829-48f0-85a8-bd5c8b7997bb:0"

The migration kicks off, and the ID is provided as a way of tracking among potentially several migrations going on. If a migration were to fail for any reason and resume its previous state, we could make corrections and try again. Watching the output of juju status while the migration processes is interesting. Once done you'll find that status errors because the model is no longer there.

$ juju list-models -c juju2-1
Controller: juju2-1
    
Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
k8s         aws/us-east-1  available         3      2  admin   54 minutes ago

Here we can see our model is gone! Where did it go?

$ juju list-models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   45 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago

There we go, it's over on the new juju2-2 controller. The controller managing state is on Juju 2.2, but now it's time for us to update the agents running on each of the machines/instances to the new version as well.

$ juju upgrade-juju -m juju2-2:gitlab
started upgrade to 2.2

$ juju status | head -n 2
Model    Controller  Cloud/Region   Version    SLA
default  juju2-2     aws/us-east-1  2.2        unsupported

Our model is now all running on Juju 2.2 and we can use any of the new features that are useful to us against this model and the applications in it. With that upgraded let's go ahead and finish up by migrating the Kubernetes model. The process is exactly the same as the gitlab model. 

$ juju migrate k8s juju2-2
Migration started with ID "2e090212-7b08-494a-859f-96639928fb02:0"

...one, two, skip a few...

$ juju models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   47 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago
k8s         aws/us-east-1  available         3      2  admin   57 minutes ago

$ juju upgrade-juju -m juju2-2:k8s
started upgrade to 2.2

The controller juju2-1 is no longer required since it's not in control of any models. There's no state for it to track and manage any longer. 

$ juju destroy-controller juju2-1

Give model migrations a try and keep your Juju up to date. If you hit any issues, the best place to look is the juju debug-log output from the controller model.

$ juju switch controller
$ juju debug-log

Make sure to reach out and let us know if it works well or if you hit a bug in the system. You can also use model migrations to move models among the same version of Juju to help balance load or for maintenance purposes. You can find the team in #juju on Freenode or send a message to the mailing list. 

Designing for long term operations, upgrades, and rebalancing

When you're building a tool for a user there's a huge amount of design, decision making, and focus put on getting the user started. There's good reason for this. Users have an ocean full of tools at their disposal, and if you're building something for others to use you need to win the first five minute test.

You've got about a five minute window for a user to make useful headway in understanding your tool and visualizing how it will aid them in their work. Often this manifests in tools that demo well, but that then have to be left behind when the real work hits. I like to use text editors as an example of this. So many users start out with notepad, nano, or some other really light editor. In time, they learn more and the learning curve of something like VIM, Eclipse, or Emacs really pays off. There's a big gap in the folks that make that leap though. If you want to get the most folks in the door, your tool needs to have that hit-the-ground-running feel to it.

When you're designing a tool to help users deploy and run software there's a lot of focus on the install process. Nearly every hosting provider has worked with some sort of "one button install" tool. They're great because it gets users started quickly in that five minute window. Over time though, users find that those one click tools end up being very shallow. They need to add users, create new databases, back up data, restart daemons when appropriate, the list goes on and on. Two operational tasks which are particularly interesting are upgrades and rebalancing.

Juju is an operations tool that can track the state of many different models (the state of operated applications), potentially run by many different users. These models evolve over time and run production workloads for years. We know of many models that are nearly a year and a half old. In that same time Juju has had five 1.2x minor versions and three 2.x minor versions. That means you'd want to keep up with improvements in performance, features, and security by upgrading nearly every other month. Aiding in this, the Juju 2.1 release includes a new feature, known as model migrations, specifically to help operators manage their infrastructure over the long term.

The general idea is that the largest danger lies in complex upgrade steps, such as database migrations, and in making sure that everything running is able to communicate on the new version of the software. In Juju 2.1, the Juju infrastructure that manages the state of the models as well as API connections from clients (known as the "controller") uses model migrations to allow an operator to bring up a new controller and migrate the model state over to a new version. In that process the data is shipped over, run through any migrations that need to occur, and sanity checked between old and new controller, and the system is put together in such a way that if anything fails to check out it can roll back and use the previously running controller. That's some reassurance to lean on when you're doing an important upgrade. Since one controller operates many models, the operator is in control of which models get migrated at what time, allowing a very controlled rollout to the new software in a way that permits safely checking that all remains in the green as the new version of the Juju software is adopted.

Another big use case for model migrations is balancing out the load on Juju controllers over time. As an organization we all grow and change in our needs, and over time it's important to be able to move to new hardware, shift services that generate heavy load to dedicated hardware, etc. Juju is tested to perform at thousands of managed machines, but there are dependencies, such as the size of the controller machines that track state, and over time a normal part of a growing organization is to put into play machines with newer CPUs, more memory, or just flat out beefier hardware with more cores.

I'd love to hear about the tools you use, which ones have fallen short in aiding you with your long term operational needs, and what other types of long term operations you want your tools to assist with. Hit me up on twitter @mitechie.

In a follow up post I'll walk through exactly how to perform a migration to the new Juju 2.2 release using the built in model migration feature.

Open Source software and operational usability

A friend of mine linked me to Yaron Haviv's article "Did Amazon Just Kill Open Source" and I can't help but want to shout a reply:

NO!

There is something to the premise though, and it's something I keep trying to push, as it's directly related to the work I do in my day job at Canonical.

Clouds have been shifting from IaaS to SaaS as sure as can be. They've gone from just getting you quick access to hardware on a "pay for what you use" basis to providing full self-service access to applications you used to have to run on that hardware. They've been doing this by taking Open Source software and wrapping it with their own API layers, charging you for their operating of your essential services.

When I look at RDS I see lovely images for PostgreSQL, MySQL, and MariaDB. I can find Elasticsearch, Hadoop, and more. It's a good carrot: use our APIs and you don't have to operate that software on the EC2 platform any more. You don't have to worry about software upgrades, backups, or scale-out operational concerns. Amazon isn't the only cloud doing this either. Each cloud is finding the services it can provide directly to the developers who build products on it.

They've taken Open Source software and fixed delivery, making it easy for most folks to consume. That directly ties into the trend we have been working on at Canonical. As software has moved more and more to Open Source and the cost of software in the average IT budget drops, the cost of finding folks who can operate it at production scale has gotten much more expensive. How many folks can really say they can run Hadoop, Kubernetes, and Elasticsearch at professional production levels? How hard is it to find them and how expensive is it to retain them?

We need to focus on how we can provide this same service, but straight from the Open Source communities. If you want users of your software to be users of YOUR software, and not some wrapped API service, then you need to take those operational concerns into account. We can do more than Open Source the code that is running; we can work together as a community to Open Source the best practices for operating that software over its full life cycle. Too often, projects stop short at how to install it. The user has to worry about it long after it's installed.

I have hope that tools like Juju and Kubernetes can provide a way for the communities around the software we use and love to contribute, and to avoid the lock-in of some vendored solution of the Open Source projects we participate in today.

Relations and the benefits of coding to an interface

Interfaces are an awesome idea. It’s a tale that all programmers have come across: if you program to a protocol then everyone gets to say “Hey! I can speak that” and join in the fun. TCP/IP, HTTP, the API I created for Bookie. What’s interesting is that I don’t feel the idea has been completely bought into on the operations side of the world. There are a few examples I can think of off the top of my head: SNMP, RRD, and I suppose Prometheus is finding some popularity lately. It’s one of the more powerful ideas built into Juju, and I was hit over the head with it when doing my latest tinkering with Gitlab.

In that blog post I used a new charm, done by a member of the community, that enables you to proxy anything that speaks the http interface and secure it with Let's Encrypt. At first I went "Cool, this means I can easily set this Gitlab up as https://code.bookie.io and be awesome." Now that was true, but then I started thinking bigger. Wait a minute, we've got a ton of applications in the Juju Charm Store that all speak the http interface. So I went to work. I wanted to set up everything I might want for my open source project. A handful of juju deploy commands followed by a handful of juju relate commands (sketched after the list below) and my org is up and running. I set up the following project stack on GCE with JAAS:

  • code hosting (Gitlab) - https://code.bookie.io
  • wiki (Mediawiki) - https://wiki.bookie.io
  • continuous testing (Jenkins) - https://ci.bookie.io
  • blog (Ghost) - https://blog.bookie.io
  • mailing lists (Mailman) - https://lists.bookie.io
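As a sketch of the shape of those commands, here's the wiki piece placed behind a proxy. I'm using the HAProxy charm's website/reverseproxy endpoints for illustration; the actual stack used the ssl-termination-proxy charm:

juju deploy mediawiki wiki
juju deploy haproxy
juju relate wiki:website haproxy:reverseproxy
juju expose haproxy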

What impressed me here is that with one simple chunk of work, the Tengu team enabled so many other applications to benefit. I suppose I'd seen this before with things like the HAProxy charm, which enables any of these applications to be placed and scaled behind HAProxy, but this one feels a bit easier to use and more user facing, as it provides that https endpoint with clean DNS names.

This is the type of idea that I feel makes Juju much more interesting than the other tools folks tend to compare it to. There are a lot of people writing Puppet modules, Chef cookbooks, or adding "one click deploy" features, but I don't see the idea of standing on each other's work built in the way it is in Juju. I've worked in Open Source a long time now and there's nothing better than finding folks that are smarter than you and leveraging their brains in your own work. You can do this elsewhere, but I find that Juju's design encourages things to be modular, with solutions stood up as a series of parts, each doing its thing well in a very portable way.

The http interface is really common, but you can imagine others that could be as impactful. I'd love to brainstorm with folks on the biggest bang-for-the-buck ideas out there that enable sharing of operational best practices across many software applications. I can think of a handful in monitoring, logging, and metrics. Let me know what you can think of @mitechie.

Giving Gitlab an afternoon spin

I've been meaning to check out Gitlab lately. I hear a lot about folks replacing their internal systems with it. It has an awesome checkbox of features for managing your internal code, with public and private repos, and it enables you to build the best practices out there around code reviews, automated testing, and gated landing. I'm lucky and work in Open Source for my work, but until recently all the code I worked on sat on some internal Git or SVN system. A better application for performing that task is exciting to a lot of users out there. Of course, the best way for me to test it out was to grab the Gitlab Juju charm and toss it up on GCE with JAAS.

While looking at that I noticed that a member of the community had actually created a bundle of two applications that makes the Gitlab experience even better. It includes a new charm from another community member that allows using Let's Encrypt. Cool! This means that I not only get a Gitlab instance to test with, but I can use it with others with SSL encryption so that the login credentials and such aren't passed around in clear text.

Let's get testing by adding a new model in JAAS and deploying this bundle into GCE.

juju add-model gitlab google
juju deploy cs:~spiculecharms/bundle/gitlab-ssl
juju show-machine 0

The show-machine command here is useful because I need to go add a new DNS entry for Let's Encrypt to use. Since I'm testing out a scenario of hosting all of my Open Source project's code here, I added an A record for code.bookie.io pointing to the public IP address of the SSL termination application. With that DNS entry set up, I need to tell the termination application what its URL is meant to be.

juju config ssl-termination-proxy fqdn=code.bookie.io

A few minutes later I had a working Gitlab deploy to play with. I imported all of my old Bookie code projects from Github and tested out cloning repos and such. One issue I did hit was that since my Gitlab application was proxied, I needed to tell Gitlab about the URL it's actually under for end users. To do that you need to edit a config file on the Gitlab application.

juju ssh gitlab-server/0
sudo vim /etc/gitlab/gitlab.rb

# Change this external_url config
external_url 'http://code.bookie.io:80'

sudo gitlab-ctl reconfigure

I can now move forward with testing out the Gitlab-supplied tools for testing and building releases. If it works out I can then deploy this to my private MAAS or internal OpenStack with the same ease, because Juju provides such a wide array of options.

I want to thank the great folks at Spicule that put together the Gitlab charm and the folks at Tengu for the ssl-termination-proxy charm. I find that second one really interesting and will cover that in a follow up blog post. 

Three reasons you need a quick VPN in your pocket.

Recent news that the government has repealed regulations preventing the sale of customer browsing habits has some folks thinking about their internet use and privacy a bit more than usual. Most of us assume that the things we do in our homes on our own devices are pretty safe from being shared with others. This has caused a rash of articles about running your own VPN. As these kept crossing my RSS feeds I got to thinking that this is the perfect use case for Juju and JAAS.

Good news! The Tengu team has made it really easy to use Juju to set up your own VPN server. It's nearly as fast as getting an instance from a cloud provider. Sitting here at the coffee shop I timed it, and it took me six minutes, including adding it to my client and hitting connect.


The 6 minute VPN setup

How do we do this? We use JAAS, since it's a great way to deploy something into any public cloud, and especially into different regions. I keep my personal VPN in the AWS us-east-2 region since it's the closest physically to where I am in Southern Michigan.

juju add-model myvpn aws/us-east-2
juju deploy openvpn
juju expose openvpn
juju config openvpn clients="rick"
juju scp openvpn/0:~/rick.ovpn myvpn.ovpn

This deploys the OpenVPN charm and sets up a config file for "rick" that I can use to connect with a VPN client. On my Mac I use Viscosity, and on Ubuntu I use the Network Manager VPN plugin. Both of these clients can load the .ovpn file that you download from the deployed server.
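
If you're on Ubuntu and prefer the command line, NetworkManager can import the profile directly. A quick sketch, assuming the openvpn plugin package for your release (the imported connection takes its name from the file):

sudo apt-get install network-manager-openvpn
nmcli connection import type openvpn file myvpn.ovpn
nmcli connection up myvpn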

Once connected you can see all of your traffic routed through the VPN securely. 

% ping ubuntu.com
PING ubuntu.com (91.189.94.40): 56 data bytes
64 bytes from 91.189.94.40: icmp_seq=0 ttl=47 time=193.328 ms
64 bytes from 91.189.94.40: icmp_seq=1 ttl=47 time=178.245 ms
64 bytes from 91.189.94.40: icmp_seq=2 ttl=47 time=140.312 ms

% traceroute ubuntu.com
traceroute to ubuntu.com (91.189.94.40), 64 hops max, 52 byte packets
 1  ip-10-200-200-1.us-east-2.compute.internal (10.200.200.1)  48.860 ms  47.036 ms  58.141 ms
 2  ec2-52-15-0-2.us-east-2.compute.amazonaws.com (52.15.0.2)  103.381 ms  64.848 ms
    ec2-52-15-0-6.us-east-2.compute.amazonaws.com (52.15.0.6)  69.651 ms

What's even better is that you can shorten this by automating the deploy, expose, and config steps with a Juju bundle. I created one that sets up two clients out of the box: one for myself and one for a "guest". If I ever want to add additional clients I can just update the config in the charm.

A few lines of yaml and a "charm push . cs:~rharding/rickvpn" and I've got a one line deploy of a VPN at my fingertips. If I deploy before I order my coffee the VPN is up and ready for use by the time it's done. 
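
Those few lines of yaml look roughly like this. A minimal sketch, assuming the charm's clients option takes a comma-separated list (check the charm's config for the exact format):

services:
  openvpn:
    charm: cs:openvpn
    num_units: 1
    expose: true
    options:
      clients: "rick,guest"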

Reason #2 - blocked ports on the shared wifi

I promised some other reasons for a VPN, and blocked ports on shared wifi is #2. The Starbucks I'm sitting in has its wifi configured to block port 22, which can be a pain in the rear when you're attempting to work with a lot of cloud instances over SSH. A quick VPN and suddenly the world of SSH is opened back up. Yes, some folks will tell me to change my SSH ports, but when you're working on servers across different clouds it's much more of a pain to change SSH everywhere than to just launch this VPN.

Reason #3 - testing end user experience

I've also found myself working with others across the world. What's always fun is when they're having issues I just can't replicate. We have large numbers of our team in Europe and down in New Zealand and Australia. As you can imagine, their load times are a bit different than my midwest connection to things coming out of US-based networks. Given the breadth of cloud regions these days, it's actually not as hard as it seems to replicate the experience remote users are seeing. I can easily throw up a Europe-based VPN and force myself to test things through it. Suddenly I can see that the timeout we have doesn't work well for users whose bytes go through undersea cables.

I'm sure you can think of some other uses where a quick VPN would come in handy. Let me know what your favorite uses for the OpenVPN charm are. Reach out on Twitter @mitechie. And thank you, Tengu Team, for this great OpenVPN charm!

Operations in Production


As we set up our important infrastructure we add monitoring and alerting and keep a close eye on things. As the services we provide grow and need to expand, or failures in hardware attempt to wreak havoc, we're ready because of the due diligence that's gone into monitoring the infrastructure and applications deployed. To do this monitoring, we often bake it into our deployment and configuration management tooling. One thing I often see is that folks forget to monitor the tools that coordinate all of that deployment and configuration management. It's a bit of a case of "who watches the watcher?"

Working at Canonical, three years in. a.k.a. wtf just happened?

A couple of people have reached out to me via LinkedIn and reminded me that my three year work anniversary happened last Friday. Three years since I left my job at a local place to go work for Canonical, where I got the chance to be paid to work on open source software and better my Python skills with the team working on Launchpad. My wife wasn't quite sure. "You've only been at your job a year and a half, and your last one was only two years. What makes this different?" What's amazing, looking back, is just how *right* the decision turned out to be. I was nervous at the time. I really wasn't Launchpad's biggest fan. However, the team I interviewed with held this promise of making me a better developer. They were doing code reviews of every branch that went up to land. They had automated testing, and they firmly believed in unit and functional tests of the code. The product didn't excite me, but the environment, working with smart developers from across the globe, was exactly what I felt I needed to move forward with my career, my craft.


I joined my team on Launchpad in a squad of four other developers. It was funny. When I joined I felt so lost. Launchpad is an amazing and huge bit of software, and I knew I was in over my head. I talked with my manager at the time, Deryck, and he told me "Don't worry, it'll take you about a year to get really productive working on Launchpad." A year! Surely you jest, and if you're not jesting...wtf did I just get myself into?

It was a long road and over time I learned how to take a code review (a really hard skill for many of us), how to do one, and how to talk with other smart and opinionated developers. I learned the value of the daily standup, how to manage work across a kanban board. I learned to really learn from others. Up until this point I'd always been the big fish in a small pond and suddenly I was the minnow hiding in the shallows. Forget books on how to code, just look at the diff in the code review you're reading right now. Learn!

My boss was right, it was nearly ten months before I really felt like I could be asked to do most things in Launchpad and get them done in an efficient way. Soon our team was moved on from Launchpad to other projects. It was actually pretty great. On the one hand, "Hey! I just got the hang of this thing" but, on the other hand, we were moving on to new things. Development life here has never been one of sitting still. We sit down and work on the Ubuntu cycle of six month plans, and it's funny because even that is such a long time. Do you really know what you'll be doing six months from now?


Since that time in Launchpad I've gotten to work on several different projects, and I ended up switching teams to work on the Juju GUI. I didn't really know a lot about this Juju thing, but the GUI was a fascinating project. It's a really large scale JavaScript application. This is no "toss some jQuery on a web page" thing here.

I also moved to work under a new manager, Gary. He was my second manager since starting at Canonical, and I was amazed at my luck. Here I've had two great mentors that made huge strides in teaching me how to work with other developers, how to do the fun stuff and the mundane, and how to take pride in the accomplishments of the team. I sit down at my computer every day and I've got the brain power of amazing people at my disposal over irc, Google Hangouts, email, and more. It's amazing to think that at the sprints we do, I'm pretty much never the smartest person in the room. However, that's what's so great. It's never boring, and when there's a problem the key is that we put our joint brilliant minds to it. In every hard problem we've faced I've never found that a single person had the one true solution. What we come up with together is always better than what any of us had apart.

When Gary left there was a void at team lead, and it was something I was interested in. I really can't say enough awesome things about the team of folks I work with. I wanted to keep us all together and I felt like it would be great for us to try to keep things going. It was kind of a "well I'll just try not to $#@$@# it up" situation. That was more than nine months ago now. Gary and Deryck taught me so much, and I still have to bite my tongue and ask myself "What would Gary do" at times. I've kept some things the same, but I've also brought my own flavor into the team a bit, at least I like to think so. These days my Github profile doesn't show me landing a branch a day, but I take great pride in the progress of the team as a whole each and every week.

The team I run now is as awesome a group of people as I could ever hope to work for. I do mean that: I work for my team. It's never the other way around, and that's one lesson I definitely picked up from my previous leads. The projects we're working on are exciting and new and really important to Canonical. I get to sit in on discussions and planning meetings with Canonical super genius veterans like Kapil, Gustavo, and occasionally Mark Shuttleworth himself.

Looking back, I've spent the last three years becoming a better developer, getting on-the-job training in leading a team of brilliant people, and taking a crash course in thinking about the project not just as the bugs or features for the week, but as it needs to exist in three to six months. I've spent three years bouncing between "what have I gotten myself into, this is beyond my abilities" and "I've got this. You can't find someone else to do this better". I always tell people that if you're not swimming as hard as you can to keep up, find another job. I feel like three years ago I did that and I've been swimming ever since.


Three years is a long time in a career these days. It's been a wild ride and I can't thank the folks that let me in the door, taught me, and gave me the power to do great things with my work enough. I've worked my butt off in Budapest, Copenhagen, Cape Town, Brussels, North Carolina, London, Vegas, and the bay area a few times. Will I be here three years from now? Who knows, but I know I've got an awesome team to work with on Monday and an awesome product to keep building. I'm going to really enjoy doing work that's challenging and fulfilling every step of the way.


Juju Quickstart and the power of bundles

The Juju UI team has been hard at work making it even easier for you to get started with Juju. We've got a new tool for everyone that is appropriately named Juju Quickstart and when you combine it with the power of Juju bundles you're in for something special.

Quickstart is a Juju plugin that aims to help you get up and running with Juju faster than any set of commands you can copy and paste. First, to use Quickstart you need to install it. If you're on the upcoming Ubuntu Trusty release it's already there for you. If you're on an older version of Ubuntu you need to add the Juju stable ppa:

sudo add-apt-repository ppa:juju/stable
sudo apt-get update

Installing Quickstart is then just:

sudo apt-get install juju-quickstart

Once you've got Quickstart installed you are ready to use it to deploy Juju environments. Just run it with `juju-quickstart`. Quickstart will then open a window to help walk you through setting up your first cloud environment using Juju.

Quickstart can help you configure and set up clouds using LXC (for local environments), OpenStack (which is used for HP Cloud), Windows Azure, and Amazon EC2. It knows what configuration data is required for each cloud provider and provides hints on where to find the information you’ll need.

Once you've configured your cloud provider, Quickstart will bootstrap a Juju environment on it for you. This takes a while on live clouds as it's bringing up instances.

Quickstart does a couple of things to make the environment nicer than your typical bootstrap. First, it will automatically install the Juju GUI for you. It does this on the first machine brought up in the environment so that it's co-located, which means it comes up much faster and does not incur the cost of a separate machine.  Once the GUI is up and running, Quickstart will automatically launch your browser and log you into the GUI. This saves you from having to copy and paste your admin secret to log in.

If you would like to set up additional environments you can re-launch Quickstart at any time. Use `juju-quickstart -i` to get back to the guided setup.

Once the environment is up, Quickstart still helps you out by providing a shortcut to get back to the running Juju GUI. It will auto launch your browser, find the right IP address of the GUI, and auto log you in. Come back the next day and Quickstart is still the fastest way to get back into your environment.

Finally, Quickstart works great with the new Juju charm bundles feature. A bundle is a set of services with a specific configuration and their corresponding relations that can be deployed together via a single step. Instead of deploying a single service, they can be used to deploy an entire workload, with working relations and configuration. The use of bundles allows for easy repeatability and for sharing of complex, multi-service deployments. Quickstart can accept a bundle and will deploy that bundle for you. If the environment is not bootstrapped it will bring up the environment, install the GUI, and then deploy the bundle.

For instance, here is the one command needed to deploy a bundle that we’ve created and shared:

juju-quickstart bundle:~jorge/mongodb-cluster/1/mongodb-cluster

If the environment is already bootstrapped and running then Quickstart will just deploy the bundle. The two features together work great for testing repeatable deployments. What's great is that the power of Juju means you can test this deployment on multiple clouds effortlessly.  For instance you can design and configure your bundle locally via LXC and, when satisfied, deploy it to a real environment, simply by changing the environment command-line option when launching Quickstart.
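
Assuming you've defined environments named local and ec2 in your environments.yaml (hypothetical names here), that's just the -e flag:

juju-quickstart -e local bundle:~jorge/mongodb-cluster/1/mongodb-cluster
juju-quickstart -e ec2 bundle:~jorge/mongodb-cluster/1/mongodb-cluster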

Try out Quickstart and bundles and let us know what you think. Feel free to hop into our irc channel #juju on Freenode if you've got any questions. We're happy to help.

Make sure to check out Mat's great YouTube video walk through as well over on the Juju GUI blog.

PyCon 2012: What a ride!

Phew, tiring trip to PyCon this year. This was my second year after hitting up my first last year. The conference definitely felt larger than last year as they crossed 2,200 attendees. It's unbelievable to see how large the Python community has gotten. I can't stress enough what a great job the people that put this together did.

Last year I hardly knew anyone. This year, however, I got to put faces to people I've interacted with over the last year, welcome back those I met last year, and get some face to face time with new co-workers from Canonical. The social aspect was a larger chunk of my time this year for sure.

Side note, I listen to The Changelog podcast from time to time, and I love their question on who you'd love to pair up/hack with as a programming hero type question. I got to meet and greet mine at this PyCon by meeting up with Mike Bayer. He's behind some great tools like SqlAlchemy and Mako. What I love is that, not only does he rock the code part, but the community part as well. I'm always amazed to see the time he puts into his responses to questions and support avenues. Highlight of my PyCon for sure.

I'll post a separate blog post on my sprint notes. I feel that if you're going to go, you might as well stay for sprints. I get as much out of that as the conference itself. I think I made some good progress on things for Bookie this year. The big thing is that an invite system is in place, so if you'd like an account on Bmark.us let me know and I'll toss an invite your way.

Notes

  • Introduction to Metaclasses
    • Basic but reminded me how the bits worked and had some good examples. I like this because I often write 'the code I want to be writing' and then write my modules/etc to fit and metaclasses help with this sometimes.
  • Fast Test, Slow Test
    • Just a reminder that fast tests are true unit tests and run during dev which helps make things easier/faster as you go vs the whole 'mad code' then wait for feedback on how wrong you are.
  • Practical Machine Learning in Python
    • mloss.org - check out for lots of notes/etc on ML in OSS
    • ml-class.org - teach me some ML please
    • sluggerml - app he built as a ML demo
    • scikit-learn : lots of potential, very active right now
  • Introduction to PDB
    • whoa...where have you been all my life 'until' command?
    • use 'where' more to move up stack vs adding more debug lines
  • Flexing SQLAlchemy's Relational Power
  • Hand Coded Applications with SQLAlchemy
    • <3 SqlAlchemy. Some really good examples of writing less code by automating the boilerplate with conventions.
  • Web Server Bottlenecks And Performance Tuning
    • lesson: if you think it's apache's fault think again. You're probably doing it wrong.
  • Advanced Celery
    • check out cyme https://github.com/celery/cyme, possible way to more easily run/distribute celery work?
    • cool to see implementations of map/reduce using celery
    • chords and groups are good, check them out more
  • Building A Python-Based Search Engine
    • Good talk for an intro into terms and such for fulltext search
  • Lighting talks of note
    • py3 porting docs: http://docs.python.org/howto/pyporting
    • bpython rewind feature is full of win over ipython
    • 'new virtualenv' trying to get into stdlib for py3.3, cool!
    • asyncdynamo cool example of async boto requests for high performance working with AWS api (uses tornado)
    • I WANT the concurrent.futures library...but it's Python 3 :(

Ubuntu Community Appreciation Day: My Loco!

So on Ubuntu Community Appreciation Day I want to toss a big thanks out to the Michigan Loco. It's a great bunch of guys and gals that I talk with online every day and have helped keep me sane, taught me new things, and overall have just made this community thing work for me. If it wasn't for them, I'd not be running Ubuntu and working on Launchpad today. So hats off to everyone in the Loco and here's to all the other great people making this community rock!

An updated email config 2: offlineimap, mutt, and dovecot ftw!

Since joining the Launchpad team my email has been flooded. I've always been pretty careful to keep my email clean and I've been a bit overwhelmed with all the new mailing lists. There are a bunch of people working on things, as you can imagine. So the email never stops. I'm still working on figuring out what I need to know, what I can ignore, and what should be filed away for later.

Another thing I'm finding is that I've got emails in both of my accounts around a single topic. For instance, I have to do some traveling. I've got emails on both my Gmail (personal) and Canonical (work) accounts that I really want to keep together in a single travel bucket.

I currently have offlineimap pull both my work and personal accounts down into a single folder on my machine, ~/.email/. So I've got a ~/.email/work and a ~/.email/personal. I then use mutt to open the root there and work through email. It works pretty well. Since I really wanted a global "travel" folder, I figured I'd just create one. So that works. I end up with a directory structure like:

  • personal
  • travel
  • work

The problem

Of course the issue here is that when offlineimap runs again it sees the emails are no longer in the personal or work accounts and removes them from the server. And the travel folder isn't a part of any server side account, so it's not backed up or synced anywhere. This means Gmail no longer sees things, my phone no longer sees them, and I've got no backups. Oops!

Solution start

To fix that, my new directory structure needs to become an account itself. So I set up dovecot on my colo server. This way I have an imap account that I can do whatever I want with. To feed it, I set up offlineimap on the colo to pull personal and work down just as I had on my laptop. So I still have the raw accounts in ~/.email, while dovecot keeps all of my email in ~/email (not a hidden dir). To wire the accounts in, I symlinked ~/.email/personal/INBOX to ~/email/personal and did the same with the work account (see the sketch below). Now the two accounts are just extra folders in my dovecot setup.
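
The wiring is just a couple of symlinks on the colo, matching the paths described above:

ln -s ~/.email/personal/INBOX ~/email/personal
ln -s ~/.email/work/INBOX ~/email/work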

So there we go: the colo is pulling my email, and I changed my laptop's offlineimap to sync with the new dovecot server. In this way, I've got a single combined email account on my laptop using mutt. I then also set up my phone with an imap client to talk directly to the dovecot server. Sweet, this is getting closer to what I really want.
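
On the laptop side this boils down to a single offlineimap account pointed at the dovecot box. A stripped-down ~/.offlineimaprc sketch, with the hostname and user as placeholders:

[general]
accounts = combined

[Account combined]
localrepository = LocalMail
remoterepository = ColoDovecot

[Repository LocalMail]
type = Maildir
localfolders = ~/.email

[Repository ColoDovecot]
type = IMAP
remotehost = colo.example.com
remoteuser = rick
ssl = yes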

Issues start, who am I

Of course, once this started working I realized I had to find a way to make sure I sent email as the right person. I'd previously just told mutt if I was in the personal account to use that address and if in the work account use that one. Fortunately, we can help make mutt a bit more intelligent about things.

First, we want mutt to check the To/CC headers to determine who the email was addressed to; if it was me, then use that address as the From during replies.

mutt config:

# I have to set these defaults because when you first startup mutt
# it's not running folder hooks. It just starts in a folder
set from="rharding@mitechie.com"
# Reply with the address used in the TO/CC header
set reverse_name=yes
alternates "rick.stuff@canonical.com|deuce868@gmail.com"

This is a start, but it fails when sending new email; mutt still isn't sure who I should be. So I want a way to manually switch the active From address. These macros give me the ability to swap using the keybindings Alt-1 and Alt-2.

mutt config:

macro index,pager \e1 ":set from=rharding@mitechie.com\n:set status_format=\"-%r-rharding@mitechie.com: %f [Msgs:%?M?%M/?%m%?n? New:%n?%?o? Old:%o?%?d? Del:%d?%?F? Flag:%F?%?t? Tag:%t?%?p? Post:%p?%?b? Inc:%b?%?l? %l?]---(%s/%S)-%>-(%P)---\"\n" "Switch to rharding@mitechie.com"
macro index,pager \e2 ":set from=rick.stuff@canonical.com\n:set status_format=\"-%r-rick.stuff@canonical.com: %f [Msgs:%?M?%M/?%m%?n? New:%n?%?o? Old:%o?%?d? Del:%d?%?F? Flag:%F?%?t? Tag:%t?%?p? Post:%p?%?b? Inc:%b?%?l? %l?]---(%s/%S)-%>-(%P)---\"\n" "Switch to rick.stuff@canonical.com"

That's kind of cool, and it shows at the top of my window who I am set to. Hmm, but even that fails if I've started an email and want to switch who I am on the fly. There is a way to change that though, so another macro to the rescue, this time for the compose ui in mutt.

mutt config:

macro compose \e1 "<esc>f ^U Rick Harding <rharding@mitechie.com>\n"
macro compose \e2 "<esc>f ^U Rick Harding <rick.stuff@canonical.com>\n"

There, now even if I'm in the middle of creating an email I can switch who it's sent as. It's not perfect, and I know I'll screw up at some point, but hopefully this is close enough.

Firming up with folder hooks

Finally, if I know the folder I'm in is ONLY for one account or the other, I can use folder hooks to fix that up for me.

mutt config:

folder-hook +personal.* set from="rharding@mitechie.com"
folder-hook +personal.* set signature=$HOME/.mutt/signature-mitechie
folder-hook +personal.* set query_command='"goobook query \'%s\'"'

So there, if I'm in my personal account, set the from, the signature, and change mutt to complete my addresses from goobook instead of the ldap completion I use for work addresses.

Not all roses

There are still a few issues. I lose webmail. After all, mail goes into my Gmail Inbox and then from there into various folders of my dovecot server. Honestly though, I don't think this will be an issue. I tend to use my phone more and more for email management so as long as that works, I can get at things.

I also lose Gmail search for a large portion of my email. Again, it's not killer. On my laptop I've been using notmuch (Xapian backed) for fulltext search and it's been doing a pretty good job for me. However, I can't run that on my phone. So searching for mail on there is going to get harder. Hopefully having a decent folder structure will help though.
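
For reference, the notmuch workflow is just an index pass followed by queries (this assumes notmuch has already been configured to point at the mail root):

notmuch new
notmuch search travel
notmuch search from:canonical.com tag:unread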

I've also noticed that the K-9 mail client is a bit flaky about syncing changes back up. Gmail, mutt, and Thunderbird (which I've also set up) all seem to sync without issue, so I think this is K-9 specific.

That brings up the issue of creating new folders. Offlineimap won't pick up new folders I create from within mutt; it won't push those up as new imap folders for some reason. I have to first create them using Thunderbird, which sets up the folder server side for me. Then everything works ok. It's a PITA, but hopefully I can find a better way to do this. Maybe even a Python script to hook into a mutt macro or something.

Wrap Up

So there we are. Next up is to set up imapfilter to help me pre-filter the email as it comes in. Now that all email is in one place, that should be nice and easy. I can run it on my colo server and it'll be quick.

This is obviously more trouble than most people want to go through to setup email, but hey, maybe someone will find this interesting or have some of their own ideas to share.

CoffeeHouseCoders 11/23/11: YUI Theater group viewing

Just a heads up, this week's CoffeeHouseCoders (CHC) Detroit-ish will be a bit different. One of the goals of moving the location to the new Caribou was getting access to the meeting room. This opens up the opportunity for us to have some group discussion around various topics. We're going to give that a shot this week with a group viewing of YUI Theater videos and some JavaScript discussion.

Most of us do at least some JavaScript in our work and projects, so I think it's relevant and should be fun to geek out before the holidays start up. I'll have a little projector and speaker, and with the new videos from YUIConf 2011 going up, it'll be nice to set aside some time to catch up on some of the recorded presentations. Take a peek and set aside one or two "must watch" videos for Wed night! Not all of the videos are YUI specific, so it should be useful for all of us doing JavaScript.

Launchpad Team: Day 1 complete

Phew, well one day down. I dove head first into Canonical and Launchpad today. It's a bit amazing how much information and how many parts there are to everything. Everyone welcoming me throughout the day was great, but my head is still spinning a bit for sure.

I managed to get a nice starter walk-through of Launchpad and find my way through a superficial bugfix and merge request. So hey, that wasn't so bad, heh. It's kind of exciting to throw out all the usual tools I've been mastering for a while and start over. Makefiles, ZPT files, ZCA, and YUI run the show. Time to see how people get things done without Fabric, Mako, and SqlAlchemy.

I'm really excited to get to some real changes and hope to pick things up quickly. I know a while ago I was disappointed that Launchpad wasn't taking advantage of some of the JavaScript-driven UI enhancements we can do these days. Changing that is already in full swing, and my team is looking to land a nice chunk of it in the bugs UI shortly. Let's get to work!