Think cloud portable: let Applications drive the Model

In our last intro to Modeling with Juju post we didn’t pay any attention to the hardware needed to run our workloads. We ran with Juju’s default values for the hardware characteristics of the cloud instances. Rather than defining a bunch of YAML about each machine needed to run the infrastructure, Juju picks sane defaults that allow you to prototype and test things out with a lot less work.

The reason this is important is that Juju is a modeling tool. When it comes time to make decisions about your hardware infrastructure, it’s key to focus on the needs of the Applications in your Model. If we’re going to operate PostgreSQL in production then we need to look at the needs of PostgreSQL. How much disk space do we need to adequately store our data and provide room for backups and the like? How much CPU power does our PostgreSQL instance need? How much memory is required to give PostgreSQL the best chance of serving requests from data already in memory?

All of these questions fall under the characteristics required to run a specific Application, and they’ll be different for each Application we operate as part of our infrastructure. Juju shines here because of the way it’s built around the Cloud idea of infrastructure as an API. There are no plugins required. Juju is natively put together so that whenever you deploy an Application, the underlying Cloud API calls are made to make sure you get the right type of hardware. Infrastructure is allocated as needed and there’s no requirement to pre-load software on the machines before you use them.

Juju provides a tool called “Constraints” to manage these details in the Model. In their simplest form, a Constraint is just an option you can pass during the deploy command. Let’s compare the outcome of these two commands:

$ juju deploy postgresql
$ juju deploy postgresql pgsql-constrained --constraints mem=32G

$ juju status
…
App                Version  Status  Scale  Charm       Store       Rev  OS      Notes
pgsql-constrained  9.5.9    active      1  postgresql  jujucharms  163  ubuntu
postgresql         9.5.9    active      1  postgresql  jujucharms  163  ubuntu

Unit                  Workload  Agent  Machine  Public address   Ports     Message
pgsql-constrained/0*  active    idle   1        35.190.190.119   5432/tcp  Live master (9.5.9)
postgresql/0*         active    idle   0        104.196.135.211  5432/tcp  Live master (9.5.9)

Machine  State    DNS              Inst id        Series  AZ          Message
0        started  104.196.135.211  juju-dd8186-0  xenial  us-east1-b  RUNNING
1        started  35.190.190.119   juju-dd8186-1  xenial  us-east1-c  RUNNING

From juju status these commands don’t look like they did anything very different. They’ve both introduced a PostgreSQL server into our Model, and each is available to join Relations with other Applications and provide them with a database to use. Juju encourages focusing on what’s running at the Application level over the minutiae of hardware. However, if we compare the machine details we can see these commands definitely changed PostgreSQL’s ability to perform for us.

$ juju show-machine 0
machines:
  "0":
    ...
    hardware: arch=amd64 cores=1 cpu-power=138 mem=1700M root-disk=10240M availability-zone=us-east1-b

$ juju show-machine 1
machines:
  "1":
    ...
    constraints: mem=32768M
    hardware: arch=amd64 cores=8 cpu-power=2200 mem=52000M root-disk=10240M availability-zone=us-east1-c

Notice the different output? The constrained machine knows that the Model states there’s an active Constraint. We want 32G of memory for this PostgreSQL Application, and if we were to run a juju add-unit command the next machine would know there’s an active Constraint and respect it. In this way Constraints are part of the active Model. Even if we were to remove the pgsql-constrained/0 Unit, the Constraint remains part of the Model.
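
We can check what the Model remembers without provisioning anything. A quick sketch (exact output formatting may vary by Juju version):

$ juju get-constraints pgsql-constrained
mem=32768M

$ juju add-unit pgsql-constrained   # the next machine is requested with 32G of memory as well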

The next thing to note is that we’ve got more memory than we asked for. We wanted to make sure PostgreSQL had 32G of RAM vs the default instance type, which has 1.7G. In order to do that Juju had to request a larger instance, which also brought in 8 cores, and we ended up with 52G of RAM. Looking at the GCE instance types available, n1-standard-8 only has 30G of memory. So Juju went looking for a better match and pulled up the n1-highmem-8 instance type. In this way we receive the best matching instance that meets or exceeds the Constraints we’ve requested. Juju walks through all the various instance types to do this.

As users, we focus on what our Application needs to run at its best, and Juju takes care of finding the most cost-effective instance type that meets those goals. This is key to allowing our Model to remain cloud agnostic and maintain performance expectations, regardless of what Cloud we use this Model on. Let’s try this same deploy on AWS and see what happens.

$ juju add-model pgsql-aws aws
$ juju deploy postgresql pgsql-on-aws --constraints mem=32G
$ juju show-machine 0
machines:
  "0":
    ...
    constraints: mem=32768M
    hardware: arch=amd64 cores=8 cpu-power=2800 mem=62464M root-disk=8192M availability-zone=us-east-1a

Notice here we ended up with an instance that’s got 8 cores and 62G of RAM. That’s the best-fit instance type on AWS for the Constraints we set to make sure our system performs at its best.

Other Constraints

As you’d expect, we can set other Constraints to customize the characteristics of the hardware our Applications leverage. We can start with the number of CPU cores. This is vital in today’s world of applications that are able to put those extra cores to use. It might also be key when we’re going to run LXD machine containers or other VM technology on a machine. Keeping a core per VM is a great way to leverage the hardware available.

$ juju deploy postgresql pgsql-4core --constraints cores=4
$ juju show-machine 2
machines:
  "2":
    ...
    constraints: cores=4
    hardware: arch=amd64 cores=4 cpu-power=1100 mem=3600M root-disk=10240M availability-zone=us-east1-d

Voila, Juju helps get the best instance for those requirements. The most cost-effective instance available with 4 cores provides 3.6G of memory.

root-disk sizing

$ juju deploy postgresql pgsql-bigdisk --constraints root-disk=1T
$ juju show-machine 3
machines:
  "3":
    ...
    constraints: root-disk=1048576M
    hardware: arch=amd64 cores=1 cpu-power=138 mem=1700M root-disk=1048576M availability-zone=us-east1-b

One thing that’s not set up as part of the instance type is the disk space allocated to the instance. Some Applications focus on CPU usage, but others will want to track state and store it on disk. We can specify the size of the disk the instances are allocated through the root-disk Constraint.

The root-disk Constraint only means that Juju will make sure the instance has a Volume of at least that size when it comes up. This is especially useful for Big Data Applications and Databases.

Remembering Constraints in the Model

From here let’s combine our Constraints and produce a true production-grade PostgreSQL server. Then we’ll create the bundle that will deploy this in a repeatable fashion we can reuse on any Cloud we want to take it to.

$ juju deploy postgresql pgsql-prod --constraints "cores=4 mem=24G root-disk=1T"

Exporting the Model as a bundle shows that those Constraints are remembered for us:

series: xenial
applications:
  "pgsql-prod":
    charm: "cs:postgresql-163"
    num_units: 1
    constraints: "cores=4 mem=24576M root-disk=1048576M"

Constraints on Controllers

Once we understand how Constraints work, one area where they’re really valuable is in sizing our own Juju Controllers. If we’re operating a true multi-user Controller, serving many Models to the folks in our business, we’ll want to move up from the default instance sizes. Juju is conservative so that folks testing out and experimenting with Juju don’t end up running much larger instances than expected. There are a lot more test or trial Controllers out there than Controllers running true production workloads; it’s much like any product really.

Controller Constraints are supplied during bootstrap using the --bootstrap-constraints flag. For example:

$ juju bootstrap --bootstrap-constraints="mem=16G cores=4 root-disk=400G" google
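
Once the Controller is up, we can sanity-check the hardware it landed on by inspecting machine 0 in the controller model (a sketch; the exact hardware line will vary by cloud and instance type):

$ juju show-machine -m controller 0
machines:
  "0":
    ...
    constraints: cores=4 mem=16384M root-disk=409600M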

Some considerations when you’re operating a controller include:

  • root-disk - Juju leverages a persistent database and tracks logs and such in that database. You want to make sure to provide space for running Juju backups, storage of Charms and Resources used in Models (they’re cached on the controller), logs that are rotated over time, etc.
  • mem - as with any database, the more it can fit into memory the better; it’ll respond quicker to events moving through the system. During events such as upgrades and migrations there can be a lot of memory consumption, as entire models are processed as part of those actions.
  • cores - as you have more clients and units talking over the API, the more cores available the faster it can respond. Here core count is more important than raw CPU speed, as most of the work is fetching and processing documents vs doing pure computation.

Cloud specific Constraints

There are cloud-specific Constraints supported as well. MAAS, for instance, allows deploying via tags, because MAAS supports tags as a way of classifying bare metal machines; you might optimize a hardware purchase for network applications, storage applications, etc. Make sure to check out the documentation to see all available Constraints and which Clouds they’re supported on.
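
As a hedged example, on a MAAS cloud where machines have been tagged by their hardware profile, a deploy might look like this (the tag name is illustrative):

$ juju deploy postgresql --constraints tags=storage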

The great thing about the Constraint system is that it keeps the focus on the requirements of the Applications and what they actually need to get their jobs done. We’re not trying to force Applications onto existing hardware that might not fit their operating profile. It’s an idea that really excels in the IaaS Cloud world.

Give it a try and if you have any questions or hit any problems let us know. You can find us in the #juju IRC channel on Freenode or on the Juju mailing list. You can reach me on Twitter. Don’t hesitate to reach out. 

Learning to speak Juju

One of my favorite quotes is that there are only two hard problems in tech: cache invalidation and naming things. Naming things is fun, and you quickly realize that any tech community has a vocabulary you need to understand in order to participate. Programming languages, technical tools, and even just communities all build entire languages of their own. Juju is no different here. When you look at Juju you’ll find there’s a small vocabulary that it really helps to understand.

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

The Juju Client

$ juju --version

We start with Juju itself. Now honestly, there are a few layers of Juju you end up working with, but let’s start with the command line client. When you invoke Juju from the CLI you’re running a local client that communicates with APIs that perform the real work. Clients are available for all major systems, and you can even think of things like the Juju GUI as a web-based client. You might be asked what version of Juju you’re running, and the best place to start is with the version of your client.

Cloud

$ juju clouds

Cloud        Regions  Default        Type        Description
aws               14  us-east-1      ec2         Amazon Web Services
azure             26  centralus      azure       Microsoft Azure
google             8  us-east1       gce         Google Cloud Platform
localhost          1  localhost      lxd         LXD Container Hypervisor
guimaas            0                 maas        Metal As A Service
...

Just what is a Cloud? In our ecosystem a Cloud is any API that will provision machines for running software on. There are a number of public clouds, each with its own API, such as AWS, GCE, and Azure. There are private clouds that you can operate software on, such as OpenStack or MAAS. There’s even a local cloud using the LXD API to provide a cloud experience on your local system by controlling LXD containers. The key thing is that Juju is primarily intended to abstract away the details of the various cloud APIs so that the operations work you need to perform is repeatable and consistent regardless of which cloud you choose to use.

Controller

$ juju controllers

Controller   Model        User               Access     Cloud/Region   Models  Machines    HA  Version
guimaas      -            admin              superuser  guimaas             2         1  none  2.2.4
jaas*        k8-test      rharding@external  (unknown)                      -         -     -  2.2.2
jujuteamops  teamwebsite  admin              superuser  aws/us-east-1       2         1  none  2.2.4

A controller is the main brain of Juju. It holds the database of what is deployed, what configuration is expected, and the current status of everything that has been deployed and is running over time. The controller houses the users and permissions, and it’s the API endpoint that the Juju client communicates with. The controller runs on the same cloud as the workloads it operates and can be scaled out for HA purposes, as it’s important infrastructure in the Juju system. You can see, in the above snippet, I’m running controllers on my local MAAS, on AWS, and using the JAAS hosted controller.
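
Standing up a controller of your own is a single bootstrap command against any cloud you have credentials for (a sketch; the controller name is whatever you choose):

$ juju bootstrap aws mycontroller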

Model

$ juju models

Controller: jujuteamops

Model            Cloud/Region   Status     Machines  Cores  Access  Last connection
ci-everything    aws/us-east-1  available         0      -  admin   never connected
ricks-test-case  aws/us-east-1  available         3      3  admin   2017-09-28
teamwebsite*     aws/us-east-1  available         1      1  admin   2017-09-28

In Juju we talk a lot about modeling and Models. Models are namespaces that work is done within. A single Controller can manage many Models. Each Model tracks the state of what’s going on inside of it, access to it can be controlled (ACL’d) using the users known to the Juju Controller, and you can dump the contents of a Model out into a reusable format. In the above snippet you can see I have several Models running on a Controller in AWS right now. One is handling some CI work, another I’m using to test out a bug fix, and a third is the Model where we operate the software required to provide our website.

Users are able to switch between Models, and that really helps provide a great level of focus on what a user is working on at any given moment in time.
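
Switching is a single command (a sketch using the Models listed above; output formatting varies by version):

$ juju switch ricks-test-case
jujuteamops:admin/teamwebsite -> jujuteamops:admin/ricks-test-case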

Charm

$ juju deploy mysql

Located charm "cs:mysql-57".
Deploying charm "cs:mysql-57".

A Charm is the component users use to deploy software. Each software component is deployed using a ZIP file that contains all of the scripts, configuration information, event handling, and more that might be required to operate the software over time. The URL of the Charm that’s been found breaks down like so:

  • cs: = “charm store” (vs local file on disk)
  • mysql = name of the charm
  • 57 = the revision of the charm that was found (the latest when not specified)

Charms are just chunks of software themselves. They are the set of YAML, scripts, and templates that makes it possible to operate things. In the above case the Charm provides all the software required to operate MySQL over time. This includes configuration information, backup scripts, and code that handles the provisioning of databases to other software in the Model. 

Application, Unit, and Machine

$ juju status mysql

...
App    Version  Status   Scale  Charm  Store       Rev  OS      Notes
mysql           waiting    0/1  mysql  jujucharms   57  ubuntu

Unit     Workload  Agent       Machine  Public address  Ports  Message
mysql/0  waiting   allocating  1                               waiting for machine

Machine  State    DNS  Inst id              Series  AZ          Message
1        pending       i-0524bda18a2bb0cb9  xenial  us-east-1b  Start instance attempt 1
...

When you deploy MySQL you’re asking that a single Machine be provisioned by the Controller using the Cloud API, that the Charm be placed on that Machine, and that, once there, it be executed. However, once that Charm is added to the Model, we call MySQL an Application. The Application is an abstraction used to control how we operate our MySQL. This includes adding additional MySQL servers and building a MySQL cluster. If we state that the data for MySQL should be in /srv vs /opt, we want that set on the Application so that each MySQL in the cluster will update and follow suit. Each member of the cluster is a Unit of the MySQL Application. We can talk to each one individually when necessary, but really we want to treat them as a single Application in our Model.

In this way we say that an Application consists of one or more Units, each located on a Machine. The Machine is the actual hardware or VM allocated for the Unit to run on. If we were to remove Machine 1 in the above example, we’d also lose our unit mysql/0, but the Application would still be in the Model. The state details are still in the Model, and any new Units added would pick up where the first one left off.
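
Scaling the Application is then just a request for more Units (a quick sketch; by default each new Unit lands on its own newly provisioned Machine):

$ juju add-unit mysql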

Relation

$ juju relate mysql:db gypsy-danger

Having MySQL servers running is great, but I have some Python software that would love to store some data in that MySQL cluster. In order to do this I need to write out the MySQL DSN to a config file that my application knows how to read. To facilitate this I’ll create a Charm for my gypsy-danger Application, and in my Charm I’ll declare that I know how to speak the “db” language. Funny enough, the MySQL charm already knows how to speak the “db” language as well. This means I can create a relation in the Model that states that MySQL and my Python project gypsy-danger can communicate using the “db” language.
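
In charm terms, that declaration lives in the charm’s metadata.yaml. A minimal sketch, assuming the interface name the MySQL charm speaks (the local endpoint name “db” is ours to choose):

name: gypsy-danger
summary: My Python application
requires:
  db:
    interface: mysql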

This allows Juju to facilitate a conversation between Units of both Applications. gypsy-danger will ask MySQL for a database. MySQL will then create a fresh new database with a unique username and password and send those details back to the gypsy-danger Application. The scripts contained in the Charm will then make sure each Unit of the gypsy-danger Application updates its Python configuration files with those MySQL connection details.

As an operator, I don’t want to get into the business of specifying things that are transient or differ from one Model to another. That just reduces the reusability of what I’ve built. Relations allow Applications to communicate and self-coordinate, performing the actions needed to go from independent software components to a working collection of software that performs a useful function. What’s neat is that other Charms can claim to speak “db”, so one MySQL cluster can be used to serve out databases to many other Applications in a Model. You can even have different servers provide those databases, such as Percona or MariaDB. By speaking a common protocol, Relations provide a great way for Models to stay agile.


Bundle

applications:
  mysql:
    charm: "cs:mysql-57"
    num_units: 1
  "gypsy-danger":
    charm: "cs:~rharding/gypsy-danger-5"
    num_units: 1
relations:
  - - "mysql:db"
    - "gypsy-danger:db"

$ juju add-model testbundle
$ juju deploy mybundle.yaml

A Bundle is basically a static dump of a Model. It takes all the Applications that are running, how many Units there are, what configuration is specified, and what Relations exist in the Model, and dumps it all into a clean, reusable form. In this way you can replicate this Model in a new Model, or on another Cloud, or just save it for later. It deploys just like a single Charm would, except that it performs a lot more work. We have Bundles of all shapes and sizes, and when you see examples of easily deploying a Kubernetes cluster or an OpenStack installation it’s typically done using a reusable Bundle.

There’s our crash course in the language of Juju. Every project has a language, and we all get really creative naming things in the tech world; Juju is no exception. Hopefully this will help you better understand how the parts fit together as you dive into using Juju to operate your own software.

Terms to remember:

  • Juju (and the client)
  • Cloud
  • Controller
  • Model
  • Charm
  • Application
  • Unit
  • Machine
  • Relation
  • Bundle

Modeling the infrastructure while keeping security and flexibility in mind

Juju allows the user to model their infrastructure in a clean, simple, repeatable way. Often deployments are repeated across different clouds and regions. Sometimes a deployment is repeated from dev to staging to production. Regardless of how it's repeated, there are some solid practices users need to follow when taking a model and reusing it. Some bits need to be unique to each deployment. Most of these are security details that need to differ from deployment to deployment. There might also be some specific bits of configuration that regularly vary: in staging the URL in the Apache config might be staging.jujucharms.com while in production it's jujucharms.com. You need to be able to reuse the model of how the applications, constraints, and common configuration work, but make sure there's a clean and simple method of providing the extra unique bits each time you bring up another model.

Let's walk through an example model I've created. I'm going to monitor a pair of Ubuntu machines with Telegraf feeding system details to Prometheus. We'll then use Grafana to visualize those metrics. Finally, we want to set up an HAProxy front end for Grafana so we can provide a proper SSL-terminated web site. After it's up and running it looks a bit like this.

[Image: bundle-view.jpg]

The first thing we need to do is use the Juju GUI to export a bundle, which is a dump of our model. That's an EXACT dump of everything we've got, so we need to edit out the bits of the model that need to be unique from deployment to deployment. Once it's all cleaned up it looks a bit like this.

applications:
  ubuntu:
    charm: "cs:ubuntu"
    num_units: 2
  telegraf:
    charm: "cs:telegraf"
  prometheus:
    charm: "cs:prometheus"
  grafana:
    charm: "cs:grafana"
    options:
      admin_password: CHANGEME
  haproxy:
    charm: "cs:haproxy"
    expose: true
relations:
  - - "ubuntu:juju-info"
    - "telegraf:juju-info"
  - - "prometheus:target"
    - "telegraf:prometheus-client"
  - - "prometheus:grafana-source"
    - "grafana:grafana-source"
  - - "grafana:website"
    - "haproxy:reverseproxy"

One thing to note in there is the config value for the Grafana admin password: it's set to CHANGEME to make it clear that it should be changed. Other than that, though, it's a pretty plain model. Where it gets fun is when we leverage new bundle features in Juju 2.2.3.

Overriding config values at deploy time

Juju 2.2.3 provides a new argument to the deploy command, --bundle-config. This flag takes a filename, and that file's contents override config for the applications in the bundle you're deploying. You might use it like this:

$ juju deploy ./bundle.yaml --bundle-config=production.yaml

So what can we use this for? Well, let's set a unique password for our Grafana admin user. To provide a file with updated config, we just mirror the bundle format and point at the application we're targeting. Let's edit the production.yaml file to look like this.

applications:
  grafana:
    options:
      admin_password: ImChanged

Note it looks just like the bundle file above, with the same keys; we're setting an admin password of "ImChanged" to prove it gets set. We can then deploy the bundle with the --bundle-config argument, and once everything is up we can check that the password was set.

$ juju config grafana admin_password
ImChanged

Reading complex data from a file

That's handy, but sometimes you don't want to set a new string value so much as read content from a file. Prometheus can be used to scrape custom jobs; we've used this in the past to scrape Prometheus data from Juju controllers themselves. To set this up we need to add a YAML declaration about the job that Prometheus will process. Let's find out the IP of our Juju controller and add that job using another new bundle feature: include-file://.

Using include-file:// you can specify a path on disk that will be read and passed to the config value in your bundle. In this way you can easily send complicated multi-line data (like YAML) to a config value in a clean and easy way. First let's set up our new scrape job definition.

$ juju show-machine -m controller 0
...
ip-addresses:
    - 10.0.0.8

$ vim scrapejobs.yaml

- job_name: juju
  metrics_path: /introspection/metrics
  scheme: https
  static_configs:
    - targets: ['10.0.0.8:17070']
  basic_auth:
    username: user-prometheus
    password: testing

Now let's update our production.yaml file to also read this new scrapejobs.yaml file during deployment.

applications:
  grafana:
    options:
      admin_password: ImChanged
  prometheus:
    options:
      scrape-jobs: include-file://scrapejobs.yaml

In order for this to work the file needs to be in the current working directory; if you keep it elsewhere you'll need to provide a full path to the file. Now when we run our deploy command we'll both set the Grafana password and read in the new job for Prometheus.

$ juju deploy ./bundle.yaml --bundle-config=production.yaml
...
$ juju config prometheus scrape-jobs
  - job_name: juju
    metrics_path: /introspection/metrics
    scheme: https
    static_configs:
      - targets: ['10.0.0.8:17070']
    basic_auth:
      username: user-prometheus
      password: testing

Awesome. Now we can template out that file and reuse it, providing unique IP addresses for targets as well as custom usernames and passwords as needed from deployment to deployment, while keeping the basics of the model intact and reusable.
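
A minimal sketch of that templating, assuming a scrapejobs.yaml.tmpl copy of the file with a CONTROLLER_IP placeholder in the targets line:

$ sed "s/CONTROLLER_IP/10.0.0.8/" scrapejobs.yaml.tmpl > scrapejobs.yaml
$ juju deploy ./bundle.yaml --bundle-config=production.yaml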

base64 the included files

There's a third option for including into this production.yaml, and that's include-base64://. This allows reading a local file and base64'ing the contents before they get set into the config. This is helpful for things like SSL keys and such that are unique to different deployments. In our demo case I want to pass in an SSL key to be used with HAProxy so that I can provide HTTPS access to the Grafana dashboard. To do this we need to set the ssl_key and ssl_cert config values in the HAProxy charm. Let's update the production.yaml file for this final bit of configuration overriding.

applications:
  grafana:
    options:
      admin_password: ImChanged
  prometheus:
    options:
      scrape-jobs: include-file://scrapejobs.yaml
  haproxy:
    options:
      ssl_key: include-base64://ssl.key
      ssl_cert: include-base64://ssl.crt

With this in place the next time we deploy we get the config values updated with base64'd values. 

$ juju deploy ./bundle.yaml --bundle-config=production.yaml
...wait a bit...
$ juju config haproxy ssl_key
LS0tLS1CRUdJTiBSU0EgUFJJV...
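
If we want to double-check what landed, we can decode the value locally (a sketch; assumes a PEM-formatted key like the one above):

$ juju config haproxy ssl_key | base64 -d | head -n 1
-----BEGIN RSA PRIVATE KEY-----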

Now we've constructed a shareable model that can be reused while following best practices: our passwords and keys aren't baked into the model, where they might leak out in some way. These tools provide the best ways of collaborating on the operations of software at scale, and I can't wait to hear about how you're using this to build out the next level of your operations best practices.

Hit a question or want to share a story? Tell us about it in IRC, on the mailing list, or just bug me on Twitter @mitechie.

IRC: #juju on Freenode
Mailing list: https://lists.ubuntu.com/mailman/listinfo/juju

Thinking up new interesting interview questions

Interviewing folks is something I enjoy. Talking shop is just something I love. You get to learn from others, relate to challenges, and maybe it’s my version of chatting sports at a bar. Interviews have different parts. There are the basics of what the position is, what it involves, questions about working at Canonical, etc. My favorite part is the general back and forth banter that comes in the middle there.

When interviewing, there are go-to questions folks like to use to bring about a conversation and get a feel for the candidate. “Describe your perfect day at work.” or “Tell me about some work you’re most proud of.” are two you hear a lot. In a recent interview I had an idea to get the conversation going in a different way.

“Tell me about the stakeholders of your last project.” This really gives me insight into whether the applicant can identify who’s driving the project they’re working on. Once they identify the stakeholders, we can drive the conversation into things like “How did you find you had to work with those stakeholders differently?” and “Tell me about how the different stakeholders’ needs would clash and how you worked through it.” Finally, you can get at feedback on how things might have been done better.

There’s a lot more that goes into finding good candidates than “show me your GitHub.” I personally really enjoy working with folks that can look beyond the code a bit and help communicate about why folks are doing what they’re doing. What’s your favorite thought-provoking interview question and why?

testing the future of Juju with snaps

Juju 2.3 is under heavy development and one thing we all want when we're working on the next big release of our software product is to get feedback from users. Are you solving the problems your user has? Are there bugs in the corner cases that a user can find before the release? Are the performance improvements you made working for everyone like you expect? The more folks that test the software before it's out the better off your software will be!

With the recent calls for testing out the Cross Model Relations and Storage Improvements coming in Juju 2.3 I think it'd be good to point out how we can leverage the power of channels in snaps to test out the upcoming features in Juju. 

To get Juju via snaps you can search the snap store and install it like so:

$ snap find juju
$ sudo snap install --classic juju

This then drops the Juju binary in the /snap/bin directory. 

$ /snap/bin/juju --version
2.2.2-zesty-amd64

That's great that we've got the latest stable version of Juju. Let's see what other versions we can get access to. 

Let's try to use the new storage flag on the deploy command that Andrew points out in his blog post.

$ /snap/bin/juju deploy --attach-storage
ERROR flag provided but not defined: --attach-storage

Bummer! That isn't in the stable release of Juju yet. Note that it calls out the flag as not being defined. Let's see if we can get access to a more bleeding edge Juju.

$ snap info juju
name:      juju
summary:   "juju client"
publisher: canonical
contact:   http://jujucharms.com
description: |
  Through the use of charms, juju provides you with shareable, re-usable, and
  repeatable expressions of devops best practices.
commands:
  - juju
tracking:    stable
installed:   2.2.2 (2142) 25MB classic
refreshed:   2017-07-13 16:20:52 -0400 EDT
channels:                                      
  stable:    2.2.2                      (2142) 25MB classic
  candidate: 2.2.2                      (2142) 25MB classic
  beta:      2.2.3+2.2-9909aa4          (2180) 43MB classic
  edge:      2.3-alpha1+develop-1f3f66e (2187) 43MB classic

There we can see that the edge channel has an upcoming 2.3-alpha release in it. Let's switch to it and test out what's coming in Juju 2.3.

$ sudo snap refresh --edge juju
juju (edge) 2.3-alpha1+develop-1f3f66e from 'canonical' refreshed

$ /snap/bin/juju --version
2.3-alpha1-zesty-amd64

Now let's check out that command Andrew was talking about with the storage feature in Juju 2.3. 

$ /snap/bin/juju deploy --attach-storage
ERROR flag needs an argument: --attach-storage

There we go, now we've got access to the upcoming storage features in Juju 2.3 and we can provide great feedback to the dev team. 

After we're done testing and providing that feedback we can easily switch back to using the stable release for our normal work. 

$ sudo snap refresh --stable juju
juju 2.2.2 from 'canonical' refreshed

Give it a try, check out the latest in the upcoming 2.3 work, file bugs, send feedback, and be ready to leverage the great new work that much sooner.

Call for testing: Shared services with Juju

Juju has long provided the model as the best way to describe, in a cross-cloud and repeatable way, your infrastructure. Oftentimes your infrastructure includes shared resources that sit outside of the different models being operated. Examples might be a shared object storage service providing space for everyone to back up important data, or perhaps a shared Nagios resource providing the single pane of glass that operators need to make sure all is well in the data center.

Juju 2.2 provides a new feature, behind a feature flag, that we’d like to ask folks to test. It’s called Cross Model Relations, and it builds upon relations, one of Juju’s great unique features. Relations allow components of your architecture to self-coordinate, passing between each other the information required to operate. This could be as simple as each other’s IP addresses, so that config files can be written such that a front end application can speak to the back end service correctly. It could also be as complicated as passing actual payloads of data back and forth.

Cross Model Relations allows these relations to take place beyond the boundary of the current model. The idea is that I might have a centrally operated service that is made available to other models. Let’s walk through an example of this by providing a centrally operated MySQL service to other folks in the company. As the MySQL expert in our hypothetical company I’m going to create a model that has a scaled out, monitored, and properly sized MySQL deployment. 

First, we need to enable the CMR (Cross Model Relations) feature flag. To use a feature flag in Juju we export an environment variable JUJU_DEV_FEATURE_FLAGS.

$ export JUJU_DEV_FEATURE_FLAGS=cross-model

Next we need to bootstrap the controller we’re going to test this out on. I’m going to use AWS for our company today.

$ juju bootstrap aws crossing-models

Once that’s done let’s setup our production grade MySQL service.

$ juju add-model prod-db
$ juju deploy mysql --constraints "mem=16G root-disk=1T"
$ juju deploy nrpe......and more to make this a scale out mysql model

Now that we’ve got a properly scaled MySQL service going, let’s look at offering that database to other models. We’re able to use a new Juju command, juju offer.

$ juju offer mysql:db mysqlservice
Application "mysql" endpoints [db] available at "admin/prod-db.mysqlservice"

We’ve offered the db endpoint that the MySQL application provides out to other models. The only bit of our entire prod-db model that’s exposed to other folks is the endpoint we’ve selected to provide. You might provide a proxy or load balancer endpoint to other models in the case of a web application, or you might provide both a db and a Nagios web endpoint out to other models if you want them to be able to query the current status of your monitored MySQL instance. There’s nothing preventing multiple endpoints from one or more applications from being offered out there.

Also note that there’s a URL generated to reference this endpoint. We can ask Juju to tell us about offers that are available for use. 

$ juju find-endpoints
URL                         Access  Interfaces
admin/prod-db.mysqlservice  admin   mysql:db

Now that we’ve got a database, let’s find some uses for it. We’ll set up a blog for the engineering team using Wordpress, which leverages a MySQL db back end. Let’s set up the blog model and give them a user account for managing it.

$ juju add-model engineering-blog
$ juju add-user engineering-folks
$ juju grant engineering-folks write engineering-blog

Now they’ve got their own model for managing their blog. If they’d like, they can set up caching, load balancing, etc. However, we’ll let them know to use our database, where we’ll manage db backups, scaling, and monitoring.

$ juju deploy wordpress
$ juju expose wordpress
$ juju relate wordpress:db admin/prod-db.mysqlservice

This now sets up some interesting things in the status output. 

$ juju status
Model              Controller       Cloud/Region   Version  SLA
engineering-blog   crossing-models  aws/us-east-1  2.2.1    unsupported

SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
wordpress           active      1  wordpress  jujucharms    5  ubuntu

Unit          Workload  Agent  Machine  Public address  Ports   Message
wordpress/0*  active    idle   0        54.237.120.126  80/tcp

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  54.237.120.126  i-0cd638e443cb8441b  trusty  us-east-1a  running

Relation      Provides      Consumes   Type
db            mysqlservice  wordpress  regular
loadbalancer  wordpress     wordpress  peer

Notice the new section above App called SAAS. What we’ve done is provided a SAAS-like offering of a MySQL service to users. The end users can see they’re leveraging the offered service. On top of that the relation is noted down in the Relation section. With that our blog is up and running. 

We can repeat the same process for a team wiki using Mediawiki, which will also use a MySQL database backend. While setting it up, notice how the Mediawiki unit complains that a database is required in the first status output. Once we add the relation to the offered service it heads to ready status.

$ juju add-model wiki
$ juju deploy mediawiki
$ juju status
...
Unit          Workload  Agent  Machine  Public address  Ports  Message
mediawiki/0*  blocked   idle   0        54.160.86.216          Database required

$ juju relate mediawiki:db admin/prod-db.mysqlservice
$ juju status
...
SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
mediawiki  1.19.14  active      1  mediawiki  jujucharms    9  ubuntu
...

Relation  Provides   Consumes      Type
db        mediawiki  mysqlservice  regular

We can prove things are working by actually checking out the databases in our MySQL instance. Let’s just go peek and see they’re real. 

$ juju switch prod-db
$ juju ssh mysql/0
mysql> show databases;
+-----------------------------------------+
| Database                                |
+-----------------------------------------+
| information_schema                      |
| mysql                                   |
| performance_schema                      |
| remote-05bd1dca1bf54e7889b485a7b29c4dcd |
| remote-45dd0a769feb4ebb8d841adf359206c8 |
| sys                                     |
+-----------------------------------------+
6 rows in set (0.00 sec)

There we go: two remote-xxxx databases, one for each of the models using our shared service. This is going to make operating our infrastructure at scale so much better!

Please go out and test this. Let us know what use cases you find for it and what the next steps should be as we prepare this feature for general use. You can find us in the #juju IRC channel on Freenode and on the Juju mailing list, and you can find me at @mitechie.

Current limitations

As this is a new feature, it’s currently limited to working within a single Juju Controller. It’s also a work in progress, so please watch out for bugs as they get fixed and UX that might get tweaked as we get feedback, and note that upgrading a controller with CMR to a newer version of Juju is not currently supported.


Upgrading Juju using model migrations

Since Juju 2.0 there's been a feature, model migrations, intended to provide a bulletproof upgrade process. The operator stays in control throughout and has numerous sanity checks to help provide confidence along the upgrade path. Model migrations allow an operator to bring up a new controller on a new version of Juju and to then migrate models from an older controller one at a time. These migrations go through the process of putting agents into a quiet state and queueing any changes that want to take place. The state is then dumped out into a standard format and shipped to the new controller. The new controller then loads the state and verifies it matches by checking it against the state from the older controller. Finally, the agents on each managed machine are checked to make sure they can communicate with the new controller and that any state matches expectations before those agents update themselves to report to the new controller for duty.

Once this is all complete the handoff is finished, and the old controller can be taken down after the last model is migrated away. To show how this works, I've got a controller running Juju 2.1.3 and we're going to upgrade my models on that controller by migrating them to a brand new Juju 2.2 controller.

One thing to remember is that Juju controllers are the kings of state. Juju is an event-based system: an agent runs on each managed machine or cloud instance and communicates with the controller. Events from those agents are processed, and the controller updates the state of applications, triggers future events, or just takes note of messages in the system. When we talk about migrating a model, we're only moving where the state system is communicating. None of the workloads are moved. All instances and machines stay exactly where they are, and there's no impact on the workloads themselves.

$ juju models -c juju2-1
Controller: juju2-1

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
gitlab      aws/us-east-1  available         2      2  admin   49 seconds ago
k8s*        aws/us-east-1  available         3      2  admin   39 seconds ago

This is our controller running Juju 2.1.3, and it has on it a pair of models running important workloads. One is a model running a Kubernetes workload and the other is running gitlab, hosting our team's source code. Let's upgrade to the new Juju 2.2 release. The first thing we need to do is bootstrap a new controller to move the models to.

Gitlab running in the gitlab model to host my team's source code.

First we upgrade our local Juju client to Juju 2.2 by getting the updated snap from the stable channel. 

$ sudo snap refresh juju --classic

Now we can bootstrap a new controller making sure to match up the cloud and region of the models we want to migrate. They were in AWS in the us-east-1 region so we'll need to make sure to bootstrap there.

$ juju bootstrap aws/us-east-1 juju2-2

Looking at this controller, we have the two out-of-the-box models a new controller always has.

$ juju models
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   2 seconds ago
default*    aws/us-east-1  available         0      -  admin   8 seconds ago

To migrate models to this new controller we need to be on the older controller. 

$ juju switch juju2-1

With that done we can now ask Juju to migrate the gitlab model to our new controller. 

$ juju migrate gitlab juju2-2
Migration started with ID "44d8626e-a829-48f0-85a8-bd5c8b7997bb:0"

The migration kicks off, and the ID is provided as a way of tracking among potentially several migrations going on. If a migration were to fail for any reason, it would resume its previous state and we could make corrections and try again. Watching the output of juju status while the migration processes is interesting. Once done, you'll find that status errors out because the model is no longer there.
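
One hedged way to keep an eye on it while it runs (the model reports migration progress in its status until the handoff completes):

$ watch -n 5 juju status -m juju2-1:gitlab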

$ juju list-models -c juju2-1
Controller: juju2-1
    
Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
k8s         aws/us-east-1  available         3      2  admin   54 minutes ago

Here we can see our model is gone! Where did it go?

$ juju list-models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   45 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago

There we go, it's over on the new juju2-2 controller. The controller managing state is on Juju 2.2, but now it's time to update the agents running on each of the machines/instances to the new version as well.

$ juju upgrade-juju -m juju2-2:gitlab
started upgrade to 2.2

$ juju status | head -n 2
Model    Controller  Cloud/Region   Version    SLA
default  juju2-2     aws/us-east-1  2.2        unsupported

Our model is now all running on Juju 2.2, and we can use any of the new features that are useful to us against this model and the applications in it. With that upgraded, let's go ahead and finish up by migrating the Kubernetes model. The process is exactly the same as with the gitlab model.

$ juju migrate k8s juju2-2
Migration started with ID "2e090212-7b08-494a-859f-96639928fb02:0"

...one, two, skip a few...

$ juju models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   47 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago
k8s         aws/us-east-1  available         3      2  admin   57 minutes ago

$ juju upgrade-juju -m juju2-2:k8s
started upgrade to 2.2

The controller juju2-1 is no longer required since it's not in control of any models. There's no state for it to track and manage any longer. 

$ juju destroy-controller juju2-1

Give model migrations a try and keep your Juju up to date. If you hit any issues the best place to look is in the juju debug-log from the controller model. 

$ juju switch controller
$ juju debug-log

Make sure to reach out and let us know if it works well or if you hit a bug in the system. You can also use model migrations to move models between controllers running the same version of Juju, to help balance load or for maintenance purposes. You can find the team in #juju on Freenode or send a message to the mailing list.

Designing for long term operations, upgrades, and rebalancing

When you're building a tool for a user there's a huge amount of design, decision making, and focus put on getting the user started. There's good reason for this: users have an ocean of tools at their disposal, and if you're building something for others to use you need to win the first five-minute test.

You've got about a five-minute window for a user to make useful headway in understanding your tool and visualizing how it will aid them in their work. Often this manifests in tools that demo well but then have to be left behind when the real work hits. I like to use text editors as an example of this. So many users start out with notepad, nano, or some other really light editor. In time, they learn more and the learning curve of something like VIM, Eclipse, or Emacs really pays off. There's a big gap in the number of folks that make that leap though. If you want to get the most folks in the door your tool needs to have that hit-the-ground-running feel to it.

When you're designing a tool to help users deploy and run software there's a lot of focus on the install process. Nearly every hosting provider has worked with some sort of "one button install" tool. They're great because it gets users started quickly in that five minute window. Over time though, users find that those one click tools end up being very shallow. They need to add users, create new databases, back up data, restart daemons when appropriate, the list goes on and on. Two operational tasks which are particularly interesting are upgrades and rebalancing.

Juju is an operations tool that can track the state of many different models (the state of operated applications), potentially run by many different users. These models evolve over time and run production workloads for years; we know of many models that are nearly a year and a half old. In that same time Juju has had five 1.2x minor versions and three 2.x minor versions. That means you'd want to keep up with improvements in performance, features, and security by upgrading nearly every other month. Aiding in this, the Juju 2.1 release includes a new feature, known as model migrations, specifically to help operators manage their infrastructure over the long term.

The general idea is that the largest danger lies in complex upgrade steps, such as database migrations, and in making sure that everything running is able to communicate on the new version of the software. In Juju 2.1, the Juju infrastructure that manages the state of the models as well as API connections from clients (known as the "controller") uses model migrations to let an operator bring up a new controller and migrate the model state over to the new version. In that process the data is shipped over, run through any migrations that need to occur, and sanity checked between old and new controller, and the system is put together in such a way that if anything fails to check out it can roll back and use the previously running controller. That's some reassurance to lean on when you're doing an important upgrade. Since one controller operates many models, the operator is in control of which models get migrated at what time, allowing a very controlled rollout to the new software in a way that permits safely checking that all remains in the green as the new version of Juju is adopted.

Another big use case for model migrations is balancing the load on Juju controllers over time. As organizations grow and their needs change, it's important to be able to move to new hardware and to shift services that generate heavy load onto dedicated hardware. Juju is tested to perform at thousands of managed machines, but there are dependencies such as the size of the controller machines that track state, and over time a normal part of a growing organization is to put into play machines with newer CPUs, more memory, or just flat out beefier hardware with more cores.

I'd love to hear about the tools you use, which ones have fallen short in aiding you with your long-term operational needs, and what other types of long-term operations you want your tools to assist with. Hit me up on Twitter @mitechie.

In a follow up post I'll walk through exactly how to perform a migration to the new Juju 2.2 release using the built in model migration feature.

Open Source software and operational usability

A friend of mine linked me to Yaron Haviv's article "Did Amazon Just Kill Open Source" and I can't help but want to shout a reply

NO!

There is something to the premise though, and it's something I keep trying to push, as it's directly related to the work I do in my day job at Canonical.

Clouds have been shifting from IaaS to SaaS as sure as can be. They've gone from just getting you quick access to hardware on a "pay for what you use" basis to providing full self-service access to the applications you used to have to run on that hardware. They've been doing this by taking Open Source software and wrapping it with their own API layers, charging you for their operating of your essential services.

When I look at RDS I see lovely images for PostgreSQL, MySQL, and MariaDB. I can find Elasticsearch, Hadoop, and more. It's a good carrot: use our APIs and you don't have to operate that software on the EC2 platform any more. You don't have to worry about software upgrades, backups, or scale-out operational concerns. Amazon isn't the only cloud doing this either. Each cloud is finding the services it can provide directly to the developers who build products on it.

They've taken Open Source software and fixed its delivery, making it easy for most folks to consume. That ties directly into the trend we have been working on at Canonical. As software has moved more and more to Open Source and the cost of software in the average IT budget has dropped, the cost of finding folks that can operate it at production scale has gotten much higher. How many folks can really say they can run Hadoop, Kubernetes, and Elasticsearch at professional production levels? How hard is it to find them and how expensive is it to retain them?

We need to focus on how we can provide this same service, but straight from the Open Source communities. If you want users of your software to be users of YOUR software, and not some wrapped API service, then you need to take those operational concerns into account. We can do more than Open Source the code that is running; we can work together as a community to Open Source the best practices for operating that software over its full life cycle. Too often, projects stop short at how to install the software, while the user has to worry about it long after it's installed.

I have hope that tools like Juju and Kubernetes can provide a way for the communities around the software we use and love to contribute, and to avoid the lock-in of some vendored solution of the Open Source projects we participate in today.

Relations and the benefits of coding to an interface

Interfaces are an awesome idea. It’s a tale all programmers have come across: if you program to a protocol then everyone gets to say “Hey! I can speak that” and join in the fun. TCP/IP, HTTP, the API I created for Bookie. What’s interesting is that I don’t feel this idea has been completely bought into on the operations side of the world. There are a few examples I can think of off the top of my head: SNMP, RRD, and I suppose Prometheus is finding some popularity lately. It’s one of the more powerful ideas built into Juju, and it hit me over the head when doing my latest tinkering with Gitlab.

In that blog post I used a new charm, done by a member of the community, that enables you to proxy anything that speaks the http interface and secure it with Let's Encrypt. At first I went "Cool, this means I can easily set this Gitlab up as https://code.bookie.io and be awesome." Now that was true, but then I started thinking bigger. Wait a minute, we've got a ton of applications in the Juju Charm Store that all speak the http interface. So I went to work. I wanted to set up everything I might want for my open source project. A handful of juju deploy commands followed by a handful of juju relate commands (see the sketch after this list) and my org was up and running. I set up the following project stack on GCE with JAAS:

  • code hosting (Gitlab) - https://code.bookie.io
  • wiki (Mediawiki) - https://wiki.bookie.io
  • continuous testing (Jenkins) - https://ci.bookie.io
  • blog (Ghost) - https://blog.bookie.io
  • mailing lists (Mailman) - https://lists.bookie.io
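
A minimal sketch of the pattern for one of those sites, with illustrative charm store namespace and endpoint names (the actual charm endpoints may differ):

$ juju deploy mediawiki wiki
$ juju deploy cs:~tengu-team/ssl-termination-proxy
$ juju config ssl-termination-proxy fqdn=wiki.bookie.io
$ juju relate wiki:website ssl-termination-proxy:reverseproxy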

What impressed me here is that with one simple chunk of work, the Tengu team enabled so many other applications to benefit. I suppose I'd seen this before with things like the HAProxy charm, which enables any of these applications to be placed and scaled behind HAProxy, but this one feels a bit easier to use and more user facing, as it provides that https endpoint with the clean DNS names.

This is the type of idea that I feel makes Juju much more interesting than the other tools folks tend to compare it to. There are a lot of people writing Puppet modules, Chef cookbooks, or adding "one click deploy" features, but I don't see the idea of standing on each other's work built in the way it is in Juju. I've worked in Open Source a long time now and there's nothing better than finding folks that are smarter than you and leveraging their brains in your own work. You can do this anywhere, but I find that Juju's design encourages things to be modular and solutions to be stood up as a series of parts, each doing their thing well in a very portable way.

The http interface is really common, but you can imagine others that could be as impactful. I'd love to brainstorm with folks on the biggest bang-for-the-buck ideas out there that would enable sharing of operational best practices across many software applications. I can think of a handful in monitoring, logging, and metrics. Let me know what you can think of @mitechie.

Giving Gitlab an afternoon spin

I've been meaning to check out Gitlab lately. I hear a lot about folks replacing their internal systems with it. It has an awesome checklist of features for managing your internal code with public and private repos, and it enables you to build out the best practices around code reviews, automated testing, and gated landing. I'm lucky enough to work in Open Source for my day job, but until recently all the code I worked on sat on some internal Git or SVN system. A better application for performing that task is exciting to a lot of users out there. Of course, the best way for me to test it out was to grab the Gitlab Juju charm and toss it up on GCE with JAAS.

While looking at that I noticed that a member of the community had actually created a bundle of two applications that makes the Gitlab experience even better. It includes a new charm, from another community member, that adds support for Let's Encrypt. Cool! This means I not only get a Gitlab instance to test with, but I can share it with others over SSL so that login credentials and such aren't passed around in clear text.

Let's get testing by adding a new model in JAAS and deploying this bundle into GCE.

juju add-model gitlab google
juju deploy cs:~spiculecharms/bundle/gitlab-ssl
juju show-machine 0

The show-machine command here is useful because I need to go add a new DNS entry for Let's Encrypt to use. Since I'm testing out a scenario of hosting all of my Open Source project's code here, I added an A record for code.bookie.io pointing to the public IP address of the ssl termination application. With that DNS entry set up, I need to tell the termination application what its URL is meant to be.
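
Before doing that, it's worth confirming the record has actually propagated; a quick lookup from your laptop works (dig is just one option, and the IP returned should match the proxy machine's public address from show-machine):

dig +short code.bookie.io

With the record resolving, on to the config: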

juju config ssl-termination-proxy fqdn=code.bookie.io

A few minutes later I had a working Gitlab deployment to play with. I imported all of my old Bookie code projects from Github and tested out cloning repos and such. One issue I did hit was that, since my Gitlab application was proxied, I needed to tell Gitlab the URL it's actually under for end users. To do that you need to edit a config file on the Gitlab application.

juju ssh gitlab-server/0

sudo vim /etc/gitlab/gitlab.rb

# Change this external_url config
external_url 'http://code.bookie.io:80'
    
sudo gitlab-ctl reconfigure

I can now move forward with testing out the Gitlab-supplied tools for testing and building releases. If it works out, I can then deploy this in my private MAAS or internal OpenStack with the same ease, because Juju provides such a wide array of options.

I want to thank the great folks at Spicule that put together the Gitlab charm and the folks at Tengu for the ssl-termination-proxy charm. I find that second one really interesting and will cover that in a follow up blog post. 

When others put your thoughts into clear words

You have to love how when some people speak, everyone stops and listens. A shareholder letter from Jeff Bezos is making the rounds and I love how each of us reads the letter and internalizes it differently. We all find ways to relate our own world to different bits of the shared wisdom. 

For me, there were two ideas that have been spinning in the back of my head lately that Jeff Bezos put together in a really great way.

When user testing isn't the whole story

Good inventors and designers deeply understand their customer. They spend tremendous energy developing that intuition. They study and understand many anecdotes rather than only the averages you’ll find on surveys. They live with the design.
...
A remarkable customer experience starts with heart, intuition, curiosity, play, guts, taste. You won’t find any of it in a survey.
— https://www.sec.gov/Archives/edgar/data/1018724/000119312517120198/d373368dex991.htm

This is one I've been working hard to process. We're all focused on measuring and data, and so we work hard to include user testing in the design process. It's 100% correct to measure and to work to understand what actually happens out there in the real world.

Not everything can be a simple A/B test, though. I have seen, and fought, getting too carried away with doing what the testing says. It's so easy to let go of responsibility and respond "the user testing says ...". I think user testing is great when you're looking for small tweaks and changes, or for general feedback on whether something appears useful and interesting. If I want to improve the click-through rate, then rearranging the content, the design of the call-to-action buttons, or even the information architecture all might help. NONE of these, though, are breakthrough ideas that provide something new and exciting for users. They're not truly things that are part of the identity of a product.

I often find that Juju users will nearly always provide feedback that's the shortest path from their task to having it work the way they want. That makes sense from where they're sitting. A user might request some fast path to making a quick change after a deployment in a one-off way. Juju, though, is built on a consistent model: anything done outside that model, outside the state, is lost for future decisions and understanding. We often find we need to take the user feedback, dig into what they want to do in this "one off", and see where we have an opportunity to improve the model so the user can leverage Juju while the model remains the consistent focus of the product. The key word I want to use here is intent. You can filter ideas and test results, and then put some true intent behind them to build something of value.

Getting others to agree by allowing them to disagree

...use the phrase “disagree and commit.” This phrase will save a lot of time. If you have conviction on a particular direction even though there’s no consensus, it’s helpful to say, “Look, I know we disagree on this but will you gamble with me on it? Disagree and commit?” By the time you’re at this point, no one can know the answer for sure, and you’ll probably get a quick yes.
— https://www.sec.gov/Archives/edgar/data/1018724/000119312517120198/d373368dex991.htm

This is a lesson I first learned in code reviews. I'd see some code and completely hate it. However, if I stood back, I'd realize what I hated was that it wasn't "how I would have done it". I had to watch myself and not be negative just because someone else thought differently.

I like how Bezos's letter makes disagreement a reason to come together. "but will you gamble with me?" is a really great way of putting that. Normally this is a lesson for myself: to make myself willing to gamble on someone else's work or ideas. I'm obviously already sold when it's my own work.

Sometimes, in order to get folks on board, they just need permission to not be held accountable if it fails. In code review, your goal isn't to be perfect all the time. In decision making, some will be good and some will not work out. However, you need permission to let the team move forward on something and learn from the outcome. None of us will be right all the time. 

What do you find interesting or personally meaningful in the letter? Do any of the ideas really speak to something you've been noodling on in the back of your mind? Let me know. @mitechie

Three reasons you need a quick VPN in your pocket

Recent news that the government has repealed regulations preventing the sale of customer browsing habits has some folks thinking about their internet use and privacy a bit more than usual. I think most of us assume that the things we do in our homes on our own devices are pretty safe from being shared with others. The news has caused a rash of articles about running your own VPN, and as these kept crossing my RSS feeds I got to thinking that this is the perfect use case for Juju and JAAS.

Good news! The Tengu team has made it really easy to use Juju to set up your own VPN server. It's nearly as fast as getting an instance from a cloud provider. As I sit here at the coffee shop I timed it: six minutes, including adding it to my client and hitting connect.

The 6-minute VPN setup

How do we do this? We use JAAS, since it's a great way to deploy something into any public cloud, and especially into different regions. I keep my personal VPN in the AWS us-east-2 region since it's the closest physically to where I am in Southern Michigan.

juju add-model myvpn aws/us-east-2
juju deploy openvpn
juju expose openvpn
juju config openvpn clients="rick"
juju scp openvpn/0:~/rick.ovpn myvpn.ovpn

This deploys the OpenVPN charm and sets up a config file for "rick" that I can use to connect with a VPN client. On my Mac I use Viscosity, and on Ubuntu I use the Network Manager VPN plugin. Both clients can load the .ovpn file that you download from the deployed server.

Once connected you can see all of your traffic routed through the VPN securely. 

% ping ubuntu.com
PING ubuntu.com (91.189.94.40): 56 data bytes
64 bytes from 91.189.94.40: icmp_seq=0 ttl=47 time=193.328 ms
64 bytes from 91.189.94.40: icmp_seq=1 ttl=47 time=178.245 ms
64 bytes from 91.189.94.40: icmp_seq=2 ttl=47 time=140.312 ms

% traceroute ubuntu.com
traceroute to ubuntu.com (91.189.94.40), 64 hops max, 52 byte packets
 1  ip-10-200-200-1.us-east-2.compute.internal (10.200.200.1)  48.860 ms  47.036 ms  58.141 ms
 2  ec2-52-15-0-2.us-east-2.compute.amazonaws.com (52.15.0.2)  103.381 ms  64.848 ms
    ec2-52-15-0-6.us-east-2.compute.amazonaws.com (52.15.0.6)  69.651 ms

What's even better is that you can shorten this by automating the deploy, expose, and config steps with a Juju Bundle. I created one that sets up two clients out of the box: one for myself and one for a "guest". If I ever want to add additional clients, I can just update the config in the charm.

A few lines of yaml and a "charm push . cs:~rharding/rickvpn" and I've got a one-line deploy of a VPN at my fingertips. If I deploy before I order my coffee, the VPN is up and ready for use by the time it's done.
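
For the curious, the yaml really is that short. Here's a rough sketch of what mine looks like rather than the published bundle verbatim; the charm URL is illustrative, and I'm assuming the clients option takes a comma-separated list:

# bundle.yaml
services:
  openvpn:
    charm: cs:openvpn    # illustrative charm URL
    num_units: 1
    expose: true
    options:
      clients: "rick,guest"

Push that with "charm push . cs:~rharding/rickvpn" and the whole setup collapses to a single juju deploy.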

Reason #2 - blocked ports on the shared wifi

I promised some other reasons for a VPN, and blocked ports on shared wifi is #2. The Starbucks I'm sitting in has its wifi configured to block port 22, which can be a pain in the rear when you're attempting to work with a lot of cloud instances over SSH. A quick VPN and suddenly the world of SSH is opened back up. Yes, some folks will tell me to change my SSH ports, but when you're working on cloud servers across different clouds, it's definitely much more of a pain to change SSH everywhere than to just launch this VPN.

Reason #3 - testing end user experience

I've also found myself working with others across the world. What's always fun is when they're having issues I just can't replicate. We have large numbers of our team in Europe and down in New Zealand and Australia. As you can imagine, their load times are a bit different than my midwest connection to things coming out of US-based networks. Given the breadth of cloud regions these days, it's actually not as hard as it seems to replicate the experience remote users are seeing. I can easily throw up a Europe-based VPN and force myself to test things through it. Suddenly I can see that the timeout we have doesn't work well for users whose bytes go through undersea cables.
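
The recipe is identical to the six-minute setup above, just pointed at a European region (the region name here is only an example):

juju add-model euvpn aws/eu-west-1
juju deploy openvpn
juju expose openvpn
juju config openvpn clients="rick"
juju scp openvpn/0:~/rick.ovpn euvpn.ovpn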

I'm sure you can think of some other uses that a quick VPN would come in handy. Let me know what your favorite uses for the OpenVPN charm are. Reach out on Twitter @mitechie. And thank you Tengu Team for this great OpenVPN charm!

Giving everyone a little data science, Juju and Zeppelin

Everyone has some data sitting somewhere that they're not putting to good use. They've got some script pulling web logs, or perhaps a database feeding some customer or employee data. We at Canonical also have data sitting around, and lately I was poking at some of it and figured there had to be a better way to enable us to collaborate and pull meaning out of it.

Fortunately we've got tools to help us do just that. I spoke to a member of our big data team and he said to me, "Rick, what you want is to hook Zeppelin up to that data." He was right!

What is Zeppelin? It's a big data query and visualization tool that's part of the Apache project. Normally folks use it in front of streaming back ends like Spark. It also supports SQL, so just about anyone with a MySQL, PostgreSQL, or SQLite database can stick it in front of theirs and build out dashboards of useful info. The best part is that we can give others access, and different parts of the company can draw their own insights from a shared collection of data.

We can show this off really quickly using Juju and JAAS. To help demonstrate, I'll use the famous Northwind data set. Let's stick the data in PostgreSQL so that we can query it from our fancy new Zeppelin dashboard tool. First goal: deploy PostgreSQL and Zeppelin with the Juju Charms available in the store.

$ juju register jimm.jujucharms.com
$ juju add-model zeppelin-demo google
$ juju deploy postgresql
$ juju config postgresql admin_addresses="127.0.0.1"
$ juju deploy zeppelin --to 0
$ juju expose zeppelin

What we've got here is PostgreSQL on GCE, along with a Zeppelin that we'll wire up to the database in order to load our data into it. If you already have data available somewhere, that step isn't required. Also note that we could put the data in MySQL instead, or copy a SQLite database up to the Zeppelin instance and wire it up to query that file.

From here, let's clone the Northwind database and copy the files over so we can load the data into PostgreSQL. We can then run the `create_db.sh` script to create the database, load it with data, and set up the user we'll use to connect.

git clone git@github.com:pthom/northwind_psql.git && cd northwind_psql
juju scp -v create_db.sh postgresql/0:/home/ubuntu/
juju scp -v northwind.sql postgresql/0:/home/ubuntu/
juju run --unit postgresql/0 -- \
    "sudo su postgres -c 'cd /home/ubuntu && sh create_db.sh'"

Now we can look up where our Zeppelin is sitting with `juju status zeppelin` and browse to the `Public address`. Note that the exposed ports are also listed. In my case it's http://104.196.195.224:9080

Once there we get a nice little welcome screen. From here we need to wire Zeppelin up to our database. Zeppelin does this via `Interpreters` that need to be configured. Click on the navigation menu for that, then click the "+ Create" button to set up our SQL interpreter.

The PostgreSQL database, user, and password are all set up by the Northwind script we ran, so we copy those values over.
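
For reference, the interpreter form boils down to a JDBC connection string plus credentials. Roughly something like the following; the property names vary by Zeppelin version, and the placeholders stand in for whatever create_db.sh created:

postgresql.url       jdbc:postgresql://localhost:5432/<database from create_db.sh>
postgresql.user      <user from create_db.sh>
postgresql.password  <password from create_db.sh>

localhost works here because we deployed Zeppelin --to 0, co-located with PostgreSQL.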

Now that the interpreter is set up, we can create a new notebook and test out a sample query. Head to "Notebook" and select "+ Create new note". We'll call it "HR Dashboard". What would an HR person be interested in? Let's see who has a birthday coming up. In the text box we need to tell Zeppelin to use our `psql` interpreter.

%psql

CREATE OR REPLACE FUNCTION indexable_month_day(date) RETURNS TEXT as $BODY$
  SELECT to_char($1, 'MM-DD');
$BODY$ language 'sql' IMMUTABLE STRICT;


SELECT lastname, firstname, title, birthdate 
FROM employees 
WHERE 
    indexable_month_day(birthdate) >= to_char(current_date, 'MM-DD')
AND  
    indexable_month_day(birthdate) < to_char(
        current_date + interval '120 days', 'MM-DD');

There's a slight trick here: we need to add a function and then run a query, but Zeppelin will only show the result of the first thing run. So we run the function first, then move it under the query and run the paragraph again. You can execute a query with the play button in the upper right or use shift-enter.

Look at that: Michael and Robert have birthdays in the coming months. You can easily see how you might create a sales dashboard tied to the same data that instead looks at things such as our most popular products, best customers, or items currently on sale.

Dashboards can be exported and imported. If you run a conference, you might build a view of signups over time and track things like the number of sponsors that have committed. Then, after the conference, back up your dashboards and tear down your Zeppelin until the next conference goes live.

Find the data you have hanging around and make it more productive by putting a dashboard and web UI in front of it. You'll be surprised by the ways that others put it to work. Thanks to Zeppelin and Juju for making this easy to do.

Operations in Production

As we set up our important infrastructure, we add monitoring and alerting and keep a close eye on things. As the services we provide grow and need to expand, or failures in hardware attempt to wreak havoc, we're ready because of the due diligence that's gone into monitoring the infrastructure and the applications deployed on it. To do this monitoring, we often bake it into our deployment and configuration management tooling. One thing I often see is that folks forget to monitor the tools that coordinate all of that deployment and configuration management. It's a bit of a case of "who watches the watcher?"

2016 kick off

Photo by pownibe/iStock / Getty Images

2016 has arrived, and most New Years I don't feel the need to really do or think much about the coming year. Things change too much. A year ago, how much of today would I have foreseen?

This year though, or at least the last few weeks, I've felt the year coming on more. I feel the need to write out goals for myself for the coming year. One of them is to get back a website presence. I've done that by moving my old WordPress content over here to a new SquareSpace site that I'll hopefully put together to better capture me. For the most part, this is just for myself. However, I juggle WordPress, G+, Tumblr, Flickr, and Twitter accounts and find myself split across the parts that make up me. There's my work, my hobbies, my family, and my rants! I'll be trying to consolidate that mess here and make some sense of it, so I start to feel like I've got a creative outlet that fits me.

All that said, here be dragons, construction, and hopefully a revitalization of my brain down to word form. 

The great goals of 2016

Write more

This one seems like the no-brainer so far. I'd like to write more and improve my writing skills; ergo, this website.

Get out for more planned photography adventures

I'm an opportunistic photographer. I take a camera with me, try to get a few snapshots, and hopefully some of them look good and make me happy. I need to put more effort into getting out for some planned ideas. Go get sunrise shots by setting an alarm and having a place I want to go to capture that sunrise. Keep an eye on cool local events to do some street photography during local fairs and such. Plan ahead when a work trip comes up, and make sure I've planned out some of the pictures I want to make and come home with.

Get outside more

I work from home, long hours, and really need to get outside more. I love camping, hiking, fishing, etc. I just don't seem to prioritize them enough to make sure they happen. As my son gets older, including him in these excursions should help kill off the excuses for not doing it.

Travel more, and make sure some of it is for fun

I have been traveling more for work, but it's increasingly shorter 'hop over for a couple of meetings' travel, and there's not much fun to it. I really need to get some more travel for myself going. Ever since I joined Canonical and started traveling, I've been bitten more and more by that darned travel bug. I also feel like traveling well, vs just showing up for work, is amazingly good for expanding your horizons. I need more of that.

Sell a LOT of stuff

Folks know I have a problem. I'm continually optimizing, I enjoy nice things, and I like to try out new things. All of this leads to me collecting many items around the house, many of which I don't use any more because I found better tools, different ways of doing things, etc. I've just loaded four 30-gallon bags of clothes to donate into the back of the car, and I really wanted to add more. I think over this year you'll see a concerted effort to do more with less, and the travel bug has a lot to do with that.

Finally: figure out this job thing

This one seems like it should be easy, but wow, it's been an interesting year at work. All I ever wanted was to write code and build things. However, I don't think I've written much code at all over the last year. Instead I've taken on the responsibilities of product owner, product manager, and director of engineering. I care about building cool things, talking about them, and coordinating a dozen moving parts to make it happen. I'm not perfect, but I feel like my brain fits much of this work. I've just not figured out exactly how it's all going to shake out. I'm still an interim director. I really think the part I enjoy is the product work, though. Can you do one and not the other? Every year since joining Canonical the unexpected has reigned, and I have a feeling this year's ride will be crazier than ever.

Working at Canonical, three years in. a.k.a wtf just happened?

A couple of people have reached out to me via LinkedIn and reminded me that my three-year work anniversary happened last Friday. Three years since I left my job at a local place to go work for Canonical, where I got the chance to be paid to work on open source software and better my Python skills with the team working on Launchpad. My wife wasn't quite sure: "You've only been at your job a year and a half, and your last one was only two years. What makes this different?" What's amazing, looking back, is just how *right* the decision turned out to be. I was nervous at the time. I really wasn't Launchpad's biggest fan. However, the team I interviewed with held this promise of making me a better developer. They were doing code reviews of every branch that went up to land, they had automated testing, and they firmly believed in unit and functional tests of the code. The product didn't excite me, but the environment, working with smart developers from across the globe, was exactly what I felt I needed to move forward with my career, my craft.

I joined my team on Launchpad, a squad of four other developers. It was funny: when I joined I felt so lost. Launchpad is an amazing and huge piece of software, and I knew I was in over my head. I talked with my manager at the time, Deryck, and he told me, "Don't worry, it'll take you about a year to get really productive working on Launchpad." A year! Surely you jest, and if you're not jesting... wtf did I just get myself into?

It was a long road, and over time I learned how to take a code review (a really hard skill for many of us), how to give one, and how to talk with other smart and opinionated developers. I learned the value of the daily standup and how to manage work across a kanban board. I learned to really learn from others. Up until this point I'd always been the big fish in a small pond, and suddenly I was the minnow hiding in the shallows. Forget books on how to code; just look at the diff in the code review you're reading right now. Learn!

My boss was right: it was nearly ten months before I really felt like I could be asked to do most things in Launchpad and get them done in an efficient way. Soon after, our team was moved on from Launchpad to other projects. It was actually pretty great. On the one hand, "Hey! I just got the hang of this thing", but on the other, we were moving on to new things. Development life here has never been one of sitting still. We sit down and work on the Ubuntu cycle of six-month plans, and it's funny because even that is such a long time. Do you really know what you'll be doing six months from now?

Since that time in Launchpad I've gotten to work on several different projects, and I ended up switching teams to work on the Juju GUI. I didn't really know a lot about this Juju thing, but the GUI was a fascinating project. It's a really large-scale JavaScript application; this is no "toss some jQuery on a web page" thing.

I also moved to work under a new manager, Gary, my second manager since starting at Canonical, and I was amazed at my luck. Here I've had two great mentors who made huge strides in teaching me how to work with other developers, how to do the fun stuff and the mundane, and how to take pride in the accomplishments of the team. I sit down at my computer every day with the brain power of amazing people at my disposal over IRC, Google Hangouts, email, and more. It's amazing to think that at the sprints we do, I'm pretty much never the smartest person in the room. However, that's what's so great. It's never boring, and when there's a problem the key is that we put our joint brilliant minds to it. In every hard problem we've faced, I've never found that a single person had the one true solution. What we come up with together is always better than what any of us had apart.

When Gary left, there was a void for team lead, and it was something I was interested in. I really can't say enough awesome things about the team of folks I work with. I wanted to keep us all together, and I felt it would be great for us to try to keep things going. It was kind of a "well, I'll just try not to $#@$@# it up" situation. That was more than nine months ago now. Gary and Deryck taught me so much, and I still have to bite my tongue and ask myself "What would Gary do?" at times. I've kept some things the same, but I've also brought my own flavor into the team a bit, at least I like to think so. These days my Github profile doesn't show me landing a branch a day, but I take great pride in the progress of the team as a whole each and every week.

The team I run now is as awesome a group of people as I could hope to work for. I do mean that: I work for my team, never the other way around, and that's one lesson I definitely picked up from my previous leads. The projects we're working on are exciting and new and really important to Canonical. I get to sit in on discussions and planning meetings with Canonical super-genius veterans like Kapil, Gustavo, and occasionally Mark Shuttleworth himself.

Looking back, I've spent the last three years becoming a better developer, getting on-the-job training in leading a team of brilliant people, and taking a crash course in thinking about the project, not just as the bugs or features for the week, but as it needs to exist in three to six months. I've spent three years bouncing between "what have I gotten myself into, this is beyond my abilities" and "I've got this. You can't find someone else to do this better." I always tell people that if you're not swimming as hard as you can to keep up, find another job. I feel like three years ago I did that, and I've been swimming ever since.

Three years is a long time in a career these days. It's been a wild ride, and I can't thank enough the folks that let me in the door, taught me, and have given me the power to do great things with my work. I've worked my butt off in Budapest, Copenhagen, Cape Town, Brussels, North Carolina, London, Vegas, and the bay area a few times. Will I be here three years from now? Who knows, but I know I've got an awesome team to work with on Monday and an awesome product to keep building. I'm going to really enjoy doing work that's challenging and fulfilling every step of the way.

Bookie meets Google Summer of Code 2014

Today the Google Summer of Code student selections were announced, and with that announcement Bookie revealed our selections for the slots allocated to each of our two mentors. This announcement highlights an amazing round of participation in Bookie as an open source project. Twenty people participated and landed over 110 commits worth of patches in Bookie since the opening of GSoC. That is AMAZING! In less than a week every bite-sized bug evaporated from the issue tracker. Also amazing is the quality and effort that everyone put into their work. Everyone was eager to learn how to add tests to their patches, and they worked so hard to get their code landed. Bookie emerges a better open source tool for managing bookmarks than it was two months ago, and that is because of the hard work and dedication of all of the participating students.

Students did more than land branches; they invigorated the community. We had many users jump into IRC to answer questions and guide students through the process. They also performed QA and did code reviews of their work. The enthusiasm the students brought to Bookie motivated me to make the time to help move things forward. After all: if a student spent 3 days figuring out how to fix a bug, write a test for it, commit the fixes to git, and get it up for code review; then I can manage to find the 30 minutes to pull the commit, review the code, and QA the work. This period motivated me to update documentation and ensure the install process worked for a wider audience. Additional motivation came from knowing that Bookie is interesting as a tool to other people besides myself.

I want every student not selected for Google Summer of Code to know their work and effort are greatly appreciated. I and the other members of the Bookie community enjoyed working with everyone who participated. Bookie had 32 applications for 2 available spots; in conversations with other organizations, that was a comparatively crazy amount of competition for so few allocated spaces. I wish Bookie had a dozen or so more mentors and slots, as over half of Bookie's proposals would have easily been accepted. Culling the dozens of great proposals down to two positions was a very difficult process for us. It's hard to say "not right now" when there are so many great offers from so many eager and capable students.

Regardless of whether you were selected for Google Summer of Code, the fun doesn't have to end. If you found the time contributing to Bookie valuable, if you learned something new, gained some material for that resume, or just had a good time: PLEASE DON'T STOP! Bookie isn't going anywhere or closing up shop; we're more than happy to continue mentoring and working with you all. We worked hard during this process to ensure all students were given the best chance to take something positive away from this application process. With your continued participation in the Bookie project, we'd like to continue to mentor and provide guidance for you.

One area of guidance we owe all students relates to your proposals. If you'd like any explanation of what you could do differently with your proposal or application, please let us know. I'll be honest though: most of the applications we received were quite good, so there's little to critique. The scoring method we used put most of the applications within a few points of each other. But if you'd like to know more, please feel free to ping me on IRC and ask me anything you'd like.

Finally we'd like to congratulate Sambuddha and Pradyumna for their outstanding work leading up to this announcement, and we look forward to the results of their proposals for adding great features to Bookie over the summer. If you find the work interesting, please come help them out. Feel free to get involved, help with the work, the code reviews, and the testing of the new features. Maybe you'll be helping mentor Bookie next year? Who knows? :)

This was our first year participating in Google Summer of Code, but you can be assured it will not be the last. We'd like to thank all of the students for flooding our channels and making this not only an amazingly crazy and busy time but also an immensely rewarding period in Bookie's history. You are all part of Bookie's history, and we look forward to seeing you as part of Bookie's future. Thank you.

Juju Quickstart and the power of bundles

The Juju UI team has been hard at work making it even easier for you to get started with Juju. We've got a new tool for everyone that is appropriately named Juju Quickstart and when you combine it with the power of Juju bundles you're in for something special.

Quickstart is a Juju plugin that aims to help you get up and running with Juju faster than any set of commands you can copy and paste. First, to use Quickstart you need to install it. If you're on the upcoming Ubuntu Trusty release, it's already there for you. If you're on an older version of Ubuntu, you need to add the Juju stable PPA:

sudo add-apt-repository ppa:juju/stable
sudo apt-get update

Installing Quickstart is then just:

sudo apt-get install juju-quickstart

Once you've got Quickstart installed you are ready to use it to deploy Juju environments. Just run it with `juju-quickstart`. Quickstart will then open a window to help walk you through setting up your first cloud environment using Juju.

Quickstart can help you configure and setup clouds using LXC (for local environments), OpenStack (which is used for HP Cloud), Windows Azure, and Amazon EC2. It knows what configuration data is required for each cloud provider and provides hints on where to find the information you’ll need.

Once you've configured your cloud provider, Quickstart will bootstrap a Juju environment on it for you. This takes a while on live clouds since instances are being brought up.

Quickstart does a couple of things to make the environment nicer than your typical bootstrap. First, it will automatically install the Juju GUI for you. It does this on the first machine brought up in the environment so that it's co-located, which means it comes up much faster and doesn't incur the cost of a separate machine. Once the GUI is up and running, Quickstart will automatically launch your browser and log you into the GUI. This saves you from having to copy and paste your admin secret to log in.

If you would like to setup additional environments you can re-launch Quickstart at any time. Use juju-quickstart -i to get back to the guided setup.

Once the environment is up, Quickstart still helps you out by providing a shortcut back to the running Juju GUI. It will auto-launch your browser, find the right IP address of the GUI, and log you in automatically. Come back the next day and Quickstart is still the fastest way to get back into your environment.

Finally, Quickstart works great with the new Juju charm bundles feature. A bundle is a set of services with a specific configuration and their corresponding relations that can be deployed together via a single step. Instead of deploying a single service, they can be used to deploy an entire workload, with working relations and configuration. The use of bundles allows for easy repeatability and for sharing of complex, multi-service deployments. Quickstart can accept a bundle and will deploy that bundle for you. If the environment is not bootstrapped it will bring up the environment, install the GUI, and then deploy the bundle.
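
If you haven't seen one, a bundle is just a yaml description of services and their relations. Here's a minimal hand-written sketch; the charm URLs are illustrative, and the bundle format details have varied across Juju releases:

services:
  mediawiki:
    charm: "cs:precise/mediawiki"
    num_units: 1
  mysql:
    charm: "cs:precise/mysql"
    num_units: 1
relations:
  - ["mediawiki:db", "mysql:db"]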

For instance, here is the one command needed to deploy a bundle that we’ve created and shared:

juju-quickstart bundle:~jorge/mongodb-cluster/1/mongodb-cluster

If the environment is already bootstrapped and running, then Quickstart will just deploy the bundle. The two features together work great for testing repeatable deployments. What's great is that the power of Juju means you can test this deployment on multiple clouds effortlessly. For instance, you can design and configure your bundle locally via LXC and, when satisfied, deploy it to a real environment simply by changing the environment command-line option when launching Quickstart.

Try out Quickstart and bundles and let us know what you think. Feel free to hop into our irc channel #juju on Freenode if you've got any questions. We're happy to help.

Make sure to check out Mat's great YouTube video walk through as well over on the Juju GUI blog.

Bookie Sprint - Aug 31st

It's time for another Bookie sprint!

When - Saturday, August 31st

What time - Starts at 11am

Where - my house! Ping me for address/map info if you're coming along. It maps out to Clarkston, MI.

What will we be working on?

The goal is to work on test coverage and article parsing with breadability. Are you new to application testing? Come out and learn while helping out an open source project.

If you want to participate online, please join our IRC channel #bookie on freenode.net. If there's something else you'd rather work on, please let me know and I'll be happy to do whatever I can to aid your participation.