
Call for testing: Shared services with Juju

Juju has long provided the model as the best way to describe your infrastructure in a cross-cloud, repeatable way. Oftentimes, though, your infrastructure includes shared resources that live outside of the individual models you operate. Examples might be a shared object storage service providing space for everyone to back up important data, or a shared Nagios instance providing the single pane of glass that operators need to make sure that all is well in the data center.

Juju 2.2 provides a new feature, behind a feature flag, that we’d like to ask folks to test. It’s called Cross Model Relations, and it builds on one of Juju’s great unique features: relations. Relations allow the components of your architecture to coordinate with each other, passing along the information required to operate. That could be as simple as exchanging IP addresses so that config files can be written and a front-end application can speak to the back-end service correctly, or as complicated as passing actual payloads of data back and forth.
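
For example, within a single model, wiring a front end to its database is a one-liner (using the same charms and endpoints we’ll see later in this walkthrough):

$ juju relate wordpress:db mysql:db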

Cross Model Relations allows these relations to take place beyond the boundary of the current model. The idea is that I might have a centrally operated service that is made available to other models. Let’s walk through an example of this by providing a centrally operated MySQL service to other folks in the company. As the MySQL expert in our hypothetical company I’m going to create a model that has a scaled out, monitored, and properly sized MySQL deployment. 

First, we need to enable the CMR (Cross Model Relations) feature flag. To use a feature flag in Juju, we export the environment variable JUJU_DEV_FEATURE_FLAGS.

$ export JUJU_DEV_FEATURE_FLAGS=cross-model
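
The flag only applies to shells where it’s exported, so to keep it set across sessions, one option (assuming a bash setup) is:

$ echo 'export JUJU_DEV_FEATURE_FLAGS=cross-model' >> ~/.bashrc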

Next we need to bootstrap the controller we’re going to test this out on. I’m going to use AWS for our company today.

$ juju bootstrap aws crossing-models

Once that’s done, let’s set up our production-grade MySQL service.

$ juju add-model prod-db
$ juju deploy mysql --constraints "mem=16G root-disk=1T"
$ juju deploy nrpe ...and more to make this a scale-out MySQL model
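
What “more” looks like depends on your deployment. Purely as an illustrative sketch (the unit counts, charms, and relations here are hypothetical), a scaled-out, monitored model might continue along these lines:

$ juju add-unit mysql -n 2
$ juju deploy nagios
$ juju relate nrpe mysql
$ juju relate nagios nrpe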

Now that we’ve got a properly scaled MySQL service going, let’s look at offering that database to other models using the new juju offer command.

$ juju offer mysql:db mysqlservice
Application "mysql" endpoints [db] available at "admin/prod-db.mysqlservice"

We’ve offered the db endpoint that the MySQL application provides out to other models. The only bit of our entire prod-db model that’s exposed to other folks is the endpoint we’ve selected to provide. You might provide a proxy or load balancer endpoint to other models in the case of a web application, or you might provide both a db and a Nagios web endpoint if you want consumers to be able to query the current status of your monitored MySQL instance. There’s nothing preventing multiple endpoints, from one or more applications, from being offered.
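
For example, if this model also ran Nagios, you could offer its web interface alongside the database (the application and endpoint names here are hypothetical):

$ juju offer nagios:website nagiosdash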

Also note that a URL is generated to reference this endpoint. We can ask Juju to tell us about offers that are available for use.

$ juju find-endpoints
URL                         Access  Interfaces
admin/prod-db.mysqlservice  admin   mysql:db

Now that we’ve got a database, let’s find some uses for it. We’ll set up a blog for the engineering team using WordPress, which leverages a MySQL database back end. Let’s set up the blog model and give the team a user account for managing it.

$ juju add-model engineering-blog
$ juju add-user engineering-folks
$ juju grant engineering-folks write engineering-blog

Now they’ve got their own model for managing their blog. If they’d like, they can set up caching, load balancing, etc. However, we’ll let them know to use our database, where we’ll manage backups, scaling, and monitoring.

$ juju deploy wordpress
$ juju expose wordpress
$ juju relate wordpress:db admin/prod-db.mysqlservice

This now sets up some interesting things in the status output. 

$ juju status
Model              Controller       Cloud/Region   Version  SLA
engineering-blog   crossing-models  aws/us-east-1  2.2.1    unsupported

SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
wordpress           active      1  wordpress  jujucharms    5  ubuntu

Unit          Workload  Agent  Machine  Public address  Ports   Message
wordpress/0*  active    idle   0        54.237.120.126  80/tcp

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  54.237.120.126  i-0cd638e443cb8441b  trusty  us-east-1a  running

Relation      Provides      Consumes   Type
db            mysqlservice  wordpress  regular
loadbalancer  wordpress     wordpress  peer

Notice the new SAAS section above App. What we’ve done is provide a SAAS-like offering of a MySQL service to users. The end users can see they’re leveraging the offered service, and the relation is noted in the Relation section. With that, our blog is up and running.

We can repeat the same process for a team wiki using MediaWiki, which will also use a MySQL database backend. While setting it up, notice how the mediawiki unit complains that a database is required in the first status output. Once we add the relation to the offered service, it moves to an active status.

$ juju add-model wiki
$ juju deploy mediawiki
$ juju status
...
Unit          Workload  Agent  Machine  Public address  Ports  Message
mediawiki/0*  blocked   idle   0        54.160.86.216          Database required

$ juju relate mediawiki:db admin/prod-db.mysqlservice
$ juju status
...
SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
mediawiki  1.19.14  active      1  mediawiki  jujucharms    9  ubuntu
...

Relation  Provides   Consumes      Type
db        mediawiki  mysqlservice  regular

We can prove things are working by actually checking out the databases in our MySQL instance. Let’s go peek and see that they’re real.

$ juju switch prod-db
$ juju ssh mysql/0
mysql> show databases;
+-----------------------------------------+
| Database                                |
+-----------------------------------------+
| information_schema                      |
| mysql                                   |
| performance_schema                      |
| remote-05bd1dca1bf54e7889b485a7b29c4dcd |
| remote-45dd0a769feb4ebb8d841adf359206c8 |
| sys                                     |
+-----------------------------------------+
6 rows in set (0.00 sec)
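
If you’re curious, you can dig a little further and list the tables a consuming model has created (standard MySQL syntax; output omitted):

mysql> show tables in `remote-05bd1dca1bf54e7889b485a7b29c4dcd`;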

There we go. Two remote-xxxx databases, one for each of the models using our shared service. This is going to make operating our infrastructure at scale so much better!

Please go out and test this. Let us know what use cases you find for it and what the next steps should be as we prepare this feature for general use. You can find us in the #juju IRC channel on Freenode or on the Juju mailing list, and you can find me at @mitechie.

Current limitations

As this is a new feature, it’s limited to working within a single Juju controller. It’s also a work in progress, so please watch out for bugs as they get fixed and UX that might get tweaked as we get feedback, and note that upgrading a controller with CMR to a newer version of Juju is not currently supported.


Upgrading Juju using model migrations

Since Juju 2.0 there's been a feature, model migrations, intended to help provide a bulletproof upgrade process. The operator stays in control throughout, and numerous sanity checks help provide confidence along the upgrade path. Model migrations allow an operator to bring up a new controller on a new version of Juju and then migrate models from an older controller one at a time. A migration puts the agents into a quiet state and queues any changes that want to take place. The model's state is then dumped out into a standard format and shipped to the new controller. The new controller loads that state and verifies it matches by checking it against the state from the older controller. Finally, the agents on each managed machine are checked to make sure they can communicate with the new controller and that any state matches expectations before those agents update themselves to report to the new controller for duty.

Once this is all complete the handoff is finished, and the old controller can be taken down after the last model has been migrated away. To show how this works, I've got a controller running Juju 2.1.3, and we're going to upgrade the models running on that controller by migrating them to a brand new Juju 2.2 controller.

One thing to remember is that Juju controllers are the kings of state. Juju is an event-based system: an agent runs on each managed machine or cloud instance and communicates with the controller. Events from those agents are processed, and the controller updates the state of applications, triggers future events, or just takes note of messages in the system. When we talk about migrating a model, we're only moving where that state is managed. None of the workloads are moved. All instances and machines stay exactly where they are, and there's no impact on the workloads themselves.
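
Before we walk through it, the general shape of the command is simply:

$ juju migrate <model-name> <target-controller>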

$ juju models -c juju2-1
Controller: juju2-1

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
gitlab      aws/us-east-1  available         2      2  admin   49 seconds ago
k8s*        aws/us-east-1  available         3      2  admin   39 seconds ago

This is our controller running Juju 2.1.3, and it has on it a pair of models running important workloads: one running a Kubernetes workload and the other running GitLab to host our team's source code. Let's upgrade to the new Juju 2.2 release. The first thing we need to do is bootstrap a new controller to move the models to.

[Screenshot: GitLab running in the gitlab model to host my team's source code.]

First, we upgrade our local Juju client to Juju 2.2 by getting the updated snap from the stable channel.

$ sudo snap refresh juju --classic
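
You can confirm the client picked up the new release with juju version, which should now report a 2.2 version string:

$ juju version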

Now we can bootstrap a new controller, making sure to match the cloud and region of the models we want to migrate. They were in AWS in the us-east-1 region, so we'll need to make sure to bootstrap there.

$ juju bootstrap aws/us-east-1 juju2-2

Looking at this controller, we have the two out-of-the-box models a new controller always has.

$ juju models
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   2 seconds ago
default*    aws/us-east-1  available         0      -  admin   8 seconds ago

To migrate models to this new controller we need to be on the older controller. 

$ juju switch juju2-1

With that done we can now ask Juju to migrate the gitlab model to our new controller. 

$ juju migrate gitlab juju2-2
Migration started with ID "44d8626e-a829-48f0-85a8-bd5c8b7997bb:0"

The migration kicks off, and an ID is provided as a way of tracking among potentially several migrations going on. If a migration fails for any reason, the model resumes its previous state and we can make corrections and try again. Watching the output of juju status while the migration processes is interesting. Once it's done, you'll find that status errors out because the model is no longer there.
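
One way to keep an eye on it while it runs (a sketch using the standard watch utility):

$ watch -n 5 juju status -m juju2-1:gitlab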

$ juju list-models -c juju2-1
Controller: juju2-1
    
Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
k8s         aws/us-east-1  available         3      2  admin   54 minutes ago

Here we can see our model is gone! Where did it go?

$ juju list-models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   45 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago

There we go, it's over on the new juju2-2 controller. The controller managing state is on Juju 2.2, but now it's time to update the agents running on each of the machines/instances to the new version as well.

$ juju upgrade-juju -m juju2-2:gitlab
started upgrade to 2.2

$ juju status -m juju2-2:gitlab | head -n 2
Model   Controller  Cloud/Region   Version  SLA
gitlab  juju2-2     aws/us-east-1  2.2      unsupported

Our model is now running entirely on Juju 2.2, and we can use any of the new features that are useful to us against this model and the applications in it. With that upgraded, let's go ahead and finish up by migrating the Kubernetes model. The process is exactly the same as for the gitlab model.

$ juju migrate k8s juju2-2
Migration started with ID "2e090212-7b08-494a-859f-96639928fb02:0"

...one, two, skip a few...

$ juju models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   47 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago
k8s         aws/us-east-1  available         3      2  admin   57 minutes ago

$ juju upgrade-juju -m juju2-2:k8s
started upgrade to 2.2

The controller juju2-1 is no longer required since it's not in control of any models. There's no state for it to track and manage any longer. 

$ juju destroy-controller juju2-1

Give model migrations a try and keep your Juju up to date. If you hit any issues, the best place to look is the juju debug-log output from the controller model.

$ juju switch controller
$ juju debug-log
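
If the migration has already finished or failed, you can replay the log from the beginning and filter for migration messages (the grep pattern here is just illustrative):

$ juju debug-log --replay | grep -i migration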

Make sure to reach out and let us know if it works well or if you hit a bug in the system. You can also use model migrations to move models between controllers running the same version of Juju, to help balance load or for maintenance purposes. You can find the team in #juju on Freenode or send a message to the mailing list.