
Think cloud portable: let Applications drive the Model

In our last post, an intro to Modeling with Juju, we didn't pay any attention to the hardware needed to run our workloads. We ran with Juju's default values for the hardware characteristics of the cloud instances. Rather than defining a bunch of YAML about each machine needed to run the infrastructure, Juju picks sane defaults that let you prototype and test things out with a lot less work.

The reason this is important is that Juju is a modeling tool. When it comes time to make decisions about your hardware infrastructure, it's key to focus on the needs of the Applications in your Model. If we're going to operate PostgreSQL in production then we need to look at the needs of PostgreSQL. How much disk space do we need to adequately store our data and leave room for backups and the like? How much CPU power does our PostgreSQL instance need? How much memory is required to give PostgreSQL the best chance of serving queries from data already in memory?

All of these questions fall under the characteristics required to run a specific Application, and the answers will be different for each Application we operate as part of our infrastructure. Juju shines here because it's built around the Cloud idea of infrastructure as an API. There are no plugins required. Juju is natively put together so that whenever you deploy an Application, the underlying Cloud API calls are made to make sure you get the right type of hardware. Infrastructure is brought up as needed, and there's no requirement to pre-load software on the machines before you use them.

Juju provides a tool called “Constraints” to manage these details in the Model. In their simplest form, a Constraint is just an option you can pass during the deploy command. Let’s compare the outcome of these two commands:

$ juju deploy postgresql
$ juju deploy postgresql pgsql-constrained --constraints mem=32G

$ juju status
…
App                Version  Status  Scale  Charm       Store       Rev  OS      Notes
pgsql-constrained  9.5.9    active      1  postgresql  jujucharms  163  ubuntu
postgresql         9.5.9    active      1  postgresql  jujucharms  163  ubuntu

Unit                  Workload  Agent  Machine  Public address   Ports     Message
pgsql-constrained/0*  active    idle   1        35.190.190.119   5432/tcp  Live master (9.5.9)
postgresql/0*         active    idle   0        104.196.135.211  5432/tcp  Live master (9.5.9)

Machine  State    DNS              Inst id        Series  AZ          Message
0        started  104.196.135.211  juju-dd8186-0  xenial  us-east1-b  RUNNING
1        started  35.190.190.119   juju-dd8186-1  xenial  us-east1-c  RUNNING

From juju status these commands don't look like they did much differently. They've both introduced a PostgreSQL server into our Model, available to take part in Relations with other Applications and provide them with a database to use. Juju encourages focusing on what's running at the Application level over the minutiae of hardware. However, if we compare the machine details we can see these commands definitely changed PostgreSQL's ability to perform for us.

$ juju show-machine 0
machines:
  "0":
    ...
    hardware: arch=amd64 cores=1 cpu-power=138 mem=1700M root-disk=10240M availability-zone=us-east1-b

$ juju show-machine 1
machines:
  "1":
    ...
    constraints: mem=32768M
    hardware: arch=amd64 cores=8 cpu-power=2200 mem=52000M root-disk=10240M availability-zone=us-east1-c

Notice the different output? The constrained machine knows that the Model states there's an active Constraint. We want 32G of memory for this PostgreSQL Application, and if we were to run a juju add-unit command the next machine would know there's an active Constraint and respect it. In this way Constraints are part of the active Model. Even if we were to remove the pgsql-constrained Unit, the Constraint would still be there as part of the Model.
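We can see the Constraint recorded against the Application itself, and adding a Unit picks it up automatically. A minimal sketch using standard Juju 2.x commands; the echoed value is illustrative:

$ juju add-unit pgsql-constrained
$ juju get-constraints pgsql-constrained
mem=32768M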

The next thing to note is that we've got more memory than we asked for. We wanted our PostgreSQL to have 32G of RAM versus the 1.7G of the default instance type. To do that, Juju had to request a larger instance, which also brought in 8 cores, and we ended up with 52G of RAM. Looking at the GCE instance types available, n1-standard-8 only has 30G of memory. So Juju went looking for a better match and pulled up the n1-highmem-8 instance type. In this way we receive the best-matching instance that meets or exceeds the Constraints we've requested. Juju walks through all the various instance types to do this.

As users, we focus on what our Application needs to run at its best, and Juju takes care of finding the most cost-effective instance type that meets those goals. This is key to keeping our Model cloud agnostic while maintaining performance expectations, regardless of what Cloud we use the Model on. Let's try this same deploy on AWS and see what happens.

$ juju add-model pgsql-aws aws
$ juju deploy postgresql pgsql-on-aws --constraints mem=32G
$ juju show-machine 0
machines:
  "0":
    ...
    constraints: mem=32768M
    hardware: arch=amd64 cores=8 cpu-power=2800 mem=62464M root-disk=8192M availability-zone=us-east-1a

Notice here we ended up with an instance that's got 8 cores and 62G of RAM. That's the best-fit instance type on AWS for the Constraints we need to make sure our system performs at its best.

Other Constraints

As you'd expect, we can set other Constraints to customize the characteristics of the hardware our Applications leverage. Let's start with the number of CPU cores. This is vital in today's world of applications that can put those extra cores to use. It's also key when we're going to run LXD machine containers or other VM technology on a machine; keeping a core per VM is a great way to leverage the hardware available.

$ juju deploy postgresql pgsql-4core --constraints cores=4
$ juju show-machine 2
machines:
  "2":
    ...
    constraints: cores=4
    hardware: arch=amd64 cores=4 cpu-power=1100 mem=3600M root-disk=10240M availability-zone=us-east1-d

Voila, Juju gets us the best instance for those requirements. The most cost-effective instance available with 4 cores provides 3.6G of memory.

root-disk sizing

One thing that's not set up as part of the instance type is the disk space allocated to the instance. Some Applications are mostly concerned with CPU, but others want to track state and store it on disk. We can specify the size of the disk the instances are allocated through the root-disk Constraint.

$ juju deploy postgresql pgsql-bigdisk --constraints root-disk=1T
$ juju show-machine 3
machines:
  "3":
    ...
    constraints: root-disk=1048576M
    hardware: arch=amd64 cores=1 cpu-power=138 mem=1700M root-disk=1048576M availability-zone=us-east1-b

The root-disk Constraint only means that Juju will make sure the instance has a Volume of that size when it comes up. This is especially useful for Big Data Applications and Databases.

Remembering Constraints in the Model

From here let's combine our Constraints and produce a true production-grade PostgreSQL server. Then we'll create the bundle that will deploy this in a repeatable fashion we can reuse on any Cloud we want to take it to.

$ juju deploy postgresql pgsql-prod --constraints "cores=4 mem=24G root-disk=1T"

The equivalent bundle looks like this:

series: xenial
applications: 
  "pgsql-prod": 
    charm: "cs:postgresql-163"
    num_units: 1
    constraints: "cores=4 mem=24576 root-disk=1048576"

Constraints on Controllers

Once we understand how Constraints work, one area where they're really valuable is sizing our own Juju Controllers. If we're operating a true multi-user Controller serving many Models to the folks in our business, we'll want to move up from the default instance sizes. Juju is conservative so that folks testing out and experimenting with Juju don't end up running much larger instances than expected. There are a lot more test or trial Controllers out there than Controllers running true production; it's much like any product really.

Controller Constraints are supplied during bootstrap using the --bootstrap-constraints flag. For example:

$ juju bootstrap --bootstrap-constraints="mem=16G cores=4 root-disk=400G" google

Some considerations when you’re operating a controller include: 

  • root-disk - Juju leverages a persistent database and tracks logs and such in that database. You want to make sure to provide space for running Juju backups, for storage of Charms and Resources used in Models (they're cached on the Controller), for logs that are rotated over time, etc.
  • mem - as with any database, the more it can fit into memory the better, and the quicker it'll respond to events through the system. During events such as upgrades and migrations there can be a lot of memory consumption as entire Models are processed as part of those actions.
  • cores - as you have more clients and Units talking over the API, the more cores you have and the faster it can respond the better. Here core count is more important than raw CPU speed, as most of the work is fetching and processing documents rather than pure computation.
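Once the Controller is bootstrapped, we can confirm the hardware it received. A minimal sketch; show-machine works against the controller Model the same as any other, though the hardware line here is illustrative:

$ juju show-machine 0 -m controller
machines:
  "0":
    ...
    constraints: cores=4 mem=16384M root-disk=409600M
    hardware: arch=amd64 cores=4 mem=16384M root-disk=409600M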

Cloud-specific Constraints

There are also cloud-specific Constraints. MAAS, for instance, allows deploying via tags, because MAAS supports tags as a way of classifying bare metal machines; you might optimize one hardware purchase for network applications, another for storage applications, etc. Make sure to check out the documentation to see all available Constraints and which Clouds support them.
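For instance, on a MAAS Cloud we could target machines tagged for storage-heavy workloads. A sketch, assuming a MAAS Cloud whose machines carry a hypothetical storage tag:

$ juju deploy postgresql --constraints tags=storage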

The great thing about the Constraint system is that it keeps the focus on the requirements of the Applications and what they actually need to get their jobs done. We're not trying to force an Application onto existing hardware that might not fit its operating profile. It's an idea that really excels in the IaaS Cloud world.

Give it a try and if you have any questions or hit any problems let us know. You can find us in the #juju IRC channel on Freenode or on the Juju mailing list. You can reach me on Twitter. Don’t hesitate to reach out. 

Call for testing: Shared services with Juju

Juju has long provided the Model as the best way to describe your infrastructure in a cross-cloud, repeatable way. Often, though, your infrastructure includes shared resources that sit outside the different Models being operated. Examples might be a shared object storage service providing space for everyone to back up important data, or a shared Nagios resource providing the single pane of glass operators need to make sure all is well in the data center.

Juju 2.2 provides a new feature, behind a feature flag, that we'd like folks to test. It's called Cross Model Relations and builds upon one of Juju's great unique features: relations. Relations allow the components of your architecture to coordinate with each other, passing the information required to operate. That could be as simple as exchanging IP addresses so that config files can be written and a front end application can speak to the back end service correctly, or as complicated as passing actual payloads of data back and forth.

Cross Model Relations allow these relations to reach beyond the boundary of the current Model. The idea is that I might have a centrally operated service that is made available to other Models. Let's walk through an example by providing a centrally operated MySQL service to other folks in the company. As the MySQL expert in our hypothetical company, I'm going to create a Model that has a scaled-out, monitored, and properly sized MySQL deployment.

First, we need to enable the CMR (Cross Model Relations) feature flag. To use a feature flag in Juju we export an environment variable JUJU_DEV_FEATURE_FLAGS.

$ export JUJU_DEV_FEATURE_FLAGS=cross-model

Next we need to bootstrap the controller we’re going to test this out on. I’m going to use AWS for our company today.

$ juju bootstrap aws crossing-models

Once that’s done let’s setup our production grade MySQL service.

$ juju add-model prod-db
$ juju deploy mysql --constraints "mem=16G root-disk=1T"
$ juju deploy nrpe......and more to make this a scale out mysql model

Now that we've got a properly scaled MySQL service going, let's look at offering that database to other Models with the new juju offer command.

$ juju offer mysql:db mysqlservice
Application "mysql" endpoints [db] available at "admin/prod-db.mysqlservice"

We've offered the db endpoint that the MySQL Application provides to other Models. The only bit of our entire prod-db Model that's exposed to other folks is the endpoint we've selected to provide. You might provide a proxy or load balancer endpoint to other Models in the case of a web application, or you might provide both a db and a Nagios web endpoint if you want consumers to be able to query the current status of your monitored MySQL instance. There's nothing preventing multiple endpoints, from one or more Applications, from being offered out there.
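A sketch of what offering a second endpoint can look like, assuming a nagios Application deployed in the same Model (the offer name and its website endpoint are illustrative):

$ juju offer nagios:website mysql-monitoring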

Also note that there’s a URL generated to reference this endpoint. We can ask Juju to tell us about offers that are available for use. 

$ juju find-endpoints
URL                         Access  Interfaces
admin/prod-db.mysqlservice  admin   mysql:db

Now that we've got a database, let's find some uses for it. We'll set up a blog for the engineering team using WordPress, which leverages a MySQL db back end. Let's set up the blog Model and give them a user account for managing it.

$ juju add-model engineering-blog
$ juju add-user engineering-folks
$ juju grant engineering-folks write engineering-blog

Now they've got their own Model for managing their blog. If they'd like, they can set up caching, load balancing, etc. However, we'll let them know to use our database, where we'll manage db backups, scaling, and monitoring.

$ juju deploy wordpress
$ juju expose wordpress
$ juju relate wordpress:db admin/prod-db.mysqlservice

This now sets up some interesting things in the status output. 

$ juju status
Model              Controller       Cloud/Region   Version  SLA
engineering-blog   crossing-models  aws/us-east-1  2.2.1    unsupported

SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
wordpress           active      1  wordpress  jujucharms    5  ubuntu

Unit          Workload  Agent  Machine  Public address  Ports   Message
wordpress/0*  active    idle   0        54.237.120.126  80/tcp

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  54.237.120.126  i-0cd638e443cb8441b  trusty  us-east-1a  running

Relation      Provides      Consumes   Type
db            mysqlservice  wordpress  regular
loadbalancer  wordpress     wordpress  peer

Notice the new section above App called SAAS. What we've done is provide a SaaS-like offering of a MySQL service to users, and the end users can see they're leveraging the offered service. On top of that, the relation is noted in the Relation section. With that, our blog is up and running.

We can repeat the same process for a team wiki using Mediawiki, which also uses a MySQL database back end. While setting it up, notice how the Mediawiki unit complains that a database is required in the first status output. Once we add the relation to the offered service, it moves to active.

$ juju add-model wiki
$ juju deploy mediawiki
$ juju status
...
Unit          Workload  Agent  Machine  Public address  Ports  Message
mediawiki/0*  blocked   idle   0        54.160.86.216          Database required

$ juju relate mediawiki:db admin/prod-db.mysqlservice
$ juju status
...
SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
mediawiki  1.19.14  active      1  mediawiki  jujucharms    9  ubuntu
...

Relation  Provides   Consumes      Type
db        mediawiki  mysqlservice  regular

We can prove things are working by actually checking out the databases in our MySQL instance. Let’s just go peek and see they’re real. 

$ juju switch prod-db
$ juju ssh mysql/0
mysql> show databases;
+-----------------------------------------+
| Database                                |
+-----------------------------------------+
| information_schema                      |
| mysql                                   |
| performance_schema                      |
| remote-05bd1dca1bf54e7889b485a7b29c4dcd |
| remote-45dd0a769feb4ebb8d841adf359206c8 |
| sys                                     |
+-----------------------------------------+
6 rows in set (0.00 sec)

There we go. Two remote-xxxx databases, one for each of the Models using our shared service. This is going to make operating our infrastructure at scale so much better!

Please go out and test this. Let us know what use cases you find and what the next steps should be as we prepare this feature for general use. You can find us in the #juju IRC channel on Freenode, on the Juju mailing list, and you can find me at @mitechie.

Current limitations

As this is a new feature, it's currently limited to working within a single Juju Controller. It's also a work in progress, so please watch out for bugs as they get fixed and UX that might get tweaked as we get feedback, and note that upgrading a Controller with CMR enabled to a newer version of Juju is not currently supported.


Open Source software and operational usability

A friend of mine linked me to Yaron Haviv's article "Did Amazon Just Kill Open Source" and I can't help but want to shout a reply:

NO!

There is something to the premise though, and it's something I keep trying to push, as it's directly related to the work I do in my day job at Canonical.

Clouds have been shifting from IaaS to SaaS as sure as can be. They've gone from just getting you quick access to hardware on a "pay for what you use" system to providing full self-service access to the applications you used to have to run on that hardware. They've been doing this by taking Open Source software, wrapping it with their own API layers, and charging you for operating your essential services.

When I look at RDS I see lovely images for PostgreSQL, MySQL, and MariaDB. I can find Elasticsearch, Hadoop, and more. It's a good carrot. Use our APIs and you don't have to operate that software on the EC2 platform any more. You don't have to worry about software upgrades, backups, or scale-out operational concerns. Amazon isn't the only cloud doing this either. Each cloud is finding the services it can provide directly to the developers who build products on it.

They've taken Open Source software and fixed delivery to make it easy to consume for most folks. That directly ties into the trend we have been working on at Canonical. As software has moved more and more to Open Source and the cost of software in the average IT budget drops, the costs of finding folks that can operate it at production scale has gotten much more expensive. How many folks can really say they can run Hadoop and Kubernetes and Elasticsearch at professional production levels? How hard is it to find them and how expensive is it to retain them? 

We need to focus on how we can provide this same service, but straight from the Open Source communities. If you want users of your software to be users of YOUR software, and not some wrapped API service, then you need to take those operational concerns into account. We can do more than Open Source the code that runs; we can work together as a community to Open Source the best practices for operating that software over its full life cycle. Too often, projects stop short at how to install the software, leaving the user to worry about it long after it's installed.

I have hope that tools like Juju and Kubernetes can provide a way for the communities around the software we use and love to contribute, and to avoid the lock-in of some vendored solution of the Open Source projects we participate in today.

Three reasons you need a quick VPN in your pocket.

Recent news that the government has repealed regulations preventing the sale of customer browsing habits has some folks thinking about their internet use and privacy a bit more than usual. Most of us assume that the things we do at home on our own devices are pretty safe from being shared with others. The news has caused a rash of articles about running your own VPN, and as these kept crossing my RSS feeds I got to thinking that this is the perfect use case for Juju and JAAS.

Good news! The Tengu team has made it really easy to use Juju to set up your own VPN server. It's nearly as fast as getting an instance from a cloud provider. Sitting here at the coffee shop I timed it: six minutes, including adding it to my client and hitting connect.


The 6 minute VPN setup

How do we do this? We use JAAS, since it's a great way to deploy into any public cloud and, especially, into different regions. I keep my personal VPN in the AWS us-east-2 region since it's physically closest to where I am in Southern Michigan.

juju add-model myvpn aws/us-east-2
juju deploy openvpn
juju expose openvpn
juju config openvpn clients="rick"
juju scp openvpn/0:~/rick.ovpn myvpn.ovpn

This deploys the OpenVPN charm and sets up a config file for "rick" that I can use to connect with a VPN client. On my Mac I use Viscosity, and on Ubuntu I use the Network Manager VPN plugin. Both of these clients can load the .ovpn file you download from the deployed server.
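On Ubuntu the import can also be done from a terminal. A sketch, assuming the network-manager-openvpn package is installed:

% nmcli connection import type openvpn file myvpn.ovpn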

Once connected you can see all of your traffic routed through the VPN securely. 

% ping ubuntu.com
PING ubuntu.com (91.189.94.40): 56 data bytes
64 bytes from 91.189.94.40: icmp_seq=0 ttl=47 time=193.328 ms
64 bytes from 91.189.94.40: icmp_seq=1 ttl=47 time=178.245 ms
64 bytes from 91.189.94.40: icmp_seq=2 ttl=47 time=140.312 ms

% traceroute ubuntu.com
traceroute to ubuntu.com (91.189.94.40), 64 hops max, 52 byte packets
 1  ip-10-200-200-1.us-east-2.compute.internal (10.200.200.1)  48.860 ms  47.036 ms  58.141 ms
 2  ec2-52-15-0-2.us-east-2.compute.amazonaws.com (52.15.0.2)  103.381 ms  64.848 ms
    ec2-52-15-0-6.us-east-2.compute.amazonaws.com (52.15.0.6)  69.651 ms

What's even better is that you can shorten this by automating the deploy, expose, and config steps with a Juju Bundle. I created one that sets up two clients out of the box: one for myself and one for a "guest". If I ever want to add additional clients I can just update the config on the charm.
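A sketch of what such a bundle can look like; the charm reference here is a placeholder, and the clients value assumes the charm accepts a comma-separated list:

series: xenial
applications:
  "openvpn":
    charm: "openvpn"
    num_units: 1
    expose: true
    options:
      clients: "rick,guest"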

A few lines of YAML and a "charm push . cs:~rharding/rickvpn" later, and I've got a one-line VPN deploy at my fingertips. If I deploy before I order my coffee, the VPN is up and ready for use by the time it's done.

Reason #2 - blocked ports on the shared wifi

I promised some other reasons for a VPN, and blocked ports on shared wifi is #2. The Starbucks I'm sitting in has its wifi configured to block port 22, which can be a pain in the rear when you're attempting to work with a lot of cloud instances over SSH. A quick VPN and suddenly the world of SSH is opened back up. Yes, some folks will tell me to change my SSH ports, but when you're working on servers across different clouds it's much more of a pain to change SSH everywhere than to just launch this VPN.

Reason #3 - testing end user experience

I've also found myself working with others across the world. What's always fun is when they're having issues I just can't replicate. We have large numbers of our team in Europe and down in New Zealand and Australia. As you can imagine, their load times for things is a bit different than my midwest connection to things coming out of US based networks. Given the breadth of cloud regions these days, it's actually not as hard as it seems to replicate the experience that the remote users are seeing. I can easily throw up a Europe based VPN and force myself to test things through it. Suddenly I can see that the timeout we have doesn't work well for users whose bytes go through undersea cables. 

I'm sure you can think of some other uses that a quick VPN would come in handy. Let me know what your favorite uses for the OpenVPN charm are. Reach out on Twitter @mitechie. And thank you Tengu Team for this great OpenVPN charm!