Testing the future of Juju with snaps

Juju 2.3 is under heavy development, and one thing we all want when we're working on the next big release of a software product is feedback from users. Are you solving the problems your users have? Are there bugs in the corner cases that a user can find before the release? Are the performance improvements you made working for everyone the way you expect? The more folks that test the software before it's out, the better off your software will be!

With the recent calls for testing out the Cross Model Relations and Storage Improvements coming in Juju 2.3, I think it's worth pointing out how we can leverage the power of snap channels to test out the upcoming features in Juju.

To get Juju via snaps you can search the snap store and install it like so:

$ snap find juju
$ sudo snap install --classic juju

This then drops the Juju binary in the /snap/bin directory. 

$ /snap/bin/juju --version
2.2.2-zesty-amd64

That's great that we've got the latest stable version of Juju. Let's see what other versions we can get access to. 

Let's try to use the new storage flag on the deploy command that Andrew points out in his blog post.

$ /snap/bin/juju deploy --attach-storage
ERROR flag provided but not defined: --attach-storage

Bummer! That isn't in the stable release of Juju yet. Note that it calls out the flag as not being defined. Let's see if we can get access to a more bleeding edge Juju.

$ snap info juju
name:      juju
summary:   "juju client"
publisher: canonical
contact:   http://jujucharms.com
description: |
  Through the use of charms, juju provides you with shareable, re-usable, and
  repeatable expressions of devops best practices.
commands:
  - juju
tracking:    stable
installed:   2.2.2 (2142) 25MB classic
refreshed:   2017-07-13 16:20:52 -0400 EDT
channels:                                      
  stable:    2.2.2                      (2142) 25MB classic
  candidate: 2.2.2                      (2142) 25MB classic
  beta:      2.2.3+2.2-9909aa4          (2180) 43MB classic
  edge:      2.3-alpha1+develop-1f3f66e (2187) 43MB classic

There we can see that the edge channel has an upcoming 2.3-alpha release in it. Let's switch to it and test out what's coming in Juju 2.3.

$ sudo snap refresh --edge juju
juju (edge) 2.3-alpha1+develop-1f3f66e from 'canonical' refreshed

$ /snap/bin/juju --version
2.3-alpha1-zesty-amd64

Now let's check out that command Andrew was talking about with the storage feature in Juju 2.3. 

$ /snap/bin/juju deploy --attach-storage
ERROR flag needs an argument: --attach-storage

There we go, now we've got access to the upcoming storage features in Juju 2.3 and we can provide great feedback to the dev team. 

After we're done testing and providing that feedback we can easily switch back to using the stable release for our normal work. 

$ sudo snap refresh --stable juju
juju 2.2.2 from 'canonical' refreshed

Give it a try, check out the latest in the upcoming 2.3 work, file bugs, send feedback, and be ready to leverage the great work that much sooner.

Call for testing: Shared services with Juju

Juju has long provided the model as the best way to describe your infrastructure in a cross-cloud and repeatable way. Oftentimes, your infrastructure includes shared resources that sit outside of the different models being operated. Examples might be a shared object storage service providing space for everyone to back up important data, or perhaps a shared Nagios resource providing the single pane of glass that operators need to make sure that all is well in the data center.

Juju 2.2 provides a new feature behind a feature flag that we'd like to ask folks to test. It's called Cross Model Relations and builds upon relations, one of Juju's great unique features. Relations allow the components of your architecture to coordinate with each other, passing along the information required to operate. That could be as simple as exchanging IP addresses so that config files can be written and a front end application can speak to the back end service correctly. It could also be as complicated as passing actual payloads of data back and forth.

Cross Model Relations allows these relations to take place beyond the boundary of the current model. The idea is that I might have a centrally operated service that is made available to other models. Let’s walk through an example of this by providing a centrally operated MySQL service to other folks in the company. As the MySQL expert in our hypothetical company I’m going to create a model that has a scaled out, monitored, and properly sized MySQL deployment. 

First, we need to enable the CMR (Cross Model Relations) feature flag. To use a feature flag in Juju we export an environment variable JUJU_DEV_FEATURE_FLAGS.

$ export JUJU_DEV_FEATURE_FLAGS=cross-model

Next we need to bootstrap the controller we’re going to test this out on. I’m going to use AWS for our company today.

$ juju bootstrap aws crossing-models

Once that's done, let's set up our production grade MySQL service.

$ juju add-model prod-db
$ juju deploy mysql --constraints "mem=16G root-disk=1T"
$ juju deploy nrpe......and more to make this a scale out mysql model

Now that we've got a properly scaled MySQL service going, let's look at offering that database to other models using the new juju offer command.

$ juju offer mysql:db mysqlservice
Application "mysql" endpoints [db] available at "admin/prod-db.mysqlservice"

We’ve offered to other models the db endpoint that the MySQL application provides. The only bit of our entire prod-db model that’s exposed to other folks is the endpoint we’ve selected to provide. You might provide a proxy or load balancer endpoint to other models in the case of a web application, or you might provide both a db and a Nagios web endpoint out to other models if you want them to be able to query the current status of your monitored MySQL instance. There’s nothing preventing multiple endpoints from one or more applications from being offered out there. 
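
For example, a second offer from the same model could expose a Nagios dashboard alongside the database. This follows the same juju offer pattern shown above; the application and endpoint names here are illustrative, not something deployed in this walkthrough.

# Hypothetical second offer; application and endpoint names are illustrative.
$ juju offer nagios:website nagios-dashboard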

Also note that there’s a URL generated to reference this endpoint. We can ask Juju to tell us about offers that are available for use. 

$ juju find-endpoints
URL                         Access  Interfaces
admin/prod-db.mysqlservice  admin   mysql:db

Now that we've got a database, let's find some uses for it. We'll set up a blog for the engineering team using WordPress, which leverages a MySQL database back end. Let's set up the blog model and give them a user account for managing it.

$ juju add-model engineering-blog
$ juju add-user engineering-folks
$ juju grant engineering-folks write engineering-blog

Now they've got their own model for managing their blog. If they'd like, they can set up caching, load balancing, and so on. However, we'll let them know to use our database, where we'll manage backups, scaling, and monitoring.

$ juju deploy wordpress
$ juju expose wordpress
$ juju relate wordpress:db admin/prod-db.mysqlservice

This now sets up some interesting things in the status output. 

$ juju status
Model              Controller       Cloud/Region   Version  SLA
engineering-blog   crossing-models  aws/us-east-1  2.2.1    unsupported

SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
wordpress           active      1  wordpress  jujucharms    5  ubuntu

Unit          Workload  Agent  Machine  Public address  Ports   Message
wordpress/0*  active    idle   0        54.237.120.126  80/tcp

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  54.237.120.126  i-0cd638e443cb8441b  trusty  us-east-1a  running

Relation      Provides      Consumes   Type
db            mysqlservice  wordpress  regular
loadbalancer  wordpress     wordpress  peer

Notice the new section above App called SAAS. What we’ve done is provided a SAAS-like offering of a MySQL service to users. The end users can see they’re leveraging the offered service. On top of that the relation is noted down in the Relation section. With that our blog is up and running. 

We can repeat the same process for a team wiki using MediaWiki, which will also use a MySQL database back end. While setting it up, notice how the mediawiki unit complains that a database is required in the first status output. Once we add the relation to the offered service it moves to a ready status.

$ juju add-model wiki
$ juju deploy mediawiki
$ juju status
...
Unit          Workload  Agent  Machine  Public address  Ports  Message
mediawiki/0*  blocked   idle   0        54.160.86.216          Database required

$ juju relate mediawiki:db admin/prod-db.mysqlservice
$ juju status
...
SAAS name     Status   Store  URL
mysqlservice  unknown  local  admin/prod-db.mysqlservice

App        Version  Status  Scale  Charm      Store       Rev  OS      Notes
mediawiki  1.19.14  active      1  mediawiki  jujucharms    9  ubuntu
...

Relation  Provides   Consumes      Type
db        mediawiki  mysqlservice  regular

We can prove things are working by actually checking out the databases in our MySQL instance. Let’s just go peek and see they’re real. 

$ juju switch prod-db
$ juju ssh mysql/0
mysql> show databases;
+-----------------------------------------+
| Database                                |
+-----------------------------------------+
| information_schema                      |
| mysql                                   |
| performance_schema                      |
| remote-05bd1dca1bf54e7889b485a7b29c4dcd |
| remote-45dd0a769feb4ebb8d841adf359206c8 |
| sys                                     |
+-----------------------------------------+
6 rows in set (0.00 sec)

There we go: two remote-xxxx databases, one for each of the models using our shared service. This is going to make operating our infrastructure at scale so much better!

Please go out and test this. Let us know what use cases you find for it and what the next steps should be as we prepare this feature for general use. You can find us in the #juju IRC channel on Freenode or on the Juju mailing list, and you can find me at @mitechie.

Current limitations

As this is a new feature it's limited to working within a single Juju controller. It's also a work in progress, so please watch out for bugs as they get fixed and UX that might get tweaked as we get feedback, and note that upgrading a controller with CMR to a newer version of Juju is not currently supported.

Upgrading Juju using model migrations

Since Juju 2.0 there's a new feature, model migrations, intended to help provide a bulletproof upgrade process. The operator stays in control throughout, and numerous sanity checks help provide confidence along the upgrade path. Model migrations allow an operator to bring up a new controller on a new version of Juju and then migrate models from an older controller one at a time. A migration puts the agents into a quiet state and queues up any changes that want to take place. The state is then dumped out into a standard format and shipped to the new controller. The new controller loads the state and verifies it matches by checking it against the state from the older controller. Finally, the agents on each managed machine are checked to make sure they can communicate with the new controller and that any state matches expectations before those agents update themselves to report to the new controller for duty.

Once this is all complete the handoff is finished, and the old controller can be taken down after the last model is migrated away. To show how this works, I've got a controller running Juju 2.1.3 and we're going to upgrade the models running on that controller by migrating them to a brand new Juju 2.2 controller.

One thing to remember is that Juju controllers are the kings of state. Juju is an event-based system, and on each managed machine or cloud instance an agent runs that communicates with the controller. Events from those agents are processed, and the controller updates the state of applications, triggers future events, or just takes note of messages in the system. When we talk about migrating a model, we're only moving where that state is managed. None of the workloads are moved. All instances and machines stay exactly where they are and there's no impact on the workloads themselves.

$ juju models -c juju2-1
Controller: juju2-1

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
gitlab      aws/us-east-1  available         2      2  admin   49 seconds ago
k8s*        aws/us-east-1  available         3      2  admin   39 seconds ago

This is our controller running Juju 2.1.3, and it has a pair of models running important workloads. One is a model running a Kubernetes workload and the other is running Gitlab, hosting our team's source code. Let's upgrade to the new Juju 2.2 release. The first thing we need to do is bootstrap a new controller to move the models to.

Gitlab running in the gitlab model to host my team's source code.

First we upgrade our local Juju client to Juju 2.2 by getting the updated snap from the stable channel. 

sudo snap refresh juju --classic

Now we can bootstrap a new controller making sure to match up the cloud and region of the models we want to migrate. They were in AWS in the us-east-1 region so we'll need to make sure to bootstrap there.

$ juju bootstrap aws/us-east-1 juju2-2

Looking at this controller we have the two out-of-the-box models that a new controller always has.

$ juju models
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   2 seconds ago
default*    aws/us-east-1  available         0      -  admin   8 seconds ago

To migrate models to this new controller we need to be on the older controller. 

$ juju switch juju2-1

With that done we can now ask Juju to migrate the gitlab model to our new controller. 

$ juju migrate gitlab juju2-2
Migration started with ID "44d8626e-a829-48f0-85a8-bd5c8b7997bb:0"

The migration kicks off and an ID is provided as a way of tracking among potentially several migrations going on. If a migration were to fail for any reason, the model would resume its previous state and we could make corrections and try again. Watching the output of juju status while the migration runs is interesting. Once it's done you'll find that status errors because the model is no longer there.
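
One easy way to keep an eye on it while it runs is to wrap the same status command we've been using; the polling interval here is arbitrary.

# Poll the source controller's view of the migrating model every 5 seconds.
$ watch -n 5 juju status -m juju2-1:gitlab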

$ juju list-models -c juju2-1
Controller: juju2-1
    
Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
k8s         aws/us-east-1  available         3      2  admin   54 minutes ago

Here we can see our model is gone! Where did it go?

$ juju list-models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   45 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago

There we go, it's over on the new juju2-2 controller. The controller managing state is on Juju 2.2, but now it's time for us to update the agents running on each of the machines/instances to also be the new version.

$ juju upgrade-juju -m juju2-2:gitlab
started upgrade to 2.2

$ juju status | head -n 2
Model    Controller  Cloud/Region   Version    SLA
default  juju2-2     aws/us-east-1  2.2        unsupported

Our model is now all running on Juju 2.2 and we can use any of the new features that are useful to us against this model and the applications in it. With that upgraded let's go ahead and finish up by migrating the Kubernetes model. The process is exactly the same as the gitlab model. 

$ juju migrate k8s juju2-2
Migration started with ID "2e090212-7b08-494a-859f-96639928fb02:0"

...one, two, skip a few...

$ juju models -c juju2-2
Controller: juju2-2

Model       Cloud/Region   Status     Machines  Cores  Access  Last connection
controller  aws/us-east-1  available         1      1  admin   just now
default*    aws/us-east-1  available         0      -  admin   47 minutes ago
gitlab      aws/us-east-1  available         2      2  admin   1 minute ago
k8s         aws/us-east-1  available         3      2  admin   57 minutes ago

$ juju upgrade-juju -m juju2-2:k8s
started upgrade to 2.2

The controller juju2-1 is no longer required since it's not in control of any models. There's no state for it to track and manage any longer. 

$ juju destroy-controller juju2-1

Give model migrations a try and keep your Juju up to date. If you hit any issues the best place to look is in the juju debug-log from the controller model. 

$ juju switch controller
$ juju debug-log

Make sure to reach out and let us know if it works well or if you hit a bug in the system. You can also use model migrations to move models among the same version of Juju to help balance load or for maintenance purposes. You can find the team in #juju on Freenode or send a message to the mailing list. 

Designing for long term operations, upgrades, and rebalancing

When you're building a tool for a user there's a huge amount of design, decision making, and focus put on getting the user started. There's good reason for this. Users have an ocean full of tools at their disposal, and if you're building something for others to use you need to win the first five minute test.

You've got about a five minute window for a user to make useful headway in understanding your tool and visualizing how it will aid them in their work. Often this manifests in tools that demo well, but that then have to be left behind when the real work hits. I like to use text editors as an example of this. So many users start out with notepad, nano, or some other really light editor. In time, they learn more and the learning curve of something like VIM, Eclipse, or Emacs really pays off. There's a big gap in folks that make that leap though. If you want to get the most folks in the door it needs to have that hit the ground running feel to it.

When you're designing a tool to help users deploy and run software there's a lot of focus on the install process. Nearly every hosting provider has worked with some sort of "one button install" tool. They're great because it gets users started quickly in that five minute window. Over time though, users find that those one click tools end up being very shallow. They need to add users, create new databases, back up data, restart daemons when appropriate, the list goes on and on. Two operational tasks which are particularly interesting are upgrades and rebalancing.

Juju is an operations tool that can track the state of many different models (the state of the operated applications), potentially run by many different users. These models evolve over time and run production workloads for years. We know of many models that are nearly a year and a half old. In that same time Juju has had five 1.2x minor versions and three 2.x minor versions. That means you'd want to keep up with improvements in performance, features, and security by upgrading nearly every other month. Aiding in this, the Juju 2.1 release includes a new feature, known as model migrations, specifically to help operators managing their infrastructure over the long term.

The general idea is that the largest danger is in doing complex upgrade steps, such as database migrations, and making sure that everything running is able to communicate on the new version of the software. In Juju 2.1, the Juju infrastructure that manages the state of the models as well as API connections from clients (known as the "controller") supports model migrations, allowing an operator to bring up a new controller and migrate the model state over to the new version. In that process the data is shipped over, run through any migrations that need to occur, and sanity checked between the old and new controllers, and the system is put together in such a way that if anything fails to check out it can roll back and use the previously running controller. That's some reassurance to lean on when you're doing an important upgrade. Since one controller operates many models, the operator is in control of which models get migrated at what time, which allows for a very controlled rollout to the new software in a way that permits safely checking that all remains in the green as the new version of Juju is adopted.

Another big use case for model migrations is balancing out the load on Juju controllers over time. As organizations grow and change in their needs, it's important to be able to move to new hardware and to shift services that generate heavy load onto dedicated hardware. Juju is tested to perform at thousands of managed machines, but there are dependencies such as the size of the controller machines that track state. Over time, a normal part of a growing organization is to put into play machines with newer CPUs, more memory, or just flat out beefier hardware with more cores.

I'd love to hear about the tools you use, which ones have fallen short in helping you manage your long term operational needs, and what other types of long term operations you want your tools to assist with. Hit me up on Twitter @mitechie.

In a follow up post I'll walk through exactly how to perform a migration to the new Juju 2.2 release using the built in model migration feature.

Open Source software and operational usability

A friend of mine linked me to Yaron Haviv's article "Did Amazon Just Kill Open Source" and I can't help but want to shout a reply

NO!

There is something to the premise though, and it's something I keep trying to push, as it's directly related to the work I do in my day job at Canonical.

Clouds have been shifting from IaaS to SaaS as sure as can be. They've gone from just giving you quick access to hardware on a "pay for what you use" basis to providing full self-service access to the applications you used to have to run on that hardware. They've been doing this by taking Open Source software, wrapping it with their own API layers, and charging you for their operating of your essential services.

When I look at RDS I see lovely images for PostgreSQL, MySQL, and MariaDB. I can find Elasticsearch, Hadoop, and more. It's a good carrot. Use our APIs and you don't have to operate that software on the EC2 platform any more. You don't have to worry about software upgrades, backups, or scale-out operational concerns. Amazon isn't the only cloud doing this either. Each cloud is finding the services it can provide directly to the developers who build products on it.

They've taken Open Source software and fixed its delivery to make it easy for most folks to consume. That directly ties into the trend we have been working on at Canonical. As software has moved more and more to Open Source and the cost of software in the average IT budget drops, the cost of finding folks that can operate it at production scale has climbed. How many folks can really say they can run Hadoop and Kubernetes and Elasticsearch at professional production levels? How hard is it to find them and how expensive is it to retain them?

We need to focus on how we can provide this same service, but straight from the Open Source communities. If you want users of your software to be users of YOUR software, and not some wrapped API service, then you need to take those operational concerns into account. We can do more than Open Source the code that is running, we can work together as a community to Open Source the best practices for operating that software over the full life cycle of it. Too often, projects stop short at how to install it. The user has to worry about it long after it's installed. 

I have hope that tools like Juju and Kubernetes can provide a way for the communities around the software we use and love to contribute, and to avoid the lock-in of some vendored solution of the Open Source projects we participate in today.

Relations and the benefits of coding to an interface

Interfaces are an awesome idea. It's a tale that all programmers have come across: if you program to a protocol then everyone gets to say "Hey! I can speak that" and join in the fun. TCP/IP, HTTP, the API I created for Bookie. What's interesting is that I don't feel like the idea has been completely bought into on the operations side of the world. There are a few examples I can think of off the top of my head: SNMP, RRD, and I suppose Prometheus is finding some popularity lately. It's one of the more powerful ideas built into Juju, and I was hit over the head with it while doing my latest tinkering with Gitlab.

In that blog post I used a new charm, done by a member of the community, that enables you to proxy anything that speaks the http interface and secure it with Let's Encrypt. At first I went "Cool, this means I can easily set this Gitlab up as https://code.bookie.io and be awesome." Now that was true, but then I started thinking bigger. Wait a minute, we've got a ton of applications in the Juju Charm Store that all speak the http interface. So I went to work. I wanted to set up everything I might want for my open source project. A handful of juju deploy commands followed by a handful of juju relate commands and my org is up and running (a sketch of the pattern follows the list below). I set up the following project stack on GCE with JAAS:

  • code hosting (Gitlab) - https://code.bookie.io
  • wiki (Mediawiki) - https://wiki.bookie.io
  • continuous testing (Jenkins) - https://ci.bookie.io
  • blog (Ghost) - https://blog.bookie.io
  • mailing lists (Mailman) - https://lists.bookie.io
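
As a rough sketch of the pattern for any one of these, here's roughly what the wiki piece looks like. The application names and fqdn are my own choices, and the exact relation endpoints depend on each charm, so treat this as illustrative rather than a copy/paste recipe.

# Illustrative only: any charm that speaks the http interface can sit behind the proxy.
juju deploy mediawiki wiki
juju deploy ssl-termination-proxy wiki-ssl
juju config wiki-ssl fqdn=wiki.bookie.io
juju expose wiki-ssl
juju relate wiki:website wiki-ssl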

What impressed me here is that with one simple chunk of work, the Tengu team enabled so many other applications to benefit. I suppose I'd seen this before with things like the HAProxy charm, which enables any of these applications to be placed and scaled behind HAProxy, but this one feels a bit easier to use and more user facing since it provides that https endpoint with clean DNS names.

This is the type of idea that I feel makes Juju a much more interesting idea than the other tools folks tend to compare it to. There are a lot of people writing Puppet modules, Chef cookbooks, or adding "one click deploy" features, but I don't see the idea of standing on each other's work built in the way it is in Juju. I've worked in open source a long time now and there's nothing better than finding folks that are smarter than you and leveraging their brains in your own work. You can do this elsewhere, but I find that the design Juju has put together encourages things to be modular, with solutions stood up as a series of parts, each doing its thing well in a very portable way.

The http interface is really common, but you can imagine others that could be as impactful. I'd love to brainstorm with folks on the biggest bang-for-the-buck ideas out there that enable sharing of operational best practices across many software applications. I can think of a handful in monitoring, logging, and metrics. Let me know what you can think of @mitechie.

Giving Gitlab an afternoon spin

I've been meaning to check out Gitlab lately. I hear a lot about folks replacing their internal systems with it. It checks an awesome list of feature boxes for managing your internal code with public and private repos, and enables you to build out the best practices around code reviews, automated testing, and gated landing. I'm lucky and work in open source at my day job, but until recently all the code I worked on sat on some internal Git or SVN system. A better application for that task is exciting to a lot of users out there. Of course, the best way for me to test it out was to grab the Gitlab Juju charm and toss it up on GCE with JAAS.

While looking at that I noticed that a member of the community had actually created a bundle of two applications that made the Gitlab experience even better. It includes a new charm from another community member that allows using Let's Encrypt. Cool! This means that I not only get a Gitlab instance to test with, but I can use it with others with SSL encryption so that the login credentials and such aren't passed around in clear text. 

Let's get testing by adding a new model in JAAS and deploying this bundle into GCE.

juju add-model gitlab google
juju deploy cs:~spiculecharms/bundle/gitlab-ssl
juju show-machine 0

The show-machine command here is useful because I need to go add a new DNS entry for Let's Encrypt to use. Since I'm testing out a scenario of hosting all of my open source project's code here, I added an A record for code.bookie.io pointing to the public IP address of the ssl termination application. With that DNS entry in place I need to tell the termination application what its URL is meant to be.

juju config ssl-termination-proxy fqdn=code.bookie.io

A few minutes later I had a working Gitlab deploy to play with. I imported all of my old Bookie code projects from Github and tested out cloning repos and such. One issue I did hit was that since my Gitlab application was proxied, I needed to tell Gitlab about the URL it's actually under for end users. To do that you need to edit a config file on the Gitlab application.

juju ssh gitlab-server/0

sudo vim /etc/gitlab/gitlab.rb

# Change this external_url config
external_url 'http://code.bookie.io:80'
    
sudo gitlab-ctl reconfigure

I can now move forward with testing out the gitlab supplied tools for testing and building releases. If it works out I can then deploy this in my private MAAS or internal OpenStack with the same ease because Juju provides such a wide array of options. 

I want to thank the great folks at Spicule that put together the Gitlab charm and the folks at Tengu for the ssl-termination-proxy charm. I find that second one really interesting and will cover that in a follow up blog post. 

When others put your thoughts into clear words.

You have to love how when some people speak, everyone stops and listens. A shareholder letter from Jeff Bezos is making the rounds and I love how each of us reads the letter and internalizes it differently. We all find ways to relate our own world to different bits of the shared wisdom. 

Myself, there were two ideas that have been spinning in the back of my head lately that were put together in a really great way by Jeff Bezos. 

When user testing isn't the whole story

Good inventors and designers deeply understand their customer. They spend tremendous energy developing that intuition. They study and understand many anecdotes rather than only the averages you’ll find on surveys. They live with the design.
...
A remarkable customer experience starts with heart, intuition, curiosity, play, guts, taste. You won’t find any of it in a survey.
— https://www.sec.gov/Archives/edgar/data/1018724/000119312517120198/d373368dex991.htm

This is one that I've been working so hard to process. We're all focused on measuring and data and so we work hard to include user testing in the design process. It's 100% correct to measure and work to understand what actually happens out there in the real world.

Not everything can be a simple A/B test. I have seen, and fought, getting too carried away with doing what the testing says. It's so easy to let go of responsibility and respond "the user testing says ...". I think that user testing is great when you're looking for small tweaks and changes or general feedback on whether something appears useful and interesting. If I want to improve the click through rate, then rearranging the content, the design of call to action buttons, or even the information architecture all might help. NONE of these, though, are breakthrough ideas that provide something new and exciting for users. They're not truly things that are part of the identity of a product.

I often find that Juju users will nearly always provide feedback that's the shortest path from their task to having it work the way they want. It makes sense from where they're sitting. A user might request some fast path to making a quick change after a deployment in a one-off way. Juju is built on a consistent model. Anything that's done that's not part of that model, the state, is lost for future decisions and understanding. We often find we need to take the user feedback, dig into what they want to do in this "one off", and see where we have an opportunity to improve the model so that the user is able to leverage Juju while the model remains the consistent focus of the product. The key word I want to use here is intent. You can filter ideas, test results, and then put some true intent behind them to build something of value.

Getting others to agree by allowing them to disagree

...use the phrase “disagree and commit.” This phrase will save a lot of time. If you have conviction on a particular direction even though there’s no consensus, it’s helpful to say, “Look, I know we disagree on this but will you gamble with me on it? Disagree and commit?” By the time you’re at this point, no one can know the answer for sure, and you’ll probably get a quick yes.
— https://www.sec.gov/Archives/edgar/data/1018724/000119312517120198/d373368dex991.htm

This is a lesson I first learned myself in code reviews. I'd see some code and completely hate it. However, if I stood back, I'd realize what I hated was that it's not "how I would have done it". I would have to watch myself and not be negative just because someone else thought differently.

I like how Bezos's letter makes disagreement a reason to get together. "but will you gamble with me?" is a really great way of putting that. I find that normally this is a lesson for myself. To make myself willing to gamble on someone else's work or ideas. I obviously am already sold when it's my work. 

Sometimes, in order to get folks on board, they just need permission to not be held accountable if it fails. In code review, your goal isn't to be perfect all the time. In decision making, some will be good and some will not work out. However, you need permission to let the team move forward on something and learn from the outcome. None of us will be right all the time. 

What do you find interesting or personally meaningful in the letter? Do any of the ideas really speak to something you've been noodling on in the back of your mind? Let me know. @mitechie

Three reasons you need a quick VPN in your pocket.

Recent news that the government has repealed regulations preventing the sale of customer browsing habits has some folks thinking about their internet use and privacy a bit more than usual. I think that most of us assume that the things we do in our home on our own devices are pretty safe from becoming shared with others. This has caused a rash of articles about running your own VPN. As these kept crossing my RSS feeds I got thinking that this is the perfect use case for Juju and JAAS.

Good news! The Tengu team has made it really easy to use Juju to set up your own VPN server. It's nearly as fast as getting an instance from a cloud provider. As I sit here at the coffee shop I timed it: six minutes, including adding it to my client and hitting connect.

The 6 minute VPN setup

How do we do this? We use JAAS since it's a great way to deploy something into any public cloud and especially different regions. I personally have my personal VPN in the AWS us-east-2 region since it's the closest physically to where I am in Southern Michigan. 

juju add-model myvpn aws/us-east-2
juju deploy openvpn
juju expose openvpn
juju config openvpn clients="rick"
juju scp openvpn/0:~/rick.ovpn myvpn.ovpn

This deploys the OpenVPN charm and sets up a config file for "rick" that I can use to connect with a VPN client. On my Mac I use Viscosity, and on Ubuntu I use the Network Manager VPN plugin. Both of these clients can load the .ovpn file that you download from the deployed server.
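
On the Ubuntu side you can also import the profile straight from the command line instead of clicking through Network Manager, assuming the OpenVPN plugin for Network Manager is installed and using the file name from the scp step above.

# Import the downloaded profile into Network Manager (requires the OpenVPN plugin).
nmcli connection import type openvpn file myvpn.ovpn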

Once connected you can see all of your traffic routed through the VPN securely. 

% ping ubuntu.com
PING ubuntu.com (91.189.94.40): 56 data bytes
64 bytes from 91.189.94.40: icmp_seq=0 ttl=47 time=193.328 ms
64 bytes from 91.189.94.40: icmp_seq=1 ttl=47 time=178.245 ms
64 bytes from 91.189.94.40: icmp_seq=2 ttl=47 time=140.312 ms

% traceroute ubuntu.com
traceroute to ubuntu.com (91.189.94.40), 64 hops max, 52 byte packets
 1  ip-10-200-200-1.us-east-2.compute.internal (10.200.200.1)  48.860 ms  47.036 ms  58.141 ms
 2  ec2-52-15-0-2.us-east-2.compute.amazonaws.com (52.15.0.2)  103.381 ms  64.848 ms
    ec2-52-15-0-6.us-east-2.compute.amazonaws.com (52.15.0.6)  69.651 ms

What is even better is that you can shorten this by automating the deploy, expose, and config with a Juju bundle. I created one that sets up two clients out of the box: one for myself and one for a "guest". If I ever want to add additional clients I can just update the config in the charm.

A few lines of yaml and a "charm push . cs:~rharding/rickvpn" and I've got a one line deploy of a VPN at my fingertips. If I deploy before I order my coffee the VPN is up and ready for use by the time it's done. 
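
Once the bundle is pushed and released to the store, that one line deploy looks something like this, assuming the URL from the push above.

# Hypothetical one-liner; assumes the bundle has been released under this URL.
juju deploy cs:~rharding/rickvpn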

Reason #2 - blocked ports on the shared wifi

I promised some other reasons for a VPN, and blocked ports on shared wifi is #2. The Starbucks I'm sitting in has its wifi configured to block port 22, which can be a pain in the rear when you're attempting to work with a lot of cloud instances over SSH. A quick VPN and suddenly the world of SSH is opened back up. Yes, some folks will tell me to change my SSH ports, but when you're working on cloud servers across different clouds it's definitely much more of a pain to change SSH everywhere than to just launch this VPN.

Reason #3 - testing end user experience

I've also found myself working with others across the world. What's always fun is when they're having issues I just can't replicate. We have large numbers of our team in Europe and down in New Zealand and Australia. As you can imagine, their load times are a bit different from my midwest connection to things coming out of US based networks. Given the breadth of cloud regions these days, it's actually not as hard as it seems to replicate the experience that remote users are seeing. I can easily throw up a Europe based VPN and force myself to test things through it. Suddenly I can see that the timeout we have doesn't work well for users whose bytes go through undersea cables.
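
The same handful of commands from the setup above, pointed at a European region, is all it takes; the model name and region here are just examples.

# Example only: pick whichever region is close to the users you want to emulate.
juju add-model euvpn aws/eu-west-1
juju deploy openvpn
juju expose openvpn
juju config openvpn clients="rick"
juju scp openvpn/0:~/rick.ovpn euvpn.ovpn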

I'm sure you can think of some other uses that a quick VPN would come in handy. Let me know what your favorite uses for the OpenVPN charm are. Reach out on Twitter @mitechie. And thank you Tengu Team for this great OpenVPN charm!

Giving everyone a little data science, Juju and Zeppelin

Everyone has some data sitting somewhere that they're not putting to good use. They've got some script pulling web logs, or perhaps a database feeding some customer or employee data. We at Canonical also have data sitting around, and lately I was poking at some of it and figured there had to be a better way to enable us to collaborate and pull meaning out of it.

Fortunately we've got tools to help us do just that. I spoke to a member of our big data team and he says to me "Rick, what you want is to hook Zeppelin up to that data." He was right!

What is Zeppelin? It's a big data query and visualization tool from the Apache project. Normally folks use it in front of streaming back ends like Spark. It also supports SQL, so just about anyone with a MySQL, PostgreSQL, or SQLite database can stick Zeppelin in front of it and build out dashboards of useful info. The best part is that we can give others access, and different parts of the company can draw their own insights from a shared collection of data.

We can show this off really quickly using Juju and JAAS. To help demonstrate, I'll use the famous Northwind data set. Let's stick the data in Postgresql so that we can query it from our new fancy Zeppelin dashboard tool. First goal, deploy Postgresql and Zeppelin with Juju Charms that are available in the store. 

$ juju register jimm.jujucharms.com
$ juju add-model zeppelin-demo google
$ juju deploy postgresql
$ juju config postgresql admin_addresses="127.0.0.1"
$ juju deploy zeppelin --to 0
$ juju expose zeppelin

What we've got here sets up Postgres on GCE along with the Zeppelin we'll wire up to the database in order to put our data in there. If there was already data available then this step isn't required. Also note that we could put the data in MySQL instead, or we could copy a SQLite database up to the Zeppelin instance and wire it up to query that file.
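
For the SQLite route, getting the file onto the Zeppelin unit is a single juju scp; the file name and destination path here are placeholders.

# Placeholder file name and path; adjust for your own data file.
juju scp mydata.db zeppelin/0:/home/ubuntu/mydata.db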

From here, let's clone the Northwind database and copy the files over to be able to load the data into Postgresql. We can then run the `create_db.sh` script to create the database, load it with data, and setup our user we'll use to connect.

git clone git@github.com:pthom/northwind_psql.git && cd northwind_psql
juju scp -v create_db.sh postgresql/0:/home/ubuntu/
juju scp -v northwind.sql postgresql/0:/home/ubuntu/
juju run --unit postgresql/0 -- \
    "sudo su postgres -c 'cd /home/ubuntu && sh create_db.sh'"

Now we can look up where our Zeppelin is sitting with `juju status zeppelin` and browse to the `Public address`.  Note that the ports exposed are also listed. In my case it's http://104.196.195.224:9080

Once there we get a nice little welcome screen. From here we need to wire Zeppelin up to our database. Zeppelin does this via `Interpreters` that need to be configured. Click on the navigation menu for that and then click the "+ Create" button to set up our SQL interpreter.

The PostgreSQL database, user, and password are all set up by the Northwind script we ran, so we copy those values over.

Now that the interpreter is setup we can create a new notebook and test out a sample query. Head to the "Notebook" and select "+ Create new note". We'll call it "HR Dashboard". What would an HR person be interested in? Let's see who has a birthday coming up. In the textbox we need to tell Zeppelin to use our `psql` interpreter. 

%psql

CREATE OR REPLACE FUNCTION indexable_month_day(date) RETURNS TEXT as $BODY$
  SELECT to_char($1, 'MM-DD');
$BODY$ language 'sql' IMMUTABLE STRICT;


SELECT lastname, firstname, title, birthdate 
FROM employees 
WHERE 
    indexable_month_day(birthdate) >= to_char(current_date, 'MM-DD')
AND  
    indexable_month_day(birthdate) < to_char(
        current_date + interval '120 days', 'MM-DD');

There's a slight trick here in that we need to add a function and then we need to run a query. Zeppelin will only show the result of the first thing run, so we first run the function, and then move it under the query and run it again. We can execute a query with the play button in the upper right or use shift-enter.

Look at that, Michael and Robert have birthdays coming up in the coming months. You can easily see how you might create a sales dashboard that would tie to the same data, but instead look at things such as our most popular products, best customers, or items that are on sale currently. 

Dashboards can be exported and imported. If you run a conference you might run a view of signups over time and track things like the number of sponsors that have committed. Then after the conference back up your dashboards and tear down your Zeppelin until the next conference goes live. 

Try to find your data hanging around and make it more productive by putting a dashboard and web ui in front of it. You'll be surprised by the ways that others put it to work. Thanks Zeppelin and Juju for making this easy to do. 

References:

Setting up Zeppelin and MySQL

Operations in Production

As we set up our important infrastructure we add monitoring and alerting and keep a close eye on things. As the services we provide grow and need to expand, or failures in hardware attempt to wreak havoc, we're ready because of the due diligence that's gone into monitoring the infrastructure and applications deployed. To do this monitoring, we often bake it into our deployment and configuration management tooling. One thing I often see is that folks forget to monitor the tools that coordinate all of that deployment and configuration management. It's a bit of a case of "who watches the watcher?"

2016 kick off

Photo by pownibe/iStock / Getty Images

2016 has arrived, and most New Years I don't feel the need to really do or think much about the coming year. Things change too much. A year ago, how much of today would I have foreseen?

This year though, well the last few weeks at least, I've felt the year coming on more. I feel the need to write out goals for myself for the coming year. One of them is to get back a website presence. I've done that by moving my old WordPress content over here to a new SquareSpace site that I'll hopefully put together to better capture me. For the most part, this is just for myself. However, I've been struggling with WordPress, G+, Tumblr, Flickr, and Twitter accounts and find myself split across the parts that make up me. There's my work, my hobbies, my family, and my rants! I'll be trying to consolidate that mess here and make some sense of it so I start to feel like I've got a creative outlet that fits me.

All that said, here be dragons, construction, and hopefully a revitalization of my brain down to word form. 

The great goals of 2016

Write more

This one seems like the no brainer so far. I'd like to write more and improve my writing skills, ergo this website. 

Get out for more planned photography adventures

I'm an opportunistic photographer. I take a camera with me, try to get a few snapshots, and hopefully some of them look good and make me happy. I need to put forth more effort into getting out for some planned ideas. Go get some sunrise shots by setting an alarm and having a place I want to go to capture that sunrise. Keep an eye on cool local events to do some street photography during local fairs and such. Plan ahead when a work trip comes up, and make sure I've planned out some of the pictures I want to make and come home with. 

Get outside more

I work from home, long hours, and really need to get outside more. I love camping, hiking, fishing, etc. I just don't seem to prioritize it enough to make sure I actually do it. As my son gets older, including him in these excursions should help kill off the excuses for not doing it.

Travel more, and make sure some of it is for fun

I have been travelling more for work, but as I travel more it's for shorter 'hop over for a couple of meetings' travel and there's not really much fun to it. I really need to get some more travel for myself going. Ever since I've joined Canonical and started traveling I've gotten bit more and more by that darned travel bug. I also feel like traveling well, vs just showing up for work, is amazingly good for expanding your horizons. I need more of that.

Sell a LOT of stuff

Folks know I have a problem. I'm continually optimizing, I enjoy nice things, and I like to try out new things. All of this leads to me collecting many items around the house, many of which I don't use any more because I found better tools, different ways of doing things, etc. I've just loaded four 30-gallon bags of clothes to donate into the back of the car, and I really wanted to add more. I think over this year you'll see a concerted effort to do more with less, and the travel bug has a lot to do with that.

Finally: figure out this job thing

This one seems like it should be easy, but wow, it's been an interesting year at work. All I ever wanted was to write code and build things. However, I don't think I've written much of any code over the last year. Instead I've gotten myself into the responsibilities of product owner, product manager, and director of engineering. I care about building cool things, talking about them, and coordinating a dozen moving parts to make it happen. I'm not perfect, but I feel like my brain fits much of this work. I've just not figured out exactly how it's all going to shake out. I'm still an interim director. I really think the part I enjoy is the product work though. Can you do one and not the other? Every year since joining Canonical the unexpected has reigned, and I have a feeling this year's ride will be crazier than ever.

Working at Canonical, three years in. a.k.a wtf just happened?

A couple of people have reached out to me via LinkedIn and reminded me that my three year work anniversary happened last Friday. Three years since I left my job at a local place to go work for Canonical, where I got the chance to be paid to work on open source software and better my Python skills with the team working on Launchpad. My wife wasn't quite sure. "You've only been at your job a year and a half, and your last one was only two years. What makes this different?" What's amazing, looking back, is just how *right* the decision turned out to be. I was nervous at the time. I really wasn't Launchpad's biggest fan. However, the team I interviewed with held the promise of making me a better developer. They were doing code reviews of every branch that went up to land. They had automated testing, and they firmly believed in unit and functional tests of the code. It was a case of the product not exciting me, but the environment, working with smart developers from across the globe, was exactly what I felt I needed to move forward with my career, my craft.

I joined my team on Launchpad as part of a squad of four other developers. It was funny; when I joined I felt so lost. Launchpad is an amazing and huge bit of software, and I knew I was in over my head. I talked with my manager at the time, Deryck, and he told me "Don't worry, it'll take you about a year to get really productive working on Launchpad." A year! Surely you jest, and if you're not jesting... wtf did I just get myself into?

It was a long road, and over time I learned how to take a code review (a really hard skill for many of us), how to do one, and how to talk with other smart and opinionated developers. I learned the value of the daily standup and how to manage work across a kanban board. I learned to really learn from others. Up until this point I'd always been the big fish in a small pond, and suddenly I was the minnow hiding in the shallows. Forget books on how to code; just look at the diff in the code review you're reading right now. Learn!

My boss was right, it was nearly ten months before I really felt like I could be asked to do most things in Launchpad and get them done in an efficient way. Soon our team was moved on from Launchpad to other projects. It was actually pretty great. On the one hand, "Hey! I just got the hang of this thing" but, on the other hand, we were moving on to new things. Development life here has never been one of sitting still. We sit down and work on the Ubuntu cycle of six month plans, and it's funny because even that is such a long time. Do you really know what you'll be doing six months from now?

Since that time on Launchpad I've gotten to work on several different projects, and I ended up switching teams to work on the Juju GUI. I didn't really know a lot about this Juju thing, but the GUI was a fascinating project. It's a really large scale JavaScript application. This is no "toss some jQuery on a web page" thing here.

I also moved to work under a new manager, Gary, my second manager since starting at Canonical, and I was amazed at my luck. Here I'd had two great mentors that made huge strides in teaching me how to work with other developers, how to do the fun stuff and the mundane, and how to take pride in the accomplishments of the team. I sit down at my computer every day and I've got the brain power of amazing people at my disposal over IRC, Google Hangouts, email, and more. It's amazing to think that at the sprints we do, I'm pretty much never the smartest person in the room. However, that's what's so great. It's never boring, and when there's a problem the key is that we put our joint brilliant minds to it. In every hard problem we've faced I've never found that a single person had the one true solution. What we come up with together is always better than what any of us had apart.

When Gary left there was a void for team lead, and it was something I was interested in. I really can't say enough awesome things about the team of folks I work with. I wanted to keep us all together and I felt like it would be great for us to try to keep things going. It was kind of a "well, I'll just try not to $#@$@# it up" situation. That was more than nine months ago now. Gary and Deryck taught me so much, and I still have to bite my tongue and ask myself "What would Gary do" at times. I've kept some things the same, but I've also brought my own flavor into the team a bit, at least I like to think so. These days my Github profile doesn't show me landing a branch a day, but I take great pride in the progress of the team as a whole each and every week.

The team I run now is an awesome group of people, the best I could hope to work for. I do mean that: I work for my team. It's never the other way around, and that's one lesson I definitely picked up from my previous leads. The projects we're working on are exciting and new and really important to Canonical. I get to sit in on discussions and planning meetings with Canonical super genius veterans like Kapil, Gustavo, and occasionally Mark Shuttleworth himself.

Looking back, I've spent the last three years becoming a better developer, getting on the job training in leading a team of brilliant people, and taking a crash course in thinking about the project not just as the bugs or features for the week, but as it needs to exist in three to six months. I've spent three years bouncing between "what have I gotten myself into, this is beyond my abilities" and "I've got this. You can't find someone else to do this better". I always tell people that if you're not swimming as hard as you can to keep up, find another job. I feel like three years ago I did that and I've been swimming ever since.

Three years is a long time in a career these days. It's been a wild ride and I can't thank the folks that let me in the door, taught me, and gave me the power to do great things with my work enough. I've worked my butt off in Budapest, Copenhagen, Cape Town, Brussels, North Carolina, London, Vegas, and the bay area a few times. Will I be here three years from now? Who knows, but I know I've got an awesome team to work with on Monday and an awesome product to keep building. I'm going to really enjoy doing work that's challenging and fulfilling every step of the way.

Bookie meets Google Summer of Code 2014

Today the Google Summer of Code student selections were announced, and with that announcement Bookie revealed our selections for the slots allocated for each of our two mentors. This announcement highlights an amazing round of participation in Bookie as an open source project. Twenty people participated and landed over 110 commits worth of patches in Bookie since the opening of GSoC. That is AMAZING! In less than a week every bite-sized bug evaporated from the issue tracker. Also amazing is the quality and effort that everyone put into their work. Everyone was eager to learn how to add tests to their patches, and they worked so hard to get their code landed. Bookie emerges a better open source tool for managing bookmarks than it was 2 months ago, and that is because of the hard work and dedication of all of the participating students.

Students did more than land branches; they invigorated the community. We had many users jump into IRC to answer questions and guide students through the process. They also performed QA and did code reviews of their work. The enthusiasm the students brought to Bookie motivated me to make the time to help move things forward. After all: if a student spent 3 days figuring out how to fix a bug, write a test for it, commit the fixes to git, and get it up for code review; then I can manage to find the 30 minutes to pull the commit, review the code, and QA the work. This period motivated me to update documentation and ensure the install process worked for a wider audience. Additional motivation came from knowing that Bookie is interesting as a tool to other people besides myself.

I want every student not selected for Google Summer of Code to know their work and effort is greatly appreciated. I and the other members of the Bookie community enjoyed working with everyone who participated. Bookie had 32 applications for 2 available spots. In conversations with other organizations it was clear Bookie had a comparatively crazy amount of competition for so few allocated spots. I wish Bookie had a dozen or so more mentors and slots, as over half of Bookie's proposals would have easily been accepted. Culling the dozens of great proposals down to two positions was a very difficult process for us. It's hard to say "not right now" when there are so many great proposals from so many eager and capable students.

Regardless of whether you were selected for Google Summer of Code the fun doesn't have to end. If you found the time contributing to Bookie valuable; if you learned something new, gained some material for that resume, or just had a good time: PLEASE DON'T STOP! Bookie isn't going anywhere or closing up shop; we're more than happy to continue mentoring and working with you all. We worked hard during this process to ensure all students were given the best chance to take something positive away from this application process. With your continued participation in the Bookie project we'd like to continue to mentor and provide guidance for you.

One area of guidance we owe all students relates to your proposal. Should you want any explanation of what you could do differently with your proposal / application please let us know. I'll be honest though: most of the applications we received were quite good, so there's little to critique. The scoring method we used put most of the applications within a few points of each other. But if you'd like to know more please feel free to ping me in irc and ask me anything you'd like.

Finally we'd like to congratulate Sambuddha and Pradyumna for their outstanding work leading up to this announcement, and we look forward to the results of their proposals for adding great features to Bookie over the summer. If you find the work interesting, please come help them out. Feel free to get involved, help with the work, the code reviews, and the testing of the new features. Maybe you'll be helping mentor Bookie next year? Who knows? :)

This was our first year participating in Google Summer of Code, but you can be assured it will not be the last. We'd like to thank all of the students for flooding our channels and making this not only an amazingly crazy and busy time but also an immensely rewarding period in Bookie's history. You are all part of Bookie's history and we look forward to seeing you as part of Bookie's future. Thank you.

Juju Quickstart and the power of bundles

The Juju UI team has been hard at work making it even easier for you to get started with Juju. We've got a new tool for everyone that is appropriately named Juju Quickstart and when you combine it with the power of Juju bundles you're in for something special.

Quickstart is a Juju plugin that aims to help you get up and running with Juju faster than any set of commands you can copy and paste. First, to use Quickstart you need to install it. If you're on the upcoming Ubuntu Trusty release it's already there for you. If you're on an older version of Ubuntu you need to add the Juju stable PPA:

sudo add-apt-repository ppa:juju/stable
sudo apt-get update

Installing Quickstart is then just:

sudo apt-get install juju-quickstart

Once you've got Quickstart installed you are ready to use it to deploy Juju environments. Just run it with `juju-quickstart`. Quickstart will then open a window to help walk you through setting up your first cloud environment using Juju.

Quickstart can help you configure and set up clouds using LXC (for local environments), OpenStack (which is used for HP Cloud), Windows Azure, and Amazon EC2. It knows what configuration data is required for each cloud provider and provides hints on where to find the information you'll need.

Once you've configured your cloud provider, Quickstart will bootstrap a Juju environment on it for you. This takes a while on live clouds as it is bringing up real instances.

Quickstart does a couple of things to make the environment nicer than your typical bootstrap. First, it will automatically install the Juju GUI for you. It does this on the first machine brought up in the environment so that it's co-located, which means it comes up much faster and does not incur the cost of a separate machine. Once the GUI is up and running, Quickstart will automatically launch your browser and log you into the GUI. This saves you from having to copy and paste your admin secret to log in.

If you would like to set up additional environments you can re-launch Quickstart at any time. Use `juju-quickstart -i` to get back to the guided setup.

Once the environment is up, Quickstart still helps you out by providing a shortcut to get back to the running Juju GUI. It will auto launch your browser, find the right IP address of the GUI, and auto log you in. Come back the next day and Quickstart is still the fastest way to get back into your environment.

Finally, Quickstart works great with the new Juju charm bundles feature. A bundle is a set of services with a specific configuration and their corresponding relations that can be deployed together via a single step. Instead of deploying a single service, they can be used to deploy an entire workload, with working relations and configuration. The use of bundles allows for easy repeatability and for sharing of complex, multi-service deployments. Quickstart can accept a bundle and will deploy that bundle for you. If the environment is not bootstrapped it will bring up the environment, install the GUI, and then deploy the bundle.

For instance, here is the one command needed to deploy a bundle that we’ve created and shared:

juju-quickstart bundle:~jorge/mongodb-cluster/1/mongodb-cluster

If the environment is already bootstrapped and running then Quickstart will just deploy the bundle. The two features together work great for testing repeatable deployments. What's great is that the power of Juju means you can test this deployment on multiple clouds effortlessly. For instance you can design and configure your bundle locally via LXC and, when satisfied, deploy it to a real environment, simply by changing the environment command-line option when launching Quickstart.

Try out Quickstart and bundles and let us know what you think. Feel free to hop into our irc channel #juju on Freenode if you've got any questions. We're happy to help.

Make sure to check out Mat's great YouTube video walk through as well over on the Juju GUI blog.

Bookie Sprint - Aug 31st

It's time for another Bookie sprint!

When - Saturday, August 31st

What time - Starts at 11am

Where - my house! Ping me for address/map info if you're coming along. It maps out to Clarkston, MI.

What will we be working on?

The goal is to work on test coverage and breadability article parsing. Are you new to application testing? Come out and learn while helping out an open source project.

If you want to participate online please join our irc channel #bookie on freenode.net. If there's something else you'd rather work on then please let me know and I'll be happy to do whatever I can to aid in participation.

Pebble: first impressions

Some time back in April of 2012 someone on Twitter linked to this Kickstarter campaign to raise funds to build a very geeky watch. The idea was interesting to me. I love my android phone, but it's in my pocket all day. The idea of getting texts on my wrist while driving, working, and woodworking was intriguing. Not all messages require me to pull out my phone, unlock it, and view what was up. So I supported it.

Here we are, not that far off from a year later, and I've gotten a copy of the watch. I've been using it for the past few days and wanted to put my feedback out there. There's a bit out there already, but hey, my turn!

The Good

Does it work like it's supposed to? Definitely! I've been getting texts and calendar notifications on the watch and it's been really nice. Simple texts, like the "on my way home" from my wife, are nice to dismiss with just a button press. Calendar notifications work the same way: I'm up getting a drink and a meeting notice buzzes my wrist with a note about what's coming up. It's much nicer than pulling out the phone. The one downside is that when you do pull your phone out there's a bunch of notifications to dismiss, and you want to make sure you got them all.

As for fit, I need to get a replacement wrist band, but it's not that bad. I was worried about the size of the face, but I've found that it's actually not that large. There are much larger watches out there by far. I do find that it rotates around and ends up on the wrist bone every once in a while, but I'm hoping a better watch band will help with that. I'd rather they put the $$ into the device and less into meeting everyone's personal preferences about what makes a great watch band, so no complaints.

Battery life isn't really tested yet. I charged it when I first got it and I'm on day 3. That includes leaving it overnight hooked up to my phone, which I should probably stop doing, I guess. My phone battery seems okay as well. It's hard to judge as I've got a nearly one-year-old Galaxy Nexus with a battery that currently needs replacement.

The Potential

I'm not going to go down the 'bad' road here. This is a new product, just released, and it's getting updates so let's just concentrate on what could be better.

The first thing is the navigation; it showed up as an issue during the initial demos. The nav menu is set up in a way that needs love. If watch faces have to live in the menu as if they were full apps, then I should be able to remove the ones I don't use so that navigation is less painful. Honestly, watch faces need a sub nav. Settings already has this, so the concept should be easy to implement.

Along those lines, pressing back from the home menu should activate the watch face. I once changed watch faces because I was in the settings waiting for the home nav to time out back to the watch face. In that time I moved my wrist in a way that accidentally activated a new watch face. Doh!

Next up, we really, really need the SDK. Currently there's just a limited set of uses. It's great for text/calendar notifications, but there's so much more that could be possible. Imagine a pomodoro app for the time management geeks, or hooking the Field Trip app up to notifications as you walk around. There's a lot of potential, and the SDK needs to come out to enable it.

It really needs a battery indicator somehow. I'd settle for a number value in the settings/about area. If I'm going to trust this as my watch, I need to have an idea when I leave the house whether it's going to make it or not. If it's meant to charge once a week, I'm not going to have a full recollection of the last time I put it on the charger. Was that Sunday? Or maybe Saturday? I don't think it needs to be too prominent though.

The final thing is more of a nitpick. I listen to Audible a lot on my phone: while doing the dishes, cleaning the house, making dinner, all the time. So I really love the idea of using my watch to start/stop playback instead of digging the phone out of my pocket. However, the integration there isn't perfect. If I'm playing a book and use the music app to pause, it pauses my book, but it also starts the Google Music application. Then another pause stops that music as well, but after that I can't start back up again. Somehow my bluetooth speaker and headphones talk to Android in a way that can start/stop any audio application; they just resume the last thing playing, whether that's podcasts in DogCatcher, books in Audible, or music in Google Music. I really want the Pebble music app to work like that as well.

Conclusion

I really like the Pebble, but a big part of that is seeing the potential. I think they made a great decision not to try to make the watch the computer and to use the phone for that instead. I hope they never stray from that decision. I also really like how Android has made things much nicer/easier for them. I can't wait to see what other apps can do with a Pebble intent that would allow exposing some UX away from the device itself. If you're a dev, I'll finish up with a few wishlist items for people to work on:

  • Google Authenticator app on the watch to show the number generated
  • Google now card info: weather, flight upcoming, upcoming meeting
  • Syncing alarms from device to watch
  • Twitter replies/DM notifications
  • Field Trip app info when you pass by a specific place
  • Guidebook integration with next talk/room information

Bookie 0.4: one week retrospective

Phew, that was a whirlwind of a week. Just over one week ago I finally released Bookie 0.4 and published the blog post to reddit as an announcement. This introduced signups and I was eager to see if there was real interest in the project now that users could sign up and try things out.

By the numbers

Traffic definitely came.

  • The blog post picked up 800 visits over the two days of the weekend.
  • https://bmark.us grabbed 360 unique new visitors.
  • We went from 58 to 126 activated user accounts.
  • Those users brought us to over 26,000 bookmarks stored in the site.

Complications

Of course, any swarm of new users finds the holes in the system, and Bookie was no different. There were a few issues. First, the Celery task that sends out emails on signup wasn't running because the email config wasn't set up right. That was a pretty quick fix. Next, the import system wasn't filling out the path for uploaded files correctly. That one was another pretty easy fix, but I processed imports manually until I got the fix deployed.

The big thing was that, for probably the first time, all three moving parts of the system were trying to store bookmarks at once: the Celery backend, the web UI, and a cron script that looks for new bookmarks without readable content and fetches it for storage. All of these hit the Whoosh fulltext index and caused locking issues that broke both imports and saving new bookmarks from the web UI until I figured out the issue and just reset the fulltext index.

It was pretty bad timing, as I could see users trying to add test bookmarks via the web interface. Google realtime analytics is pretty entrancing to watch. In the end I had to run to the Whoosh docs and change things over to use the async writer instead of the default locking mechanism. That got things running again, but it meant I had to throw away the existing fulltext index. I've still got to finish a background job that will walk through all bookmarks and re-index them.
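For the curious, the change boils down to something like the minimal sketch below using Whoosh's AsyncWriter. The index directory and field names here are placeholders for illustration, not Bookie's actual schema.

# Minimal sketch: write through AsyncWriter so a locked index queues the
# write in a background thread instead of raising a LockError.
# "bookie_index" and the field names are assumptions, not Bookie's schema.
from whoosh.index import open_dir
from whoosh.writing import AsyncWriter

ix = open_dir("bookie_index")  # assumed location of an existing index
writer = AsyncWriter(ix)
writer.add_document(
    bid=u"42",
    description=u"Example bookmark",
    readable=u"Parsed article content",
)
writer.commit()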

At some point I might need to pull the fulltext indexing out of the current SQLAlchemy event hooks and run it purely as background Celery jobs that I can control from one place. That would remove the locking entirely from the cron job and the web UI.
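As a rough sketch of what that might look like (the task name, broker, and fields are all made up, not Bookie's actual code), a single Celery task would become the only code path that touches the index:

# Hypothetical sketch: one background task owns all fulltext writes, so the
# web UI and the cron job just queue work instead of locking Whoosh directly.
from celery import Celery
from whoosh.index import open_dir
from whoosh.writing import AsyncWriter

app = Celery("bookie_tasks", broker="redis://localhost:6379/0")  # assumed broker

@app.task
def index_bookmark(bid, description, readable):
    ix = open_dir("bookie_index")  # assumed index location
    writer = AsyncWriter(ix)
    writer.update_document(bid=bid, description=description, readable=readable)
    writer.commit()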

Disappointments

While I could see the charts showing traffic, it was tough because it was pretty invisible traffic. There were only three new users in the #bookie irc channel, and only a few people left comments in the reddit thread. No one left a comment on the blog post. Both my Twitter account and the Bookie account gained fewer than 5 new followers. While the repository was starred many times, only two forks were created.

Going forward

There are a few new users active over the last week, and I've gotten a pair of pull requests. While the saving of new bookmarks was broken for a lot longer than I'd have liked, the site never went down. Imports were done in a semi-reasonable time frame. All of this felt pretty great and is encouraging for future work. I still need to finish fixing up the readable parsing. It's the big selling point of Bookie, and the fact that fulltext search and readable parsed content aren't there for all bookmarks is frustrating.

Here's looking forward to great work and a more popular release announcement for Bookie 0.5.

Bookie 0.4 released into the wild!

Bookie is a Python-based open source bookmark managing web application that includes content archiving, a Chrome extension, and much more. Phew, this release took a lot longer than expected. I've tagged Bookie 0.4 and the live site is updated to run it.

This release brings a ton of work: an updated web UI with some client-side MVC, an API, a Celery-based job-running backend, some stats, and spin-off projects such as breadability and a CLI client.

The big thing is that signups are now in place, along with a landing page, so hopefully this will spark more interest from new users checking out Bookie.

There are still a ton of long-term ideas to work on with Bookie. I'd like to get a 'reading' view set up so that you can easily run through the bookmarks you've marked `toread`, especially in a mobile view. <3 my N7. I also want to work on suggestions for related bookmarks, suggested tags based on content, and other interesting machine learning type problems.

If you're the type that takes your bookmarks seriously give it a try. If you don't want to run your own instance, sign up to https://bmark.us and try it out there.

You can get an idea of the roadmap we're working off of on the Trello board.

Bookie weekly status report: May 6th 2012

This week was spent on a big side project. I've been trying like mad to update the python-readability library and take it over so I could use it in the Bookie project space. After spending a ton of time trying to do just that, I gave up.

I now present the breadability package. It's a fresh port of the arc90 readability.js, built using the knowledge I've gained from all the other work and sticking close to the JS file that's the original inspiration.
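If you want to kick the tires, usage looks roughly like the sketch below. I'm writing this from memory, so treat the exact class and attribute names as assumptions and check the README; the HTML and URL are obviously placeholders.

# Rough usage sketch for breadability; API names are from memory, so verify
# against the project README before relying on them.
from breadability.readable import Article

html = "<html><body><p>Some long article content lives here...</p></body></html>"
doc = Article(html, url="http://example.com/article")
print(doc.readable)  # the cleaned-up, readable version of the page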

I've got a bunch more work to do to add tests, get it in the build server, etc.

If you've been using one of the other dozen ports out there, give this a shot. There's work to be done, but I'd love to get some real-world use on it, so let me know what sites don't work well, etc.