Working at Canonical, three years in, a.k.a. wtf just happened?

A couple of people have reached out to me via LinkedIn and reminded me that my three year work anniversary happened last Friday. Three years since I left my job at a local place to go work for Canonical, where I got the chance to be paid to work on open source software and better my Python skills with the team working on Launchpad. My wife wasn't quite sure. "You've only been at your job a year and a half, and your last one was only two years. What makes this different?" What's amazing, looking back, is just how *right* the decision turned out to be. I was nervous at the time. I really wasn't Launchpad's biggest fan. However, the team I interviewed with held the promise of making me a better developer. They were doing code reviews of every branch that went up to land. They had automated testing, and they firmly believed in unit and functional tests of the code. The product didn't excite me, but the environment, working with smart developers from across the globe, was exactly what I felt I needed to move forward with my career, my craft.


I joined Launchpad as part of a squad with four other developers. It was funny; when I joined I felt so lost. Launchpad is an amazing and huge bit of software, and I knew I was in over my head. I talked with my manager at the time, Deryck, and he told me, "Don't worry, it'll take you about a year to get really productive working on Launchpad." A year! Surely you jest, and if you're not, what did I just get myself into?

It was a long road, and over time I learned how to take a code review (a really hard skill for many of us), how to give one, and how to talk with other smart and opinionated developers. I learned the value of the daily standup and how to manage work across a kanban board. I learned to really learn from others. Up until this point I'd always been the big fish in a small pond, and suddenly I was the minnow hiding in the shallows. Forget books on how to code; just look at the diff in the code review you're reading right now. Learn!

My boss was right; it was nearly ten months before I really felt like I could be asked to do most things in Launchpad and get them done in an efficient way. Soon our team was moved on from Launchpad to other projects. It was actually pretty great. On the one hand, "Hey! I just got the hang of this thing," but, on the other hand, we were moving on to new things. Development life here has never been one of sitting still. We sit down and work on the Ubuntu cycle of six-month plans, and it's funny because even that is such a long time. Do you really know what you'll be doing six months from now?


Since that time in Launchpad I've gotten to work on several different projects, and I ended up switching teams to work on the Juju GUI. I didn't really know a lot about this Juju thing, but the GUI was a fascinating project. It's a really large-scale JavaScript application. This is no "toss some jQuery on a web page" thing here.

I also moved to work under a new manager, Gary. He was my second manager since starting at Canonical, and I was amazed at my luck. Here I've had two great mentors who made huge strides in teaching me how to work with other developers, how to do the fun stuff and the mundane, and how to take pride in the accomplishments of the team. I sit down at my computer every day and I've got the brain power of amazing people at my disposal over IRC, Google Hangouts, email, and more. It's amazing to think that at the sprints we do, I'm pretty much never the smartest person in the room. However, that's what's so great. It's never boring, and when there's a problem the key is that we put our joint brilliant minds to it. In every hard problem we've faced, I've never found that a single person had the one true solution. What we come up with together is always better than what any of us had apart.

When Gary left there was a void for team lead, and it was something I was interested in. I really can't say enough awesome things about the team of folks I work with. I wanted to keep us all together, and I felt like it would be great for us to try to keep things going. It was kind of a "well, I'll just try not to $#@$@# it up" situation. That was more than nine months ago now. Gary and Deryck taught me so much, and I still have to bite my tongue and ask myself "What would Gary do?" at times. I've kept some things the same, but I've also brought my own flavor into the team a bit, at least I like to think so. These days my Github profile doesn't show me landing a branch a day, but I take great pride in the progress of the team as a whole each and every week.

The team I run now is as awesome a group of people as I could hope to work for. And I do mean that: I work for my team. It's never the other way around, and that's one lesson I definitely picked up from my previous leads. The projects we're working on are exciting and new and really important to Canonical. I get to sit in on discussions and planning meetings with Canonical super-genius veterans like Kapil, Gustavo, and occasionally Mark Shuttleworth himself.

Looking back, I've spent the last three years becoming a better developer, getting on-the-job training in leading a team of brilliant people, and taking a crash course in thinking about the project not just as the bugs or features for the week, but as it needs to exist in three to six months. I've spent three years bouncing between "what have I gotten myself into, this is beyond my abilities" and "I've got this. You can't find someone else to do this better." I always tell people that if you're not swimming as hard as you can to keep up, find another job. Three years ago I did that, and I've been swimming ever since.


Three years is a long time in a career these days. It's been a wild ride, and I can't thank the folks who let me in the door, taught me, and gave me the power to do great things with my work enough. I've worked my butt off in Budapest, Copenhagen, Cape Town, Brussels, North Carolina, London, Vegas, and the Bay Area a few times. Will I be here three years from now? Who knows, but I know I've got an awesome team to work with on Monday and an awesome product to keep building. I'm going to really enjoy doing work that's challenging and fulfilling every step of the way.


Pebble: first impressions


Some time back in April of 2012, someone on Twitter linked to this Kickstarter campaign to raise funds to build a very geeky watch. The idea was interesting to me. I love my Android phone, but it's in my pocket all day. The idea of getting texts on my wrist while driving, working, and woodworking was intriguing. Not all messages require me to pull out my phone, unlock it, and see what's up. So I supported it.

Here we are, not that far from a year later, and I've gotten a copy of the watch. I've been using it for the past few days and wanted to put my feedback out there. There's a bit out there already, but hey, my turn!

The Good

Does it work like it's supposed to? Definitely! I've been getting texts and calendar notifications on the watch, and it's been really nice. Simple texts like the one from my wife, "on my way home," are nice to just press a button and dismiss. Calendar notifications as well. I'm up getting a drink and a meeting notice buzzes my wrist with a note about what's coming up. It's much nicer than pulling out the phone. The one downside is that when you do pull your phone out, there's a bunch of notifications to dismiss and you want to make sure you got them all.

As for fit, I need to get a replacement wrist band, but it's not that bad. I was worried about the size of the face, but I've found that it's actually not that large. There are much larger watches out there by far. I do find that it rotates around and ends up on the bone every once in a while, but I'm hoping a better watch band will help with that. I'd rather they put the $$ into the device and less into meeting everyone's private feelings about what makes a great watch band, so no complaints.

Battery life isn't really tested yet. I charged it when I first got it and I'm on day 3. This includes leaving it overnight hooked up to my phone which I should probably stop I guess. My phone battery seems ok as well. It's hard to judge as I've got a nearly 1yr old Galaxy Nexus with a battery that needs replacement currently.

The Potential

I'm not going to go down the 'bad' road here. This is a new product, just released, and it's getting updates so let's just concentrate on what could be better.

The first thing is the navigation; this came up during the initial demos. The nav menu is set up in a way that needs love. If watch faces have to be on there with full app status, then I should be able to remove the ones I don't use so that navigation is less painful. Honestly, watch faces need a sub nav. Settings already has this, so the concept should be easy to implement.

Along those lines, back from the home menu should activate the watch face. I once changed watch faces because I was in the settings and waiting for the home nav to time out back to the watch face. In that time I moved my wrist in a way that activated a new watch face by accident. Doh!

Next up, we really, really need the SDK. Currently there's just a limited set of uses. It's great for text/calendar notifications, but there's so much more that could be possible. Imagine a pomodoro app for the time management geeks, or hooking the Field Trip app into notifications as you walk around. There's a lot of potential, and the SDK needs to come out to enable it.

It really needs a battery indicator somehow. I'd settle for a number value in the settings/about area. If I'm going to trust this as my watch, I need to have an idea when I leave the house whether it's going to make it or not. If it's meant to charge once a week, I'm not going to have a full recollection of the last time I put it on the charger. Was that Sunday? Or maybe it was Saturday? I don't think it needs to be too prominent though.

The final thing is more of a nitpick. I listen to Audible a lot on my phone: while doing the dishes, cleaning the house, making dinner, all the time. So I really love the idea of using my watch to start/stop playback vs using my phone itself as it sits in my pocket. However, the integration there isn't perfect. If I'm playing a book and use the music app to pause, it will pause my book, but it then starts the Google Music application. Another pause will stop that music as well, but I can't then start back up again. Somehow, my bluetooth speaker and headphones talk to Android in a way that can start/stop any audio application. It just starts the last thing playing: podcasts in DogCatcher, books in Audible, or music in Google Music. I really want the Pebble music app to work like that as well.


I really like the Pebble, and a big part of that is seeing the potential. I think they made a great decision not to try to make the watch the computer, and to use the phone for that instead. I hope they never stray from that decision. I also really like how Android has made things much nicer/easier for them. I can't wait to see what other apps can do with a Pebble intent that would allow exposing some UX away from the device itself. If you're a dev, I'll finish up with a few wishlist items for people to work on:

  • Google Authenticator app on the watch to show the number generated
  • Google now card info: weather, flight upcoming, upcoming meeting
  • Syncing alarms from device to watch
  • Twitter replies/DM notifications
  • Field Trip app info when you pass by a specific place
  • Guidebook integration with next talk/room information

An updated email config: offlineimap, mutt, and dovecot ftw!

Since joining the Launchpad team my email has been flooded. I've always been pretty careful to keep my email clean and I've been a bit overwhelmed with all the new mailing lists. There are a bunch of people working on things, as you can imagine. So the email never stops. I'm still working on figuring out what I need to know, what I can ignore, and what should be filed away for later.

Another thing I'm finding is that I've got emails in both of my accounts around a single topic. For instance, I have to do some traveling. I've got emails on both my Gmail (personal) and Canonical (work) accounts that I really want to keep together in a single travel bucket.

I currently have offlineimap pull both my work and personal accounts down into a single folder on my machine, ~/.email/. So I've got a ~/.email/work and a ~/.email/personal. I then use mutt to open the root there and work through email. It works pretty well. Since I really wanted a global "travel" folder, I figured I'd just create one. So that works. I end up with a directory structure like:

  • personal
  • travel
  • work

The problem

Of course the issue here is that when offlineimap runs again it sees the email is no longer in the personal or work accounts and removes them from the server. And the travel folder isn't a part of any server side account so it's not backed up or synced anywhere. This means Gmail no longer sees things, my phone no longer sees them, and I've got no backups. Oops!

Solution start

So to fix that, my new directory structure needs to become an account itself. I set up dovecot on my colo server so I could have an imap account that I could do whatever with. To get my email into it, I set up offlineimap on the colo to pull personal and work down just as I had on my laptop. So the colo still has things in a ~/.email that come from the accounts, and dovecot keeps all of my email in ~/email (not a hidden dir). To wire them together, I symlinked ~/.email/personal/INBOX to ~/email/personal and did the same with the work account. Now the two accounts are just extra folders in my dovecot setup.

So there we go: the colo is pulling my email, and I changed my laptop's offlineimap to sync with the new dovecot server. In this way, I've got a single combined email account on my laptop using mutt. I then also set up my phone with an imap client to talk directly to the dovecot server. Sweet, this is getting closer to what I really want.
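For reference, the laptop side of this boils down to a single offlineimap account pointed at the dovecot box. Here's a rough sketch of what that ~/.offlineimaprc looks like; the hostname, user, and folder paths are placeholders, not my exact config:

```ini
# ~/.offlineimaprc on the laptop -- a sketch with placeholder values.
[general]
accounts = combined

[Account combined]
localrepository = combined-local
remoterepository = combined-remote

[Repository combined-local]
type = Maildir
localfolders = ~/.email

[Repository combined-remote]
type = IMAP
remotehost = mail.example.com   ; the colo box running dovecot
remoteuser = rick
ssl = yes
```

With this in place, both accounts plus the shared "travel" folder all come down as folders of the one dovecot account.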

Issues start, who am I

Of course, once this started working I realized I had to find a way to make sure I sent email as the right person. I'd previously just told mutt if I was in the personal account to use that address and if in the work account use that one. Fortunately, we can help make mutt a bit more intelligent about things.

First, we want to have mutt check the To/CC headers to determine who the email was sent to; if it was me, then use that address as the From during replies.

mutt config:

# I have to set these defaults because when you first startup mutt
# it's not running folder hooks. It just starts in a folder
set from=""
# Reply with the address used in the TO/CC header
set reverse_name=yes
alternates "|"

This is a start, but it fails when sending new email; mutt isn't sure who I should be yet. So I want a way to manually switch the active From address. These macros give me the ability to swap using the keybindings Alt-1 and Alt-2.

mutt config:

macro index,pager \e1 ":set\n:set status_format=\" %f [Msgs:%?M?%M/?%m%?n? New:%n?%?o? Old:%o?%?d? Del:%d?%?F? Flag:%F?%?t? Tag:%t?%?p? Post:%p?%?b? Inc:%b?%?l? %l?]---(%s/%S)-%>-(%P)---\"\n" "Switch to"
macro index,pager \e2 ":set\n:set status_format=\" %f [Msgs:%?M?%M/?%m%?n? New:%n?%?o? Old:%o?%?d? Del:%d?%?F? Flag:%F?%?t? Tag:%t?%?p? Post:%p?%?b? Inc:%b?%?l? %l?]---(%s/%S)-%>-(%P)---\"\n" "Switch to"

That's kind of cool, and it shows at the top of my window who I am set to. Hmm, but even that fails if I've started an email and want to switch who I am on the fly. There is a way to change that though, so another macro to the rescue, this time for the compose UI in mutt.

mutt config:

macro compose \e1 "<esc>f ^U Rick Harding <>\n"
macro compose \e2 "<esc>f ^U Rick Harding <>\n"

There, now even if I'm in the middle of creating an email I can switch who it's sent as. It's not perfect, and I know I'll screw up at some point, but hopefully this is close enough.

Firming up with folder hooks

Finally, if I know the folder I'm in is ONLY for one account or the other, I can use folder hooks to fix that up for me.

mutt config:

folder-hook +personal.* set from=""
folder-hook +personal.* set signature=$HOME/.mutt/signature-mitechie
folder-hook +personal.* set query_command='"goobook query \'%s\'"'

So there, if I'm in my personal account, set the from, the signature, and change mutt to complete my addresses from goobook instead of the ldap completion I use for work addresses.

Not all roses

There are still a few issues. I lose webmail. After all, mail goes into my Gmail Inbox and then from there into various folders of my dovecot server. Honestly though, I don't think this will be an issue. I tend to use my phone more and more for email management so as long as that works, I can get at things.

I also lose Gmail search for a large portion of my email. Again, it's not killer. On my laptop I've been using notmuch (Xapian backed) for fulltext search and it's been doing a pretty good job for me. However, I can't run that on my phone. So searching for mail on there is going to get harder. Hopefully having a decent folder structure will help though.

I've also noticed that the K-9 mail client is a bit flaky about syncing changes up. Gmail, mutt, and Thunderbird (which I've also set up) all seem to sync without issue, so I think this is K-9 specific.

That brings up the issue of creating new folders. Offlineimap won't pick up new folders I create from within mutt; it won't push those up as new imap folders for some reason. I have to first create them using Thunderbird, which sets up the folder server side for me. Then everything works ok. It's a PITA, but hopefully I can find a better way to do this. Maybe even a Python script hooked into a mutt macro or something.
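As a stopgap, a tiny Python script can do the same thing Thunderbird does: create the folder server side over IMAP so offlineimap sees it on the next sync. This is just a sketch; the host, credentials, and folder name are placeholders:

```python
# Create a mail folder server side over IMAP, so offlineimap will
# pick it up on the next sync. Host/user/password are placeholders.
import imaplib


def create_imap_folder(host, user, password, folder):
    """Return True if the folder was created on the server."""
    conn = imaplib.IMAP4_SSL(host)
    try:
        conn.login(user, password)
        status, _ = conn.create(folder)
        return status == "OK"
    finally:
        conn.logout()


# Usage: create_imap_folder("mail.example.com", "rick", "secret", "travel")
```

Hooking that into a mutt macro would just mean binding a key to run the script with the folder name prompted for.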

Wrap Up

So there we are. Next up is to setup imapfilter to help me pre-filter the email as it comes in. Now that all email is in one place that should be nice and easy. I can run that on my colo server and it'll be quick.

This is obviously more trouble than most people want to go through to setup email, but hey, maybe someone will find this interesting or have some of their own ideas to share.

CoffeeHouseCoders Detroit hiring a moving truck

Coffee House Coders is moving

Coffee House Coders has been meeting every Wed night for something near two years now. I've not been able to find the announcement for the first one. However, starting August 24th (note, that's a week away, not in two days) we'll make our first location move.

Why? Well, as those of you who show up know, we don't always have all the space that we need. Sometimes we get the place to ourselves; sometimes it's crazy and hard to get seats for everyone. The new location has a private meeting room, so we can be sure we have a set amount of space available.

Hopefully, this will help attract some of the irregular visitors and give us room to grow. I've always wanted to try to do some more things with CHC. It's great to have that set time each week, but I get the feeling that it would be cool to actually do some mini-user group content. We don't have an east side Python user group, but we have a ton of people interested in that content. We don't have an east side Web Developers group, but again, we have potential interest. Using this meeting location, we might be able to expand and actually do some user group like material in the future.

What is this Coffee doodad?

The idea of CHC is easy, we all want to code/geek out on fun stuff. Every Wed night, from 8pm until 10pm we meet at the coffee shop and sometimes write code and sometimes just BS the time away with fellow geeks. We find it's great to have some time set aside each week for personal projects, learning something new, or whining about what that co-worker did to your code while you weren't looking.

When again?

Our first week in the new location is August 24th. We'll be in the new location from then on as long as things work out. As before, we meet 8pm - 10pm. The last Wed of each month we add an extra hour so get there at 7pm. The 31st will be our next "long edition" CHC and that will be at the new location.

Where is this?

Caribou Coffee
31901 Woodward Avenue
Royal Oak, MI 48073-0984


If you have any questions or have any ideas on topics/wishlist for a local programmers group in the area, feel free to let me know.

PyOhio 2011: Another year, another great time.

Phew, another PyOhio has come and gone. This year was a great event. I can't say enough good things about the group that puts it together. It's really nice to have something somewhat local to head to every year.


  • Data-Transfer Objects Are a Disease. Meet the cure.

    The first talk I went to on a whim; I wasn't really sure what kind of Data Transfer Object (DTO) discussion it was going to be. It turns out the talk was in praise of the NamedTuple as a great way to pass data in Python. It's got some nice sanity checks, is very lightweight, and helps prevent developers from going crazy adding all kinds of business logic to simple data containers. I'm not sure how they're passing them around their applications, but I can see the appeal. I know I've been bitten a few times when unserializing some JSON and a typo in a dict key bites me. I wonder if there's a way to easily get back a list of NamedTuples vs dicts when loading up some JSON transferred around.

  • Aspen: A Next-generation Web Framework

    I checked out the Aspen web framework talk just to see what new ideas people are playing with when it comes to web tools. I wasn't disappointed; Aspen has some very interesting ideas. The author has done some work to bring back the URL meaning something in the project layout. The idea of having your layers of the app in a single file is kind of interesting, and I can see how that'd be helpful in some development cases. I end up sitting my controller/template code side by side when I work anyway.

    The weakness I see is that it's got the same issue PHP has when it comes to helping new developers start good practices. One of the great things about web frameworks is that they help tell you where and how to organize your code. They give you test directories out of the box, they help bootstrap a good way to get your database connection up in a way that avoids shared sessions across requests, etc. Aspen does a lot less of that, and I could see a younger dev doing way more copy/paste of code than I'd want. It's a little bit of a bare framework, which is great for integrating your tools of choice, but also provides a barrier to entry for new developers at times.

  • Django and Google App Engine: Why I'm using Flask and EC2

    This talk was from a friend on the west side of the state. As I've been part of the IRC discussions where he's been trying to go through various tools to build some small and simple web apps, it was cool to see the story in one swoop. He's a fan of the microframework. For his use cases, the full stack just threw hurdles in front of him. It's great to hear he's found a tool he loves in flask.

  • Evolving an internal web service

    Taavi gave a great talk on how they worked to rebuild an aging PHP app in Python over a long period of time. It's a great example of what I've been trying to get going for a while, APIs all the time. Everything seems to talk to the new application base via an array of remote methods and this great decoupling has been a boon for them to help provide data to several different systems in a clean way.

    I wish I had more time to try to chat with him. They're doing some cool things with SqlAlchemy, migrating data from old systems to new, and just some very good work and testing done on performance.

  • Creating web apis that are a joy to use

    This was something I was really interested in. Since I've been pushing APIs like mad at work and I've been working on my first public one for Bookie, I knew I needed some help/guidance on this. Issac did a great job hitting home the big rules I kind of knew, but wasn't following that well.

    First, document, document, document. I loved his graph of user happiness vs amount of documentation. Users of your api don't get happy until the docs are near complete. Until then, it's just as bad as no documentation. I've got some typing ahead of me.

    The second point was something I was battling with a bit. I tend to think of the API in terms of usage: "You want to do task XXX." At that point, I'm deciding what the user wants to do. In reality it's more about the term "resource." A resource can be data, a function (send email), or something else along those lines. However you want to expose them via the API, do it in a simple, distinct manner. Just taking your current HTML view you push to browser users and building an API that is the same doesn't work. After all, the great thing about an API is that people build and do things you didn't think to do in your current application implementation.

  • My Talk, Sqlalchemy Tutorial

    Finally, I was of course at my own talk. This year I decided I really didn't like myself and that I should do more than a talk: a two-hour tutorial with some hands-on coding exercises. The room was full of people of all levels, which was a bit more challenging than I originally thought it would be. On top of that, the AC broke and the rooms were over 85 degrees, which made holding an audience's attention all that much more challenging.

    In the end, I think things came out ok. I've heard from an array of people that they enjoyed the talk. Once the first hour/talk part was over, most of the room left; we had about seven people who did the hands-on code for the second hour. It sucks, but I can't blame them. If it wasn't my talk, I'd have searched out cooler air as well. I hope that people still take the time to try out the hands-on code and let me know if they run into any issues. If you do, feel free to email me.

    Thanks to PyOhio for letting me take a shot at something more classroom like. It's a new challenge to go from a talk to a tutorial and I encourage people to try it out.
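Circling back to the DTO talk's NamedTuple question: there is in fact an easy way to get NamedTuple-style objects instead of dicts when loading JSON, via the object_hook argument to json.loads. A sketch, with field names and values of my own invention:

```python
# Decode JSON objects into namedtuples instead of dicts, so a typo'd
# field name fails loudly as an AttributeError instead of slipping by
# as a dict lookup with the wrong key.
import json
from collections import namedtuple


def loads_as_tuples(text):
    def to_tuple(d):
        # Build a namedtuple type on the fly from the dict's keys.
        return namedtuple("JsonObject", d.keys())(**d)
    return json.loads(text, object_hook=to_tuple)


bookmark = loads_as_tuples('{"url": "http://example.com", "tags": ["python"]}')
print(bookmark.url)  # attribute access instead of bookmark["url"]
```

The hook runs for every JSON object, nested ones included, so a whole document comes back as namedtuples. The one catch is that the keys have to be valid Python identifiers.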


I think it's true what they say: as you go to more and more conferences you tend to do and learn less in the talks themselves, but make up for it with the great networking. This year it was very noticeable. What was great was that I got a chance to meet several people I've been following on Twitter for a while. These are people who are interesting and respected, and meeting them was a series of mini-starlet moments.

I got to have a great discussion about APIs and self-bootstrapping application installs with Issac Kelly. I met Michael Trier, who's been a great Python presence on Twitter for a while now. I also caught up with the Ohio crew: Dave, Mike, and Catherine. If you run into these folks, start a conversation; it'll be worth it.

I also had a ton of great conversations with new people that I really wish I did a better job of remembering and tracking down names for. Sorry that I don't call you all out.

All the hallway stuff really helped make this PyOhio a great one for me.


Another great thing PyOhio does is the sprints. Unfortunately I could only make them on Saturday, but man, what fun it was. I think we managed to get six or so people up and running with their own Bookie instance. We had nearly a dozen people hacking on things at one point. We had some fun hammering pypi from the wifi network, and some really good ideas came up to help make the installs a bit easier. At the end of the night we had a pull request and some definite interest going forward. I hope that the people who sprinted on Bookie found it interesting to take part in and maybe learned something. I know I've got a lot of work to do still.

I'll have a separate Bookie status report out later with some details on changes and things.

A reminder as well, if you'd like to have a hosted Bookie account on just sign up to the waiting list here:

Insert Giant PyOhio 2010 Recap Here

Ugggg, zombie day today as I attempt to get work done after a crazy PyOhio this year. Just like the last two, this was a great regional conference that I'm so glad people put on. So congrats to the team for a third year and some great stuff. The nice thing this year is that I managed to finally reach a goal of mine: getting a job doing Python. After being a wanna-be the last two years, I got a chance to give back a bit more than I took for the first time. So below is a brain dump of my view of the conference. I want to thank all the awesome people that came to the open spaces and provided great insight, and the speakers I attended for showing me cool stuff I've yet to see. If you've got any questions or corrections from my dump, please let me know. And if you were the one giving a talk, or want to add something, let me know and I'll toss a link/edit in here going forward.

The full schedule is up at:

Day 1

Python Koans - Tutorial

I started out with the Koans tutorial track. You can get the koans from:

The idea is that you get the source code, which is set up as a series of unit tests. Each test is broken and you need to fix it in order to understand some aspect of Python. They've got a ton of work in here, and it's a really fun way to learn about Python.

For instance there's a whole test file for tuples. You have to go in and correct the unit tests so they pass and as you correct them you learn how tuples work and some of the ways you can use them.

You can see all the koans in their directory:

So one example:

[sourcecode lang="python"]
def test_tuples_are_immutable_so_item_assignment_is_not_possible(self):
    count_of_three = (1, 2, 5)
    try:
        count_of_three[2] = "three"
    except TypeError as ex:
        self.assertMatch(__, ex[0])
[/sourcecode]

Here it's demonstrating that tuples are immutable. You'll get an exception if you try to edit the tuple in place like this. Your job is to replace the '__' with the name of the exception you'll end up getting.

The koans are small and you can easily test these in a python terminal to help you figure things out. Lots of fun.
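For reference, here's roughly what the completed koan looks like once the blank is filled in. This is a standalone Python 3 sketch: the exception message lives on ex.args, and the koans' assertMatch is swapped for a plain assert:

```python
# The koan above, completed: assigning into a tuple raises TypeError,
# and the blank is replaced with (part of) the exception's message.
def test_tuples_are_immutable_so_item_assignment_is_not_possible():
    count_of_three = (1, 2, 5)
    try:
        count_of_three[2] = "three"
    except TypeError as ex:
        assert "item assignment" in ex.args[0]


test_tuples_are_immutable_so_item_assignment_is_not_possible()
```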

Log Analysis with Python

This turned out not to be what I was looking for. It seemed like a chance for the writer of the petit tool to show off his tool. I was hoping for more about how to analyze logs, generate trends, etc.; more on the math/methods. His library seems cool and all, but unless I spend a lot of regular time going over log files, I'm not sure how handy it would be.

Open Space - Message Queues

I attended an open space on message queues. There were only a couple of people there and we did some chatting. One guy was a heavy ZeroMQ user, but wasn't that happy with it and was looking at rabbitmq. So not a lot of new notes, but it seems the rabbitmq route is the most popular.

Wrangling the bits, standardizing how apps get built - My Talk

I gave my talk at the end of the day. It was about the setup we use at work and how it enables us to easily create new apps/projects in a hurry in a standard way, so that we all know what to expect. It seemed to go over really well; I had a number of people comment after the talk that they thought it was really good. It's basically notes on how to customize your setup with your own actions/features, making life a bit easier.
I've put my talk slides up in case you're interested, but there's not a lot of meat to them unfortunately:


That night we met up at Subway to do some sprints on projects. We didn't get a whole lot done. I did start to check out the source code for PyPI, but didn't do much with it.

On a side note, all hail the awesome Subway lady who ended up serving us all. I think we might have been some 25 people strong showing up at once, and we hammered her, running the store all on her own. She didn't complain once and was just pure awesome.

Day 2

Vim open space

The talks were so-so, so I decided to instead lead an open space on vim and using it for Python work. There were a bunch of people in there and it went over really well; everyone learned a few new tricks. One thing I've since updated is to use the pylint.vim plugin so that I can get in-line syntax error checking.

Some of the plugins brought up:

And another link to my vim config:

Controlling Unix Processes with Supervisord

This was a pretty good talk. There were several things I didn't realize that supervisor could do, such as writing your own custom event handlers. For instance, they have a custom handler that watches for a certain message and spins up more web servers to handle an increased server load.

Some of the tricks, like tying all your services together under supervisor, were pretty neat. For instance, they tie their MySQL servers, web app servers, and Solr full-text servers into one supervisor control setup, so they all start up and work together.

Some of the things they do are possible because they can provide a user control interface for clients. They create a cPanel-like interface for users on their network which allows them to restart/stop servers at will.

Code with Style

This talk was basically a walk-through of PEP 8. I try hard to stick to PEP 8 and use a vim plugin to check my files as I work on them. The two places I have a hard time with it are line length, when you have long single-line messages, and complicated list comprehensions.

He demonstrated a couple of tricks:

  • For complicated list comprehensions, you can turn the condition into a function that you then run across your list via the filter() function.
  • For long strings, you can avoid the \ at the end of the line by using ()

[sourcecode lang="python"]
s = ("my one long"
     " line of text"
     " is this thing here")
[/sourcecode]

will come out as: "my one long line of text is this thing here"
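The first trick, pulling a hairy comprehension condition out into a named function for filter(), might look something like this (the user records here are made up for the example):

```python
def is_active_admin(user):
    # The condition that was cluttering the comprehension, now named.
    return user['active'] and user['role'] == 'admin' and user['logins'] > 10

users = [
    {'active': True, 'role': 'admin', 'logins': 42},
    {'active': False, 'role': 'admin', 'logins': 99},
    {'active': True, 'role': 'user', 'logins': 12},
]

# The call site stays short and well under the line-length limit.
admins = list(filter(is_active_admin, users))
```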

So now I have some work to do to clean up some things I've left behind.

Fabric - Open Space

I wasn't into the next set of talks, so I again hosted an open space, this one on Fabric. It got interesting, as many people in the room handle large server farms of hundreds of machines and have very separate ops/dev people. So a developer is separated from some of the things I was showing in Fabric, like stopping/restarting Celery, resetting the web app, etc. There was also some question as to the advantage of using something like Fabric over plain bash/shell scripts.

So it was interesting to see where Fabric fits in and where it falls short: while it's cool in some aspects, there are a number of limitations and concerns in several use cases.

My one regret was that, with the discussion going on, we never got other people up front showing off their Fabric setups, which I would have liked to see. We had a hard time keeping the discussion on track, and I'd have liked to see more stuff from other Fabric fans.

Making it go faster

This was the last talk I went to. It started out looking at how to profile your code using cProfile and pstats, which I had messed with before, but then he covered using RunSnakeRun to do the profiling, which gives you pretty graphs that represent the time spent in functions as a series of shells like this:

You can see the site there:

So it's a decent wrapper around pstats for more visual, easy-to-read performance stats. I'll have to try that out in the future.
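For the cProfile/pstats side of it, a minimal profiling session looks something like this (slow_sum is just a made-up function so the profiler has something to measure):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Toy workload so the profiler has something to chew on.
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100000)
profiler.disable()

# pstats sorts and prints the collected timings; RunSnakeRun reads the
# same profile data and draws it graphically instead.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
```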

From Software Center to App Store

Yay, Craig and I managed to get back on the podcast train once I got back from my vacation. It's still a work in progress as we fight to keep our normal extended discussions down to a 30-minute podcast. This time I wanted to discuss the idea of the desktop OS app store, and there were a few things we didn't get to in the podcast. I had originally wanted to go through the things I felt were missing from the Software Center in Ubuntu that prevented me from counting it as an "app store". The Software Center has come along since I checked it out during the original blueprint phase, and I think it could definitely get to an app store look and feel. So first, my rants on things I'm not a fan of.

  • The universal access icon is white, which, set upon a light background, makes it nearly impossible to read/see.
  • In the "Get Software" section, is that really the new Canonical logo? A purple dot? Sorry, but it looks horrible.
  • In the "Installed Software" link on the left side, I completely missed that it had any logo, since it's, again, a white logo on a white background.
  • When viewing the details of a software package I love the link for the website, but since there's no hover or other indication that it's clickable, it looks more like a heading than a link.

Anyway, that's just my off-the-cuff nitpicking. What I really wanted to go through is the list of features I'd love to see for the app store concept to take off in Ubuntu.

First up, only show the actual software. Forget the libraries: anything that starts with lib should be hidden by default. I also don't think there should be any view of all packages; it's just scary. I think search and going through categories/simplified interfaces are the only way to go. Does anyone honestly think users are going to go through the entire list? There's a bunch of things that can be done to clean up the lists of packages in order to make things approachable to users.

Next up, when I think of app stores I think of paid apps. Now I know Canonical has some software in their online Canonical store, but that's not where I go to install software. I should be able to purchase software right in the Software Center, and along these lines, the Canonical partners repo is the place to put this stuff. Beyond the few things from the Canonical store, I'd love to see this opened up for other software to be submitted for purchase or maybe donation. How cool would it be to be able to support your favorite apps right through the Software Center?

While we're talking about the partners repo, how is it that almost none of those packages have icons? Not only that, a quick test of a few shows no info in the "More Info" section. You'd think these packages would have had help from Canonical getting into place, and they should be the gold standard of user experience for packages in the Software Center.

Finally, according to a recent podcast episode, there's already work going on to allow users to write reviews and ratings of software in there. This is great news and will open up a bunch of user interface enhancements for users looking for software.

So discovery, purchase, and reputation. These are the big things I think need help in the Ubuntu Software Center. What things do you think are missing?

Edit: And I should have started here, but it definitely looks like much of the paid/donations stuff is something they're already looking to do. Check out their roadmap.

My shot at radio

I've always had a secret love affair with radio DJs. It seems like a pretty cool job until you realize the awful hours, the pay, and the crap you have to do to start out. In an effort to have some of that fun without changing careers, I've joined forces with Craig Maloney and we've started a podcast of our own, with the idea of putting an extremely techie point of view onto the things we have interests in. So we'll talk about things that come up in the Ubuntu community, represent the Michigan LoCo, and just talk tech. If you get a second, check out our first episode over at

It's definitely a work in progress, but check it out and let us know what you think in the comments at

Pylons controller with both html and json output

We're all using ajax these days, and one of the simple uses of ajax is to update or add some HTML content on a page. What often happens is that the same data is also displayed on a url/page of its own. So you might have a url /job/list, and then you might want to pull a list of jobs onto another page via ajax. My goal is to be able to reuse controllers to provide details for ajax calls, calls from automated scripts, and whole pages. The trouble is that the @jsonify decorator in Pylons is pretty basic: it just sets the content type header to application/json, takes whatever you return, and tries to jsonify it for you.

That's great, but I can't reuse that controller to send HTML output any more. So I set out to figure out how the decorator works and create one that works more like I wish.

The first thing in setting this up was to look at how to structure any ajax output. I can't stand urls you hit via ajax that just dump out some string of content, making you look up every controller in order to figure out just what you're getting back.

I prefer to send back a structured format. So what parts do we need? Really, just a few things. Your ajax library will tell you if there's an error such as a timeout, 404, etc. It won't tell you if you make a call to a controller and don't have permission, or if the controller couldn't complete the requested action. So the first thing we need is some indication of success in our response.

The second component is feedback as to why that success value came back. If the controller returns a lack of success, we'll want to know why. Or maybe it is successful, but we need some note about the process along the way. Either way, we need a standard message we can send back.

Finally, we might want to return some sort of data back. This could be anything from a json dump of the object requested to actual html output we want to use.

So that leaves us with a definition:

[sourcecode lang="javascript"]
{
    'success': true,
    'message': 'Yay, we did it',
    'payload': {'id': 10, 'name': 'Bob'}
}
[/sourcecode]

I want to enforce that any ajax controller will output something in this format. It makes it much easier to write effective, DRY javascript that can handle it, and it really leaves us open to handle about anything we need.

So my json decorator is going to have to make sure that if the user requests a json response, that it gets all this info. If the user requests an html response, it'll just return the generated template html.

By copying parts of the @jsonify and the @validate decorators I came up with something that adds a self.json to the controller method. In here we setup our json response parts.

Finally, we catch whether this is a json request. If so, we return our dumped self.json instance. Otherwise, we return the html the controller sends back. If the controller returns rendered html and it is a json request, we stick that html into the payload as payload.html.
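My actual decorator lives in the gist linked in this post; as a rough, framework-free sketch of the idea (with a wants_json flag standing in for the real check against the Pylons request headers, and JSONResponse as the holder object):

```python
import json
from functools import wraps

class JSONResponse(object):
    # Holds the three standard parts of every ajax response.
    def __init__(self):
        self.success = False
        self.message = ''
        self.payload = {}

    def dump(self):
        return json.dumps({
            'success': self.success,
            'message': self.message,
            'payload': self.payload,
        })

def myjson(wants_json):
    # wants_json stands in for inspecting the request's Accept header.
    def decorate(func):
        @wraps(func)
        def wrapper(self, *args, **kwargs):
            # Give the controller a fresh response holder to fill in.
            self.json = JSONResponse()
            html = func(self, *args, **kwargs)
            if wants_json:
                # Stick any rendered html into the payload and dump json.
                if html:
                    self.json.payload['html'] = html
                return self.json.dump()
            return html
        return wrapper
    return decorate
```

It's only a sketch of the shape of the thing, not the Pylons-specific code.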

So take a peek at my decorator code and the JSONResponse object that it uses. Let me know what you think and any suggestions. It's my first venture into the world of Python decorators.

@myjson decorator Gist

Sample Controller

[sourcecode lang="python"]
@myjson()
def pause(self, id):
    result = SomeObj.pause()

    if self.accepts_json():
        if result:
            self.json.success = True
            self.json.message = 'Paused'
        else:
            self.json.success = False
            self.json.message = 'Failed'

        self.json.payload['job_id'] = id

    return '<h1>Result was: %s</h1>' % self.json.message

# Response:
# {'success': true,
#  'message': 'Paused',
#  'payload': {'html': '<h1>Result was: Paused</h1>'}}
[/sourcecode]

SqlAlchemy Migrations for mysql and sqlite and testing

I really want to be a unit tester, I really do. Unfortunately it's one of those things I can't seem to get going. I start and end up falling short before I get over the initial setup hurdle. Or I get a couple of tests working, but then, as I have a hard time trying to test parts of things, I fade. So here goes my latest attempt. It's for a web app I'm working on at work and I REALLY want to have it under at least basic unit tests. Since it's a database-backed web application, my first step is to get a test db up and running to run my tests against. With that up, I can start some actual web tests that add and alter objects via some ajax api calls.

In order to get a test db I first had to figure out how to setup a database for the tests. For speed and ease purposes I'd rather be able to use sqlite. This way I don't need to setup/reset a mysql db on each host I end up trying to run tests on.

Of course this is complicated because I'm using sqlalchemy-migrate for my application. This means part of the testing setup should be to init a new sqlite db and then bring it up to the latest version. In order to do this I had to convert my existing migrations to work in both MySQL and sqlite.

Step 1: I need a way to tell the migrations code to use the sqlite db vs the live mysql db. I've setup a script in my project root, so I hacked it up to check for a --sqlite flag. Not that great, but it works.

[sourcecode lang="python"]
"""In order to support unit testing with sqlite we need to add a flag
for specifying that db:

python version_control --sqlite
python upgrade --sqlite

Otherwise it will default to using the mysql connection string.

"""
import sys

from migrate.versioning.shell import main

if '--sqlite' in sys.argv:
    main(url='sqlite:///apptesting.db', repository='app/migrations')
else:
    main(url='mysql://connection_string', repository='app/migrations')
[/sourcecode]

Step 2: Not all of my existing migrations were sqlite friendly. I had cheated and added some columns with straight sql like:

[sourcecode lang="python"]
from sqlalchemy import *
from migrate import *

def upgrade():
    sql = "ALTER TABLE jobs ADD COLUMN created TIMESTAMP DEFAULT CURRENT_TIMESTAMP;"
    migrate_engine.execute(sql)

def downgrade():
    sql = "ALTER TABLE jobs DROP created;"
    migrate_engine.execute(sql)
[/sourcecode]

This worked great with MySQL, but sqlite didn't like it. In order to get things to work both ways I moved to using the changeset tools to make these more SA happy.

[sourcecode lang="python"]
from sqlalchemy import *
from migrate import *
from migrate.changeset import *
from datetime import datetime

meta = MetaData(migrate_engine)
jobs_table = Table('jobs', meta)

def upgrade():
    col = Column('created', DateTime,
    col.create(jobs_table)

def downgrade():
    sql = "ALTER TABLE jobs DROP created;"
    migrate_engine.execute(sql)
[/sourcecode]

A couple of notes: this abstracted the column creation so that both sqlite and mysql would take it. Notice I did NOT update the drop command. Sqlite won't drop columns, and I honestly didn't care, because the goal is for my unit tests to be able to bring up a database instance for testing; I'm not going to run the downgrade commands against the sqlite database.

Step 3: With all of the migrations moved to SA-abstracted code instead of SQL strings, I was almost in business. My final problem was one migration in particular. I had changed a field from a varchar to an int field. Sqlite won't let you do a simple 'ALTER TABLE...', and even when I had the command turned into SA-based changeset code, my db upgrade failed due to sqlite tossing an exception.

What did I do? I cheated. First, I updated the migration with the original column definition to an Integer field. I mean, any new installs could walk that migration up just fine. I happen to know that the two deployments right now have already made the change from varchar to int. So for them, the change won't break anything for upgrade/downgrade.

I then kept the migration with the change, but tossed it in a try/except block so that I could trap it nicely and just output a message: "If this fails, it's probably sqlite choking." It's hackish, but it works for all the use cases I need.

Now I can create a new test database with the commands:

[code]
python version_control --sqlite
python upgrade --sqlite
[/code]

Now I can start building my test suite to use this as the bootstrap that creates a test db. I'll have to then remove the file on teardown so that I don't get any errors, but that'll be part of the testing setup. Not in memory, but oh well, it works.
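A sketch of what that setup/teardown might look like (DBTestCase and the db path here are hypothetical; in the real suite setUp would shell out to the manage script with the --sqlite flag):

```python
import os
import unittest

def remove_db(path):
    # Teardown helper: drop the sqlite file so the next run starts clean.
    if os.path.exists(path):
        os.remove(path)

class DBTestCase(unittest.TestCase):
    # Hypothetical base class for the suite.
    db_path = 'apptesting.db'  # matches the sqlite url in the manage script

    def setUp(self):
        # Here the real suite would run the migration commands, i.e. the
        # version_control/upgrade calls with --sqlite, to build the schema.
        pass

    def tearDown(self):
        remove_db(self.db_path)
```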

A follow up, more dict to SqlAlchemy fun

Just a quick follow up to my last post on adding some tools to help serialize SqlAlchemy instances. I needed to do the reverse: I want to take the POST'd values from a form submission and tack them onto one of my models. So I now also add a fromdict() method onto Base that looks like:

[sourcecode lang="python"]
def fromdict(self, values):
    """Merge in items in the values dict into our object if it's one
    of our columns.

    """
    for c in self.__table__.columns:
        if in values:
            setattr(self,, values[])

Base.fromdict = fromdict
[/sourcecode]

So in my controllers I can start doing:

[sourcecode lang="python"]
def controller(self):
    obj = SomeObj()
    obj.fromdict(request.POST)
    Session.add(obj)
[/sourcecode]

Hacking the SqlAlchemy Base class

I'm not even a month into my new job. I've started working for a market research company here locally. It's definitely new territory, since I don't know that I've ever found myself really reading or talking about 'market research' before. In a way it's a lot like my last job in advertising: you get in and there's a whole world of new terms, processes, etc. you need to get your head around. The great thing about my new position is that it's a Python web development gig. I'm finally able to concentrate on learning the ins and outs of the things I've been trying to learn and gain experience with in my spare time.

So hopefully as I figure things out I'll be posting updates to put it down to 'paper' so I can think it through one last time.

So I started with some more SqlAlchemy hacking. At my new place we use Pylons, SqlAlchemy (SA), and Mako as the standard web stack. I've started on my first 'ground up' project here, and I've been trying to make SqlAlchemy work the way I like doing things.

So first up, I like the instances of my models to be serializable. I like to have json output available for most of my controllers; we all want pretty ajax functionality, right? But the json library can't serialize a SqlAlchemy model by default. And if you just try to iterate over sa_obj.__dict__, it won't work, since you've got all kinds of SA-specific fields and related objects in there.

So what's a guy to do? Run to the mapper. I've not spent much time poring over the details of SA's internals, and the mapper is something I need to read up on more.

Side note: all these examples are from code using declarative-base style SA definitions.

The mapper does this great magic of tying together a Python object and a SA table definition, so in the end you get a nice combo you do all your work with. In the declarative syntax case you normally have all your models extend the declarative base. So the trick is to add a method for serializing SA objects to the declarative base and boom, magic happens.

The model has a __table__ instance in it that contains the list of columns. Those are the hard columns of data in my table. These are the things I want to pull out into a serialized version of my object.

My first try at this looked something like:

[sourcecode lang="python"]
def todict(self):
    d = {}
    for c in self.__table__.columns:
        value = getattr(self,
        d[] = value

    return d
[/sourcecode]

This is great and all, but I ran into a problem. The first object I tried had a DateTime column in it, which choked since the json library was trying to serialize a DateTime instance. A quick hack to check if the column was DateTime, and if so convert it to a string, got me up and running again.

[sourcecode lang="python"]
if isinstance(c.type, sqlalchemy.DateTime):
    value = getattr(self,"%Y-%m-%d %H:%M:%S")
[/sourcecode]

This was great and all. I attached this to the SA Base class and I was in business. Any model now had a todict() function I could call.

[sourcecode lang="python"]
Base = declarative_base(bind=engine)
metadata = Base.metadata
Base.todict = todict
[/sourcecode]

This is great for my needs, but it does miss a few things. It just skips over any relations tied to the instance; it's pretty basic. I'll also run into more field types that need to be converted. I figure that whole part will need a refactor in the future.

Finally I got thinking, "You know, I can often do a log.debug(dict(some_obj)) and get a nice output of that object and its properties." I wanted that as well. It seems more pythonic to do

[sourcecode lang="python"]
dict(sa_instance_obj)
# vs
sa_instance_obj.todict()
[/sourcecode]

After hitting up my Python reference book I found that the key to being able to cast something to a dict is to have it implement the iterable protocol. To do this you need to implement a __iter__ method that returns something that implements a next() method.

What does this mean? It means my todict() method needs to return something I can iterate over, which I can then return from __iter__. So I turned todict into a generator that yields the (column name, value) pairs to iterate through.
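A tiny SA-free example of the same protocol trick:

```python
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __iter__(self):
        # dict() will consume any iterable of (key, value) pairs,
        # so a generator of tuples is all we need.
        yield ('x', self.x)
        yield ('y', self.y)
```

With that in place, dict(Point(3, 4)) comes out as {'x': 3, 'y': 4}.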

[sourcecode lang="python"]
def todict(self):
    def convert_datetime(value):
        return value.strftime("%Y-%m-%d %H:%M:%S")

    for c in self.__table__.columns:
        if isinstance(c.type, sa.DateTime):
            value = convert_datetime(getattr(self,
        else:
            value = getattr(self,

        yield(, value)

def iterfunc(self):
    """Returns an iterable that supports .next() so we can do
    dict(sa_instance)

    """
    return self.todict()

Base = declarative_base(bind=engine)
metadata = Base.metadata
Base.todict = todict
Base.__iter__ = iterfunc
[/sourcecode]

Now in my controllers I can do cool stuff like

[sourcecode lang="python"]
@jsonify
def controller(self, id):
    obj = Session.query(something).first()

    return dict(obj)
[/sourcecode]

Auto Logging to SqlAlchemy and Turbogears 2

I've been playing with TurboGears 2 (TG2) for some personal tools that help me get work done. One thing I've run into is an important feature my work code has that isn't in my TG2 application yet. In my PHP5 app for work, I use the Doctrine ORM, and I have post-insert, update, and delete hooks that log changes to the system. It works great, and I can build up a history of an object over time to see who changed which fields and such.

With my TG2 app doing inserts and updates, I initially just faked the log events by manually saving Log() objects from within my controllers as I did the work that needed to be done.

This sucks, though, since the point is that I shouldn't have to think about it: anytime code changes something, it's logged. So I started searching the SqlAlchemy (SA) docs to figure out how to duplicate this in TG2. I wanted something that's pretty invisible. In my PHP5 code I have a custom method I can put onto my models in case I want to override the default logged messages and such.

I found part of what I'm looking for in the SA MapperExtension; this blog post got me looking in the right direction. The MapperExtension provides a set of methods to hook a function into. The hooks I'm interested in are the 'after_delete', 'after_insert', and 'after_update' methods. These are passed the instance of the object and a connection object, so I can generate an SQL query to manually save the log entry for the object.

So I have something that looks a little bit like this:

[sourcecode lang="python"]
from sqlalchemy.orm.interfaces import MapperExtension

class LogChanges(MapperExtension):

    def after_insert(self, mapper, connection, instance):
        query = "INSERT INTO log (username, type, client_ip, application) \
                 VALUES ('%s', %d, '%s', '%s')" % (u"rick", 4, u'', u'my_app')

        connection.execute(query)
[/sourcecode]

Then I pass that into my declarative model as:

[sourcecode lang="python"] __mapper_args__ = {'extension': LogChanges()} [/sourcecode]

This is very cool and all, but it's not all the way to where I want to head. First, the manual SQL logging query kind of sucks. I have an AppLog() model that I just want to pass some values to in order to create a log entry. I'm thinking what I really should do is find a different way to do the logging itself. I'm debating actually building a separate logging application that I would call with the object's details.

The problem with this is that one of the things I do in my current app is store the old values of the object. This way I can loop through them, see which values actually changed, and generate that in the log message. This is pretty darn useful.

The other downside is that I don't have a good way to have a custom logging message generator if I just call a logging app API.

So I think I might try out the double db connection methods that SA and TG2 support. This way I could actually try to use the second db instance with a Logging() object to write out the changes without messing up the current session/unit of work.

The missing part here is that I'm still not really sure how to get the 'old' object values in order to generate a list of fields that have been changed. Guess I have some more hacking to do.

Moving day, and hopefully a rededication

Today is moving day. I've run my own blog for a long while. I had a s9y blog setup and for the last year I've managed something like 2 posts. I want to get back into blogging as I've actually been doing some fun stuff. I also want to bring together my tech posting and my woodwork posting into one place. It's just easier to manage and hopefully it'll get me posting more often. It looks like migrating my old posts is going to be a chore. We'll see how that goes later on. If you want to follow along realize there will be more non-tech content in the blog so subscribe to the tech category if you don't want the misc stuff.

Now to get working on some new posts about the various projects I've been hacking on lately.