In search of better enums

I've been fighting to deal with a pretty common use case in my Pylons applications. I often have a db model (with the marvelous SqlAlchemy) where the object in question has some kind of limited set of options for a field. It's basically an enum field, but I don't want to deal with an enum on the DB end. It makes it hard to support different databases and such.

One solution is to actually create a table for the options. For instance you might have a state table that is joined to your object. It has all the US states in there for you, and whenever you query you have to do a join on the state table to get the pretty name. The use cases I'm talking about are sets of 2-5 options, and a model might have 5 or more of these fields. So I don't want to deal with creating tables, migrations, and joins for them all.

I had a few failed attempts, but I ended up learning some great new things about SqlAlchemy and have something working. First up I need to create an object that contains the options. Forgive the cheesy names, but let's say we have a table with a Severity field in it.

[sourcecode lang="python"]
# first we want to inherit from a common base for all of these enums
class MyEnum(object):
    """Enum-like object type

    Each implementation need only extend MyEnum and define the dict _enum

    """
    def __getattribute__(self, name):
        if name in object.__getattribute__(self, '_enum'):
            return object.__getattribute__(self, '_enum')[name]
        else:
            # Default behaviour
            return object.__getattribute__(self, name)

    def to_list(self):
        """Generate a list of tuples we use to quickly create <select> elements"""
        return [(val, string) for val, string in self._enum.iteritems()]


class Severity(MyEnum):
    _enum = {
        'low': 'Low',
        'med': 'Medium',
        'high': 'High',
    }
[/sourcecode]

So the base MyEnum object gives us a way of accessing the values via an object-like interface.

[sourcecode lang="python"]
my_instance = SomeModel()
my_instance.severity = Severity().low
[/sourcecode]

Now this is cool because it helps us have nice refactorable code to reference these strings we're storing into the database for the column. Yep, I know that having lots of strings is more resource intensive on the database. In most of these low scale tables though I'd rather have the table easy to read. Not only that, when I get the values out they're actually the prettified version of the string. So output is much easier.

Also, just to note: in order for the MyEnum.__getattribute__ stuff to work you have to have an instance of the object. That's why you have to write Severity().high, because Severity.high won't hit __getattribute__ and you'll get an AttributeError saying there's no attribute named 'high'.
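To see the instance-vs-class difference in action, here's a minimal, runnable sketch trimmed down to just the lookup logic (written against Python 3, so no iteritems() here):

```python
class MyEnum(object):
    """Enum-like base; subclasses define the dict _enum."""
    _enum = {}

    def __getattribute__(self, name):
        # intercept instance attribute access and check the _enum dict first
        if name in object.__getattribute__(self, '_enum'):
            return object.__getattribute__(self, '_enum')[name]
        # fall back to normal attribute lookup
        return object.__getattribute__(self, name)


class Severity(MyEnum):
    _enum = {'low': 'Low', 'med': 'Medium', 'high': 'High'}


# instance access goes through our __getattribute__
print(Severity().high)  # 'High'

# class-level access skips it entirely, so this raises AttributeError
try:
    Severity.high
except AttributeError:
    print('AttributeError, as described above')
```

The key detail is that `__getattribute__` defined on a class only intercepts lookups on *instances* of that class; lookups on the class itself go through the metaclass instead.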

I also added a nice to_list() method because we use the webhelpers package to generate html elements and it'll accept a list of value tuples. I can just pass Severity().to_list() to the webhelper for generation now.
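For illustration, the tuples to_list() produces look like this (a Python 3 rework of the method, since iteritems() is Python 2 only; sorted() is used because plain dict iteration order isn't something to rely on here):

```python
class MyEnum(object):
    _enum = {}

    def to_list(self):
        """Generate (value, label) tuples for building <select> elements"""
        return [(val, string) for val, string in self._enum.items()]


class Severity(MyEnum):
    _enum = {'low': 'Low', 'med': 'Medium', 'high': 'High'}


print(sorted(Severity().to_list()))
# [('high', 'High'), ('low', 'Low'), ('med', 'Medium')]
```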

This was a good start and works well, but the one other thing I get nervous about is making sure the database doesn't take in garbage; it should be able to make sure that the option I'm setting is in fact in the Severity list of options. To do this I added a custom SqlAlchemy column type using the TypeDecorator. This basically gives me access to the moment a value is pulled from the database and placed on the model instance, as well as when a value is set on the model from the code side. A database getter/setter, in practice.

[sourcecode lang="python"]
class MyEnumType(types.TypeDecorator):
    """SqlAlchemy Custom Column Type MyEnumType

    This works in partnership with a MyEnum object

    """
    # so this is basically usable anywhere I have a column type of Unicode
    impl = types.Unicode

    def __init__(self, enum_obj, *args, **kwargs):
        """Init method

        :param enum_obj: The enum class that this column should be limited to

        e.g. severity = Column(MyEnumType(Severity, 255)) for a unicode
        column that has a length allowance of 255 characters

        """
        types.TypeDecorator.__init__(self, *args, **kwargs)
        self.enum_obj = enum_obj

    def process_bind_param(self, value, dialect):
        """When setting the value make sure it's valid for our MyEnum"""
        # allow setting to None for empty value
        if value is None or value in self.enum_obj._enum.itervalues():
            return value
        else:
            assert False, "%s is not a valid value for %s" % (
                value, self.enum_obj)

    def process_result_value(self, value, dialect):
        """Going out to the database just return our clean value"""
        return value


# brief usage example
class SupportTicket(meta.Base):
    __tablename__ = 'tickets'

    severity = Column(MyEnumType(Severity, 255))
[/sourcecode]

So now I have things tied into my database model as well. Hopefully this holds up as a valid method for doing this; it seems like it'd be a common use case. What I love is that my code never has to get down to specifying strings. If I want to limit a column or check a value I can use somewhat cleaner (imho, that is) code:

[sourcecode lang="python"]
if some_ticket.severity == Severity().low:
    # ... do something

# or in a query
results = SupportTicket.query.\
    filter(SupportTicket.severity == Severity().high).\
    all()
[/sourcecode]


Anyway, food for thought. Let me know if you've got an alternative I'm missing that I should be using.

Working on OSS @ Work: Dozer profiling Pylons

It's been a fun little bit working on helping speed up a Pylons app at work. Performance needed improvement, and while we knew a couple of big places to look, I also wanted to get some profiling in place. Fortunately I recently ran across Marius's post on profiling Pylons applications with Dozer. Now Dozer was originally meant for viewing app logs and memory checking, but it seemed that the in-development work added some profiling ability. So away I went, checking out his code and seeing if I could get it to run. Once I realized that you had to set up the middleware for each type of activity you wanted to perform (cProfile, memory monitor, log view), I got things running. Very cool!

Right off the bat I realized I might be able to help a bit. The links were dark blue on black, and there seemed to be some issues with the ui on the profiling detail screen. Since we were using this for work I took it as a chance to help improve things upstream. I moved the css images to get the arrow indicators working right, cycled the background rows vs hard coding them, and did some small ui tweaks I liked. I also coded up a set of "show/hide all" links to quickly expand things out and back.

Of course, it's not all roses. The show all pretty much times out in anything but Chrome, and there are still more ui bits I think could be improved.

Now that I had tinkered, though, it was time to add some features we could use. First up: log view for json requests. We have some timer code in our SqlAlchemy logs, and I want to be able to view those in the browser, but also for json requests. So I tweaked it so that on json requests it adds a _dozer_logview object to the response. This then shows the various log entries along with the time diff that the html view shows.

Once that was going we decided this would be great to put on a staging version of our web application that the client has access to. The nice thing is that staging is as close to production as possible, so some profiling information there would be very helpful. We don't want the html logview information visible to the client though; it detracts from the ui. To help with this I added an ipfilter config option for the LogView feature. In this way we can limit it to a couple of testing dev boxes and not worry about the client getting access.

I've pushed the code up into a bitbucket fork of Marius's repository. Hopefully this is useful, and it's awesome that I got to spend some work time on code that I can hopefully give back. I love this stuff.

Pylons controller with both html and json output

We're all using ajax these days, and one of the simple uses of ajax is to update or add some HTML content to a page. What often happens is that the same data is also displayed in a url/page of its own. So you might have a url /job/list, and then you might want to pull a list of jobs onto a page via ajax. My goal is to be able to reuse controllers to provide details for ajax calls, calls from automated scripts, and whole pages. The trouble with this is that the @jsonify decorator in Pylons is pretty basic. It just sets the content type header to application/json, takes whatever you return, and tries to jsonify it for you.

That's great, but I can't reuse that controller to send HTML output any more. So I set out to figure out how the decorator works and create one that works more the way I'd like.

The first thing in setting this up was to look at how to structure any ajax output. I can't stand urls you hit via ajax that just dump out some content or some string, making you look up every controller in order to figure out just what you're getting back.

I prefer to use a structured format. So what parts do we need? Really, just a few things. Your ajax library will tell you if there's an error such as a timeout, 404, etc. It won't tell you if you make a call to a controller and don't have permission, or if the controller couldn't complete the requested action. So the first thing we need is some value of success in our response.

The second component is feedback as to why that success value came back. If the controller returns a lack of success we'll want to know why. Or maybe it is successful, but we need some note about the process along the way. So we need a standard message we can send back.

Finally, we might want to return some sort of data back. This could be anything from a json dump of the object requested to actual html output we want to use.

So that leaves us with a definition:

[sourcecode lang="javascript"]
{'success': true,
 'message': 'Yay, we did it',
 'payload': {'id': 10, 'name': 'Bob'}}
[/sourcecode]

I want to enforce that any ajax controller will output something in this format. It makes it much easier to write effective DRY javascript that can handle it, and really leaves us open to handle about anything we need.

So my json decorator is going to have to make sure that if the user requests a json response, that it gets all this info. If the user requests an html response, it'll just return the generated template html.

By copying parts of the @jsonify and the @validate decorators I came up with something that adds a self.json to the controller method. In here we setup our json response parts.

Finally, we check if this is a json request. If so, we return our dumped self.json instance. Otherwise, we return the html the controller sends back. If the controller returns rendered html and it is a json request, then we stick that html into the payload as payload.html.
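The gist linked below has the real code; as a rough sketch of the shape it takes, here's a simplified stand-in (the JSONResponse class, the accepts_json() check, and the names here are illustrative, and real Pylons code would also set the application/json content type on the response, which is elided):

```python
import json
from functools import wraps


class JSONResponse(object):
    """Simplified stand-in holding the success/message/payload triple."""
    def __init__(self):
        self.success = False
        self.message = ''
        self.payload = {}


def myjson():
    """Decorator sketch: return json or html depending on what was requested."""
    def decorator(func):
        @wraps(func)
        def wrapper(self, *args, **kwargs):
            # give the controller a fresh self.json to fill in
            self.json = JSONResponse()
            html = func(self, *args, **kwargs)
            if self.accepts_json():
                # rendered html from the controller rides along in the payload
                if html:
                    self.json.payload['html'] = html
                return json.dumps({'success': self.json.success,
                                   'message': self.json.message,
                                   'payload': self.json.payload})
            # plain html request: pass the controller's output straight through
            return html
        return wrapper
    return decorator
```

The nice property is that the controller body never has to branch on output format beyond filling in self.json; the decorator decides at the end which representation to serialize.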

So take a peek at my decorator code and the JSONResponse object that it uses. Let me know what you think and any suggestions. It's my first venture into the world of Python decorators.

@myjson decorator Gist

Sample Controller

[sourcecode lang="python"]
@myjson()
def pause(self, id):
    result = SomeObj.pause()

    if self.accepts_json():
        if result:
            self.json.success = True
            self.json.message = 'Paused'
        else:
            self.json.success = False
            self.json.message = 'Failed'

        self.json.payload['job_id'] = id

    return '<h1>Result was: %s</h1>' % self.json.message

# Response:
# {'success': true,
#  'message': 'Paused',
#  'payload': {'html': '<h1>Result was: Paused</h1>'}}
[/sourcecode]