I’ve become a really big fan of the PickledObjectField provided by this Django snippet.  So much so that I use it in almost every Django model I create these days.

Basically it serves as the best way to do an object store in your database, and it translates cleanly to and from JSON.  It’s an essential tool in any JavaScript-heavy application.

In my current project, we’re aspiring to be Google Analytics for your database.  We’re basically creating a system that will handle all of your dashboard/analytics needs in the easiest way possible.  This means that we have a LOT of different charts, with more in the making.

It’s not feasible to put all of the extra parameters that each chart type requires into the model as separate columns.  We would end up with an incredible mess in short order.  So instead I create a PickledObjectField called ‘params’ on the model.

class Chart(models.Model):
    params = PickledObjectField(default={'just': 'some', 'default': 'parameters'})

The params field then takes most any dictionary of parameters and automatically converts it to a string to be stored in the database.
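The snippet’s own code isn’t shown here, but the mechanism behind it is simple: pickle the value, then base64-encode the result so it survives in a text column, which is why the stored value looks like gibberish.  A minimal sketch of that round trip (plain Python; the helper names are mine, not the snippet’s):

```python
import base64
import pickle

def to_storage(value):
    # Pickle the object, then base64-encode it so it survives as a text column
    return base64.b64encode(pickle.dumps(value)).decode("ascii")

def from_storage(text):
    # Reverse both steps to get the original object back
    return pickle.loads(base64.b64decode(text))

params = {'just': 'some', 'default': 'parameters'}
stored = to_storage(params)      # an opaque string, safe to store in the database
restored = from_storage(stored)  # equal to the original dict
```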

The following commands, for example, will save a params value of something like “KGRwMQpWa2V……” to the database, but you can still use it just like any dict object.

>>> chart = Chart(params={"type": "scatter", "dot_size": 4, "color_list": ["red", "green", "orange"]})
>>> chart.params
{'type': 'scatter', 'dot_size': 4, 'color_list': ['red', 'green', 'orange']}
# You can also treat the field just like a dict
>>> chart.params['awesome'] = 'for sure'

It’s worth noting that there is a similar snippet that uses JSON-to-string conversion instead of Pickle.  I find that when using JavaScript so heavily it’s easier to use some other string conversion so as not to get confused, and I’ve been really impressed with the way this particular snippet works.
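The trade-off between the two snippets is easy to see by serializing the same dict both ways (illustrative code, not either snippet’s actual implementation):

```python
import base64
import json
import pickle

data = {"type": "scatter", "dot_size": 4}

as_json = json.dumps(data)                                 # readable in the database
as_pickle = base64.b64encode(pickle.dumps(data)).decode()  # opaque blob

# Both round-trip, but JSON is limited to basic types (dicts, lists,
# strings, numbers) while pickle handles most Python objects
round_tripped_json = json.loads(as_json)
round_tripped_pickle = pickle.loads(base64.b64decode(as_pickle))
```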

It’s incredibly rare that a Django snippet becomes such a major tool.  With the exception of my subdomain middleware, I can’t think of another snippet that I use more regularly, which leads me to think that it should really be moved into the core fields that Django provides.  An object store is an essential element of many applications, and the PickledObjectField is the best way to do it.


I use a lot of doctests for apps that all need to work on a set of initialized data.  I was hoping that there would be some kind of hook in Django for this but there is not.

I could switch all of the doctests to unittests and use fixtures, but that would be a lot of work and I prefer doctests.  I could also paste some sort of init command at the beginning of each test to ensure the data was loaded, but that’s just plain bad practice.

I came up with a method of creating a ‘testsetup’ app that is always run before the other apps, ensuring that whatever that app configures or loads into the database happens before any other apps perform their tests.  Here’s how you can do it too.

First create a ‘testsetup’ app and edit its file:

./ startapp testsetup
open testsetup/

__test__ = {"initialize tests": """

>>> your init code here

""" }

In the file you can load the database, prime the cache, or set up whatever else you need initialized.  Then add ‘testsetup’ as the first app in your INSTALLED_APPS setting:

INSTALLED_APPS = (
    'testsetup',
    # ... the rest of your apps
)

Now whenever you run

./ test

it will first run the tests for the ‘testsetup’ app and everything will be primed.  If that’s the only kind of test you run then that’s all you’ll need.  You’re done with this tutorial.

But if you run app level tests (i.e. ./ test someapp anotherapp) then the above solution is not enough.  To ensure the ‘testsetup’ app is run before these apps we’ll make our own TEST_RUNNER.  Create a file called ‘’ with the following source.

def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]):
    print "Given these test_labels", test_labels
    print "With these extra_tests", extra_tests

    from django.test.simple import run_tests as django_run_tests
    if test_labels:
        # Make sure 'testsetup' is run first
        tl = ['testsetup']
        tl.extend(test_labels)
        test_labels = tuple(tl)
        print "Testing these apps:", test_labels

    return django_run_tests(test_labels, verbosity, interactive, extra_tests)

and then in your file set the TEST_RUNNER variable

TEST_RUNNER = 'testrunner.run_tests'

This script simply wraps the Django test runner to ensure that the testsetup app is tested before any other apps are.  It basically makes the Django test runner think that you’re running ./ test testsetup someapp when you actually run ./ test someapp.


A note to the community: I think it’d be great if Django included a TEST_INIT variable which let you point to a function that would be executed immediately before the first test is run.  That hook would make the setup process for doctests much easier.

I’ve made a few apps on Google’s Appengine now and am getting to the point where I can pump them out fairly quickly.  I really love that it makes user authentication (my least favorite part of web applications) incredibly simple.

I wrote this app in less than an hour, as a simple tool for myself and to test out the authentication tools, which I hadn’t gotten a chance to use yet.  I’m sharing the source here in case it’s of any use to others.


The app is called Quick Thoughts and it’s a very simple private micro-blog (a private Twitter).  Basically you log in and can record quick notes to yourself.  They’re dated and only you can see them.

It’s all on one page.  You can get a good idea of what it is from this screen shot, or just try it out yourself by logging in.


There are a lot of other Appengine tutorials, including the official tutorial, that do a very thorough job of explaining how to set up a development environment and deploy.  I’m just going to share some more example code with helpful comments.

I know a long code listing is not pretty, but I’ve included a lot of comments for each part, which should make it easiest to understand in that format.  And I’m sorry that the crappy WordPress syntax highlighting has mangled the formatting; hopefully you’ll be able to sift through.


There are only three files for this app.  The first is the HTML template for the single page that is used.



<html>
<head>
<title>Quick Thoughts</title>
<!-- The following CSS could of course be put in a separate file, but this is simplest for now -->
<style type="text/css">
form { width: 320px; }

body {
    font-family: "Trebuchet MS", Verdana, Arial, Helvetica, sans-serif;
    font-size: 16px;
}
.date { font-style: italic; font-size: 12px; }
.thought { padding-top: 30px; }
div#outer {
    width: 500px;
    margin-top: 50px;
    margin-bottom: 50px;
    margin-left: auto;
    margin-right: auto;
    padding: 10px;
    /* border: thin solid #000000; */
}
h2 { margin-bottom: 0px; }
.username { margin-bottom: 20px; }
</style>
</head>
<body>
<div id="outer">
<h2>Quick Thoughts</h2>
<!-- Showing the user's nickname here.  The User object also has .email and .user_id -->
<div class="username">By {{ user.nickname }}</div>
<form action="." method="POST">
<textarea name="thought" rows="6" cols="40"></textarea>
<div align="right"><input type="submit" value="Record"></div>
</form>
<!-- This for loop prints out the different Thoughts stored in the database.
     Appengine uses Django 0.96's templating system, which I personally think is pretty great. -->
{% for thought in thoughts %}
<div class="thought">{{ thought.thought|linebreaksbr }}</div>
<div class="date">{{|date:"D. N jS g:i a" }}</div>
{% endfor %}
</div>
</body>
</html>

This is all pretty straightforward, especially if you come from the Django world.  Appengine wisely uses Django’s templating system to render its HTML pages.  For more details see the Django 0.96 template documentation.

Request Handling

The second file is the Python WSGI handler.  You can of course use Django on Appengine and have the advantage of the nice URL parser and the Django views format, but here I just stuck with the webapp RequestHandlers.

from google.appengine.ext import db
from google.appengine.ext.webapp import RequestHandler, WSGIApplication
from google.appengine.ext.webapp import template
from google.appengine.api import users

import os

from wsgiref.handlers import CGIHandler

class Thought(db.Model):
    """
    This is the database model that stores the different Thought objects that the user submits.
    Each entry in the database stores a thought, the date, and the user who wrote it.
    """
    thought = db.TextProperty()
    date = db.DateTimeProperty(auto_now_add=True)
    # The auto_now_add setting automatically adds the date that the object was created so you don't have to.
    user = db.UserProperty()  # Google handles the user for you.  Great!

class ThoughtHandler(RequestHandler):

    def get(self):
        user = users.get_current_user()  # Get the user
        if not user:
            # If they are not logged in, ask Google to authenticate them.
            self.redirect(users.create_login_url(self.request.uri))
            return

        # These are the variables that will be sent to the template
        template_values = {
            # This is a GQL query for the appengine datastore.
            # Here we're finding all Thoughts for the given user and ordering them by date descending.
            'thoughts': Thought.all().filter("user =", user).order('-date'),
            'user': user,
        }

        # Gather the full path to the template
        path = os.path.join(os.path.dirname(__file__), 'index.html')

        # Render the template with the template_values we collected above
        html = template.render(path, template_values)

        # Write out the result
        self.response.out.write(html)

    def post(self):
        """
        A Thought has been submitted via POST.
        Create a new Thought object and re-direct back to the front page.
        """
        user = users.get_current_user()  # Get the user
        if not user:
            # If they are not logged in, ask Google to authenticate them.
            self.redirect(users.create_login_url(self.request.uri))
            return

        # Get the 'thought' POST data from the request
        thought = self.request.get('thought')

        # Create a new Thought object using the POST data and the authed user
        t = Thought(thought=thought, user=user)
        # Save the object

        # Now re-direct back to the front page
        self.redirect('/')

def main():
    """
    This simple function is the URL parser.
    There's only one URL for this app, so it's a pretty bad example for this 😉
    """
    application = WSGIApplication([
        ('/', ThoughtHandler),
    ], debug=True)
    CGIHandler().run(application)

if __name__ == '__main__':
    main()
There are a few major highlights in this code: the Thought Datastore Model, the Query for your Thoughts, and the simple Google Authentication.  I LOVE these three lines of code (yes, I know how nerdy that sounds):

user = users.get_current_user() # Get the user
if not user:
    self.redirect(users.create_login_url(self.request.uri))

In those 3 lines we’ve requested the User object and asked Google to authenticate them and send them back if they’re not logged in!  Super simple!  No more login/signup/change password/change username crap to deal with here.  The authentication is done for you.

Toward the top is the Thought model, a subclass of db.Model.  For those of you who are familiar with Django, this format will look familiar.  The Thought model contains the text of the thought, the date it was recorded, and the user who recorded it.

In the ‘get’ request we query for all of the thoughts of the given user and order them by descending date.  The objects are fetched using GQL, the query interface for the Datastore.  You can handle most queries by playing with the format of the above example, and the GQL documentation has more information.


Finally we need the configuration file for our app.  It’s called app.yaml and it tells Appengine what app we’ve registered as and how to handle the URLs.


application: YOURAPPNAME
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*

Other tutorials explain this file well.  You can expand it to include other scripts and serve static files.


Now you have your own private micro-blog on Google’s datastore!  That means it’s theoretically infinitely scalable without you ever having to worry about a thing.  You can grow to the size of twitter and never blink an eye :).

Hope it helped some people.  Feel free to use this code in any way you’d like and feel free to leave questions, comments or corrections.

Within 5 seconds of looking at a shell script I’m usually opening a new file in my text editor to re-write the ugliness into something that makes more visual sense.  To me at least, Python is highly preferable.

Still, I use shell scripts all the time to batch a group of commonly used sequential executions, or to abbreviate a commonly used but lengthy command.

Today I looked into going one step further into the complexities of shell scripts, probably my last step for a while, and discovered how to handle arguments.

The inspiring use case was in using the Django manager to run tests for the different apps:

./ test --settings test_settings <optional app names>

The following script will check if an argument exists and if it does it will use the argument in the tests command.

if [ -n "$1" ]
# Test whether command-line argument is present (non-empty).
 ./ test --settings test_settings $1
 ./ test --settings test_settings

Notice that $1 refers to the first argument ($0 refers to the name of the executable and $5 refers to the fifth argument).
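A quick way to convince yourself of how the positional parameters behave, using `set --` to fake the arguments so it runs without a separate script:

```shell
# 'set --' replaces the current positional parameters, simulating
# what the script sees when run as: ./test alpha beta gamma
set -- alpha beta gamma

first="$1"   # first argument
count="$#"   # number of arguments (does not count $0)

echo "first=$first count=$count"
```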

Save the file as ‘test’ and then mark it as executable

chmod +x test

And now I can run the tests on their own

./test
or with an argument app

./test auth_user

Note that the above example has the limitation of only dealing with the first argument, and it seems a bit redundant.  The entire script can indeed be shortened to

./ test --settings test_settings "$@"

As $@ represents all arguments after $0.
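One subtlety worth knowing when forwarding arguments like this: quoting matters.  `"$@"` preserves each argument as one word, while an unquoted `$*` re-splits everything on whitespace.  A small demonstration (function names are mine):

```shell
count() { echo "$#"; }

forward_quoted()   { count "$@"; }   # each argument stays intact
forward_unquoted() { count $*; }     # arguments are re-split on spaces

q=$(forward_quoted "one two" three)    # the two arguments survive
u=$(forward_unquoted "one two" three)  # re-split into three words

echo "quoted=$q unquoted=$u"
```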

The Django manager is a really handy tool.  I wrote earlier about making your own custom management commands, and there is a lot of other great documentation on it.

Django comes with a bunch of helpful management commands like ‘flush’, ‘syncdb’, ‘test’, etc.

I’ve created a generic ‘drop’ command as I felt it was missing.  I often found myself going into mysql to drop and re-create a database.  This is needed whenever you significantly change your models and need to start over.  The ‘drop’ command does that automatically using the database information in your settings file.

The following code is from ‘’:

from django.conf import settings

from import NoArgsCommand

class Command(NoArgsCommand):
    help = "Drop and re-create the database"

    def handle_noargs(self, **options):
        import MySQLdb

        print "Connecting..."
        db = MySQLdb.connect(host=settings.DATABASE_HOST or "localhost",
                             user=settings.DATABASE_USER,
                             passwd=settings.DATABASE_PASSWORD,
                             port=int(settings.DATABASE_PORT or 3306))

        cursor = db.cursor()
        print "Dropping database %s" % settings.DATABASE_NAME
        cursor.execute("drop database %s; create database %s;" % (settings.DATABASE_NAME, settings.DATABASE_NAME))
        print "Dropped"
 print "Dropped"

To install, simply place this code in a file called ‘’ and add it to a management commands folder.  If you don’t have a management commands folder yet, you simply need to create the following file structure in one of your app directories (MY-APP-DIR):

MY-APP-DIR/
    management/
        commands/
Now, whenever you need to wipe your database and start fresh, you can simply run:

./ drop

I’ve finally gotten around to playing with Django Signals.  I’ve been pleased so far, but I feel it’s currently missing a few key features.

Some background: Django Signals allow developers to more easily break up their code into separate components, which allows much greater freedom and organization.  Often as a web application expands you add more and more functionality to each of its events.  When a new user is created, for example, the program might need to spin off a few emails, search for whether any of their friends are on the site, store an extra statistic, etc…

When adding or removing these features, developers would previously have to paste more code into their create_new_user function.  Now, however, developers can create a Django Signal in the create_new_user function and then build the other functions (sending email, storing stats, etc.) as separate listeners for that signal.

It makes everything more modular.  Lots of other languages have this: if you’re a hardware developer it’s similar to interrupts, and if you’re a JavaScript developer it’s similar to events.

So reading about them I’ve been excited about the possibilities and have been looking forward to putting them to use.  I’ve got to say it’s incredibly simple, works great, and I’m going to start using them all the time.

That said, there is a lot of growth yet to be done in this area.  With that in mind I’m going to make two requests, fully aware that most people will say “add it yourself” or “just use the patch” or “roll your own”.  Some of which I will of course do; still, I continue to blog…

Request: ManyToMany Signals

A great feature is that Django automatically has signals set up for many of the common tasks.  There are signals fired when objects are saved and deleted and when requests are made and finished.  Currently, however, there are no signals for ManyToMany relationships.  There is a ticket and a patch on the issue, but it has not been released in trunk yet.

A simple example from Socialbrowse is following other people in the network.  In Django, following someone is easy:

userprofile.following.add( other_userprofile )

Man, it’d be great if there was a signal on that!  Unfortunately I had to make a wrapper function in UserProfile.

import django.dispatch

followed_signal = django.dispatch.Signal(providing_args=["followed"])

def follow(self, dude_to_follow):
    """ follow the input <dude_to_follow> """
    self.following.add(dude_to_follow)

    # Send out a django signal
    followed_signal.send(sender=self, followed=dude_to_follow)

I know, it’s not a big deal, I’m a big baby, but that stuff does add up eventually.  Hopefully Ticket #5390 will be merged shortly.  The comments seem to suggest that it’s ready.

Request: Asynchronous Signals

It seems that I’m not the only one to assume that Django Signals create asynchronous tasks.  If you come from a JavaScript or hardware world you would expect asynchronous signals, as both JS events and hardware interrupts are asynchronous.

Unfortunately, Django Signals are not.  That’s fine in many cases, but I think it should support both.  Sending email, for example, tends to take a significant amount of time, mostly spent waiting around.  It would be great if the request could continue on and return, independent of the email task, instead of having to wait for it to complete.

The result would be a much faster experience for the user, and I think Django Signals is the place to put that type of functionality.  The ideal interface would be an extra ‘async’ input when registering a listener.  If true, the process calling the signal would not wait for the listener to finish.

Continuing with the follow example above the code would look something like this:

def email_follow_notification(sender, **kwargs):
    # do some emailing here
    ...

followed_signal.connect(email_follow_notification, async = True)

The email_follow_notification function would then be run in a separate process, allowing everything else to continue on without waiting.
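Until something like that lands, the proposed flag can be approximated in plain Python with one thread per listener (a thread rather than a full process, but the caller similarly doesn’t wait).  A sketch of the hypothetical interface, with names of my own invention, not Django’s:

```python
import threading

class AsyncSignal:
    """Toy dispatcher sketching the proposed async listener flag."""

    def __init__(self):
        self._listeners = []

    def connect(self, listener, async_listener=False):
        self._listeners.append((listener, async_listener))

    def send(self, sender, **kwargs):
        threads = []
        for listener, is_async in self._listeners:
            if is_async:
                # Fire and forget: the caller does not wait for this listener
                t = threading.Thread(target=listener, args=(sender,), kwargs=kwargs)
                t.start()
                threads.append(t)
            else:
                listener(sender, **kwargs)
        return threads  # returned so callers can join them if they want to

followed_signal = AsyncSignal()
done = threading.Event()

def email_follow_notification(sender, **kwargs):
    done.set()  # stand-in for the slow emailing work

followed_signal.connect(email_follow_notification, async_listener=True)
threads = followed_signal.send(sender="userprofile", followed="dude_to_follow")
```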

Those are my notes for now.  I’m sure I’ll have more and maybe some contributions in the form of real code.

In Django it is very straightforward to add extra commands to the and scripts.  The Django documentation describes the process but gives no examples.  I like examples, so I’m writing this for others.

In the past I’d add these lines to make stand-alone scripts that used the Django libraries:

from django.conf import settings
from import setup_environ
setup_environ(settings)

That method works well, but I find it cleaner to add commands to the manage and django-admin scripts instead of having several stand-alone scripts.  The desired effect is to be able to run

$ ./ your_command_name

instead of running python on a stand-alone script.  Big difference?  No…  But I think on a large project it adds organization, and it’s something of a standard in the event that others will be using your code.

In the app which the command is used for, make the following directory structure:

MY-APP-DIR/
    management/
        commands/
Then in your command file ( paste the following code, putting the functions to run in the handle_noargs method.

from import NoArgsCommand

class Command(NoArgsCommand):
    help = "Describe the Command Here"

    def handle_noargs(self, **options):
        # < your code here >
        pass

That’s it!  You can also add more complicated functionality with command options.  Take a look at the commands listed in django/core/management/commands for examples.
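Management commands of this era build their option handling on Python’s optparse module, so the declaration side is easy to experiment with on its own.  A hedged sketch (the `--noinput` flag here is just an illustration, not a command from the source):

```python
from optparse import OptionParser, make_option

# One hypothetical option such a command might expose
option_list = [
    make_option('--noinput', action='store_true', dest='noinput', default=False,
                help='Do not prompt the user for input'),
]

# Django builds a parser much like this from a Command's option_list
parser = OptionParser(option_list=option_list)
options, args = parser.parse_args(['--noinput', 'myapp'])
```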