Heroku is great! However…

2012/03/09 § 10 Comments

I like Heroku. We’ve recently made our first deployment on it, and all things considered, I don’t think we could’ve made a better platform choice at this time. Deploying to Heroku taught me quite a few things, easily the most important of which was learning about the 12 Factor App methodology. If you haven’t heard of it[1], it’s basically a way to design web applications such that deploying them will be less of a pain in the arse. The methodology’s manifesto was written by @hirodusk, Heroku’s CTO, who probably knows something about cloud deployments. The best thing about it is that it’s elegantly platform/language neutral – so theoretically you don’t need to deploy a 12 Factor App to Heroku: you could deploy to any 12 Factor App compatible PaaS (had there been any), or even roll your own 12 Factor App platform and deploy there. Then again, why not focus on your core business and pay Heroku to use their awesome 12 Factor App platform? Well, ugh. Like I said, if you’re thinking about deploying to Heroku then you probably should, but I’d like to warn you that I don’t think Heroku is an excellent implementation of a 12 Factor App platform.

It’s a good implementation, yes, sometimes maybe just good enough. It saddens me to say it, but I feel there’s significant disparity between the force and clarity I see in the 12 Factor App theory and what I perceive as murky, surprising or faulty implementation details in the Heroku practice. I will try to illustrate exactly what I mean in this post, after these public service announcements: (1) I’ll be using 12 Factor & Heroku jargon freely, so you’d really gain the most from this if you’re familiar with both; (2) the Heroku deployment I did is in Python, so the only stack I know is Celadon Cedar; (3) some of the pain points I mention are related and may be fixed by a single Heroku change, but they still hurt in different ways; and finally, (4) I’m not looking for cheap shots: I know Heroku’s been grilled for their recent outages, but if you dig in my Twitter feed you’ll see proof that this post has been in the making for a while now, and has nothing to do with this or that outage (maybe a topic in itself, but not now).

Handling of static assets is a broken mess

Other people have written enough about strategies to deal with the fact that Cedar has no ‘official’ way to serve static files as Bamboo had. I can’t say I’m happy with Heroku’s own documentation about this important issue – for example, the introduction to Django document conveniently sidesteps the issue altogether. But of all the various solutions out there, I didn’t find one that alleviates the much uglier (from a 12 Factor App perspective) underlying issue. See, your statics will either reside in your slug or elsewhere. If you plan on putting them in the slug, you can either compile[2] them during slug creation (with a custom buildpack, what else), or locally on your computer and push them. Slug creation is at best an annoying environment to work with and debug in, if not for anything else (and there is a lot of else), then because of the lack of polyglotism and the prevalence of multi-language asset management tools (also, good luck building Flash/Silverlight/other necessary evils during slug creation).

So you opt to build locally, ignoring the alarm bells as you shatter factors I, V and X (at least). How will you get the built statics to Heroku? You’ll commit and push build products to your VCS, right <puke/>? And who guarantees two deployments will be identical, since you’re developing on Linux and your coworker on OSX? This goes on. The ramifications of serving your statics from an external storage service are the same, but now you also need to get your files to the storage service. But how? You aren’t supposed to have your S3 secrets on your dev machine, and Heroku’s support for configuration-during-slug-creation has serious caveats and a frightening warning attached to it. This is a serious wart, and I don’t think writing documentation will solve it (though it would be better than nothing!), because I don’t see how this can be solved with the building blocks Heroku offers today (the buildpack mechanism as it is, git transport, etc). A nontrivial change is needed for truly awesome 12 Factor static support. I don’t always expect much from my PaaS, but when I do, it’s because I pay five cents per dyno hour.

There’s no buffering proxy in Cedar

This one isn’t even funny. If you’re not sure what a buffering reverse proxy is or why you need one if your server uses sync workers, you must read this. Summarizing for the impatient: sync workers are resource hogs, so you never want them idling about waiting on (possibly slow) clients. A common pattern is to have a cheap async “thing” buffering between your sync workers and the wild Internet. As far as “async things” go, nginx is a great choice, and indeed Heroku’s previous stack, Bamboo, used it. However, in Cedar it disappeared, with very serious ramifications. Don’t get me wrong, I’m not too happy with Bamboo’s behaviour either: it’s very possible an app wouldn’t want buffering; sometimes you want to read the request as it comes along (for example, for various Comet techniques), and you’re designed to handle it (mostly by making the app itself your “async thing”). Bottom line, a one-HTTP-layer-fits-all treatment is at best an inconvenience, and possibly something much worse.

Indeed, you could solve this by simply using async workers, but one does not “simply” use async workers (there are serious ramifications which are out of scope for now). Choosing your threading model based on a missing feature in your platform sucks, not to mention if you already have a working and field-proven codebase you’re migrating to this platform. I guess the best approach to solve this would be to have a Routefile at the root of the project describing the routes to reach your app: this URL over buffered HTTP, this URL over raw HTTP, maybe even that port over plain old raw UDP. But even if you don’t have this feature, I’d expect warnings, explanations and best practices to be sprayed all over the documentation. Alas, the documentation doesn’t help even one bit (on the contrary sometimes; I judge this FAQ answer to be somewhere between misleading and plain wrong). To top it all, it seems that activating Heroku’s SSL support suddenly makes you go through something which smells to me like a remnant from Bamboo, with the sudden appearance of nginx and buffered requests. Again, all in utter silence from the documentation.

Polyglot programming’s great, but not on Heroku

I hear the term polyglot programming in relation to Heroku all the time. “A true polyglot platform”, they say. I don’t get these claims. Heroku isn’t polyglot. The buildpack mechanism (a good idea in itself) is built in such a way that each buildpack is given a chance to detect whether your push matches its language, but the first matching buildpack is used to create your app’s slug, ignoring all other buildpacks and potential languages. So your app really has to be in one language. The workarounds are horrendous; the least horrific one is done by the official Ruby buildpack, which “vendors” node.js into the slug if a certain gem (execjs) is required in the Gemfile. Of course, it’s just a package-specific hack, it inflates the slug needlessly, and since it’s just a hack, similar support isn’t built into the Python buildpack for the PyExecJS Python package. Admittedly, for the latter point, the brilliance of custom buildpacks eases the pain, but it sure as heck doesn’t cure it.
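
To make the single-language limitation concrete, here’s a sketch of a buildpack’s bin/detect hook; real detect hooks are usually shell scripts, but any executable works, so I’m sketching in Python (the requirements.txt heuristic is how the official Python buildpack detects an app):

#!/usr/bin/env python
# bin/detect: Heroku passes the build directory and runs each buildpack's
# detect hook in order; the first one to exit 0 builds the entire slug,
# which is exactly where the one-language-per-app limitation comes from.
import os
import sys

build_dir = sys.argv[1]
if os.path.exists(os.path.join(build_dir, 'requirements.txt')):
    print('Python')  # the app type Heroku reports for this push
    sys.exit(0)      # "this push is mine"
sys.exit(1)          # "not mine, move on to the next buildpack"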

A thousand cuts: the smaller things

  • I think separating you from your application (i.e., you’re unable to ssh into a running slug) is brilliant and leads to better design overall. But when I worked on XIV’s grid storage product, I fought to keep controlled ssh access to the discrete storage modules open (for us developers, not for customers), and I think Heroku should provide the same. Show me a big warning, let me click through whatever, but then let me bloody attach my strace/pdb/etc to my running instance.
  • I applaud factor X, dev/prod parity, but Heroku and especially its add-ons force me to integrate with backing services with unknown and possibly changing configuration. I think a Vagrant box providing a Cedar-like dev environment with some add-ons could help here. This bit me once when I misused the parsed results of REDISTOGO_URL and ignored the password; see the sketch right after this list (it was a bit more complex than that, but I see you yawning). A trivial bug really, but it was time-consuming to diagnose, because it worked fine against my password-less development Redis (which now does require a password, thank you very much).
  • Factor III, configuration, says you should store configuration in the environment. But Heroku offers nothing to help you set this environment up during development, and share it across a team of five developers. Sure it’s a small thing to arrange on your own, but together with other small cuts regarding local development, I feel Heroku does too little to accommodate multi-developer teams, period (update: since I first wrote this, @kennethreitz wrote autoenv, which is a good first step in the right direction).
  • Putting everything (your language’s runtime, “vendored” binaries, possibly static assets) in your slug makes it easy to go past the 25MB warning mark, and forces you to be tight-fisted about slug size. I understand the cause of this limitation, but if every dyno had more stuff built into it, things could be easier and I wouldn’t be counting mere megabytes of disk space in 2012 (just node, Python, Ruby, some C libraries… don’t think I’m asking for much).
  • I’ve been pulling my hair out wondering why my video processing worker dynos are getting R14 “memory quota exceeded” warnings, and only after I instrumented my code to take memory snapshots (ps -ef + cat /proc/meminfo at relevant timings) did I come to suspect that the buffer cache is counted against me in my memory quota (why isn’t this documented? why hasn’t Heroku’s support resolved my ticket asking for explanations regarding my ps -ef outputs for a few weeks now?). We get false R14 positives all the time.
  • git is a terrific DVCS, but a poor deployment transport – see for example above how it complicates things with deploying compiled static files, or how it makes pushing private submodules impossible (at least now there’s some labs support for submodules).
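
To illustrate the REDISTOGO_URL bullet above, this is roughly the shape of the code that bit me (a sketch, not our actual code, assuming the redis-py client and Python 2’s urlparse):

import os
import urlparse

import redis

url = urlparse.urlparse(os.environ['REDISTOGO_URL'])
connection = redis.Redis(
    host=url.hostname,
    port=url.port,
    password=url.password,  # the parameter I originally forgot to pass
)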

There’s more, of course, but this post is already insanely long and I feel I got the main things off my chest. I humbly argue that some of these issues actually have a very material impact on Heroku app development/deployment while being not so hard for Heroku to fix, or at least make much more bearable or even just thoroughly documented. You could say many (all?) of these things aren’t so bad if you’re a single developer working on a very early prototype and expecting little load, but I think these issues quickly become much worse in a complex professional project with multiple developers and even modest-to-medium load, which is who I imagine Heroku’s prime target audience is, rather than some Jekyll blog that gets 50 hits per day.

I’d like to close this post by repeating that I like Heroku, and at our current size, I definitely like it more than the alternatives (because deployment is such a bitch). They have many awesome things going for them that I didn’t list here; others wrote enough favorable reviews as it is. But Heroku still has a lot of hard work to do to get their implementation of a 12 Factor App platform on par with the elegance of the theory itself. I get the feeling the “D” stack beyond Cedar is right around the corner; I’ll be so happy to use a better Heroku 12 Factor stack. Or anyone else’s 12 Factor stack, for that matter. I’m a disloyal bastard this way. Challenge accepted, anyone?


[1] If you’re not familiar with The 12 Factor App, I wholeheartedly recommend you go read it right now; it’s an important read. If you’ve got your act together with regard to web app deployment you probably do many/all of these things instinctively, but still, the 12 Factor App puts things with a terseness and clarity I hadn’t seen before, and I feel insisting on it (almost to the letter) really helped me design a better app.

[2] The way I see it, the toughest thing about static assets is the fact that most of them became sneakily dynamic without us noticing. Sprites, CoffeeScript, LESS/scss, heck, even .swf if you have to use them (webcams, anyone?) – they’re all compiled assets with a source form and a built form (sometimes more than one; minified vs. unminified CoffeeScript output, for instance). I think this complicates the separation of build time vs. run time and the minimization of dev/prod divergence. I wrote more about this previously.


RESTfully atomically incrementing a counter using HTTP PATCH

2012/02/08 § Leave a comment

So today I ran into the question of incrementing a counter in a RESTful manner, and wasn’t sure how to go about doing it. Googling around a bit didn’t turn up a satisfactory answer, though I did find that @idangazit had asked the same question on Stack Overflow; alas, the question was answered by what I humbly felt was an inadequate answer.

Idan had “PUT vs. POST” in his question, but quoting the answer I just added to that question (#selfplagiarism!), I believe PATCH is the answer, as RFC 2068 says very well:

The PATCH method is similar to PUT except that the entity contains a list of differences between the original version of the resource identified by the Request-URI and the desired content of the resource after the PATCH action has been applied. The list of differences is in a format defined by the media type of the entity (e.g., “application/diff”) and MUST include sufficient information to allow the server to recreate the changes necessary to convert the original version of the resource to the desired version.

So, for example, to update profile 123’s view count, I would do (using requests, what else?):

import requests

requests.patch(
    'http://localhost:8000/profiles/123',
    'views + 1\n',
    headers={"Content-Type": "application/x-counters"}
)

Which would emit something like:

PATCH /profiles/123 HTTP/1.1
Host: localhost:8000
Content-Length: 10
Content-Type: application/x-counters
Accept-Encoding: identity, deflate, compress, gzip
Accept: */*
User-Agent: python-requests/0.10.0

views + 1

Where the x-counters media type (which I just made up) is made of multiple lines of field operator scalar tuples. views + 1, views = 500, views - 1 or views + 3 are all valid syntactically (but some may be forbidden semantically). I can understand some frowning upon making up yet another media type, but I think this approach matches the intention of the RFC quite well; it’s extremely simple, and if the backend is implemented correctly, it’s atomically correct.
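
To show what I mean by atomically correct, here’s a sketch of how a Django backend might handle this media type (the Profile model and URL wiring are hypothetical; the F expression makes the database itself perform the increment, so concurrent PATCHes don’t lose updates):

from django.db.models import F
from django.http import HttpResponse, HttpResponseBadRequest

from profiles.models import Profile  # hypothetical model with a 'views' field

ALLOWED_FIELDS = set(['views'])  # fields clients may modify

def patch_profile(request, profile_id):
    updates = {}
    for line in request.body.decode('ascii').splitlines():
        parts = line.split()
        if len(parts) != 3:
            return HttpResponseBadRequest('malformed line')
        field, operator, operand = parts
        if field not in ALLOWED_FIELDS or operator not in ('+', '-'):
            return HttpResponseBadRequest('semantically forbidden')
        delta = int(operand) if operator == '+' else -int(operand)
        updates[field] = F(field) + delta
    # a single UPDATE statement; the database applies the increments atomically
    Profile.objects.filter(pk=profile_id).update(**updates)
    return HttpResponse(status=204)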

Suggestions for another approach?

EDIT: I’ve had a long discussion with a friend who disliked the use of a non-standard media type. Perhaps something like this is better, though I’m still not entirely convinced:

import requests

requests.patch(
    'http://localhost:8000/profiles/123',
    '[{"field": "views", "operator": "+", "operand": 1}]',
    headers={"Content-Type": "application/json"}
)

I’m not sure which is the bigger crime – using a non-standard media type, which, in the words of the RFC, is “discouraged”, or using a standard generic serialization format as the media type, which doesn’t say much about the scheme you’d like to use within it. Both are better than anything else I can think of.

p.s.: escaping spaces in field names is left as an exercise to the reader; I suggest application/x-www-form-urlencoded or simply using sane field names, ffs.

I wish someone wrote django-static-upstream… maybe even… me!

2011/12/28 § 7 Comments

I used to think serving static files (aka static assets) is really easy: configure nginx to serve a directory and you’re done. But things quickly became more complicated as issues like asset compilation, CDNs/scalability, file-specific custom headers, deployment complexity and development/production parity reared their ugly heads. Judging by the huge number of different asset management packages on djangopackages, it seems like I’m not the only one who ran into this problem and felt not-entirely-satisfied with all the other solutions out there. Things actually took a turn for the worse ever since I started drinking the Heroku Kool-Aid, and for the life of me I just can’t make sense of their best practice with regard to serving static files. The heroku-django quickstart conveniently sidesteps the issue of statics, and while there are a few support articles that lightly touch on neighboring subjects, nothing I found was spot-on and hands-on (this is an exception to the rule; the Heroku Kool-Aid is otherwise very tasty and easy to drink, to my taste so far). Ugh, why can’t there be a silver bullet to solve all this? Let me tell you about my “wishlist” for the best static serving method evar.

First, I want to be able to take any checkout from my VCS, maybe run an easy bootstrap function, and get a working development environment with statics served. In production, I want to serve my statics from a CDN with aggressive caching, so I need some versioning system, but I’d like to minimize deployment complexity and I want fine-grained cache invalidation of my statics. I want my statics to be served the same way (same headers, same versioning mechanism) in development and production without having to update two different locations (i.e., my S3 syncing script and my development nginx configuration). I also don’t want to have to “garbage-collect” my old statics from S3 every so and so days. Like I said, I’d like some of my statics to be served with some bells and whistles, like various custom headers (Access-Control-Allow-Origin, anyone?) or gzip compression. Speaking of bells and whistles, how about a whole marching band, since I want to serve statics that require compilation (minify/concatenate/compile scss+coffee/spritalize/etc), but I don’t want to have to rerun a ‘build process’ every time I touch a CoffeeScript file in development. Finally, and this isn’t something I’m very adamant about, but I prefer my statics to be served from a different subdomain (static, not www): I think it’s cleaner, I don’t need my clients’ cookies with every static request and it allows for some tricks (like using a CDN with support for a custom origin).

And nothing I found does all that, definitely not easily. In my dream, there’s a package called django-static-upstream, which is designed to provide a holistic approach to all these issues. I’m thinking:

  • a pure Python/django static HTTP server (probably just django.views.static with support for the bells and whistles mentioned above), and yeah, I think I should bloody use this server as a backend to serve my statics in production
  • a “vhost” middleware that will replace request.urlconf based on the Host: header; if the host starts with some prefix (say, static), the request will be served by the webserver above
  • a couple of template tags like {% static "images/logo.png" %} that will create content-hashed links to the static webserver (i.e., //static.example.com/829dd67168a3/images/logo.png); the static server will know to ignore the content-hash bit (see the sketch right after this list)
  • this isn’t really up to the package, but it should be built to support easily setting a custom-origin-supporting CDN (like CloudFront) as the origin URL; this is both to serve the statics from nearby CDN edges and (maybe more importantly) to have the CDN act as a caching reverse proxy so the dynamic server will be fairly idle
  • support for compiling some static types on the fly (CoffeeScript, scss, etc) and returning the rendered result; the result may have to be cached using django’s cache (keyed by a content hash), but this is more to speed up multi-browser development where there is no CDN to serve as a reverse caching proxy than because I worry about production
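
To illustrate the template tag bullet, here’s roughly what the content-hashing could look like (STATIC_HOST is a hypothetical setting, and a real implementation would cache digests instead of hashing on every render):

import hashlib
import os

from django import template
from django.conf import settings

register = template.Library()

@register.simple_tag
def static(path):
    # hash the file's contents so the URL changes whenever the file does,
    # making aggressive CDN caching safe
    with open(os.path.join(settings.STATIC_ROOT, path), 'rb') as f:
        digest = hashlib.sha1(f.read()).hexdigest()[:12]
    # e.g. //static.example.com/829dd67168a3/images/logo.png
    return '//%s/%s/%s' % (settings.STATIC_HOST, digest, path)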

So now I’m thinking maybe I should write something like this. There are two reasons I’m blogging about a package before I even wrote it: first, I wanted to flesh out in my mind what it is that I want from it; second, and more importantly, I’d like to tread carefully (1) before I have the hubris to start yet another assets-related django package, (2) before I start serving static content with a dynamic language (what am I, mad?) and (3) because compiling static assets at runtime violates the fifth factor in Adam Wiggins’ twelve-factor app manifesto (what, you didn’t read it yet? what’s wrong with you?). These are quite a few warning signs to cross, and I’d like to get some feedback before I go there. But I honestly think I’ll be happier if I had a package to do all this; I don’t think writing it should be so hard, and I hope you’d be happy using it, too.

The ball’s in your court, comment away.

Walking Python objects recursively

2011/12/11 § 6 Comments

Here’s a small function that walks over any* Python object and yields the objects contained within (if any) along with the path to reach them. I wrote it and am using it to validate a deserialized datastructure, but you can probably use it for many things. In fact, I’m rather surprised I didn’t find something like this on the web already, and perhaps it should go in itertools.

Edit: Since the original post I added infinite recursion protection following Eli and Greg’s good advice, added Python 3 compatibility and did some refactoring (which meant I had to add proper unit tests). You will always be able to get the latest version here, on ActiveState’s Python Cookbook (at least until it makes its way into the stdlib, fingers crossed…).

from collections import Mapping, Set, Sequence

# dual python 2/3 compatability, inspired by the "six" library
string_types = (str, unicode) if str is bytes else (str, bytes)
iteritems = lambda mapping: getattr(mapping, 'iteritems', mapping.items)()

def objwalk(obj, path=(), memo=None):
    # memo holds the id()s of containers already on the current path,
    # protecting against infinite recursion in self-referencing structures
    if memo is None:
        memo = set()
    iterator = None
    if isinstance(obj, Mapping):
        iterator = iteritems
    elif isinstance(obj, (Sequence, Set)) and not isinstance(obj, string_types):
        iterator = enumerate
    if iterator:
        if id(obj) not in memo:
            memo.add(id(obj))
            for path_component, value in iterator(obj):
                for result in objwalk(value, path + (path_component,), memo):
                    yield result
            memo.remove(id(obj))
    else:
        # obj is a leaf: not a container we know how to iterate
        yield path, obj

And here’s a little bit of sample usage:

>>> tuple(objwalk(True))
(((), True),)
>>> tuple(objwalk({}))
()
>>> tuple(objwalk([1,2,3]))
(((0,), 1), ((1,), 2), ((2,), 3))
>>> tuple(objwalk({"http": {"port": 80, "interface": "0.0.0.0"}}))
((('http', 'interface'), '0.0.0.0'), (('http', 'port'), 80))
>>> 

"any" is a strong word and Python is flexible language; I wrote this function to work with container objects that respect the ABCs in the collections module, which mostly cover the usual builtin types and their subclasses. If there’s something significant I missed, I’d be happy to hear about it.

enqueue: CLI utility to queue command execution

2011/11/19 § 3 Comments

Update: As you can see in the comments below, and as I feared, it turned out that Lluís Batlle i Rossell already implemented something much like Enqueue, only better in many regards. I doubt I’ll keep maintaining enqueue, there’s no reason to. Oh well, it was a nice afternoon project.

Something that always bugged me about my shell workflow is the problem of queueing commands to run one after the other, while adding commands to the queue as the previous ones are being executed.

Take, for example, a simple use case: we want to move three large files from diskA to diskB. The problem is that you don’t know the names of the files in advance, perhaps because you’re renaming them manually and it takes you time to type, or because you’re hunting for them in the directory tree, or whatever. Here are some solutions to this:

  • Start one command in the background, then do something like fg ; second-command. Then prepare the third command, but only run it after you saw the second finished. Meh. Or,
  • Just let the jobs run concurrently in the background as you run them (using the & control operator). But since each command is maxing out a resource (CPU, disk, etc), this becomes woefully inefficient really quickly. Or,
  • Use a mad concoction of Ctrl-Z, fg/bg, wait n or (if you left the shell and want to add something to the queue) use a madder concoction of while pgrep -f 'mv /disk/A/foo' > /dev/null; do ... (I’ll leave it as an exercise to the frustrated reader to finish that little one liner). But then again, you could also spend that time getting a paper-cut at the edge of your nostril, and it would probably be just as much fun and maybe even less error prone. Or,
  • Start a shell process reading from a named pipe (mkfifo(1)), and write the commands to the named pipe (credit to my friend and colleague m0she for this sneaky idea). In practice, I found it unwieldy at best, and impossible to extend with bells-and-whistle features if you need them, first and foremost, easily listing the queue and your position in it.

I reckon you could think of a few more ways, but I doubt (and hope! :) any would be more convenient than simply using Enqueue, a simple and hopefully lightweight Python/twisted command line queuer (written today by yours truly). Usage looks a bit like so:

$ alias nq=enqueue
$ nq add mv /disk1/file1 /disk2/file1
$ nq add mv /disk1/file2 /disk2/file2
$ nq add beep
$ nq list
* mv /disk1/file1 /disk2/file1
  mv /disk1/file2 /disk2/file2
  beep
$

Nice and easy. Enqueue is still a bit rough around the edges and not very feature-rich, but it does the job for me and I hope you’ll like it too. Queues are managed by a twisted daemon that talks to the CLI client over UNIX domain sockets, and the whole thing fits in about 300 lines of Python. Feel free to open issues/send pull requests on GitHub if you find bugs or want to suggest something; I’ll try to keep up. Promise.
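
For the curious, the client half of that arrangement is conceptually tiny; this isn’t Enqueue’s actual code, just a sketch of shipping one command to the daemon over a UNIX domain socket (the socket path and wire format are made up):

import json
import socket
import sys

SOCKET_PATH = '/tmp/enqueue.sock'  # hypothetical rendezvous point

def enqueue(argv):
    # connect to the long-running daemon and hand it one command to queue
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(SOCKET_PATH)
    client.sendall(json.dumps(argv).encode('utf-8'))
    client.close()

if __name__ == '__main__':
    enqueue(sys.argv[1:])  # e.g.: nq-client add mv /disk1/file1 /disk2/file1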


p.s.: Why is it that every time I dabble in Python packaging I end up (a) horribly frustrated and (b) feeling the result is awfully inadequate? Yes, I know I could probably package enqueue better, I know packaging isn’t Python’s strongest side and I know the future is better than the present, but the present sucks and I just had to say this. Yeah, I also walk around in the summertime sayin’ “how about this heat”.

pv: the pipe swiss army knife

2011/11/05 § Leave a comment

When using UNIX, every now and then you run into a relatively unknown command line application which, once you master it, becomes part of your “first class” commands along with cut or tr. You wince every time you work on a computer that doesn’t have it (and promptly wget-configure-make-install it) and you’re amazed your colleagues never heard of it. I often feel pv is such a command for me. Really, this command, much like netcat, should have been written in Berkeley sometime circa 1985 and be in every /usr/bin today. Alas, somehow Hobbit only wrote netcat in 1996, and it took a long while for it to reach /usr/bin ubiquity. Similarly, Andrew Wood only wrote pv in 2002, and I hope this post will convince you to place it in all your /usr/local/bins today and convince distribution makers to promote it to the status of a standard package as soon as possible.

The basic premise of pv is simple – it’s a program that copies stdin to stdout, while richly displaying progress using terminal graphics on stderr. If you use UNIX a lot and you never heard of pv before, I’m pretty sure the lightbulb is already lit above your head (if not, maybe pv isn’t for you after all, or maybe it would help if you took a look at this review of pv to see why it’s so great). pv has evolved rather nicely over the years, it’s been available from Ubuntu universe for a while now (why only universe? why??), and it has a slew of useful features, like rate limiting, ETA prediction for an expected amount of data, on-the-fly parameter change (increase/decrease rate limit without breaking the pipe!), multiple invocation support (measure speed in two different points of the pipe!) and so on.

If you’re using pv, I hope you may want to see some of the recipes I use it in; if you don’t, maybe they’ll whet your appetite (I’m using long options for pv and short options for everything else):

  1. The basics: copy /tmp/src/ to /tmp/dst/, with progress

     $ src=/tmp/src ; tar -cC "$src" . |
        pv --size $(du -sk "$src" | cut -f1)k |
        tar -xC /tmp/dst
      142MB 0:00:02 [43.4MB/s] [======>    ] 58% ETA 0:00:01
     $

     By the way, this works great if you add nc and compression; pv can even help you decide what level of compression to use to achieve the best throughput before the CPU becomes the bottleneck.

  2. Scale a bunch of images to a specific size, using multiple cores and with progress

     $ cd /tmp/src ; ls *.jpg |
        xargs -P 4 -I % -t convert -resize 1024 % /tmp/dst/% 2>&1 |
        pv --line-mode --size $(ls *.jpg | wc -l) > /dev/null
       96 0:00:16 [7.85/s] [===>       ] 36% ETA 0:00:28
     $

  3. Get a quick assessment of the traffic rate going through some interface

     $ sudo tcpdump -c 10000 -i eth1 -w - 2>/dev/null | pv > /dev/null
     35.4MB 0:00:07 [4.56MB/s] [ <=>          ]
     $

Nifty, eh? I find myself inserting pv in any pipe I expect to exist for more than a few moments. How do you use pv?

nginx+gzip module might silently corrupt data upon backend failure

2011/11/04 § 3 Comments

There are several elements that make absolutely certain the page you’re reading in your browser is an accurate representation of the resource the HTTP server meant to send you[1]. Disregarding caching for a minute, we have two elements making sure the representation you get is protected from errors. The first protecting element is, of course, TCP, making sure that if the server wrote two-hundred bytes in a particular order, either they’ll all arrive to your end (in order and without errors) or your TCP stack will realize something bad happened and give your user-agent (your browser) a chance to cope with the error. The need for the second protecting element is a bit more sneaky: TCP will guarantee everything the server wrote will arrive, i.e., bytes for which the server called write(2) or equivalent will arrive (or you’ll know something went wrong). But what about bytes the server should have written but didn’t write at all – for example, because some component on the server’s side failed?

The original HTTP (HTTP/0.9, circa 1991) didn’t cope with this situation at all. The signal to the client that the server finished talking was to disconnect the TCP session, which, from the client’s side, is a vague signal. Did the TCP server disconnect because it finished or because it ran into trouble (software fault, sysadmin action, kernel behaviour due to memory pressure or even a bug, etc)? Thankfully, current HTTP kicks in to complement TCP, allowing the server to do one of several things in order to make sure you’ll at least know you didn’t receive the whole picture. By far the two most common things the server will do are to specify a Content-Length in the response’s headers or to use a Transfer-Encoding, most probably chunked transfer encoding.

Content length is simple to grasp. The server wishes to send 200 bytes; it explicitly says “I will send 200 bytes” in the response’s header. If the user-agent didn’t receive 200 bytes of response, it knows something went wrong. Chunked transfer encoding is only slightly more complex – the server will send the response in chunks, each chunk prefixed by its length. The end of the document is marked by a zero-length chunk. So if the user-agent saw a chunk cut in the middle, or didn’t receive a zero-length chunk, it also knows something went wrong and has a chance to decide what to do about it. For example, when faced with an incorrect content length, Chrome displays an ERR_CONNECTION_CLOSED error, whereas Firefox would display the portion of the page it did receive. Different behaviour, yes, but at least both user-agents in this example had a chance to realize the response they received is partial.
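
Here’s a toy sketch of the bookkeeping the Content-Length header makes possible; real user-agents are implemented nothing like this, but the check is the same:

import socket

def fetch(host, port, path):
    # speak just enough HTTP/1.0 to get a response and hang up
    sock = socket.create_connection((host, port))
    sock.sendall(('GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, host)).encode('ascii'))
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:  # server disconnected: the HTTP/0.9-era "end" signal
            break
        chunks.append(data)
    headers, _, body = b''.join(chunks).partition(b'\r\n\r\n')
    for line in headers.split(b'\r\n'):
        if line.lower().startswith(b'content-length:'):
            expected = int(line.split(b':', 1)[1].decode('ascii'))
            if len(body) != expected:
                # fewer bytes than promised: the response is partial
                raise IOError('got %d bytes, expected %d' % (len(body), expected))
    return body

Being able to detect a partial response is really, really important, you know why? I’ll tell you why.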

Enter caching. HTTP caching is a non-trivial matter with many unexpected gotchas and pitfalls, and I can’t cover it all here (why the complexity? I think it’s because all caching is an intentional form of data/state repetition, and repetition is something that in my experience humans often have difficulty reasoning about). By far the best document I know about HTTP caching is this splendid guide, but if you’re in a hurry or impatient, let me summarize the points interesting for this particular post. First, caches might exist in many places, some of them surprising, some of them slightly broken or at least very aggressive (ISP transparent caches, mutter mutter cough cough). Second, among many other things, HTTP caching lets a server give a client a token together with a resource, telling the client “next time you request this resource, tell me you have this token; maybe I’ll just tell you that the representation you got with this token is still fresh, without transferring it all over again”. This token is called an ETag, and the response that says “just use what you have in your cache” is called HTTP 304 NOT MODIFIED.

How is this relevant to HTTP responses cut in the middle? Well, if servers didn’t have a way of telling the user-agent how long the document is, and if the response was cut in the middle due to a server fault as described above, the user-agent/sneaky-caching-proxy might cache incorrect responses. If the server also sends an ETag along with the response, the caching entity will store this ETag along with the invalid cached representation, and even when it’s time to check the representation’s freshness with the origin server, the server will just take a look at the ETag, say “yep, this is fine”, tell the cache to keep using the bad representation and <sinister>never ever let it recover</sinister>. If this happens on a large ISP’s transparent cache, easily tens of thousands of your users could be affected. If the affected resource is a common element in many of your pages with strict syntax checking, like a javascript resource, you’re kinda screwed. The only hope in such a condition is that the client, for some reason, will specify Cache-Control: no-cache in the request, and that caching entities along the path to the server will honour this request. Browsers like caching, so they won’t usually request no-cache, although AFAIK, recently Chrome started sending no-cache when the user explicitly requests a force-reload (Cmd-R on a Mac). Other browsers don’t fare as well, and I think that hoping one of your Chrome users will force-reload the bad resource in time to save the day is hardly a sturdy solution.

Bottom line is, it’s really important to know when a representation of a resource is broken. Which is why I was quite amazed to learn that my HTTP server of choice, nginx, doesn’t validate the Content-Length it receives from its upstreams and is simply unaware when the response it received from an upstream server is chopped off. If your response specifies a content length but closes the connection without delivering enough bytes, nginx will simply stall the request for a long time without closing the connection downstream, even though it has no hope of receiving additional data to push downstream. I tried this both with proxy_pass and uwsgi_pass, but I’m quite confident it’s true for other backends (fastcgi_pass, scgi_pass, etc). This is bad, but not as bad as the case where you want an nginx module to manipulate your content, removing the existing content length/transfer encoding and applying its own (the gzip module indeed does that). If a backend error occurs while content-length-oblivious nginx is altering the data, the content-altering module will apply what it applies to the bytes it received, add a new content-length/transfer-encoding, assuring everyone the response is OK, and entice user-agents or even proxies to enter the almost-never-recover bad cache scenario I described in the previous paragraph. Ouch!

The proper way to fix this, IMHO, is that nginx simply must start looking at the upstream’s content length (or transfer encoding, once nginx starts using chunked responses with its upstreams). Part of the reason I’m writing this post is that Maxim Dounin, venerable nginx committer and an OK chap overall, told me he doesn’t consider this a top priority at the moment, but I humbly disagree with his assessment of how serious the issue is. Until such a time as nginx is fixed about this, I think you must disable all content-manipulating nginx modules and instead handle all message-length-affecting work in your upstream (compression, addition, etc). This is what I opted to do with my Django-based web app: I replaced nginx’s gzip module with Django’s GZipMiddleware. It’s a terrible shame though. My app is doing nginx’s job for it, probably in a lesser fashion than nginx could, it violates a “must not” clause in Python’s WSGI PEP 333, and I have empirical proof that Tim Berners-Lee chokes a kitten every time you do it.
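
For reference, the swap itself is tiny; a sketch of the relevant settings.py bit, assuming you also turn gzip off in nginx so nothing compresses twice:

# settings.py (Django 1.3-era MIDDLEWARE_CLASSES)
MIDDLEWARE_CLASSES = (
    # first in the tuple, so it's the last to touch the response on the way out
    'django.middleware.gzip.GZipMiddleware',
    'django.middleware.common.CommonMiddleware',
    # ... the rest of your middleware ...
)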

But what’s the alternative? Risk invisibly cached corrupt data for an undetermined length of time? Ditch nginx, which I think is the best HTTP server on this planet despite this debacle? Nah. Both are unacceptable.


[1] This post assumes convenient values of “absolutely certain”; also, everything related to security/content tampering is out of scope in this post. I’m talking about possibly misbehaving but certainly well-meaning components.

Introducing django-ajaxerrors

2011/01/31 § Leave a comment

Today I finished a small Django middleware that facilitates development of AJAX applications. I reckon most if not all Django developers know about Django’s useful debug-mode unhandled error page (you know, the one that looks like this). However, when an AJAX request reaches a faulty view, the error page will not be rendered in your browser but will instead be received by your AJAX error handler (assuming you even had one), which is almost always not what you want. This forces you to find some other way to reach your traceback information. For example, before I wrote this package, I used to regularly open Chrome’s developer tools, find the failed resource in the Resources tab, and then either read through the raw HTML (yuck) or copy and paste it to a file and double-click it (tedious).

As you can see, this bothered other people, too, but I couldn’t find a decent solution on the web. Thankfully, since the problem is really about ease of development and not very relevant in environments where DEBUG is false (you’d get the traceback via email anyway), and since I do most of my development work locally (and I suspect so do many other Django developers), I figured the solution could take advantage of the server being a full-fledged desktop with a modern browser and a GUI. Enter ajaxerrors.middleware.ShowAJAXErrors.

This little middleware intercepts all unhandled view exceptions, pickles the technical error page and uses Python’s webbrowser module to direct a new browser tab at a special URL that will serve (and delete) the previously stored page. All this is only triggered if DEBUG and request.is_ajax() are true, so pretty much everything you’re used to in your development flow should stay the same. Sweet. ShowAJAXErrors can also be configured to run arbitrary user-defined handlers with the error information, and even comes with a handler for growlnotify and a handler that replies to all failed AJAX calls with an HTTP OK result containing an empty JSON object (I use it for development, YMMV).
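
If you’re curious how little magic is involved, here’s a condensed sketch of the core idea; it’s not the package’s actual code (the real middleware pickles the error page and serves it from a special URL rather than writing a temporary file):

import sys
import tempfile
import webbrowser

from django.conf import settings
from django.views.debug import technical_500_response

class ShowAJAXErrorsSketch(object):
    def process_exception(self, request, exception):
        if not (settings.DEBUG and request.is_ajax()):
            return None  # let Django's regular error handling take over
        # render the familiar technical 500 page for this exception...
        page = technical_500_response(request, *sys.exc_info())
        # ...park it in a file and point a new browser tab at it
        with tempfile.NamedTemporaryFile(suffix='.html', delete=False) as f:
            f.write(page.content)
        webbrowser.open('file://' + f.name)
        return None  # the AJAX call itself still receives the error response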

django-ajaxerrors is on PyPI, so you can install it with pip or easy_install with ease. You can clone/fork the code (and read the more detailed readme) here. Also, if you’d like to see what django-ajaxerrors is like without any special effort, you can download a simple AJAX app I’ve written for the purpose of developing django-ajaxerrors itself. The app has django-ajaxerrors contained within it, so this is pretty much all you need to see django-ajaxerrors in action about 1 minute from now:

$ virtualenv -q demo
$ pip install -q -E demo django
$ cd demo
$ source bin/activate
(demo)$ curl -s -o - http://cloud.github.com/downloads/yaniv-aknin/django-ajaxerrors/ajaxerrors-demo.tar.bz2 | tar jx
(demo)$ cd ajaxerrors-demo/
(demo)$ python manage.py runserver
# demoserver running...

Now point your browser at the development server, and play with the calculator a bit. Hint: the calculator is not protected against division by zero. :)

I hope you like django-ajaxerrors, it’s MIT licensed, so do what you will with it. Let me know if you need assistance or ran into a snag, or, even better, if you want to contribute something to it.

Something’s horribly wrong with Dell (and HP)

2010/12/09 § 4 Comments

Maybe it’s not the most interesting content I’ve ever published, but I’d like to share something with my readers, secretly hoping that maybe some Dell or HP employee would ever get to see this. My aunt, who lives in far-away London, asked me for help choosing her new computer. Her requirements from the new computer are dead simple: it should have dual screen support (she’s a translator and prefers the original and translated text open side-by-side) and preferably the chassis should have something purple on it. That’s it. I reckon if she wasn’t in London I’d pop by one of the small computer shops on the street near my home and pick up a brandless computer to fit her needs. I reckon I could do fine with the equivalent of something around 250GBP.

Since she’s far away and since my mom has a surprisingly long-lived Dell OptiPlex GX260 (Pentium 4!) and since I have a friend who’s been badgering me for years that bitsie computers are bollocks and that going with a brand, preferably Dell (in his opinion), is the only sensible option, I asked my aunt if she’d like to shell out the extra money for a brand name and she agreed. So I went to Dell UK’s stupid website, which keeps shoving deals I don’t want in my face while not letting me understand really what is it that I’m going to buy and for what price. Admittedly, I haven’t really bought a computer I cared much about in years, but still I thought that I knew my way around computers, and for the life of me I couldn’t be certain what’s the cheapest Dell to which you can connect two monitors, and that’s all the dude ever wanted anyway.

I clicked the banner which was flying on the screen (you think I’m kidding, but there was a banner and it was flying) which read “Chat with a sales representative now”. As I expected, I was taken to a chat with some poor sap who seemed intent on failing the Turing test. Honestly, I think Dr. Sbaitso would have helped me better than this person did[1]. After specifying my extremely simple requirements and after receiving several links that didn’t work because the Dell website was built by faceless and nameless gnomes who obviously never heard of beautiful URLs, I literally had to force the sales rep to give me just the name of the model they recommended and Google it. Namely, it turns out I was looking for a Dell Inspiron 560 MT (model number: D005619, if the link won’t work for you guys either), priced at a whopping 500GBP. The computer was an obvious overkill for my needs, and still I couldn’t find a proper specification sheet saying simply that it had dual monitor support. I didn’t feel like I could trust the sales rep anyway; I felt our interests were horribly misaligned and he really couldn’t care less. Every other sentence the sales rep uttered was “I hope I helped you today sir, would you like to proceed with me to checkout?” or something to that effect.

So I tried figuring things out myself, going backwards: first choosing the cheapest Dell with an ATI GPU (a Radeon 3000), then searching about the Radeon 3000, and the AMD 760G chipset, and other (non-Dell) manufacturers that make boards with the 760G – all in order to find a picture of the board where I could see what I would be receiving, to no avail. There are many specs lying around the Internet and many boards, and some had both DVI/VGA and some didn’t, and I couldn’t come up with a simple, definite answer for my aunt, who’s a really kind person but would really rather not donate 500GBP to Dell needlessly. Amazingly, Dell thinks it makes sense to add a sentence like NOTE: Offerings may vary by region. For more information regarding the configuration of your computer, click Start -> Help and Support and select the option to view information about your computer. to a spec sheet. Good thing RFC 791 didn’t read [the Identification field is] an identifying value assigned by the sender to aid in assembling the fragments of a datagram, but it may vary by region and be something completely different, or may not. We’ll see, or else the friggin’ Internet probably wouldn’t have existed as we know it.

Visiting HP’s website yielded frighteningly similar results. I’m pretty sure I’d have told her to go Apple by now, but she’s far away and OSX is a whole new interface and I don’t have much time these days for support calls and she needs good Hebrew support (for the translations), which I’m afraid Apple lacks. Besides, Apple scares me, and that’s a reason for another post but not now. So I’m stuck. Ya hear that, fatso Dell and HP boneheaded executives? I’m stuck! There’s a guy here who literally grew up with computers and probably bought more than a hundred of them over the years, he wants to give you money to finance your unjustified bonuses, and he’s unable to, because you couldn’t keep your product lines reasonably small, your model names and submodel numbers sensible (and sub-sub-model numbers, and sub-sub-sub-model numbers, repeat ad stupidum), your specification sheets straight, your website functional and your sales representatives earning more than the equivalent of 10 pennies an hour! What the heck is wrong with you guys? How hard do you think it is to sell me a computer?

Screw this dude, let’s go bowling. But what will I tell my aunt?!


[1] <evil>Suddenly I’m thinking that were I tasked with covert development of software that would pass the Turing test, I’d probably either train it against these ‘sales reps’ or try to get it a job as such a sales rep. Hmm.</evil>

zsh and virtualenv

2010/10/14 § 8 Comments

A week ago or so I finally got off my arse and did the pragmatic programmer thing, setting aside those measly ten minutes to check out virtualenv (well, I also checked out buildout, but I won’t discuss it in this post). I knew pretty much what to expect, but I wanted to get my hands dirty with them so I could see what I assumed I’d been missing out on for so long (and indeed I promptly kicked myself for not doing it sooner, yada yada, you probably know the drill about well-known-must-know-techniques-and-tools-that-somehow-you-don’t-know). Much as I liked virtualenv, there were two things I didn’t like about environment activation in virtualenv. First, I found typing ‘source bin/activate’ (or similar) cumbersome; I wanted something short and snazzy that would work regardless of where inside the virtualenv I am, so long as I’m somewhere in it (it makes sense to me to say that I’m ‘in’ a virtualenv when my current working directory is somewhere under the virtualenv’s directory). Note that being “in” a virtualenv isn’t the same as activating it; you can change directory from virtualenv foo to virtualenv bar, and virtualenv foo will remain active. Indeed, this was the second problem I had: I kept forgetting to activate my virtualenv as I started using it, or to deactivate the old one as I switched from one to another.

zsh to the rescue. You may recall that I already mentioned the tinkering I’ve done to make it easier to remember my current DVCS branch. Briefly, I have a function called _rprompt_dvcs which is evaluated whenever zsh displays my prompt and if I’m in a git/Mercurial repository it sets my right prompt to the name of the current branch in blue/green. You may also recall that while I use git itself to tell me if I’m in a git repository at all and which branch I’m at (using git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'), I had to resort to a small C program (fast_hg_root) in order to decide whether I’m in a Mercurial repository or not and then I manually parse the branch with cat. As I said in the previous post about hg and prompt, I’m not into giving hg grief about speed vs. git, but when it comes to the prompt things are different.

With this background in mind, I was perfectly armed to solve my woes with virtualenv. First, I changed fast_hg_root to be slightly more generic and search for a user-specified “magic” filename upwards from the current working directory (I called the outcome walkup; it’s really simple and nothing to write home about…). For example, to mimic fast_hg_root with walkup, you’d run it like so: $ walkup .hg. Using $ walkup bin/activate to find my current virtualenv (if any at all), I could easily add the following function to my zsh environment:

act () {
        if [ -n "$1" ]
        then
                if [ ! -d "$1" ]
                then
                        echo "act: $1 no such directory"
                        return 1
                fi
                if [ ! -e "$1/bin/activate" ]
                then
                        echo "act: $1 is not a virtualenv"
                        return 1
                fi
                if which deactivate > /dev/null
                then
                        deactivate
                fi
                cd "$1"
                source bin/activate
        else
                virtualenv="$(walkup bin/activate)" 
                if [ $? -eq 1 ]
                then
                        echo "act: not in a virtualenv"
                        return 1
                fi
                source "$virtualenv"/bin/activate
        fi
}

Now I can type $ act anywhere I want in a virtualenv, and that virtualenv will become active; this saves figuring out the path to bin/activate and ending up typing something ugly like $ source ../../bin/activate. If you want something that can work for you without a special binary on your host, there’s also a pure-shell version of the same function below.

function act() {
    if [ -n "$1" ]; then
        if [ ! -d "$1" ]; then
            echo "act: $1 no such directory"
            return 1
        fi
        if [ ! -e "$1/bin/activate" ]; then
            echo "act: $1 is not a virtualenv"
            return 1
        fi

        if which deactivate > /dev/null; then
            deactivate
        fi
        cd "$1"
        source bin/activate
    else
        stored_dir="$(pwd)"
        while [ ! -f bin/activate ]; do
            if [ $(pwd) = / ]; then
                echo "act: not in a virtualenv"
                cd "$stored_dir"
                return 1
            fi
            cd ..
        done
        source bin/activate
        cd "$stored_dir"
    fi
}
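
In case you’re wondering about walkup itself, here’s a rough Python equivalent of what the little C program does:

#!/usr/bin/env python
# walkup, roughly: print the first ancestor directory (starting at the
# current directory) that contains the given "magic" relative path, and
# exit with status 1 if we reach the filesystem root without finding it.
import os
import sys

def walkup(magic):
    current = os.getcwd()
    while True:
        if os.path.exists(os.path.join(current, magic)):
            return current
        parent = os.path.dirname(current)
        if parent == current:  # we've hit /
            return None
        current = parent

if __name__ == '__main__':
    found = walkup(sys.argv[1])
    if found is None:
        sys.exit(1)
    print(found)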

This was nice, but only solved half the problem: I still kept forgetting to activate the virtualenv, or moving out of a virtualenv and forgetting that I left it activated (this can cause lots of confusion; for example, if you’re simultaneously trying out this, this, this or that django-facebook integration module, more than one of them thinks that facebook is a good idea for a namespace to take!). To remind me, I wanted my left prompt to reflect my virtualenv in the following manner (much like my right prompt reflects my current git/hg branch, if any):

  1. If I’m not in a virtualenv and no virtualenv is active, do nothing.
  2. If I’m in a virtualenv and it is not active, display its name as part of the prompt in white.
  3. If I’m in a virtualenv and it is active, display its name as part of the prompt in green.
  4. If I’m not in a virtualenv but some virtualenv is active, display its name in yellow.
  5. Finally, if I’m in one virtualenv but another virtualenv is active, display both their names in red.

So, using walkup, I wrote the virtualenv parsing functions:

function active_virtualenv() {
    if [ -z "$VIRTUAL_ENV" ]; then
        # not in a virtualenv
        return
    fi

    basename "$VIRTUAL_ENV"
}

function enclosing_virtualenv() {
    if ! which walkup > /dev/null; then
        return
    fi
    virtualenv="$(walkup bin/activate)"
    if [ -z "$virtualenv" ]; then
        # not in a virtualenv
        return
    fi

    basename $(grep VIRTUAL_ENV= "$virtualenv"/bin/activate | sed -E 's/VIRTUAL_ENV="(.*)"$/\1/')
}

All that remained was to change my lprompt function to look like so (remember I have setopt prompt_subst on):

function _lprompt_env {
    local active="$(active_virtualenv)"
    local enclosing="$(enclosing_virtualenv)"
    if [ -z "$active" -a -z "$enclosing" ]; then
        # no active virtual env, no enclosing virtualenv, just leave
        return
    fi
    if [ -z "$active" ]; then
        local color=white
        local text="$enclosing"
    else
        if [ -z "$enclosing" ]; then
            local color=yellow
            local text="$active"
        elif [ "$enclosing" = "$active" ]; then
            local color=green
            local text="$active"
        else
            local color=red
            local text="$active":"$enclosing"
        fi
    fi
    local result="%{$fg[$color]%}${text}$rst "
    echo -n $result
}

function lprompt {
    local col1 col2 ch1 ch2
    col1="%{%b$fg[$2]%}"
    col2="%{$4$fg[$3]%}"
    ch1=$col1${1[1]}
    ch2=$col1${1[2]}

    local _env='$(_lprompt_env)'

    local col_b col_s
    col_b="%{$fg[green]%}"
    col_s="%{$fg[red]%}"

    PROMPT="\
$bgc$ch1\
$_env\
%{$fg_bold[white]%}%m:\
$bgc$col2%B%1~%b\
$ch2$rst \
$col2%#$rst "
}

A bit lengthy, but not very difficult. I suffered a bit until I figured out that I should escape the result of _lprompt_env using a percent sign (like so: "%{$fg[$color]%}${text}$rst "), or else the ANSI color escapes are counted for cursor positioning purposes and screw up the prompt’s alignment. Meh. Also, remember to set VIRTUAL_ENV_DISABLE_PROMPT=True somewhere, so virtualenv’s simple/default prompt manipulation functionality won’t kick in and screw things up for you, and we’re good to go.

The result looks like so (I still don’t know how to do a terminal-“screenshot”-to-html, here’s a crummy png):

Voila! Feel free to use these snippets, and happy zshelling!