Why don’t I contribute to Python (often)

2010/04/23

Oddly enough, just a week or two after I wrote the post “Contributing to Python”, Jesse Noller (of multiprocessing fame) wrote a post called “Why aren’t you contributing (to Python)?”. As a somewhat wannabe contributor, I think it’s a terrific question which deserves wordy and open answers (hey, just like this one). I realize the question was asked specifically in the context of python.org’s development, but I think some of the answer applies to the whole Python ecosystem, alternative implementations and libraries/frameworks included. Before we begin, I think that to answer the question, a distinction should be made between two rather different kinds of contributions.

The first is driven by whatever it is that you’re contributing. Suppose you happen to hack about a better GIL arbitration algorithm, or a sprawling and insanely rich asynchronous networking framework. You want that stuff to get out. I’d go so far as to say the source code itself wants to be out. It’s your baby, more often than not you think it’s great and unless something bad happens, you’ll push it. These are the kinds of things where you’re likely to find yourself obsessing over the stats of the website describing your latest gizmo or taking it personally when some loser on python-dev (like, say, that van-what’s-his-name-guy) says your implementation of goto for Python is not quite the next best thing since sliced lists.

The other, rather different kind, is that you run into something that is rather obviously a bug and wish to open-a-ticket-that-has-a-good-chance-to-be-committed for it. First of all, this is usually a far smaller patch. I doubt many people import threading, discover The Beazley Effect, rework the GIL and open a ticket with a patch. The use-case here is more like “I have a reproducible SIGSEGV” or “I wish import zipfile would support ZIP64”. Two interesting observations about this case: first, people are far less committed to their contribution, and second, more importantly, the realities of life dictate that the J. Random Hacker who ran into this either found a workaround or patched their own Python, so they sidestepped the bug. This is important. In practically all interesting cases, the reporter has already sidestepped the bug before or shortly after posting (sidestepped is a loose term, maybe they even moved to Perl…). I doubt anyone’s schedule is loose enough to allow them to wait without sidestepping a painful thorn even for the next bugfix release. This is a hundred times more true for documentation touchups – if you realized it’s wrong, you probably don’t have to fix it to keep working, you just use whatever knowledge you now know is right.

A rather pathological, tertiary case is the “I am not-Python-core, I have some free time, I wanna contribute to Python and I went bug-hunting in the tracker” one. I think it’s a pathological variant of the second kind of contribution, and one that I suspect happens rather rarely. I’ll lump these two together.

If you agree so far, we have a commit-driven-contribution (“damn this is so awesome I want this to be part of Python/twisted/Django/etc”) and a contribution-driven-commit (“damn Python is so awesome, it’s a shame to leave this wart unfixed, I’ll help”). As I said, I think very different reasons prevent people from doing either. I’ll start with the latter kind, both because it seemed to be the focus of Jesse’s original post and because it’s the easiest to answer.

First, almost everything Jesse listed as a reason is true. Don’t know how, don’t know where, etc, almost all true. The best remedy here is to get as many people as possible to have, ugh, “broken their contribution cherry”, so to speak. The easier it is to submit minor fixes for the first time, the more people will do it. The first time is important, psychologically and setup-ly. I think after a patch of yours has been committed, the fear of the technical part of the process is gone and the feeling of “gee, I can actually put stuff in Python!” kicks in, and you’re far more likely to start submitting more small patches. So if you want many more people to help with mundane issues, documentation touchups, etc, the community at large should make every effort to make this first time sweet.

How do we make it sweet? I don’t know for sure, but here is a short flurry of ideas which I’d be happy to discuss (and aid implementing!):

  • Easy step-by-step instructions for opening a bug report, submitting a patch, for Ubuntu, OSX and Windows, all concentrated in one place, from setup to bug tracker attachment. The “contributing to Python” post I mentioned earlier is a (small) step in what I think is the right direction. We can flesh it out a lot, but make sure it keeps the step-by-step cookbook get-it-done approach, rather than what exists today, which is good, but isn’t aimed at getting-things-done. Compare signing up to Facebook with applying for a Tourist Visa in some foreign country.
  • Small-time-Python-contribution-talks material to be made available. This is partly to be consumed online as web-talks, but mainly aims to reach out and encourage such talks in LUGs and high schools/colleges (hmm, I love this idea, I should do this sometime…).
  • A bit on a limb here, but maybe even doing what’s possible to optimize the review process in favour of first-time contributors. This is quite debatable, and (intentionally) vague, but I cautiously think it will pay off rather quickly.

These means (and probably others I didn’t think of) could probably alleviate the problem of a “contribution-driven-commit”, as I called it. Which leaves us with your fabulous implementation of goto, or “commit-driven-contribution”. I think two factors come into play here, both of them nearly irrelevant for the previous type of contribution. The first is the feeling that whatever it is you’ve done, it’s not good enough (this usually breaks my balls). “Me? Send this? To python-dev? Get outta here.” And the second, I think, is indeed the feeling of an ‘uphill battle’ against grizzled python-dev grey beards and sharp tongued lurkers that are, I suspect, more likely than not to shred your idea to bits. Let’s face it, hacker communities at large are pretty harsh, and generally for understandable reasons. However, I think at times this tough skin and high barrier for contributing anything significant hurts us.

I used to have a co-worker, a strong hacker, who made two significant open-source packages for Python. I humbly think both are exceptionally elegant and at least one of them could have been a strong addition to the stdlib. Every time I read the code of some poor soul who recreated this guy’s efforts with these two packages, I cringe. I wouldn’t like to disclose his name before talking to him, but when I asked him why these packages aren’t part of the stdlib, he said something like: “blah, unless you’re part of the python-dev cognoscenti you’ve no chance of putting anything anywhere”. I think he might not have pushed these packages hard enough, I should raid python-dev’s archives to know, but looking at the finesse of these packages on the one hand, and the number of questions on #python at freenode which I can answer by uttering these packages’ names on the other, I think maybe we’re missing out on something. His perception, even if downright wrong (and I suspect it isn’t entirely accurate, but it isn’t altogether wrong either), is exactly the kind of thing that can make you not contribute that big masterpiece you’ve made in your back yard, and that’s a damn shame. Most people will not survive the School of Hard Knocks, and that’s not necessarily always a good thing.

The issue of contributing big stuff is far more delicate and complex, and I’d be the first to admit I’m probably not rad enough to even discuss it. But it feels wrong to write a post under a heading like the one this one bears without at least mentioning this hacker-subculture-centric and sensitive issue, which affects many OSS projects, and Python as well. Micro-contributions are important, they’re the water and wind which slowly erode the landscape of a project into something cleaner, stabler and more elegant, but let’s not forget them big commits from which the mountains are born, too.

So Jesse, or any other curious soul, this is my answer to you. Should the gauntlet be picked up (by you, me or both of us) regarding the list of items I suggested earlier about making micro-contributions more accessible? How about taking this to python-dev?


Labour Updated

2010/04/16

I’ve had more time to work on Labour (originally posted here, inspired by this and that), the WSGI Server Durability Benchmark. I’m relatively happy with progress so far, and will probably try to ‘market’ Labour a bit shortly. Two things that are still sorely missing from Labour (and are blocking me from posting some news of it to mailing lists et al) are better reporting facilities (read: flashy graphs) and mod_wsgi support.

That said, Labour can already benchmark nine WSGI servers with a single commandline just seconds after you pull it off the source code repository, and that’s something (do note that you’d have to have the servers installed; Labour can benchmark all installed servers but won’t install them for you). The output is still not pretty, but you can unfold the item below to see sample results from an invocation of Labour. Do note that Labour being a library, this is just one test configuration out of several. So while it’s possible to see numbers-divided-by-second-real-quick for all the benchmark junkies out there, if you want to do anything meaningful, you’d better read the README file (aww…).

4 test(s) against 9 server(s) finished.

Report of request failure percent and requests/sec:
|  Server  |         Plain          |      Heavy Sleep      |      Light Sleep       | SIGSEGV |
| Cogen    | f%: 0.00; RPS: 168.85  | f%: 0.00; RPS: 10.00  | f%: 0.00; RPS: 83.98   | Failed  |
| CherryPy | f%: 0.00; RPS: 476.51  | f%: 0.00; RPS: 30.78  | f%: 0.00; RPS: 269.99  | Failed  |
| Eventlet | f%: 0.00; RPS: 672.88  | f%: 0.00; RPS: 22.98  | f%: 0.00; RPS: 250.41  | Failed  |
| Tornado  | f%: 0.00; RPS: 995.38  | f%: 0.00; RPS: 43.49  | f%: 0.00; RPS: 501.09  | Failed  |
| Twisted  | f%: 0.00; RPS: 761.19  | f%: 0.00; RPS: 77.99  | f%: 0.00; RPS: 544.19  | Failed  |
| WSGIRef  | f%: 0.00; RPS: 1427.46 | f%: 0.00; RPS: 61.07  | f%: 0.00; RPS: 536.52  | Failed  |
| FAPWS3   | f%: 0.21; RPS: 2337.54 | f%: 0.00; RPS: 66.70  | f%: 0.00; RPS: 886.85  | Failed  |
| Gevent   | f%: 0.19; RPS: 2038.89 | f%: 0.00; RPS: 65.99  | f%: 0.00; RPS: 711.75  | Failed  |
| Paster   | f%: 0.17; RPS: 1866.36 | f%: 0.00; RPS: 152.45 | f%: 0.00; RPS: 1306.86 | Failed  |

We see that Labour benchmarked nine servers with four different tests. Each test starts the server and hits it with a certain number of HTTP requests. The requests cause the Labour WSGI application (contained in the server) to do all sorts of trouble. For example, the Plain test in the table above is comprised of 99% requests that will return normally (200 OK and a body) and 1% requests that will return 404 NOT FOUND. The SIGSEGV test, on the other hand, is made of 50% OK requests and 50% requests that cause a SIGSEGV signal to be sent to whichever process is running our application. The two sleeping tests cause the application to sleep(), eating up server workers; the heavy test more often and for longer times than the lighter variant.

Each cell in the result table contains two numbers – the percentage of ‘failed’ responses accumulated in the test, and the number of responses per second. In a Labour test, a “failed” response is an HTTP response which doesn’t match what’s expected from the Behaviour specified in the HTTP request that triggered it. For example, if we asked the server to respond with 404 NOT FOUND and the server responded 200 OK, that’s considered a failure. Some Behaviours allow quite a broad range of responses (I don’t expect anything from a SIGSEGV Behaviour; even a socket error would be counted as OK), as they are issued to measure their effect on other requests (maybe some WSGI servers will insulate other requests from a SIGSEGV). More than 10% failed responses in a test causes the whole test to be considered failed.

Do note that the data in the table above should not be used to infer anything about the servers listed. First, it was created using only 1,000 requests per test (hey, I’m on vacation and all I got is a crummy lil’ Netbook). Second, so far Labour does nothing to try and tune the various servers with similar parameters. So, for example, maybe some of these servers have multiprocessing support which I didn’t enable, while others forked 16 kids. Third and last, I didn’t think much about the tests I’ve created so far, and am mostly focused on developing Labour itself rather than on producing quality comparisons (yet).

That’s it, for now. You’re welcome to try Labour, and by all means I’d be happy to hear comments about this benchmarking methodology, the kind of test profiles you’d like to see constructed (try looking into labour.tester.factories and labour.behaviours, the syntax of test-construction is self-teaching), etc. I won’t wreak havoc on the sources on purpose, but no API is stable until declared otherwise (…).

Contributing to Python

2010/04/08

So you have some free time to donate to Python, and don’t know where to start. Good. If you follow these cookbook instructions, you can also be a Contributor to Python™ 60 minutes from now. Most of what I’ll cover here is covered elsewhere (I’ll link), but this concentrates everything together so you really don’t have any excuses for not contributing anymore. I assume you have at least passing knowledge of Python or C, any decent editor, any version control system (like svn, git, etc), any bug-tracking system and a typical build process (./configure, make, etc; or your platform’s equivalent). You don’t have to be an expert, but if this is your first day in any of these, maybe this isn’t the right document for you.

Let’s get our gear in order. In my experience, it’s easiest to get set up using Ubuntu, and Ubuntu examples are what I’m going to give. If you use something else you should do your platform’s equivalent (or, uh, ‘upgrade’…). I assume you have a toolchain (compiler, linker, etc) installed, so start by making sure you have svn (sudo apt-get install subversion), then checkout one of Python’s latest development trees. Checking out is covered here, but I found that explanation a bit tiring. So, briefly: for the foreseeable future, you’re going to want to work either on trunk (Python 2.x development branch, currently 2.7) or on py3k (Python 3.x development branch, currently 3.2). The commands you need are these two:

svn co http://svn.python.org/projects/python/branches/py3k
svn co http://svn.python.org/projects/python/trunk

Read below for stuff to do during the checkout, but as soon as Subversion is done, you’d best configure and compile the source, to see that you can build Python. I assume you won’t be installing this copy of Python at all, so you can simply run ./configure && make without fiddling much with --prefix et al. By default, Python will not build some of the things you may be used to from whichever build you’ve been using so far. Fixing that isn’t mandatory for many bugs, but if you run into ‘missing features’, note two things. First, to include some modules you’ll have to edit ./Modules/Setup.dist and enable whichever modules you want. Second, Python may finish the build process successfully, but issue a notice like this towards the end of the build:

Python build finished, but the necessary bits to build these modules were not found:
_curses            _curses_panel      _dbm            
_gdbm              _sqlite3           _ssl            
_tkinter           bz2                readline        
To find the necessary bits, look in setup.py in detect_modules() for the module's name.

The most likely reason is that you don’t have the development version of the listed packages installed. I made almost everything listed above disappear on my system with sudo apt-get install libreadline-dev libbz2-dev libssl-dev libsqlite3-dev libncurses-dev libgdbm-dev. Re-read the error message and the apt-get I issued, you’ll see it’s rather easy to guess the development package name from Python’s error message. Much like you’ve done during the checkout, read below for stuff to do during the configuration / build. If your internet connection is blazing and your CPU screams, or you otherwise took your time choosing a bug, the build may finish before you’ve chosen one; in that case you can also run the full test suite: ./python -m test.regrtest -uall and see that nothing beyond the really esoteric breaks.

Finding a bug to work on is not hard, but it’s not trivial. Register with the Python bug-tracking site and go to the search page. For a start, I recommend using this search: Stage: needs patch; Keyword: easy. It’s easiest to start afresh (no patch submitted with the bug), and the bugs marked ‘easy’ are easy. You will notice that not many bugs return from the search – that’s because so many easy bugs already have a patch issued to them; we’ll get back to that later. Of the returned bugs, I chose issues #6610 and #7834. #6610 reports a potential flaw in subprocess; turns out that when forking out a coprocess from a process with closed standard streams, there’s a subtlety in dup2()-ing the child’s standard streams from the pipes the parent inherited to it. #7834 reports an issue with socketmodule.c‘s creation of AF_BLUETOOTH sockets; the implementation of getsockaddrarg() is fragile, as it doesn’t zero out a returned sockaddr_l2 struct which might be left with garbage.

First, make sure you understand the bug. For #6610 I read an article that was linked to the bug about the subtlety of coprocess forking for daemons, as well as the relevant code in Python (here and here, the relevant bits are near the dup2 calls). Once you understand what happens, and sometimes even if you’re not sure, the best thing is to write test code that will trigger the bug. I didn’t want to touch Python’s test suite quite yet, so I jotted a small script in /tmp that creates a parent with closed stdio streams which then forks a /bin/cat coprocess. Writing the test forced me to think harder about the bug, and to realize that… it isn’t. Bug #6610 reports a potential bug, but as it turns out, the author(s) of subprocess already protected themselves against the bug’s manifestation. My test code, assuming it was correct, proved my hypothesis – the test worked fine.
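My /tmp script is long gone, but a reproduction along these lines should demonstrate the scenario (a sketch of the idea, not the test I actually submitted; POSIX only, and the payload is arbitrary):

```python
import os
import subprocess

def cat_works_with_closed_stdio():
    # Fork a child that simulates a daemon with closed standard streams,
    # then spawns a /bin/cat coprocess and verifies data still round-trips.
    pid = os.fork()
    if pid == 0:
        try:
            os.close(0); os.close(1); os.close(2)
            # The pipes below may now be allocated fds 0-2, which is
            # exactly the subtlety the bug report worried about.
            p = subprocess.Popen(['/bin/cat'], stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE)
            out, _ = p.communicate(b'hello, coprocess')
            os._exit(0 if out == b'hello, coprocess' else 1)
        except BaseException:
            os._exit(2)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status) == 0
```

On a subprocess that handles the dup2 dance correctly, cat_works_with_closed_stdio() returns True; a broken implementation would wire the pipes to the wrong fds and the data wouldn’t come back.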

Case closed, no? No. While it’s possible to comment “sorry, not a bug, move along”, I wanted to nail the bug. From what I’ve seen on other bugs, nothing will convince a trusted developer to close the bug more than working coverage of the alleged bug in Python’s suite of unit tests. Before touching a test, make sure it runs on your system, otherwise you might waste lots of time hunting issues. After running the subprocess test, I took my jotted script from earlier, cleaned it up and fit it in Lib/test/test_subprocess.py. Important! Python, rightfully, is rather anal about the style of code they accept into the repository (tests are code just like any other code). Save yourself lots of grief, and make certain you abide by PEP 8 and PEP 7. You don’t have to read them to the letter to do small fixes, but make sure you caught the spirit of what’s in there, and especially that your new code fits with its old surroundings as far as style is concerned. Once I was happy with how the new test looked, I re-ran this particular test (./python -m test.regrtest test_subprocess), created an svn patch (running svn diff | tee ~/test_subprocess_with_standard_fds_closed.diff after I modified subprocess’ test was all it took) and posted the patch along with a description of what I did and found.

#7834 is simpler in most regards but much harder in one important aspect. It’s rather obvious the person who submitted the bug really ran into it in real life and probably knows more about Bluetooth sockets than I do. Fixing the bug is trivial, but writing a test for it that will run even without using actual Bluetooth hardware is not. I looked into bluez’ sources and asked #bluez on Freenode for help, didn’t see a manageable way to do it, so I prepared just a patch of the fix and submitted it. I did however mention my attempt at finding a way to write a test, so whoever reviews my bug will see the complexities involved and decide for themselves whether or not this testless fix is worthy of a commit.

Ha! Two bugs clos… no. Not closed, just updated in the bug tracker. Truth is, most easy/medium bugs have some patch waiting for them, and are pending review. It is my meager understanding that if you really want to help out, reviewing some of these patches, cleaning them up, adding tests and updating documentation are probably the best things you can do for Python. Python could use an alleviation of the GIL, but if you’re an early and casual contributor, I doubt you’ll be the one to do it. What you can do, especially if you’re a student or otherwise have time and are keen to learn, is to groom as many bugs as you can into pristine condition. The fewer small flaws a bug has, the more mature it looks and the easier it is for a trusted/core developer to just commit it and be done with it. You’d like someone to do it with your bugs, why not do it for someone else? Besides, soon enough you’ll be tangled enough being nosy with so many bugs that you’ll get deeper into the scene, find more interesting aspects of seemingly simple bugs, and so on and so forth, until one day you can overthrow, er, kneel before the BDFL and be knighted to say Ni!

Happy bug hunting!

Python’s Innards: Introduction

2010/04/02

A friend once said to me: You know, to some people, C is just a bunch of macros that expand to assembly. That was years ago (smartasses: it was also before llvm, ok?), but the sentence stuck with me. Do Kernighan and Ritchie really look at a C program and see assembly code? Does Tim Berners-Lee surf the Web any differently than you and me? And what on earth did Keanu Reeves see when he looked at all of that funky green gibberish soup, anyway? No, seriously, what the heck did he see there?! Uhm, back to the program. Anyway, what does Python look like in Guido van Rossum‘s1 eyes?

This post marks the beginning of what should develop into a series on Python’s internals. I’m writing it since I believe that explaining something is the best way to grok it, and I’d very much like to be able to visualize more of Python’s ‘funky green gibberish soup’ as I read Python code. On the curriculum is mainly CPython, mainly py3k, mainly bytecode evaluation (I’m not a big compilation fan) – but practically everything around executing Python and Python-like code (Unladen Swallow, Jython, Cython, etc) might turn out to be fair game in this series. For the sake of brevity and my sanity, when I say Python, I mean CPython unless noted otherwise. I also assume a POSIX-like OS or (if and where it matters) Linux, unless otherwise noted. You should read this if you want to know how Python works. You should read this if you want to contribute to CPython. You should read this to find all the mistakes I’ll make and snicker at me behind my back or write snide comments. I realize it’s just your particular way to show affection.

I gather I’ll glean pretty much everything I write about from Python’s source or, occasionally, other fine materials (documentation, especially this and that, certain PyCon lectures, searching python-dev, etc). Everything is out there, but I do hope my efforts at putting it all in one place to which you can RSS-subscribe will make your journey easier. I assume the reader knows some C, some OS theory, a bit less than some assembly (any architecture), a bit more than some Python and has reasonable UNIX fitness (i.e., feels comfortable installing something from source). Don’t be afraid if you’re not reasonably conversant in one (or more) of these, but I can’t promise smooth sailing, either. Also, if you don’t have a working toolchain to do Python development, maybe you’d like to head over here and do as it says on the second paragraph (and onwards, as relevant).

Let’s start with something which I assume you already know, but I think is important, at least to the main way I understand… well, everything that I do understand. I look at it as if I’m looking at a machine. It’s easy in Python’s case, since Python relies on a Virtual Machine to do what it does (like many other interpreted languages). Be certain you understand “Virtual Machine” correctly in this context: think more like JVM and less like VirtualBox (very technically, they’re the same, but in the real world we usually differentiate these two kinds of VMs). I find it easiest to understand “Virtual Machine” literally – it’s a machine built from software. Your CPU is just a complex electronic machine which receives all input (machine code, data), it has a state (registers), and based on the input and its state it will output stuff (to RAM or a Bus), right? Well, CPython is a machine built from software components that has a state and processes instructions (different implementations may use rather different instructions). This software machine operates in the process hosting the Python interpreter. Keep this in mind; I like the machine metaphor (as I explain in minute details here).

That said, let’s start with a bird’s eye overview of what happens when you do this: $ python -c 'print("Hello, world!")'. Python’s binary is executed, the standard C library initialization which pretty much any process does happens and then the main function starts executing (see its source, ./Modules/python.c: main, which soon calls ./Modules/main.c: Py_Main). After some mundane initialization stuff (parse arguments, see if environment variables should affect behaviour, assess the situation of the standard streams and act accordingly, etc), ./Python/pythonrun.c: Py_Initialize is called. In many ways, this function is what ‘builds’ and assembles together the pieces needed to run the CPython machine and makes ‘a process’ into ‘a process with a Python interpreter in it’. Among other things, it creates two very important Python data-structures: the interpreter state and thread state. It also creates the built-in module sys and the module which hosts all builtins. In later posts we will cover all of these in depth.

With these in place, Python will do one of several things based on how it was executed. Roughly, it will either execute a string (the -c option), execute a module as an executable (the -m option), or execute a file (passed explicitly on the commandline or passed by the kernel when used as an interpreter for a script) or run its REPL loop (this is more a special case of the file to execute being an interactive device). In the case we’re currently following, it will execute a single string, since we invoked it with -c. To execute this single string, ./Python/pythonrun.c: PyRun_SimpleStringFlags is called. This function creates the __main__ namespace, which is ‘where’ our string will be executed (if you run $ python -c 'a=1; print(a)', where is a stored? in this namespace). After the namespace is created, the string is executed in it (or rather, interpreted or evaluated in it). To do that, you must first transform the string into something the machine can work on.
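You can mimic this from pure Python, by the way: compile a string and evaluate it in a plain dict standing in for the __main__ namespace (a rough analogy of what PyRun_SimpleStringFlags does at the C level, not the real thing):

```python
# A dict playing the role of the freshly created __main__ namespace.
namespace = {}

# compile() produces a code object; exec() evaluates it in the namespace.
code = compile("a = 1", "<string>", "exec")
exec(code, namespace)

# 'a' now lives in our stand-in namespace, answering "where is a stored?"
assert namespace["a"] == 1
```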

As I said, I’d rather not focus on the innards of Python’s parser/compiler at this time. I’m not a compilation expert, I’m not entirely interested in it, and as far as I know, Python doesn’t have significant Compiler-Fu beyond the basic CS compilation course. We’ll do a (very) fast overview of what goes on here, and may return to it later only to inspect visible CPython behaviour (see the global statement, which is said to affect parsing, for instance). So, the parser/compiler stage of PyRun_SimpleStringFlags goes largely like this: tokenize the code and create a Concrete Syntax Tree (CST) from it, transform the CST into an Abstract Syntax Tree (AST) using ./Python/ast.c: PyAST_FromNode, and finally compile the AST into a code object (this happens in ./Python/compile.c). For now, think of the code object as a binary string of machine code that the Python VM’s ‘machinery’ can operate on – so now we’re ready to do interpretation (again, evaluation in Python’s parlance).
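The ast module exposes a similar pipeline from Python itself, minus the CST stage which stays internal (again, an analogy for illustration, not the exact C-level code path):

```python
import ast

source = "spam = eggs - 1"

# Parse the source into an AST (tokenization and the CST happen under the hood).
tree = ast.parse(source, mode="exec")
assert isinstance(tree.body[0], ast.Assign)

# Compile the AST down to a code object the evaluation loop can run.
code = compile(tree, "<string>", "exec")
assert set(code.co_names) == {"eggs", "spam"}  # names the code references
assert 1 in code.co_consts                     # the constant operand
```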

We have an (almost) empty __main__, we have a code object, we want to evaluate it. Now what? Now this line: Python/pythonrun.c: run_mod, v = PyEval_EvalCode(co, globals, locals); does the trick. It receives a code object and a namespace for globals and for locals (in this case, both of them will be the newly created __main__ namespace), creates a frame object from these and executes it. Remember that earlier I mentioned Py_Initialize creates a thread state, and that we’d talk about it later? Well, back to that for a bit: each Python thread is represented by its own thread state, which (among other things) points to the stack of currently executing frames. After the frame object is created and placed at the top of the thread state stack, it (or rather, the byte code pointed to by it) is evaluated, opcode by opcode, by means of the (rather lengthy) ./Python/ceval.c: PyEval_EvalFrameEx.
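The stack of frames is visible from Python itself via the CPython-specific sys._getframe(), which is a quick way to convince yourself it’s there:

```python
import sys

def outer():
    return inner()

def inner():
    frame = sys._getframe()          # the currently executing frame
    # f_back walks down the stack of frames the thread state points to.
    assert frame.f_code.co_name == "inner"
    assert frame.f_back.f_code.co_name == "outer"
    return True

assert outer()
```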

PyEval_EvalFrameEx takes the frame, extracts opcode (and operands, if any, we’ll get to that) after opcode, and executes a short piece of C code matching the opcode. Let’s take a closer look at what these “opcodes” look like by disassembling a bit of compiled Python code:

>>> from dis import dis # ooh! a handy disassembly function!
>>> co = compile("spam = eggs - 1", "<string>", "exec")
>>> dis(co)
  1           0 LOAD_NAME                0 (eggs)
              3 LOAD_CONST               0 (1)
              6 BINARY_SUBTRACT     
              7 STORE_NAME               1 (spam)
             10 LOAD_CONST               1 (None)
             13 RETURN_VALUE        

…even without knowing much about Python’s bytecode, this is reasonably readable. You “load” the name eggs (where do you load it from? where do you load it to? soon), and also load a constant value (1), then you do a “binary subtract” (what do you mean ‘binary’ in this context? between which operands?), and so on and so forth. As you might have guessed, the names are “loaded” from the globals and locals namespaces we’ve seen earlier, and they’re loaded onto an operand stack (not to be confused with the stack of running frames), which is exactly where the binary subtract will pop them from, subtract one from the other, and put the result back on that stack. “Binary subtract” just means this is a subtraction opcode that has two operands (hence it is “binary”, this is not to say the operands are binary numbers made of ‘0’s and ‘1’s).
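To make the operand stack concrete, here’s a toy evaluator of the disassembly above (my own gross simplification for illustration; it bears no resemblance to ceval.c’s actual structure):

```python
def evaluate(instructions, namespace, consts, names):
    stack = []                        # the operand stack
    for opcode, arg in instructions:
        if opcode == "LOAD_NAME":
            stack.append(namespace[names[arg]])
        elif opcode == "LOAD_CONST":
            stack.append(consts[arg])
        elif opcode == "BINARY_SUBTRACT":
            w = stack.pop()           # right operand
            v = stack.pop()           # left operand
            stack.append(v - w)
        elif opcode == "STORE_NAME":
            namespace[names[arg]] = stack.pop()
        elif opcode == "RETURN_VALUE":
            return stack.pop()

# The bytecode of "spam = eggs - 1", transcribed from the dis() output.
program = [("LOAD_NAME", 0), ("LOAD_CONST", 0), ("BINARY_SUBTRACT", None),
           ("STORE_NAME", 1), ("LOAD_CONST", 1), ("RETURN_VALUE", None)]
namespace = {"eggs": 43}
result = evaluate(program, namespace, consts=(1, None), names=("eggs", "spam"))
assert namespace["spam"] == 42 and result is None
```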

You can go look at PyEval_EvalFrameEx at ./Python/ceval.c yourself, it’s not a small function by any means. For practical reasons I can’t paste too much code from there in here, but I will just paste the code that runs when a BINARY_SUBTRACT opcode is found, I think it really illustrates things:

            w = POP();
            v = TOP();
            x = PyNumber_Subtract(v, w);
            Py_DECREF(v);
            Py_DECREF(w);
            SET_TOP(x);
            if (x != NULL) DISPATCH();
            break;

…pop something, take the top (of the operand stack), call a C function called PyNumber_Subtract() on these things, do something we still don’t understand (but will in due time) called “Py_DECREF” on both, set the top of the stack to the result of the subtraction (overwriting the previous top) and then do something else we don’t understand if x is not null, which is to do a “DISPATCH”. So while we have some stuff we don’t understand, I think it’s very apparent how two numbers are subtracted in Python, at the lowest possible level. And it took us just about 1,500 words to reach here, too!

After the frame is executed and PyRun_SimpleStringFlags returns, the main function does some cleanup (notably, Py_Finalize, which we’ll discuss), the standard C library deinitialization stuff is done (atexit, et al), and the process exits.

I hope this gives us a “good enough” overview that we can later use as scaffolding on which, in later posts, we’ll hang the more specific discussions of various areas of Python. We have quite a few terms I promised to get back to: interpreter and thread state, namespaces, modules and builtins, code and frame objects as well as those DECREF and DISPATCH lines we didn’t understand inside BINARY_SUBTRACT‘s implementation. There’s also a very crucial ‘phantom’ term that we’ve danced around all over this article but didn’t call by name – objects. CPython’s object system is central to understanding how it works, and I reckon we’ll cover that – at length – in the next post of this series. Stay tuned.

1 Do note that this series isn’t endorsed nor affiliated with anyone unless explicitly stated. Also, note that though Python was initiated by Guido van Rossum, many people have contributed to it and to derivative projects over the years. I never checked with Guido nor with them, but I’m certain Python is no less clear to many of these contributors’ eyes than it is to Guido’s.

Labour updated

2010/04/02

I just pushed an update to labour which supports forking multiple HTTP clients, along with some other small improvements. For the sake of this update I wrote a simple module called multicall which enables a process to distribute a callable among several subprocesses (POSIX only). More elegant solutions exist (‘import multiprocessing’ comes to mind), but I think it’s hard to beat multicall’s simplicity, and it doesn’t use Python threads /at all/ (unlike multiprocessing), which yields some performance improvement.
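The core trick goes something like this (a hypothetical sketch of the approach, not multicall’s actual API; POSIX only):

```python
import os

def distribute(callables):
    # Fork one child per callable; no Python threads involved at all.
    pids = []
    for func in callables:
        pid = os.fork()
        if pid == 0:
            # Child: run the callable, report success/failure via exit status.
            try:
                func()
                os._exit(0)
            except BaseException:
                os._exit(1)
        pids.append(pid)
    # Parent: reap the children, collecting exit statuses in order.
    return [os.WEXITSTATUS(os.waitpid(pid, 0)[1]) for pid in pids]

assert distribute([lambda: None, lambda: 1 / 0]) == [0, 1]
```

multiprocessing buys you much more (result passing, pools, portability), but for fire-and-forget workers the fork/waitpid pair is about as simple as it gets.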

I think the next area which requires work is probably reporting, and I’m not entirely sure what kind of reports would be useful. Labour isn’t a performance oriented benchmark, but a bit more like a cross between a benchmark and a test-suite for WSGI servers. I’m not sure if the output should be a graph (using behaviour set ‘foo’, servers X, Y and Z reach A, B and C hits per second) or a table (server X handles SIGSEGV well, server Y doesn’t, server X doesn’t handle a slow worker well, server Y does).

Also, and this is something that’s been troubling me for a while, I think I know what the results of most benchmarks will be. Let’s start with the fact that all servers that don’t fork out processes will not survive pretty much all the wedging/SIGSEGVing/memory leaking benchmarks. Second, I’m pretty sure most servers will handle somewhat different scenarios (IO/CPU/sleep) in exactly the same way – that is, not notice anything went wrong and have the worker pool reduced by one.

Assuming this is indeed the way the result matrix will look, and assuming the response of the respective WSGI servers’ programmers will be as I expect (“fix the workers”) – then why am I doing this benchmark, anyway?

The main answer which comes to mind is that this is an ‘everyone fails’ exam which opens the door to a future where some will fail and some will succeed (or fail more gracefully). I’m pretty sure this is a future we should be heading to. I never liked the ‘fix the workers’ response, we’re hackers, not mathematicians – failing well is part of the job. That said, I’m not sure Labour will have enough weight in the WSGI/Python Web community to make this change.

If anyone wants to chip in, or even just send moral support, go ahead. I’ll post updates about Labour the next time I revisit this project.
