Python’s Innards: Hello, ceval.c!

2010/09/02

The “Python’s Innards” series owes its existence, at least in part, to hearing one of the Python-Fu masters in my previous workplace say something about a switch statement so large that it had to be broken up just so some compilers wouldn’t choke on it. I remember thinking then: “Choke the compiler with a switch? Hrmf, let me see that code.” Turns out that this switch can be found in ./Python/ceval.c: PyEval_EvalFrameEx, and that it switches over the current opcode, invoking each opcode’s implementation. If I had to summarize all of CPython into one line, I’d probably choose that switch (actually I’d refuse, but humour me by assuming I was at gunpoint or something). This choice is rather subjective, as arguably there are more complex/interesting bits in Python’s object system (explored here and there) or parser/compiler related code. But I can’t help seeing that line, and its surrounding function and file, as the ‘do-work’ heart of CPython.

The reason I didn’t start the series from this heart is that I thought it would be too hard (mostly for the author…). Thanks to what we (well, at least I) learned in the previous posts, I think we can now understand it quite well. I’ll try to link backwards as necessary throughout the article, but if you haven’t followed the series so far, you’d probably do much better if you went back and read some of the previous articles before tackling this one. Also, for brevity’s sake, in this post I won’t qualify the file ./Python/ceval.c and the function PyEval_EvalFrameEx in it. Finally, remember that usually in the series when I quote code, I may note that I edited it, and in that case I often prefer clarity and brevity over accuracy; this is true for this post as well, only much more so: excerpts here might bear only slight resemblance to the real code.

So, where were we… Ah, yes, monstrous switch statement. Well, as I said, this switch can be found in the rather lengthy file ceval.c, in the rather lengthy function PyEval_EvalFrameEx, which takes more than half the file’s lines (it’s roughly 2,250 lines, the file is about 4,400). PyEval_EvalFrameEx implements CPython’s evaluation loop, which is to say that it’s a function that takes a frame object and iterates over each of the opcodes in its associated code object, evaluating (interpreting, executing) each opcode within the context of the given frame (this context is chiefly the associated namespaces and interpreter/thread states). There’s more to ceval.c than PyEval_EvalFrameEx, and we may discuss some of the other bits later in this post (or perhaps a follow-up post), but PyEval_EvalFrameEx is obviously the most important part of it.

Having described the evaluation loop in the previous paragraph, let’s see what it looks like in C (edited):

PyObject *
PyEval_EvalFrameEx(PyFrameObject *f, int throwflag)
{
    /* variable declaration and initialization stuff */
    for (;;) {
        /* do periodic housekeeping once in a few opcodes */
        opcode = NEXTOP();
        if (HAS_ARG(opcode)) oparg = NEXTARG();
        switch (opcode) {
            case NOP:
                goto fast_next_opcode;
            /* lots of more complex opcode implementations */
            default:
                /* become rather unhappy */
        }
        /* handle exceptions or runtime errors, if any */
    }
    /* we are finished, pop the frame stack */
    tstate->frame = f->f_back;
    return retval;
}

As you can see, iteration over opcodes is infinite (forever: fetch next opcode, do stuff), and breaking out of the loop must be done explicitly. CPython (reasonably) assumes that evaluated bytecode is correct in the sense that it terminates itself by raising an exception, returning a value, etc. Indeed, if you were to synthesize a code object without a RETURN_VALUE at its end and execute it (exercise to the reader: how?¹), you’re likely to execute rubbish, reach the default handler (which raises a SystemError) or maybe even segfault the interpreter (I didn’t check this thoroughly, but it looks plausible).
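
Incidentally, here’s more or less what that unhappy default handler looks like (edited, as usual); the why variable it sets is how the code after the switch learns that an exception should propagate:

            default:
                fprintf(stderr, "XXX lineno: %d, opcode: %d\n",
                        PyCode_Addr2Line(f->f_code, f->f_lasti), opcode);
                PyErr_SetString(PyExc_SystemError, "unknown opcode");
                why = WHY_EXCEPTION; /* let the post-switch code handle it */
                break;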

The evaluation loop may look fairly simple so far, but I kept back an important piece: I snipped about 1,450 lines of opcode implementations from within that big switch, all of them presumably more complex than a NOP. To give you a feel for what more serious opcode implementations look like, here’s the (edited) implementation of three more opcodes, illustrating a few more principles:

            case BINARY_SUBTRACT:
                w = *--stack_pointer; /* value stack POP */
                v = stack_pointer[-1];
                x = PyNumber_Subtract(v, w);
                stack_pointer[-1] = x; /* value stack SET_TOP */
                if (x != NULL) continue;
                break;
            case LOAD_CONST:
                x = PyTuple_GetItem(f->f_code->co_consts, oparg);
                *stack_pointer++ = x; /* value stack PUSH */
                goto fast_next_opcode;
            case SETUP_LOOP:
            case SETUP_EXCEPT:
            case SETUP_FINALLY:
                PyFrame_BlockSetup(f, opcode, INSTR_OFFSET() + oparg,
                           STACK_LEVEL());
                continue;

We see several things. First, we see a typical value manipulation opcode, BINARY_SUBTRACT. This opcode (and many others) works with values on the value stack as well as with a few temporary variables, using CPython’s C-API abstract object layer (in our case, a function from the number-like object abstraction) to replace the two top values on the value stack with the single value resulting from subtraction. As you can see, a small set of temporary variables, such as v, w and x, are used (and reused, and reused…) as the registers of the CPython VM. The variable stack_pointer represents the current top of the stack (it points at the next free slot in the stack). This variable is initialized at the beginning of the function like so: stack_pointer = f->f_stacktop;. In essence, the value stack is this pointer, together with the room reserved in the frame object for that purpose. To make things simpler and more readable, the real (unedited by me) code of ceval.c defines several value stack manipulation/observation macros, like PUSH, TOP or EMPTY. They do what you’d imagine from their names.
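
In their simplest compilation scenario, these macros expand to exactly the pointer arithmetic you’d expect; here are a few of them, quoted (nearly verbatim) from ceval.c:

#define STACK_LEVEL()  ((int)(stack_pointer - f->f_valuestack))
#define EMPTY()        (STACK_LEVEL() == 0)
#define TOP()          (stack_pointer[-1])
#define SET_TOP(v)     (stack_pointer[-1] = (v))
#define PUSH(v)        (*stack_pointer++ = (v))
#define POP()          (*--stack_pointer)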

Next, we see a very simple opcode that loads values from somewhere into the value stack. I chose to quote LOAD_CONST because it’s very brief and simple, although it’s not really a namespace related opcode. “Real” namespace opcodes load values into the value stack from a namespace and store values from the value stack into a namespace; LOAD_CONST loads constants, but doesn’t fetch them from a namespace and has no STORE_CONST counterpart (we explored all this at length in the article about namespaces). The final opcode I chose to show is actually the single implementation of several different control-flow related opcodes (SETUP_LOOP, SETUP_EXCEPT and SETUP_FINALLY), which offload all details of their implementation to the block stack manipulation function PyFrame_BlockSetup; we discussed the block stack in our discussion of interpreter stacks.
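
To make the contrast with LOAD_CONST concrete, here’s roughly what the “real” namespace opcode STORE_GLOBAL looks like, edited in the same spirit as the samples above (the unedited version uses the stack macros we just met and fetches the name from a cached copy of co_names):

            case STORE_GLOBAL:
                w = PyTuple_GetItem(f->f_code->co_names, oparg);
                v = *--stack_pointer; /* value stack POP */
                err = PyDict_SetItem(f->f_globals, w, v);
                Py_DECREF(v);
                if (err == 0) continue;
                break;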

Something we can observe looking at these implementations is that different opcodes exit the switch statement differently. Some simply break and let the code after the switch run. Some use continue to start the for loop from the beginning. Some goto various labels in the function. Each exit route has a different semantic meaning. If you break out of the switch (the ‘normal’ route), various checks will be made to see if some special behaviour should be performed – maybe a code block has ended, maybe an exception was raised, maybe we’re ready to return a value. Continuing the loop or going to a label lets certain opcodes take various shortcuts; there’s no use checking for an exception after a NOP or a LOAD_CONST, for instance.
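
For the curious, the post-switch checks revolve around a variable called why, which records the reason (if any) that the loop should start unwinding; its possible values are defined near the top of ceval.c:

enum why_code {
        WHY_NOT =       0x0001, /* No error */
        WHY_EXCEPTION = 0x0002, /* Exception occurred */
        WHY_RERAISE =   0x0004, /* Exception re-raised by 'finally' */
        WHY_RETURN =    0x0008, /* 'return' statement */
        WHY_BREAK =     0x0010, /* 'break' statement */
        WHY_CONTINUE =  0x0020, /* 'continue' statement */
        WHY_YIELD =     0x0040  /* 'yield' operator */
};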

That’s pretty much it. I can’t really say we’re done (not at all), but this is pretty much the gist of PyEval_EvalFrameEx. Simple, eh? Well, yeah, simple, but I lied a bit with the editing to make it simpler. For example, if you look at the code itself, you will see that none of the case expressions for the big switch are really there. The code for the NOP opcode is actually (remember this series is about Python 3.x unless noted otherwise, so this snippet is from Python 3.1.2):

        TARGET(NOP)
            FAST_DISPATCH();

TARGET? FAST_DISPATCH? What are these? Let me explain. Things may become clearer if we look for a moment at the implementation of the NOP opcode in the ceval.c of Python 2.x. Over there, the code for NOP looks more like the samples I’ve shown you so far, and it actually seems to me that the code of ceval.c gets simpler and simpler as we look back at older revisions of it. The reason is that although I think PyEval_EvalFrameEx was originally written as a really exceptionally straightforward piece of code, over the years some necessary complexity crept into it as various optimizations and improvements were implemented (I’ll collectively call them ‘additions’ from now on, for lack of a better term).

To further complicate matters, many of these additions are compiled conditionally with preprocessor directives, so several things are implemented in more than one way in the same source file. In the larger code samples I quoted above, I liberally expanded some preprocessor directives using their least complex expansion. However, depending on compilation flags, these and other preprocessor directives might expand to something else, possibly a more complicated something. I can understand trading simplicity to optimize a tight loop which is used very often, and the evaluation loop is probably one of the more used loops in CPython (and probably as tight as its contributors could make it). So while this is all very warranted, it doesn’t help the readability of the code.

Anyway, I’d like to enumerate these additions here explicitly (some in more depth than others); this should aid future discussion of ceval.c, as well as prevent me from feeling like I’m hiding too many important things with my free-spirited editing of quoted code. Fortunately, most if not all of these additions are very well commented; actually, some of the explanations below will be just summaries of, or even taken verbatim from, these comments, as I believe that they’re accurate (eek!). So, as you read PyEval_EvalFrameEx (and indeed ceval.c in general), you’re likely to run into any of these:

“Threaded Code” (Computed-GOTOs)

Let’s start with the addition that gave us TARGET, FAST_DISPATCH and a few other macros. The evaluation loop uses a “switch” statement, which decent compilers optimize as a single indirect branch instruction with a lookup table of addresses. Alas, since we’re switching over rapidly changing opcodes (it’s uncommon to have the same opcode repeat), this has an adverse effect on the success rate of CPU branch prediction. Fortunately, gcc supports the use of C goto labels as values, which you can generally pass around and place in an array (restrictions apply!). Using an array of addresses in memory obtained from labels, as you can see in ./Python/opcode_targets.h, we create an explicit jump table and place an explicit indirect jump instruction at the end of each opcode. This improves the success rate of CPU branch prediction and can yield as much as a 20% boost in performance.

Thus, for example, the NOP opcode is implemented in the code like so:

        TARGET(NOP)
            FAST_DISPATCH();

In the simpler scenario, this would expand to a plain case statement and a goto, like so:

        case NOP:
            goto fast_next_opcode;

But when threaded code is in use, that snippet would expand to the following (note the goto lines, where we actually move on to the next opcode using the dispatch table of label-values):

        TARGET_NOP:
            opcode = NOP;
            if (HAS_ARG(NOP))
                oparg = NEXTARG();
        case NOP:
            {
                if (!_Py_TracingPossible) {
                    f->f_lasti = INSTR_OFFSET();
                    goto *opcode_targets[*next_instr++];
                }
                goto fast_next_opcode;
            }

Same behaviour, somewhat more complicated implementation, up to 20% faster Python. Nifty.
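
If the expansion above still feels abstract, here’s a tiny, self-contained toy demonstrating the labels-as-values technique (this is my own sketch, not CPython code; the “opcodes” are made up). It compiles with gcc and shows the same dispatch pattern in miniature:

#include <stdio.h>

enum { OP_INCR, OP_PRINT, OP_HALT };

int main(void)
{
    /* gcc extension: an array of label addresses, one per opcode */
    static void *dispatch_table[] = { &&do_incr, &&do_print, &&do_halt };
    unsigned char program[] = { OP_INCR, OP_INCR, OP_PRINT, OP_HALT };
    unsigned char *next_instr = program;
    int acc = 0;

/* every handler ends with its own indirect jump to the next handler */
#define DISPATCH() goto *dispatch_table[*next_instr++]

    DISPATCH();
do_incr:
    acc++;
    DISPATCH();
do_print:
    printf("%d\n", acc); /* prints 2 */
    DISPATCH();
do_halt:
    return 0;
}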

Opcode Prediction

Some opcodes tend to come in pairs. For example, COMPARE_OP is often followed by JUMP_IF_FALSE or JUMP_IF_TRUE, themselves often followed by a POP_TOP. What’s more, there are situations where you can determine that a particular next-opcode can be run immediately after the execution of the current opcode, without going through the ‘outer’ (and expensive) parts of the evaluation loop. PREDICT (and a few related macros) explicitly peek at the next opcode and jump to its implementation if possible, shortcutting most of the loop in this fashion (i.e., if (*next_instr == op) goto PRED_##op). Note that there is no relation to real hardware here; these are simply hardcoded conditional jumps, not an exploitation of some mechanism in the underlying CPU (in particular, this has nothing to do with the “Threaded Code” described above).
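
In the simplest compilation scenario, the prediction macros look roughly like this (edited from ceval.c; PEEKARG extracts the argument of the opcode we peeked at):

#define PREDICT(op)             if (*next_instr == op) goto PRED_##op
#define PREDICTED(op)           PRED_##op: next_instr++
#define PREDICTED_WITH_ARG(op)  PRED_##op: oparg = PEEKARG(); next_instr += 3

So an opcode implementation can end with PREDICT(POP_TOP), and the implementation of POP_TOP itself begins with PREDICTED(POP_TOP); when the prediction holds, we jump straight from the end of one implementation to the beginning of the other.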

Low Level Tracing

An addition primarily geared towards those developing CPython (or suffering from a horrible, horrible bug). Low Level Tracing is controlled by the LLTRACE preprocessor name, which is enabled by default on debug builds of CPython (see --with-pydebug). As explained in ./Misc/SpecialBuilds.txt: when this feature is compiled-in, PyEval_EvalFrameEx checks the frame’s global namespace for the variable __lltrace__. If such a variable is found, mounds of information about what the interpreter is doing are sprayed to stdout, such as every opcode and opcode argument and values pushed onto and popped off the value stack. Not useful very often, but very useful when needed.

This is what the low level trace output looks like (slightly edited):

>>> def f():
...     global a
...     return a - 5
... 
>>> dis(f)
  3           0 LOAD_GLOBAL              0 (a) 
              3 LOAD_CONST               1 (5) 
              6 BINARY_SUBTRACT      
              7 RETURN_VALUE         
>>> exec(f.__code__, {'__lltrace__': 'foo', 'a': 10})
0: 116, 0
push 10
3: 100, 1
push 5
6: 24
pop 5
7: 83
pop 5
# trace of the end of exec() removed
>>> 

As you can guess, you’re seeing a real-time disassembly of what’s going through the VM, as well as stack operations. For example, the first line says: at offset 0, execute opcode 116 (LOAD_GLOBAL) with the operand 0 (which expands to the global variable a), and so on, and so forth. This is a bit like (well, a little more than) adding a bunch of printf calls to the heart of the VM.
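
The opcode lines in that dump (“0: 116, 0” and friends) are produced by a snippet near the top of the evaluation loop which looks more or less like this (edited):

#ifdef LLTRACE
        if (lltrace) {
            if (HAS_ARG(opcode))
                printf("%d: %d, %d\n", f->f_lasti, opcode, oparg);
            else
                printf("%d: %d\n", f->f_lasti, opcode);
        }
#endif

(The push/pop lines come from similar tracing hooks inside the value stack macros.)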

Advanced Profiling

Under this heading I’d like to briefly discuss several profiling related additions. The first relies on the fact that some processors (notably Pentium descendants and at least some PowerPCs) have built-in wall time measurement capabilities which are cheap and precise (correct me if I’m wrong). As an aid in the development of a high-performance CPython implementation, Python 2.4’s ceval.c was instrumented with the ability to collect per-opcode profiling statistics using these counters. This instrumentation is controlled by the somewhat misnamed --with-tsc configuration flag (TSC is an Intel Pentium specific name, and this feature is more general than that). Calling sys.settscdump(True) on an instrumented interpreter will cause the function ./Python/ceval.c: dump_tsc to print these statistics every time the evaluation loop loops.

The second advanced profiling feature is Dynamic Execution Profiling. This is only available if Python was built with the DYNAMIC_EXECUTION_PROFILE preprocessor name. As ./Tools/scripts/analyze_dxp.py says, [this] will tell you which opcodes have been executed most frequently in the current process, and, if Python was also built with -DDXPAIRS, will tell you which instruction _pairs_ were executed most frequently, which may help in choosing new instructions. One last thing to add here is that enabling Dynamic Execution Profiling implicitly disables the “Threaded Code” addition.

The third and last addition in this category is function call profiling, controlled by the preprocessor name CALL_PROFILE. Quoting ./Misc/SpecialBuilds.txt again: When this name is defined, the ceval mainloop and helper functions count the number of function calls made. It keeps detailed statistics about what kind of object was called and whether the call hit any of the special fast paths in the code.

Extra Safety Valves

Two preprocessor names, USE_STACKCHECK and CHECKEXC, include extra sanity checks. Testing an interpreter with these enabled may catch a subtle bug or regression, but they are usually disabled as they’re too expensive.
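
For example, here’s more or less what USE_STACKCHECK adds to ./Python/ceval.c: _Py_CheckRecursiveCall (edited); PyOS_CheckStack asks the platform whether we’re dangerously close to exhausting the C stack:

#ifdef USE_STACKCHECK
    if (PyOS_CheckStack()) {
        --tstate->recursion_depth;
        PyErr_SetString(PyExc_MemoryError, "Stack overflow");
        return -1;
    }
#endif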

These are the additions I found grepping ceval.c for #ifdef. I think we’ll call it a day here, although we’re by no means finished. For example, I’d like to devote a separate post to exceptions, which is where we can discuss the tail of the evaluation loop (everything after the big switch and before the end of the big for), which we merely skimmed today. I’d also like to devote a whole post to locking and synchronization (including the GIL), which we touched upon before but never covered properly. Last but really not least, there are about 2,000 other lines in ceval.c which we didn’t cover today; none of them are as important as PyEval_EvalFrameEx, but we need to talk at least about some of them.

All these things taken into account, I think we can say that today we finally conquered the evaluation loop. This isn’t the end of the series, far from it, but I do see it as a milestone. “Hooray”, I believe the saying goes. I hope you’re enjoying the show, thanks for the supportive comments (they keep me going), and I’ll see you in the next post.


I would like to thank Nick Coghlan for reviewing this article; any mistakes that slipped through are my own.

¹ Lazy or timid readers may choose to defer to Nick Coghlan’s example of one way he did it; I urge you not to look there and to solve it on your own instead, it’s rather easy.

Python’s Innards: Interpreter Stacks

2010/07/22

Those of you who have been paying attention know that this series is spiraling towards what can be considered the core of Python’s Virtual Machine, the “actually do work” function ./Python/ceval.c: PyEval_EvalFrameEx. The (hopefully) last hurdle on our way there is to understand the three significant stack data structures used for CPython’s code evaluation: the call stack, the value stack and the block stack (I’ve called them collectively “Interpreter Stacks” in the title; this isn’t a formal term). All three stacks are tightly coupled with the frame object, which will also be discussed today. If you give me a minute to put on my spectacles, I’ll read to you what Wikipedia says about call stacks in general: In computer science, a call stack is a stack data structure that stores information about the active subroutines of a computer program… A call stack is composed of stack frames (…). These are machine dependent data structures containing subroutine state information. Each stack frame corresponds to a call to a subroutine which has not yet terminated with a return. Hrmf. Jim, I don’t understand… how does this translate to a virtual machine?

Well, since CPython implements a virtual machine, its call stack and stack frames are dependent on this virtual machine, not on the physical machine it’s running on. And also, as Python tends to do, this internal implementation detail is exposed to Python code, either via the C-API or pure Python, as frame objects (./Include/frameobject.h: PyFrameObject). We know that code execution in CPython is really the evaluation (interpretation) of a code object, so every frame represents a currently-being-evaluated code object. We’ll see (and already saw before) that frame objects are linked to one another, thus forming a call stack of frames. Finally, inside each frame object in the call stack there’s a reference to two frame-specific stacks (not directly related to the call stack): the value stack and the block stack.

The value stack (you may know this term as an ‘evaluation stack’) is where manipulation of objects happens when object-manipulating opcodes are evaluated. We have seen the value stack before on various occasions, like in the introduction and during our discussion of namespaces. Recalling an example we used before, BINARY_SUBTRACT is an opcode that effectively pops the two top objects on the value stack, performs PyNumber_Subtract on them and sets the new top of the value stack to the result. Namespace related opcodes, like LOAD_FAST or STORE_GLOBAL, load values from a namespace onto the stack or store values from the stack into a namespace. Each frame has a value stack of its own (this makes sense in several ways, possibly the most prominent being simplicity of implementation); we’ll see later where in the frame object the value stack is stored.

This leaves us with the block stack, a fairly simple concept with some vaguely defined terminology around it, so pay attention. Python has a notion called a code block, which we have discussed in the article about code objects and which is also explained here. Completely unrelatedly, Python also has a notion of compound statements, which are statements that contain other statements (the language reference defines compound statements here). Compound statements consist of one or more clauses, each made of a header and a suite. Even if the terminology wasn’t known to you until now, I expect this is all instinctively clear to you if you have almost any Python experience: for, try and while are a few compound statements.

So where’s the confusion? In various places throughout the code, a block (sometimes “frame block”, sometimes “basic block”) is used as a loose synonym for a clause or a suite, making it easier to confuse suites and clauses with what’s actually a code block, or vice versa. Both the compilation code (./Python/compile.c) and the evaluation code (./Python/ceval.c) are aware of various suites and have (ill-named) data structures to deal with them; but since we’re more interested in evaluation in this series, we won’t discuss the compilation-related details much (or at all). Whenever I think wording might get confusing, I’ll mention the formal terms of clause or suite alongside whatever code term we’re discussing.

With all this terminology in mind we can look at what’s contained in a frame object. Looking at the declaration of ./Include/frameobject.h: PyFrameObject, we find (comments were trimmed and edited for your viewing pleasure):

typedef struct _frame {
   PyObject_VAR_HEAD
   struct _frame *f_back;   /* previous frame, or NULL */
   PyCodeObject *f_code;    /* code segment */
   PyObject *f_builtins;    /* builtin symbol table */
   PyObject *f_globals;     /* global symbol table */
   PyObject *f_locals;      /* local symbol table */
   PyObject **f_valuestack; /* points after the last local */
   PyObject **f_stacktop;   /* current top of valuestack */
   PyObject *f_trace;       /* trace function */

   /* used for swapping generator exceptions */
   PyObject *f_exc_type, *f_exc_value, *f_exc_traceback;

   PyThreadState *f_tstate; /* call stack's thread state */
   int f_lasti;             /* last instruction if called */
   int f_lineno;            /* current line # (if tracing) */
   int f_iblock;            /* index in f_blockstack */

   /* for try and loop blocks */
   PyTryBlock f_blockstack[CO_MAXBLOCKS];

   /* dynamically: locals, free vars, cells and valuestack */
   PyObject *f_localsplus[1]; /* dynamic portion */
} PyFrameObject;

We see various fields used to store the state of this invocation of the code object, as well as to maintain the call stack’s structure. Both in the C-API and in Python these fields are all prefixed by f_, though not all the fields of the C structure PyFrameObject are exposed in the pythonic representation. I hope some of the fields are intuitively clear to you, since these fields relate to many topics we have already covered. We already mentioned the relation between frame and code objects, so the f_code field of every frame points to precisely one code object. Insofar as structure goes, frames point backwards, thus forming a stack (f_back), as well as pointing “root-wards” in the interpreter state/thread state/call stack structure via their thread state (f_tstate), as explained here. Finally, since you always execute Python code in the context of three namespaces (as discussed there), frames have the f_builtins, f_globals and f_locals fields to point to these namespaces. These are the fields (I hope) we already know.

Before we dig into the other fields of a frame object, please notice frames are a variable size Python object (they are a PyObject_VAR_HEAD). The reason is that when a frame object is created, it should be dynamically allocated to be large enough to contain references (pointers, really) to the locals, cells and free variables used by its code object, as well as the value stack needed by the code object’s ‘deepest’ branch. Indeed, the last field of the frame object, f_localsplus (locals plus cells plus free variables plus value stack…), is a dynamic array where all these references are stored. PyFrame_New will show you exactly how the size of this array is computed.
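
Here’s a condensed sketch of that computation (edited from PyFrame_New; error handling and the allocation optimizations described towards the end of this article are omitted):

    Py_ssize_t ncells = PyTuple_GET_SIZE(code->co_cellvars);
    Py_ssize_t nfrees = PyTuple_GET_SIZE(code->co_freevars);
    Py_ssize_t extras = code->co_stacksize + code->co_nlocals
                                           + ncells + nfrees;

    f = PyObject_GC_NewVar(PyFrameObject, &PyFrame_Type, extras);

    /* the value stack begins right after the locals, cells and
       free variables */
    f->f_valuestack = f->f_localsplus + code->co_nlocals + ncells + nfrees;
    f->f_stacktop = f->f_valuestack;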

If the previous paragraph doesn’t sit well with you, I suggest you read the descriptions I wrote for co_nlocals, co_cellvars, co_freevars and co_stacksize: during evaluation, all these ‘dead’ parts of the inert code object come to ‘life’ in space allocated at the end of the frame. As we’ll probably see in the next article, when the frame is evaluated, these references at the end of the frame will be used to get (or set) “fast” local variables, free variables and cell variables, as well as the values on the value stack (“fast” locals were explained when we discussed namespaces). Looking back at the commented declaration above and given what I said here, I believe you should now understand f_valuestack, f_stacktop and f_localsplus.

We can now look at f_blockstack, keeping in mind the terminology clarification from before. As you can maybe imagine, compound statements sometimes require state to be evaluated. If we’re in a loop, we need to know where to go in case of a break or a continue. If we’re raising an exception, we need to know where the innermost enclosing handler is (the suite of the closest except header, in more formal terms). This state is stored in f_blockstack, a fixed-size stack of PyTryBlock structures which keeps the current compound statement state for us (PyTryBlock is not just for try blocks; it has a b_type field to let it handle various types of compound statements’ suites). f_iblock is an offset to the last allocated PyTryBlock in the stack. If we need to bail out of the current “block” (that is, the current clause), we can pop the block stack and find the new offset in the bytecode from which we should resume evaluation in the popped PyTryBlock (look at its b_handler and b_level fields). A somewhat special case is a raised exception which exhausts the block stack without being caught; as you can imagine, in that case a handler will be sought in the block stacks of previous frames on the call stack.

All this should easily click into place now if you read three code snippets. First, look at this disassembly of a for statement (this would look strikingly similar for a try statement):

>>> def f():
...     for c in 'string':
...             my_global_list.append(c)
...
>>> diss(f)
 2           0 SETUP_LOOP              27 (to 30)
             3 LOAD_CONST               1 ('string')
             6 GET_ITER
       >>    7 FOR_ITER                19 (to 29)
            10 STORE_FAST               0 (c)

 3          13 LOAD_GLOBAL              0 (my_global_list)
            16 LOAD_ATTR                1 (append)
            19 LOAD_FAST                0 (c)
            22 CALL_FUNCTION            1
            25 POP_TOP
            26 JUMP_ABSOLUTE            7
       >>   29 POP_BLOCK
       >>   30 LOAD_CONST               0 (None)
            33 RETURN_VALUE
>>>

Next, look at how the opcodes SETUP_LOOP and POP_BLOCK are implemented in ./Python/ceval.c. Notice that SETUP_LOOP, SETUP_EXCEPT and SETUP_FINALLY are rather similar: they all push a block matching the relevant suite onto the block stack, and they all rely on the same POP_BLOCK:

       TARGET_WITH_IMPL(SETUP_LOOP, _setup_finally)
       TARGET_WITH_IMPL(SETUP_EXCEPT, _setup_finally)
       TARGET(SETUP_FINALLY)
       _setup_finally:
           PyFrame_BlockSetup(f, opcode, INSTR_OFFSET() + oparg,
                      STACK_LEVEL());
           DISPATCH();

       TARGET(POP_BLOCK)
           {
               PyTryBlock *b = PyFrame_BlockPop(f);
               UNWIND_BLOCK(b);
           }
           DISPATCH();

Finally, look at the actual implementation of ./Objects/frameobject.c: PyFrame_BlockSetup and ./Objects/frameobject.c: PyFrame_BlockPop:

void
PyFrame_BlockSetup(PyFrameObject *f, int type, int handler, int level)
{
   PyTryBlock *b;
   if (f->f_iblock >= CO_MAXBLOCKS)
       Py_FatalError("XXX block stack overflow");
   b = &f->f_blockstack[f->f_iblock++];
   b->b_type = type;
   b->b_level = level;
   b->b_handler = handler;
}

PyTryBlock *
PyFrame_BlockPop(PyFrameObject *f)
{
   PyTryBlock *b;
   if (f->f_iblock <= 0)
       Py_FatalError("XXX block stack underflow");
   b = &f->f_blockstack[--f->f_iblock];
   return b;
}

There, now you’re smart. If you keep the terminology straight, f_blockstack turns out to be rather simple, at least in my book.

We’re left with the rather esoteric fields, some simpler, some a bit more arcane. In the ‘simpler’ range we have f_lasti, an integer offset into the bytecode of the last instruction executed (initialized to -1, i.e., we haven’t executed any instruction yet). This index lets us iterate over the opcodes in the bytecode stream. Heading towards the ‘more arcane’ area we see f_trace and f_lineno. f_trace is a pointer to a tracing function (see sys.settrace; think implementation of a tracer or a debugger). f_lineno contains the line number of the line which caused the generation of the current opcode; it is valid only when tracing (otherwise use PyCode_Addr2Line). Last but not least, we have three exception fields (f_exc_type, f_exc_value and f_exc_traceback), which are rather particular to generators, so we’ll discuss them when we discuss that beast (there’s a longer comment about these fields in ./Include/frameobject.h if you’re curious right now).

On a parting note, we can mention when frames are created. This happens in ./Objects/frameobject.c: PyFrame_New, usually called from ./Python/ceval.c: PyEval_EvalCodeEx (and ./Python/ceval.c: fast_function, a specialized optimization of PyEval_EvalCodeEx). Frame creation occurs whenever a code object should be evaluated, which is to say when a function is called, when a module is imported (the module’s top-level code is executed), whenever a class is defined, for every discrete command entered in the interactive interpreter, when the builtins eval or exec are used and when the -c switch is used (I didn’t absolutely verify this is a 100% exhaustive list, but I think it’s rather complete).

Looking at the list in the previous paragraph, you probably realized frames are created very often, so two optimizations are implemented to make frame creation fast: first, code objects have a field (co_zombieframe) which allows them to remain associated with a ‘zombie’ (dead, unused) frame object even when they’re not evaluated. If a code object was already evaluated once, chances are it will have a zombie frame ready to be reanimated by PyFrame_New and returned instead of a newly allocated frame (trading some memory to reduce the number of allocations). Second, allocated and entirely unused stack frames are kept in a special free-list (./Objects/frameobject.c: free_list), frames from this list will be used if possible, instead of actually allocating a brand new frame. This is all kindly commented in ./Objects/frameobject.c.
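
In rough strokes, the allocation logic at the top of PyFrame_New goes like this (heavily edited; among other things, a recycled frame may need to be resized if it’s too small for the code object at hand):

    if (code->co_zombieframe != NULL) {
        /* reanimate the zombie frame kept by this code object */
        f = code->co_zombieframe;
        code->co_zombieframe = NULL;
        _Py_NewReference((PyObject *)f);
    }
    else if (free_list != NULL) {
        /* recycle a previously allocated, currently unused frame */
        f = free_list;
        free_list = free_list->f_back;
        _Py_NewReference((PyObject *)f);
    }
    else {
        /* no luck, really allocate a new frame */
        f = PyObject_GC_NewVar(PyFrameObject, &PyFrame_Type, extras);
    }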

That’s it, I think. Oh, wait: if you’d like to play with frames in your interpreter, take a look at the inspect module, maybe especially this part of it. In gdb, I used a rather crude method to look at the call stack (I dereferenced the global variable interp_head and went on from there). There’s probably a better way, but I didn’t bother looking. Now that’s really it. In fact, I believe at last we covered enough material to analyze ./Python/ceval.c: PyEval_EvalFrameEx. Ladies and Gentlemen, we can read it. We have the technology.

But, alas, we’ll only do it in the next post, and who knows when that will arrive. And until it does, do good, avoid doing bad and keep clearing your mind. Siddhārtha Gautama said that, and I tend to think that if that particular bloke lived today he’d have some serious Python-Fu going for him, so heed his words.


I would like to thank Nick Coghlan for reviewing this article; any mistakes that slipped through are my own.
