zsh and virtualenv

2010/10/14 § 8 Comments

A week ago or so I finally got off my arse and did the pragmatic programmer thing, setting aside those measly ten minutes to check out virtualenv (well, I also checked out buildout, but I won’t discuss it in this post). I knew pretty much what to expect, but I wanted to get my hands dirty so I could see what I’d been missing out on for so long (and indeed I promptly kicked myself for not doing it sooner, yada yada, you probably know the drill about well-known-must-know-techniques-and-tools-that-somehow-you-don’t-know). Much as I liked virtualenv, two things bothered me about environment activation. First, I found typing ‘source bin/activate’ (or similar) cumbersome; I wanted something short and snazzy that worked from anywhere inside the virtualenv (it makes sense to me to say that I’m ‘in’ a virtualenv when my current working directory is somewhere under the virtualenv’s directory). Note that being ‘in’ a virtualenv isn’t the same as activating it; you can change directory from virtualenv foo to virtualenv bar, and virtualenv foo will remain active. Indeed, this was the second problem I had: I kept forgetting to activate my virtualenv as I started using it, or to deactivate the old one as I switched from one to another.
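The ‘in’ vs. ‘active’ distinction can be stated in shell terms; here’s a tiny sketch (the helper names are made up for illustration): a virtualenv is *active* when its activate script has exported $VIRTUAL_ENV, while you are *in* one when $PWD sits under the virtualenv’s directory.

```shell
# Sketch only; these helper names don't appear in the post itself.
# "in" a virtualenv: current directory is under the given env directory
in_virtualenv()     { case "$PWD/" in "$1"/*) return 0 ;; esac; return 1; }
# "active" virtualenv: its activate script exported VIRTUAL_ENV
virtualenv_active() { [ -n "$VIRTUAL_ENV" ]; }
```

The whole point of the post is that these two predicates can disagree, and the prompt hack below makes the disagreement visible.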

zsh to the rescue. You may recall that I already mentioned the tinkering I’ve done to make it easier to remember my current DVCS branch. Briefly, I have a function called _rprompt_dvcs which is evaluated whenever zsh displays my prompt; if I’m in a git/Mercurial repository, it sets my right prompt to the name of the current branch in blue/green. You may also recall that while I use git itself to tell me whether I’m in a git repository at all and which branch I’m on (using git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'), I had to resort to a small C program (fast_hg_root) to decide whether I’m in a Mercurial repository, after which I parse the branch manually with cat. As I said in the previous post about hg and prompts, I’m not into giving hg grief about speed vs. git, but when it comes to the prompt things are different.
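For readers who haven’t seen the earlier post, the hook could look roughly like this — a hedged sketch, not the original function (fast_hg_root is the author’s helper binary; the colors follow the text, blue for git and green for hg):

```shell
# Sketch of a right-prompt DVCS hook for ~/.zshrc (zsh syntax).
function _rprompt_dvcs {
    local branch hg_root
    # git tells us itself, failing silently outside a repository
    branch="$(git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/')"
    if [ -n "$branch" ]; then
        RPROMPT="%{$fg[blue]%}${branch}%{$reset_color%}"
        return
    fi
    # otherwise, ask fast_hg_root for a Mercurial root and read the branch file
    if hg_root="$(fast_hg_root 2> /dev/null)"; then
        branch="$(cat "$hg_root/.hg/branch" 2> /dev/null)"
        RPROMPT="%{$fg[green]%}(${branch:-default})%{$reset_color%}"
        return
    fi
    RPROMPT=""
}
precmd_functions+=(_rprompt_dvcs)   # re-evaluate before every prompt
```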

With this background in mind, I was perfectly armed to solve my woes with virtualenv. First, I changed fast_hg_root to be slightly more generic and search for a user-specified “magic” filename upwards from the current working directory (I called the outcome walkup, it’s really simple and nothing to write home about…). For example, to mimic fast_hg_root with walkup, you’d run it like so: $ walkup .hg. Using $ walkup bin/activate to find my current virtualenv (if any at all), I could easily add the following function to my zsh environment:

act () {
        if [ -n "$1" ]
        then
                if [ ! -d "$1" ]
                then
                        echo "act: $1: no such directory"
                        return 1
                fi
                if [ ! -e "$1/bin/activate" ]
                then
                        echo "act: $1 is not a virtualenv"
                        return 1
                fi
                # deactivate is a shell function left behind by a
                # previously sourced activate script, if any
                if which deactivate > /dev/null
                then
                        deactivate
                fi
                cd "$1"
                source bin/activate
        else
                virtualenv="$(walkup bin/activate)"
                if [ $? -ne 0 ]
                then
                        echo "act: not in a virtualenv"
                        return 1
                fi
                source "$virtualenv"/bin/activate
        fi
}
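walkup itself isn’t shown in the post, but its behavior is simple enough to sketch in pure shell (this is an illustrative reimplementation, not the original binary): climb from the current directory toward /, print the first ancestor that contains the requested “magic” path, and fail if none does.

```shell
# Hypothetical pure-shell equivalent of the walkup helper (POSIX sh).
walkup() {
    dir="$PWD"
    while [ "$dir" != / ]; do
        if [ -e "$dir/$1" ]; then
            printf '%s\n' "$dir"
            return 0
        fi
        dir="$(dirname "$dir")"    # one level up
    done
    # finally, check the root itself
    [ -e "/$1" ] && printf '/\n'
}
```

With this, $ walkup .hg prints the repository root if you’re inside a Mercurial working copy, and $ walkup bin/activate prints the enclosing virtualenv, if any.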

Now I can type $ act anywhere I want in a virtualenv, and that virtualenv will become active; this saves figuring out the path to bin/activate and ending up typing something ugly like $ source ../../bin/activate. If you want something that works without a special binary on your host, there’s also a pure-shell version of the same function in the snippet below.

function act() {
    if [ -n "$1" ]; then
        if [ ! -d "$1" ]; then
            echo "act: $1: no such directory"
            return 1
        fi
        if [ ! -e "$1/bin/activate" ]; then
            echo "act: $1 is not a virtualenv"
            return 1
        fi

        if which deactivate > /dev/null; then
            deactivate
        fi
        cd "$1"
        source bin/activate
    else
        stored_dir="$(pwd)"
        # walk up towards / looking for bin/activate
        while [ ! -f bin/activate ]; do
            if [ "$(pwd)" = / ]; then
                echo "act: not in a virtualenv"
                cd "$stored_dir"
                return 1
            fi
            cd ..
        done
        source bin/activate
        cd "$stored_dir"
    fi
}

This was nice, but it only solved half the problem: I still kept forgetting to activate the virtualenv, or moving out of a virtualenv and forgetting I had left it activated (this can cause lots of confusion; for example, if you’re simultaneously trying out this, this, this or that django-facebook integration module, more than one of them thinks facebook is a good namespace to take!). To remind me, I wanted my left prompt to reflect my virtualenv in the following manner (much like my right prompt reflects my current git/hg branch, if any):

  1. If I’m not in a virtualenv and no virtualenv is active, do nothing.
  2. If I’m in a virtualenv and it is not active, display its name as part of the prompt in white.
  3. If I’m in a virtualenv and it is active, display its name as part of the prompt in green.
  4. If I’m not in a virtualenv but some virtualenv is active, display its name in yellow.
  5. Finally, if I’m in one virtualenv but another virtualenv is active, display both their names in red.

So, using walkup, I wrote the virtualenv parsing functions:

function active_virtualenv() {
    if [ -z "$VIRTUAL_ENV" ]; then
        # no virtualenv is currently active
        return
    fi

    basename "$VIRTUAL_ENV"
}

function enclosing_virtualenv() {
    if ! which walkup > /dev/null; then
        return
    fi
    virtualenv="$(walkup bin/activate)"
    if [ -z "$virtualenv" ]; then
        # not inside any virtualenv's directory
        return
    fi

    basename "$(grep VIRTUAL_ENV= "$virtualenv"/bin/activate | sed -E 's/VIRTUAL_ENV="(.*)"$/\1/')"
}
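To see what that grep/sed pipeline actually does, here is the extraction step isolated on a sample line (the path is made up for illustration): virtualenv writes a VIRTUAL_ENV="..." assignment into bin/activate, and the sed expression strips everything but the path.

```shell
# What enclosing_virtualenv parses out of bin/activate (sample input).
line='VIRTUAL_ENV="/home/user/envs/myproject"'
env_path="$(printf '%s\n' "$line" | sed -E 's/VIRTUAL_ENV="(.*)"$/\1/')"
basename "$env_path"   # prints: myproject
```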

All that remained was to change my lprompt function to look like so (remember I have setopt prompt_subst on):

function _lprompt_env {
    local active="$(active_virtualenv)"
    local enclosing="$(enclosing_virtualenv)"
    if [ -z "$active" -a -z "$enclosing" ]; then
        # no active virtual env, no enclosing virtualenv, just leave
        return
    fi
    if [ -z "$active" ]; then
        local color=white
        local text="$enclosing"
    else
        if [ -z "$enclosing" ]; then
            local color=yellow
            local text="$active"
        elif [ "$enclosing" = "$active" ]; then
            local color=green
            local text="$active"
        else
            local color=red
            local text="$active":"$enclosing"
        fi
    fi
    local result="%{$fg[$color]%}${text}$rst "
    echo -n $result
}

function lprompt {
    local col1 col2 ch1 ch2
    col1="%{%b$fg[$2]%}"
    col2="%{$4$fg[$3]%}"
    ch1=$col1${1[1]}
    ch2=$col1${1[2]}

    local _env='$(_lprompt_env)'

    local col_b col_s
    col_b="%{$fg[green]%}"
    col_s="%{$fg[red]%}"

    PROMPT="\
$bgc$ch1\
$_env\
%{$fg_bold[white]%}%m:\
$bgc$col2%B%1~%b\
$ch2$rst \
$col2%#$rst "
}

A bit lengthy, but not very difficult. I suffered a bit until I figured out that I should wrap the color escapes in the result of _lprompt_env in zsh’s %{...%} markers (like so: "%{$fg[$color]%}${text}$rst "), or else the ANSI color escapes are counted for cursor-positioning purposes and screw up the prompt’s alignment. Meh. Also, remember to set VIRTUAL_ENV_DISABLE_PROMPT=True somewhere, so virtualenv’s simple/default prompt manipulation functionality won’t kick in and screw things up for you, and we’re good to go.
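Putting it together, the ~/.zshrc wiring implied by the post looks roughly like this (a sketch; the argument values passed to lprompt are illustrative, since the real call isn’t shown in the text):

```shell
# Sketch of the ~/.zshrc glue for the prompt functions above (zsh).
autoload -U colors && colors            # defines $fg, $fg_bold, $bold_color, ...
setopt prompt_subst                     # expand $(_lprompt_env) at display time
export VIRTUAL_ENV_DISABLE_PROMPT=True  # keep virtualenv's own prompt tweak off
lprompt '[]' blue white "$bold_color"   # install PROMPT with the env segment
```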

The result looks like so (I still don’t know how to do a terminal-“screenshot”-to-html, here’s a crummy png):

Voila! Feel free to use these snippets, and happy zshelling!

Eulogy to a server

2010/10/01 § 2 Comments

You don’t know it, but I’ve started writing this blog several times before it actually went live, and every time I scrapped whatever post I started with (the initial run was on blogger.com). I just didn’t think those posts were all too interesting; they were about my monstrous home server, donny. Maybe this is still not interesting, but I’m metaphorically on the verge of tears and I just have to tell someone what happened, to repent for my horrible sin. You may not read if you don’t want to.

I bought donny about 2.5-3 years ago, to replace my aging home storage server (I had about 3x250GB at the time, no RAID). There’s not much to say about donny’s hardware (Core 2 Duo, 2GB of RAM, Asus P5K-WS motherboard), other than the gargantuan CoolerMaster Stacker 810 chassis with room for 14 (!) SATA disks. Initially I bought 8×0.5TB SATA Hitachi disks for it, and added more as I had the chance. I guess I bought it because at the time I’d hang around disks all day long; I must’ve felt the need to compensate for something (my job at the time was mostly around software, but still, you can’t ignore the shipping crates of SATA disks in the lab).

Anyway, for most of its life donny ran OpenSolaris. One of our customers had a big ZFS deployment, I’ve always liked Solaris most of all the big Unices (I never thought it really better than Linux, it just Sucked Less™ than the other big iron Unices), I totally drank the Kool-Aid about “ZFS: the last word in File System” (notice how the first Google hit for this search term is “Bad Request” :) and dtrace really blew me away. So I chose OpenSolaris. Some of those started-but-never-finished posts were about whether I was happy with OpenSolaris and ZFS or not; I never found them interesting enough to even finish them. So even if I don’t wanna discuss that in particular, it should be noted that, judging by how I voted with my feet, I ended up migrating donny to Ubuntu 10.04.1/mdadm RAID5/ext4 when my wife and I got back from our long trip abroad.

Migration was a breeze; the actual migration process convinced me I’d made the right choice in this case. Over my time with ZFS (both at work and at home) I realized it’s probably good, but certainly not magical and not the end of human suffering with regard to storage. In exchange for giving up zfs and dtrace I received the joys of Ubuntu, most notably a working package management system and sensible defaults that make life so much easier, along with the most vibrant eco-system there is. I bought donny 4×2.0TB SATA WD Caviar Green disks, did a rolling upgrade of the data while relying on zfs-fuse (it went well, despite a small and old bug), and overall the downtime was less than an hour for the installation of the disks. At the time of the disaster, donny held one RAID5 array made of 4x2TB, one RAID5 array made of 4x0.5TB, one soon-to-be-made RAID5 array made of 3x1TB+1×1.5TB (I bought a larger drive after one of the 1TB disks failed a while ago), and its two boot disks. I was happy. donny, my wife and me, one happy family. Until last night.

I was always eyeing donny’s small boot disks (what a waste of room… and with all these useless SATA disks I’ve accumulated over the years and have lying about…), so last night I wanted to break the 2x80GB mirror and roll-upgrade to a 2x1TB boot configuration, planning on using the extra space for… well, I’ll be honest, I don’t know for what. I’ll admit it – I got a bit addicted to seeing the TB suffix near the free space column of df -h at home (at work you can see better suffixes :). I just have hardware lying around, and I love never deleting ANYTHING, and I love keeping all the .isos of everything ever (Hmm… RHEL3 for Itanium… that must come in handy some day…) and keeping an image of every friend and relative’s Windows computer I ever fixed (it’s amazing how much time this saves), and never keeping any media I buy in plastic… and, well, the fetish of just having a big server. Heck, it sure beats farmville.

So, indeed, last night I broke that mirror and installed that 1TB drive, and this morning I started re-mirroring the boot, and while I was at it I saw some of the directory structure was wrong so I redistributed stuff inside the RAIDs, and all the disks were whirring merrily in the corner of the room, and I was getting cold so I turned off the AC, and suddenly donny starts beeping (I didn’t even remember I had installed that pcspkr script for mdadm) and I get a flurry of emails about disk failures in the md devices. WTF? I quickly realized that donny was practically boiling hot (SMART read one of the disks at 70 degrees Celsius), at which point I did an emergency shutdown and realized… that last night I had disconnected the power cable running from the PSU to several fans, forgot to reconnect it, and had now effectively cooked my server. Damn.

I’m not sure what to do now. I still have some friends who know stuff about harddisks (like, know the stuff you have to sign NDAs with the disk manufacturers in order to know), and I’m picking my network’s brains about what to do next. Basically, from what I hear so far, I should keep donny off, let the disks cool down, be ready with lots of room on a separate host to copy stuff onto, boot it up in a cool room, take the most critical stuff out, and then do whatever, it doesn’t matter, cuz the disks are dead even if they seem alive. I’m told never to trust any of the disks that were inside during the malfunction (that’s >$1,000USD worth of disks…); once a disk has reached 70 degrees, or even far less, don’t get near it, even if it’s new. Admittedly, these guys are used to handling enterprise disk faults, where $1,000USD in hardware costs (and even many times that amount) is nothing compared to data loss, but this is the gist of what I’m hearing so far. If you have other observations, let me know. It’s frustratingly difficult to get reliable data about disk failures on the Internet; I know just what to do in case of logical corruption of any sort, but I don’t know precisely what to do in a case like this, or in case of a controller failure, or a head crash, and so on, and so forth. I know a lot of it is about luck, but what’s the best way to give donny the highest chance of survival?

On a parting note, I’ll add that I’m a very sceptical kind of guy, but when it comes to these things I’m rather mystical. It all stems from my roots as a System Administrator; what else can comfort a lonely 19-year-old sysadmin trying to salvage something from nothing in a cold server room at 03:27AM on a Saturday? So now I blame all of this on the name I gave donny. I name all my hosts at home after characters from The Big Lebowski (I’m typing this on Dude, my laptop), and I called the server donny. The email address I gave it (so it could send me those FzA#$%! S.M.A.R.T reports it never did!) was named Theodore Donald Kerabatsos. The backup server, which is tiny compared to donny and doesn’t hold nearly as much stuff as I’d like to have back now, is called Francis Donnelly. The storage pools (and then RAID volumes) were called folgers, receptacle, modest and cookies (if you don’t understand, you should have paid more attention to Donny’s funeral in The Big Lebowski). And, indeed, as I predicted without knowing it, it ended up being friggin’ cremated. May Stallman bless its soul.

I guess I’m a moron for not thinking about this exact scenario; I kinda assumed smartmontools would be SMART (ha!) enough to shut down when the disks reach 50 degrees, and maybe there is such a setting and I just didn’t enable it… I guess by now it doesn’t matter. I’m one sad hacker. I can’t believe I did this to myself.

Where Am I?

You are currently viewing the archives for October, 2010 at NIL: .to write(1) ~ help:about.
