Aug 14, 2014
 

Over the next few months, the labscript suite will undergo some fairly large changes. These are generally along the lines of making it more maintainable for the future. The broad goals are as follows:

  • Port remaining GTK programs to Qt
  • Port to Python 3
  • Re-architect labscript’s instruction and timing model
  • Make installation more painless

Port to Qt

GTK, though billed as a cross-platform GUI toolkit, proved not to be the right solution for us. Development of the Python bindings has lagged on Windows, and GTK itself has some long-outstanding Windows bugs that have taken serious effort to work around. The rest of the Python world appears to have settled on Qt as the GUI toolkit of choice, so we’ll be porting the remaining GTK GUI programs to Qt using the PyQt4 bindings.

Port to Python 3

Python 3 appears to have reached an inflection point in its adoption curve, and there no longer seems to be any reason not to use it. All the libraries we rely on now appear to support Python 3, so now is the time to port. The plan, for the moment, is to write code that works on both Python 2 and 3, but if this proves difficult we will target Python 3 only.
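
The usual approach to dual-version code (a sketch of the general technique, not a settled convention for our codebase) is to begin every module with __future__ imports, so that Python 2 adopts Python 3 semantics for the features that differ most:

    # A minimal sketch of writing 2-and-3 compatible code: these imports
    # give Python 2 the Python 3 behaviour of division, print and strings.
    from __future__ import division, print_function, unicode_literals

    print(7 / 2)        # 3.5 on both Python 2 and Python 3
    print('labscript')  # a unicode literal on both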

Re-architect labscript’s instruction/timing model

The original design of labscript assumed that there was only a single pseudoclock, and that each experiment shot had a fixed duration. Since then we’ve added the capability for multiple pseudoclocks, which trigger each other to begin, and for ‘waits’, which pause the experiment mid-shot until a trigger is received. These have added a fair bit of complexity. Rather than make the data structures more complex to accommodate all this, labscript’s compilation process has instead become somewhat destructive: it replaces old data with new data as it is processed. Applying time offsets to account for triggering delays, for example, does not preserve the original time at which the instruction was issued, but overwrites it. This is a problem for many reasons. It limits where you can put code in the future: if data gets replaced as processing goes on, then your processing must happen before the data it needs is replaced, and after other data it needs is produced.
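
As a schematic illustration (the dictionary keys here are made up, not labscript’s actual instruction format), compare a destructive time offset with one that preserves the original time:

    # Schematic only -- the keys are illustrative, not labscript's real
    # instruction format.

    # Destructive: once this runs, no later stage (and no error message)
    # can recover the time the user originally wrote.
    def apply_trigger_delay_destructively(instruction, delay):
        instruction['time'] += delay

    # Non-destructive: derive a new field and keep the original intact,
    # so other code can run before or after this step without ambiguity.
    def apply_trigger_delay(instruction, delay):
        instruction['scheduled_time'] = instruction['time'] + delay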

It also leaves labscript unable to provide informative error messages when it hasn’t kept around enough information to point the user to the source of a problem. Below is a post I made to our issue tracker (private at the time; we now use Bitbucket) that goes through some of this in detail:

I’m currently implementing a device here in Tübingen, and have run into some conflicts with the current design of labscript.

I’m going to do a hacky workaround for the moment, but thought I’d outline a longer-term plan for fixing this, as well as a few other things I don’t really like about labscript, to make it more maintainable. None of this should affect the actual labscript API.

One problem is that we assume devices are dumb: that they can’t evaluate ramps themselves and are only fed lists of numbers. This is true for our devices, and I think a good thing to stick to when choosing hardware. But it’s not always true, and it isn’t for the device we’re using in Tübingen (it evaluates its own ramps internally from ramp parameters, at a fixed sample rate). We also initially assumed in labscript’s design that instructions were dumb: simply a number at a certain time, or perhaps a function to be evaluated later at certain times. But our instructions have become smarter, taking on unit conversions, rounding of timepoints, and time shifts to account for triggering delays.

So really, all I want is to be able to store some extra metadata with instructions. Instructions are currently dictionaries. I could store metadata as more dictionary keys, but I think that making an Instruction class would solve a lot of existing problems.

For example, at the moment we round all times to the nearest 0.1 ns or so, and this happens in Output.add_instruction(). It would be cleaner for this to happen in the __init__() method of an Instruction class. Similarly, we run Output.apply_calibration on instructions, and this runs in totally different parts of the code depending on whether the instruction is a ramp or not. There is the potential for subtle bugs here, as labscript occasionally uses data from ramps to create normal points (like the one at the end of a ramp). Unit conversions, I think, would be better performed internally by passing a unit calibration to an evaluate() method of an Instruction class.
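
A rough sketch of what I have in mind (names and details are illustrative, not a final design; to_derived() stands in for whatever method a calibration object would actually provide):

    # A rough sketch, not a final design: rounding happens once, in
    # __init__(), and unit conversion happens in evaluate(), leaving the
    # stored base-unit value untouched.
    TIME_RESOLUTION = 1e-10  # round times to the nearest 0.1 ns

    class Instruction(object):
        def __init__(self, t, value):
            # Quantise the timepoint at creation, instead of in
            # Output.add_instruction():
            self.t = round(t / TIME_RESOLUTION) * TIME_RESOLUTION
            self.value = value  # always kept in base units

        def evaluate(self, calibration=None):
            # Return the value, converting units if a calibration is
            # given; the same code path serves ramps and single points.
            # to_derived() is a placeholder method name.
            if calibration is None:
                return self.value
            return calibration.to_derived(self.value)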

This would allow you to keep the old information around, and thus determine the values in base or derived units at any time during labscript’s multi-step compilation process. That is a better situation than having to know whether the values have already been converted at a given point in the compilation, and being forced to place your code earlier or later in the compilation cycle accordingly. At the moment, if you make a mistake while modifying labscript itself, you might accidentally use the unconverted values instead of the converted ones, or vice versa, converting values twice or not at all. Such a mistake would not be immediately visible (bugs like this are probably already present, I would guess). By storing more data in the Instruction class itself rather than in the code driving it, we can detect problems like this sooner (for example, an AttributeError if you try to access the converted values before the conversion has happened) and debug them more easily.
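
Continuing the sketch above: if conversion stores its result as a new attribute rather than overwriting the base-unit value, using the converted value too early fails loudly instead of silently:

    # The converted value only exists once apply_calibration() has run,
    # so premature access raises AttributeError rather than quietly
    # returning base-unit numbers. to_derived() is again a placeholder.
    class Instruction(object):
        def __init__(self, t, value):
            self.t = t
            self.value = value  # base units, never overwritten

        def apply_calibration(self, calibration):
            self.converted_value = calibration.to_derived(self.value)

    instruction = Instruction(t=1.0, value=5.0)
    # instruction.converted_value  # AttributeError: not yet converted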

There is other information that gets lost during compilation due to this unspoken policy of replacing old data with new data as it is processed (we never explicitly decided on this; it was just natural to try to keep the data structures simple). One example is time offsets. Error messages in labscript that mention times are currently slightly wrong in experiments with multiple pseudoclocks, because instructions are offset in time to account for triggering delays and the original times are not kept around. We should of course keep them. Having the instruction know what has already happened to it means that labscript development is not so sensitive to mistakes in external code that is supposed to keep track of this stuff.

Other data we should store with instructions pertains to the code context in which the instruction was created. We should store a full traceback, so that error messages raised by labscript during compilation can print two tracebacks: one pointing to where in compilation the error was raised, and the other pointing to where in the user’s labscript code the offending instruction(s) were originally created.
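
A sketch of how that could look (the Instruction class is hypothetical, as above, but traceback.format_stack() is the standard-library call that would do the capturing):

    import traceback

    class Instruction(object):
        def __init__(self, t, value):
            self.t = t
            self.value = value
            # Record the call stack at creation time, minus this frame,
            # so compilation errors can point back to the user's code:
            self.creation_traceback = ''.join(traceback.format_stack()[:-1])

    # Later, during compilation, an error could then say something like:
    # raise LabscriptError('conflicting instructions at t=%f.\n'
    #                      'Instruction created at:\n%s'
    #                      % (instruction.t, instruction.creation_traceback))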

This all starts to get a bit complex if instructions are dictionaries, but is simple if instructions become their own classes. I think it would be logical to push some methods, such as unit conversions and time offsets, into instruction classes rather than leaving them strewn throughout labscript’s compile sequence as they currently are.

A generic Instruction class also allows subclassing for devices with more intelligent programming, such as sending an array of data to an SLM, or parameters for a device to evaluate ramps or respond dynamically to events (like the sometimes-mentioned hypothetical Arduino that responds to MOT fluorescence with a digital pulse).
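
For instance, the Tübingen device’s self-evaluated ramps might be represented by a hypothetical subclass along these lines (all parameter names made up for illustration):

    class Instruction(object):  # minimal stub, as sketched above
        def __init__(self, t, value):
            self.t = t
            self.value = value

    class DeviceEvaluatedRamp(Instruction):
        def __init__(self, t, duration, initial, final):
            Instruction.__init__(self, t, initial)
            self.duration = duration
            self.final = final

        def evaluate(self, calibration=None):
            # No samples are computed in software: return the parameters
            # the device will use to evaluate the ramp at its fixed rate.
            return (self.t, self.duration, self.value, self.final)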

This is, in coding terms, not very hard at all (or so it seems to me right now). But because the change would affect so many interacting parts of the labscript code I would not want to apply it without being able to test fairly extensively. And I’m kind of in a hurry. So I won’t do this yet, and I’m posting this issue to just outline my intentions and invite discussion.

So yeah, this post is just a mind-dump of my intentions. I imagine this won’t be too controversial. Some of labscript’s recent changes have made it somewhat hard to maintain, so I’ve been thinking about something along these lines for a while.

For the moment I’m instead writing an Instruction class that looks like a dictionary externally, so that labscript interacts with it as normal, but in which I can store a bunch of metadata for the device I’m implementing!
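
Roughly like this (a sketch of the stopgap, not the eventual design):

    # A dict subclass that labscript treats as an ordinary instruction
    # dictionary, with extra metadata tucked into an attribute that only
    # the new device's code looks at.
    class InstructionDict(dict):
        def __init__(self, *args, **kwargs):
            dict.__init__(self, *args, **kwargs)
            self.metadata = {}

    instruction = InstructionDict({'time': 1.0, 'value': 5.0})
    instruction['time']                        # behaves like a normal dict
    instruction.metadata['ramp_params'] = (0.5, 'linear')  # extra data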

Make installation more painless

Our install process is needlessly complex. Many of our modules and packages, probably all of them, could be distributed as proper Python packages rather than being cloned via Mercurial. The install process should also create directories and configuration files across the system rather than have the user do so manually. So we’d like to make an installer that automates as much as possible.
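
For instance, each component might eventually ship a standard setuptools recipe along these lines (a sketch only; the name, version and dependency list are placeholders, not a decided layout):

    # A sketch of standard setuptools packaging; name, version and
    # dependencies here are placeholders.
    from setuptools import setup, find_packages

    setup(
        name='labscript',
        version='0.1.0',
        packages=find_packages(),
        install_requires=['numpy', 'h5py'],
    )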

Conclusion

I’ll be posting updates here as time goes on, and encourage discussion and feedback in the comments.

The current development effort is made possible by the Joint Quantum Institute, at which I am currently a research exchange visitor being paid a stipend by the University of Maryland.
