Aug 14, 2014
 

The Monash BEC labs use several rf sources, including the Novatech DDS9m.

The DDS9m has four outputs, the first two of which can be stepped through a pre-programmed table, with the remaining two only controllable by software commands (and hence static during buffered runs). We use a revision 1.3 board, which supports external timing of the table mode, as detailed in AN002.

The clocking of the DDS9m through the table entries is non-trivial; however, we have converged on an implementation which reliably updates the output on rising clock edges. Here I will detail the hardware involved, along with the software commands sent from the BLACS tab, and the resulting behaviour of the device.

Hardware

We have installed each DDS9m board in a box with a power supply, rf switches and rf amplifiers, creating what we refer to as a Supernova. Each channel of the DDS9m is fed into an rf switch, with one output port going to the amplifier, and the other going directly to the front panel for debugging (a convenience we don’t use often). The direction of the output (amplifier vs. test) is determined by a toggle switch for each channel. The on/off state of each rf switch is then determined by a second toggle switch per channel, which selects between on, off and TTL modes. In TTL mode, the state of the switch is determined by the high/low state of a TTL line connected to a BNC port on the front panel. We use these TTL lines to switch our rf during the experiment, since doing so saves lines in the DDS9m’s table and allows some control of the static channels.

To step through the table, we use a TTL clocking line, along with a “table enable” TTL line, to drive a tri-state driver chip, which in turn drives pins 10 and 14 of the DDS9m. The roles of the pins (for the rev. 1.3 boards and later) when in table mode, with hardware output enabled, are as follows: falling edges on pin 10 cause the next table entry to be loaded into the buffer, and rising edges on pin 14 cause the values in the buffer to be output. Since pin 14 is itself an output whenever hardware updating has not been enabled via the I e command, it should not be directly connected to pin 10, as this interferes with operation in manual mode (and possibly programming of the table?). For the same reason, you should not hold pin 14 high or low when not in hardware table mode, hence the use of a tri-state buffer.

We use an M74HC125B1R quad tri-state driver in the following configuration:
[Circuit schematic: M74HC125B1R tri-state buffer driving pins 10 and 14 of the DDS9m]

The clock line used to step through the table is sent to two channels of the buffer, which are connected to pins 10 and 14 of the DDS9m. Our table enable line passes through another channel of the buffer and has its output inverted by a transistor before feeding the disable lines of the other channels of the buffer. The result is that when the enable line is low, the buffer is disabled, meaning that the DDS9m pins see a high impedance, and importantly, are isolated from each other since they are on their own channels. When the enable pin is high, the buffer is enabled, and the signal from the clock line is sent to both pins.

Since the one clock line feeds both pins, when it goes high the output is updated, and when it goes low the next value is loaded into the buffer in preparation for the next clock tick.

Software implementation

Manual/static mode

When the Novatech BLACS tab is in static mode, the device operates in “automatic update” mode, having been sent the I a command. When front panel values are changed, the appropriate Fn, Vn, or Pn command is sent, and the output updates without the need for any extra hardware trigger.
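
For concreteness, here is roughly what that looks like on the wire. This is an illustrative sketch assuming a pyserial connection (the port name, the 19200 baud rate and the exact decimal formatting of the commands are assumptions to check against the manual), not the actual BLACS worker code:

    import serial

    # Assumed port and baud rate; check your system and the DDS9m manual.
    dds = serial.Serial('COM1', baudrate=19200, timeout=0.1)

    def send(command):
        """Send one command and return the board's one-line response."""
        dds.write((command + '\r\n').encode())
        return dds.readline()

    send('I a')            # automatic update mode
    send('F0 80.0000000')  # channel 0 frequency, in MHz
    send('V0 1023')        # channel 0 amplitude
    send('P0 0')           # channel 0 phase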

Table/buffered mode

When the Novatech BLACS tab transitions to buffered mode, it executes commands in a very specific order. First, the “static” channels (2 & 3) are programmed using the same method as manual mode, then the values for the buffered channels (0 & 1) are programmed in. Since it takes a considerable amount of time to program a full table over the slow RS232 connection, we have implemented “smart programming”, where the table to be programmed is compared with the last table programmed into the device. Only the lines which have changed are reprogrammed, overwriting those values in the DDS9m’s table but keeping all other previous values as they are. If you suspect that your table has become corrupt, you can always force a “fresh program”, where BLACS’s “smart cache” is cleared and the whole table is programmed.
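
The idea behind smart programming fits in a few lines. Here is a sketch (function and variable names are hypothetical, not the actual BLACS code):

    def smart_program(table, cache, write_line, fresh=False):
        """Send only the table lines that differ from the cached table.

        table and cache are lists of per-line instruction tuples, and
        write_line(i, entry) programs line i of the DDS9m's table.
        """
        for i, entry in enumerate(table):
            if fresh or i >= len(cache) or cache[i] != entry:
                write_line(i, entry)
        return list(table)  # this becomes the cache for the next shot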

Once the table has been written, we send the mt command to the board, which places it in table mode. Since we are still in I a auto-update mode at this point, the first entry of the table is not only loaded into the buffer, but output too. At this point, all channels on the board are outputting the values specified for their initial output time in the experiment about to run. We now send the I e command to switch to hardware updating, and wait for the experiment to begin.
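
In terms of the send() helper and smart_program() sketched above, the whole transition looks something like this (channel values are placeholders):

    send('F2 1.0000000')  # program the "static" channels 2 & 3 first
    send('F3 1.0000000')
    cache = smart_program(table, cache, write_line)  # then the table
    send('mt')   # table mode: the first entry is loaded AND output,
                 # since we are still in I a automatic update mode
    send('I e')  # hand updating over to the hardware clocking lines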

As the experiment starts, the table enable line must be set to go high at the board’s initial output time, and the clocking line will go high too. This initial rising edge will do nothing, since the device is already outputting the values in its buffer. The first falling edge will then load the second line of the table into the buffer, ready for the second rising edge to trigger the output of the second value, and so on.

On transition to manual at the end of the experiment, m 0 is sent to put the board back into manual mode, and I a is sent to turn automatic updating of the outputs back on. The final values of the experiment are then programmed in via the normal manual update mode to keep the front panel consistent with the output.

Future improvements

External referencing

At the moment, we rely on the internal reference crystal. The DDS9m does support external referencing, however it is non-trivial, as you have to play with the PLL settings yourself, and scale frequency commands accordingly. In principle, there is nothing stopping us from implementing this, however we haven’t had the need for an absolute calibration in our labs, since we mostly use the boards for controlling acousto-optic modulators or doing simple frequency sweeps for rf-induced evaporation.

Asymmetric clocking

Currently, labscript’s clocking signals are symmetric, meaning the line is high for as long as it is low. This limits the rate at which we can clock the DDS9m to approximately half its maximum. The manual specifies that after the falling edge which triggers the update of the buffer, you must allow up to 100 microseconds for the request to be processed. The low time of the clock must therefore be at least 100 microseconds before the rising edge which triggers the output to update. The rising edge, however, only has to be held high for 10 nanoseconds. With asymmetric clocking we could achieve an update rate of 9999 Hz, compared with the symmetric clock limit of 5000 Hz.
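
The arithmetic, for the record (timing figures as quoted from the manual above):

    min_low = 100e-6   # required low time after the falling edge (buffer load)
    min_high = 10e-9   # required high time for the rising edge (output update)

    symmetric_rate = 1 / (2 * min_low)           # 5000.0 Hz
    asymmetric_rate = 1 / (min_low + min_high)   # ~9999.0 Hz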

Implementing this would require all devices on a pseudoclock to report their minimum low and minimum high times, and careful calculations to ensure that all devices receiving the clock have their requirements met at all times (and should be discussed further elsewhere!).

Aug 14, 2014
 

Over the next few months, the labscript suite will undergo some fairly large changes. These are generally along the lines of making it more maintainable for the future. The broad goals are as follows:

  • Port remaining GTK programs to Qt
  • Port to Python 3
  • Re-architect labscript’s instruction and timing model
  • Make installation more painless

Port to Qt

GTK, though billed as a cross-platform GUI toolkit, proved not to be the right solution for us. Development of the Python bindings has lagged on Windows, and GTK itself has some long-outstanding Windows bugs that have taken serious effort to work around. The rest of the Python world appears to have settled on Qt as the GUI toolkit of choice, so we’ll be porting the remaining GTK GUI programs to Qt, using the PyQt4 bindings.

Port to Python 3

Python 3 appears to have reached an inflection point in its adoption curve, and there no longer seems to be any reason not to use it. All the libraries we rely on now appear to support Python 3, so now is the time to port. The plan is, for the moment, to write code that works on both Python 2 and 3, but if this proves difficult we will target Python 3 only.
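
In practice, supporting both usually means a preamble of __future__ imports and a little discipline. One common approach (a sketch, not a settled convention for the suite):

    from __future__ import division, print_function, unicode_literals

    import sys

    PY2 = sys.version_info[0] == 2
    if PY2:
        # Treat all strings as text, as Python 3 does:
        str = unicode  # noqa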

Re-architect labscript’s instruction/timing model

The original design of labscript assumed that there was only a single pseudoclock, and that each experiment shot had a fixed duration. Since then we’ve added the capability for multiple pseudoclocks, which trigger each other to begin, and for ‘waits’ which pause the experiment mid-shot until a trigger is received somewhere.  These have added a fair bit of complexity. Rather than make data structures more complex to accommodate all this, labscript’s compilation process has instead become somewhat destructive – replacing old data with new data as it is processed. So applying time offsets to account for triggering delays does not preserve the original time the instruction was issued but rather replaces it. This is a problem for many reasons. It limits where you can put code in the future: if data gets replaced as processing goes on, then you need to do your processing before the data it needs is replaced, and after other data it needs is produced.

It also makes labscript unable to provide informative error messages when it hasn’t kept around enough information to point the user to the source of a problem. Below is a post I made to our issue tracker (private at the time, we now use bitbucket) that goes through some of this in detail:

I’m currently implementing a device here in Tübingen, and have run into some conflicts with the current design of labscript.

I’m going to do a hacky workaround for the moment, but thought I’d outline a longer term plan for fixing this as well as a few other things I don’t really like about labscript, making it more maintainable. None of this should affect the actual labscript API.

One problem is that we assume devices are dumb, that they can’t evaluate ramps themselves and are only fed lists of numbers. This is true for our devices, and I think a good thing to stick to when choosing hardware. But it’s not always true, and it isn’t for the device we’re using in Tübingen (it evaluates its own ramps internally from ramp parameters, at a fixed sample rate). We also initially assumed in labscript’s design that instructions were dumb: simply a number at a certain time, or perhaps a function to be evaluated later at certain times. But our instructions have become smarter, taking on unit conversions, rounding of timepoints and time shifts to account for triggering delays.

So really all I want to do is be able to store some extra metadata with instructions. Instructions are currently dictionaries. I could store metadata as more dictionary keys, but I think that making an Instruction class would solve a lot of existing problems.

For example, at the moment we round all times to the nearest 0.1 ns or so, and this happens in Output.add_instruction(). It would be cleaner for this to happen in the __init__() method of an Instruction class. Similarly, we run Output.apply_calibration on instructions, and this runs in totally different parts of the code depending on whether the instruction is a ramp or not. There is the potential for subtle bugs here, as labscript occasionally uses data from ramps to create normal points (like the one at the end of a ramp). Unit conversions, I think, would be better performed internally by passing a unit calibration to an evaluate() method of an Instruction class.
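
As a sketch of what I mean (all names here are hypothetical):

    class Instruction(object):
        TIME_RESOLUTION = 0.1e-9  # round times to the nearest 0.1 ns

        def __init__(self, time, value):
            # Keep the user's original time around for error messages:
            self.requested_time = time
            self.time = round(time / self.TIME_RESOLUTION) * self.TIME_RESOLUTION
            self.value = value  # in whatever units the user provided

        def evaluate(self, calibration):
            # Accessing converted_value before evaluate() has run raises
            # AttributeError, catching use-before-conversion bugs early:
            self.converted_value = calibration.to_base(self.value)
            return self.converted_value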

This would allow you to keep the old information around and thus determine the values in base or derived units at any time in labscript’s multi-step compilation process. This is a better situation than having to know already whether the values have been converted at a certain point in labscript’s compilation, and being forced to put your code earlier or later in the compilation cycle accordingly. If you (while modifying labscript itself) make a mistake at the moment, you might accidentally use the unconverted values instead of the converted ones, or vice versa, leading you to accidentally convert values twice or not at all. This mistake would not be immediately visible (bugs like this are probably already present, I would guess). By storing more data in the Instruction class itself rather than in the code driving it, we can detect problems like this sooner (like getting an AttributeError if you try to access the converted values when the conversion hasn’t happened yet) and debug them more easily.

There is other information that gets lost during compilation due to this unspoken policy of replacing old data with new data as it is processed (we never explicitly decided on this; it was just natural to try and keep the data structures simple). One of these is time offsets. Error messages in labscript that mention times are currently slightly wrong in experiments with multiple pseudoclocks. This is because instructions are offset in time to account for triggering delays, and the original times are not kept around. We should of course keep them. Having the instruction know what’s already happened to it means that labscript development is not so sensitive to mistakes in external code that is supposed to keep track of this stuff.

Other data that we should be storing with instructions pertain to the code context when the instruction was created. We should store a full traceback so that error messages raised by labscript during compilation can print two tracebacks: one pointing to where in compilation the error was raised, and the other pointing to where in the user’s labscript code the instruction(s) pertaining to the error were originally created.
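
The standard library makes this cheap. Extending the hypothetical sketch above:

    import traceback

    class Instruction(object):
        def __init__(self, time, value):
            self.time = time
            self.value = value
            # The user's call stack at creation time, minus this frame,
            # so errors during compilation can point back at their code:
            self.traceback = ''.join(traceback.format_stack()[:-1])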

This all starts to get a bit complex if instructions are dictionaries, but is simple if instructions become their own classes. I think it would be logical to push some methods, such as unit conversions and time offsets, into instruction classes rather than leaving them strewn throughout labscript’s compile sequence as they are now.

A generic Instruction class also allows for subclassing for devices that have more intelligent programming, such as sending an array of data to an SLM, or parameters for a device to evaluate ramps or respond dynamically to events (like the sometimes-mentioned hypothetical arduino that responds to MOT fluorescence with a digital pulse).

This is, in coding terms, not very hard at all (or so it seems to me right now). But because the change would affect so many interacting parts of the labscript code I would not want to apply it without being able to test fairly extensively. And I’m kind of in a hurry. So I won’t do this yet, and I’m posting this issue to just outline my intentions and invite discussion.

So yeah, this post is just a mind-dump of my intentions. I imagine this won’t be too controversial. Some of labscript’s recent changes have made it somewhat hard to maintain, so I’ve been thinking about something along these lines for a while.

For the moment I’m instead writing an Instruction class that looks like a dictionary externally, so that labscript interacts with it as normal, but it is actually a class that I can store a bunch of metadata in for this device I’m implementing!
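
Such a wrapper can be as simple as subclassing dict; a minimal sketch of the idea (not the actual workaround code):

    class Instruction(dict):
        """Looks like the old instruction dictionaries to the rest of
        labscript, but can carry extra metadata as attributes."""
        def __init__(self, *args, **kwargs):
            dict.__init__(self, *args, **kwargs)
            self.metadata = {}  # device-specific extras live here

    inst = Instruction({'time': 1.0, 'value': 80e6})
    inst['time']                        # works exactly as before
    inst.metadata['sample_rate'] = 1e5  # new: extra device-specific data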

Make installation more painless

Our install process is needlessly complex. Many of our modules and packages could be wrapped up as proper Python packages rather than being cloned via Mercurial; in fact, probably all of them can. Our install process should create directories and configuration files across the system rather than have the user do so manually. So we’d like to make an installer that automates as much as possible.

Conclusion

I’ll be posting updates here as time goes on, and encourage discussion and feedback in the comments.

The current development effort is made possible by the Joint Quantum Institute, at which I am currently a research exchange visitor being paid a stipend by the University of Maryland.