Digital MTS using the USRP2
Below I will be chronicling my ECSE final-year project. This will involve replacing all the MTS electronics with a wholly digital system, with its associated benefits.
My thesis has officially been handed in. It is attached below.
Over the weekend I will be taking photos and videos to present at my talk next week. Maybe I will try to optimise the laser TF, time permitting, but most likely not!
There will be a long interlude from now until mid-November, due to exams. Future updates will be intermittent and will only appear when something significant has happened.
-- VladimirNegnevitski - 13 Oct 2010
I will be taking a final bunch of data today, consisting of
-SPA view of error signal before demodulation (i.e. out of the photodetector)
-TF of the laser, using current injection as input and sat abs as output (will align to the edge of a peak)
-TF of the laser using MTS, after adjusting the cutoff to
-anything else I can think of at the time!
To be continued...
UPDATE: I have obtained TFs of the laser with the VNA, using both sat abs and MTS! This is a major project milestone - almost too late to include in my thesis, but not quite.
First the calibration TF; I used long BNC cables and 40 dB attenuation on the input to the injection circuit, so there is some phase lag:
This shows that there is around 15 degrees of lag per MHz due to the cables/electronics. No distortion except below 200 kHz.
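That slope can be converted into an equivalent group delay: a linear phase lag of 15 degrees per MHz corresponds to roughly 42 ns of delay in the cables and electronics. A quick sketch of the conversion (my own arithmetic, not taken from the data):

```python
def delay_from_phase_slope(deg_per_mhz):
    """Group delay implied by a linear phase slope: delay [s] equals
    the number of cycles of lag accumulated per Hz of frequency."""
    return (deg_per_mhz / 360.0) / 1e6

tau = delay_from_phase_slope(15.0)
print(tau * 1e9)  # about 41.7 ns of cable/electronics delay
```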
Next the saturated absorption transfer function; the laser was manually aligned to the edge of the first big hyperfine transition to the left of the closed transition of Rb87.
There is an interesting kink at 2.3 MHz, which is due to the laser physics. It causes a phase shift of only around 40 degrees, less than was expected. A future controller will need to compensate for this.
Next a broadband MTS scan, with mod freq = 5.3 MHz. This means that everything above the filter cutoff, 3 MHz, is essentially noise; nonetheless the filter amplitude TF is visible.
Note the phase crosses 180 degrees at only 700 kHz - this bodes poorly for controller design, as the max BW can only be 400-500 kHz. After cable and digital filter phase has been taken into account, it is clear that a pi phase shift would occur at around 1 MHz - thus the total latency is a whopping 500 ns! I am not sure why it is so long, but my gut feeling is the DAC. I can shave maybe 30 ns off by removing a few buffer registers from my design, but this would be almost pointless.
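As a cross-check on the latency estimate: a pure transport delay tau accumulates 180 degrees of phase lag at f = 1/(2*tau), so a pi crossing at ~1 MHz (after removing cable and filter phase) does indeed imply about 500 ns of delay. A small sketch of the arithmetic (my own sanity check, not part of the measurement):

```python
# For a pure transport delay tau, the phase is -2*pi*f*tau,
# so the phase crosses 180 degrees at f_pi = 1/(2*tau).

def delay_from_pi_crossing(f_pi):
    """Delay implied by a 180-degree phase crossing at f_pi."""
    return 1.0 / (2.0 * f_pi)

def pi_crossing(tau):
    """Frequency at which a pure delay accumulates 180 degrees of lag."""
    return 1.0 / (2.0 * tau)

tau = delay_from_pi_crossing(1.0e6)   # pi crossing at ~1 MHz
print(tau)                            # 500 ns total latency

# Budgeting a 45-degree phase margin entirely against the delay, the
# usable bandwidth is where the delay alone contributes 135 degrees:
f_bw = (135.0 / 360.0) / tau
print(f_bw)                           # ~750 kHz; laser and filter phase push
                                      # the practical figure down to 400-500 kHz
```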
An MTS scan from 0 to 5 MHz is shown below:
This shows that though there is a large phase lag, the MTS amplitude is reasonably linear across the passband. If I can somehow get around the phase delay, probably using the second DAC previously mentioned, then perhaps not all is lost!
On the weekend, if I have time I will attempt to set the 0 dB point of the system at roughly 500 kHz - this should ensure stability using direct current injection. There's nowhere near enough time to get the second DAC connected/tested/amplified/debugged however!
-- VladimirNegnevitski - 06 Oct 2010
I am in the midst of thesis writing, so progress updates have been/will be rarer until the end of this project. Martijn and I have made our current modulation circuits (thanks for your help, Martijn!), and connected them directly into the internal laser SMA connectors with good results. Just by observing the MTS spectrum, I have gotten a sense for how the current modulation affects the system.
When an external sinusoid is used to modulate the current, the MTS signal 'vibrates' back and forth at the modulation frequency, with no additional noise. However, when the modulation frequency is raised above ~500 kHz the signal begins to behave strangely - 'noise' appears across the entire MTS line. My hypothesis is that because current modulation affects both the amplitude and frequency of the laser diode, it results in very rapid amplitude modulation, which is demodulated by the digital electronics into a spurious signal. This in turn causes a 'fuzz' on the MTS spectrum.
Two questions are raised, which I probably won't have time to investigate during this project:
-- VladimirNegnevitski - 29 Sep 2010
I am taking 'official' data for my thesis today; I am characterising the USRP2 FM performance. In particular, I will take the following data:
source -> 9dB attenuator (Mini-Circuits HAT-9+) -> Isolation transformer (Mini-circuits FTB-1-6*A15+) -> Spectrum analyser (Anritsu MS2721A?).
These data will satisfy the 'FM generation' portion of my requirements analysis.
UPDATE: The above were modified to reflect the data actually taken, which was guided by the observations. Interestingly the SFDR grew as the signal power was reduced; it may be that the amplifiers perform more linearly at lower power. Not sure how this reflects on MTS spectra - will find out later.
UPDATE 2: After playing around with the optical setup/alignment, I realised that rotating the double-pass lambda/4 plate changes the intensity of the open vs closed transitions of the MTS! So by adjusting it, you can largely eliminate the open transitions. This is useful indeed, but I do not understand how it comes about.
-- VladimirNegnevitski - 16 Sep 2010
Again it's been a while since my last update, due to coursework. I am nearing the end of my project, and am currently trying to accomplish several goals before the end:
-- VladimirNegnevitski - 11 Sep 2010
Compare with the MOGLabs test page, which shows noise suppression up to ~20 kHz and a servo bump (noise enhancement) from 20-50 kHz or so.
By comparison, your MTS lock is managing noise suppression up to 150-200 kHz with a servo bump from 200-400 kHz. This is almost a full order of magnitude more spectral bandwidth, as would be expected from 10x the modulation frequency. And this is (almost certainly) limited by the MOG current control bandwidth (which turns out to be quite wide given the MOG's lower modulation frequency). This all augurs very well for direct feedback to diode current (well, direct via a FET).
Note also that in the hf limit the MOG spectra are about 20dB above the off-resonant noise floor, which is comparable to the MTS results. We should be able to estimate shot noise floor pretty accurately and work out where the electronic noise is coming in. MOG off-resonant noise is ~10dB above dark level, ours is much closer, indicating we are dominated by electronic noise somewhere. Unsurprising perhaps given the much wider detector bandwidth we are operating with. Maybe we'll need a custom photodetector after all...?
The plot below shows some error signals I've obtained. I don't trust the data below 10 kHz, see the plots further down for this region.
A few interesting features. It is clear that the optimal slow + current trace tracks the off-resonant noise (the theoretical minimum in our case) quite well up to 100 kHz, where it rises and peaks at ~230 kHz. This is 'noise squeezing': the noise content is pushed above the control bandwidth of the system. Also, the unlocked noise was at around -60 dBm across the frequency span, indicating that the SNR of the frequency reference was roughly 12-15 dB at high frequencies. This may be a little low, and I am not sure of the cause - it is worth investigating further. Maybe the only solution is to generate a stronger MTS signal.
I took an off-resonant photodiode reading as well, which is the signal at the bottom. Since the peak-to-peak excursion of the sat abs peaks was around 400 mV while the MTS error signal was 800 mV, I have added 6 dB to the trace to simulate the level required to get equal performance (in an ideal world; MTS has many advantages over sat abs). The off-resonant MTS trace shows noise around 20 dB higher than the off-resonant sat abs trace, which is roughly what we would expect due to the +24 dB pre-amplifier used on the sat abs signal before it enters the USRP2 - this shows that the digital processing is probably adding no significant noise to the error signal.
The other two locking setups show profiles as we would expect; the plot below shows locking performance below 20 kHz.
The slow bandwidth extends to around 2 kHz, above which the fast takes over. I did not take a dark/off-resonant measurement, but I expect they lie at around -110 dBm based on the 0-400 kHz figure.
These plots show noise attenuation on the error signal, which is a clear indicator that the MOGbox controller is doing its job. The system as it stands offers many areas to investigate:
-- VladimirNegnevitski - 26 Aug 2010
I have obtained the first digital MTS spectra! Everything seems to work fine, and by adjusting the amplitudes/phases of the FM sidebands I am able to flexibly alter the MTS spectrum much more easily than I could with the analogue system.
The below spectrum is obtained using the following CPU commands:
XGpio_DiscreteWrite(&Gpio0, 1, 0x33330800); //16b fc, 16b fm
This is clean enough to lock the MOGbox to, which I will be doing soon. Then comes the task of characterising the laser/designing a suitable control function.
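For reference, the packed GPIO word above can be unpacked into the two DDS tuning frequencies. The sketch below assumes a 16-bit phase accumulator clocked at 100 MHz (my reading of the design, not confirmed against the HDL); under that assumption, 0x3333 and 0x0800 correspond to a ~20 MHz carrier and a 3.125 MHz modulation frequency:

```python
F_CLK = 100e6  # USRP2 system clock; 16-bit DDS accumulator assumed

def decode_fm_word(word):
    """Split the 32-bit GPIO word into carrier and modulation tuning words
    (upper 16 bits = fc, lower 16 bits = fm, per the '16b fc, 16b fm'
    comment) and convert each to Hz, assuming a 16-bit phase accumulator."""
    fc_word = (word >> 16) & 0xFFFF
    fm_word = word & 0xFFFF
    to_hz = lambda w: w / 2**16 * F_CLK
    return to_hz(fc_word), to_hz(fm_word)

fc, fm = decode_fm_word(0x33330800)
print(fc / 1e6, fm / 1e6)  # ~20.0 MHz carrier, 3.125 MHz modulation
```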
-- VladimirNegnevitski - 24 Aug 2010
I have taken some more spectrum analyser data, and now have a decent understanding of the demodulator response.
The following shows the white-noise response of the demodulator:
The following shows the response to a frequency comb:
The following two plots show responses to 3 MHz sinewaves, with digital gain to maximise SNR.
The basic system is complete and verified; the next steps in my project are to connect the system to the AOM and photodiode (with some gain), and model the system parameters of the MOGbox + laser. I will be quite busy with other subject work for the next week, so there may be a delay before the next update!
-- VladimirNegnevitski - 24 Aug 2010
I have unified the various elements of my digital design further, and have successfully tested the demodulation/filtering section of the datapath. My demodulation unit accepts a 14-bit ADC input, 'shifts it left' by anything from 0 to 15 bits, then multiplies by a 16-bit sinewave whose frequency is set by the same GPIO as the modulation frequency of the FM generator. The 16 upper bits of the result are output from the demodulator into the filter stages. By bit shifting the ADC input, I truncate significant bits - however for weak signals this does not matter, as the signal is not stored there.
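A behavioural sketch of one demodulator step as I understand it (the exact truncation behaviour is my assumption, not lifted from the HDL):

```python
def demod_step(adc_sample, sine_sample, shift):
    """One demodulator step: a 14-bit signed ADC sample is shifted left
    (digital gain), multiplied by a 16-bit signed sine sample, and the
    upper 16 bits of the 32-bit product are kept.  The truncation
    details here are a guess at the implementation."""
    assert -(1 << 13) <= adc_sample < (1 << 13)    # 14-bit signed input
    assert -(1 << 15) <= sine_sample < (1 << 15)   # 16-bit signed sine
    gained = adc_sample << shift                   # may clip for strong signals
    product = gained * sine_sample                 # up to 32 bits (signed)
    return product >> 16                           # keep the upper 16 bits

# A weak signal (small ADC codes) survives a large shift without overflow:
print(demod_step(5, 32767, 10))   # -> 2559
```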
I have been encountering annoying latency issues caused by the two filter stages - they refuse to meet timing no matter what I do, even though they appear to be functionally correct.
The system is reasonably robust to overflow, i.e. to losing a few most significant bits of the multiplication result, and the 'digital gain' is easily tuned from the CPU to compensate. I have tested it using a 4.0 mVpp signal from a function generator at 3.124998 MHz, setting the gain to 2^10 (i.e. 10 bit-shifts) and observing the outputs both in ChipScope and on an oscilloscope. The next key step will be inputting a 2.0 MHz sinewave and observing its spectrum before and after the demodulator system. Then, with the addition of a +24 dB amplifier, I can directly test the MTS demodulation on the USRP2 by feeding in the photodiode signal, even while the laser light is being modulated by the analogue MTS box. Throughout my measurements, my goal will be to keep a good grasp of the noise sources inside the entire system.
Another 'tweak' to the system would be to add a digital 2nd-order lowpass stage in parallel with the 4th-order filter setup; this would be quite easy, and with a cutoff of ~50 kHz it would allow in-depth investigation of the MTS signal using digital hardware. Perhaps I could set it up to require only a rerouting of bits for the pole denominator coefficients; this would reduce the filter's latency slightly and hopefully avoid the annoying latency issues of the other filters.
I have made progress in several areas of the digital design over the last week, and most of the digital system's individual elements are now functional. In addition, Lincoln has given me permission to use a single set of MTS optics for the analogue and digital MTS setups, as long as I don't affect the work of the Honours students that need a stably-locked laser to run their experiments later in the semester. I plan to set up a stable analogue lock as soon as the loop filter arrives, since the analogue MTS box has already been finished by the workshop - hopefully I can easily switch between analogue and digital MTS by simply changing the photodiode and AOM connections. I believe this will be a fairly painless solution.
I discuss the individual areas where I have made progress below.
Previously I had been testing various elements of the system (CPU control, FM generation, Chipscope interface, and other miscellaneous tests) in isolation. This week I have combined them all into a single CPU-controlled system, with the CPU directly controlling the clock manager and DAC settings (through an SPI interface) and the FM signal generator (through several general-purpose I/O blocks). This will be the container project for my final MTS system.
I have tested out the FM on a spectrum analyser, and it behaves exactly as predicted. When the amplitudes are set too high, overflow occurs as predicted - this is visible on the analyser trace as a comb of high-frequency images. With the CPU controlling initialisation settings, I no longer have to run Chipscope every time I need to change a setting.
My lowpass filter (discussed below) is almost complete, but remains in a separate project. Its coefficients are hard-coded, and I plan to alter it to allow CPU input (of six 16-bit words, three for each biquad stage).
As mentioned previously I have decided to leave the CPU-Ethernet interfacing and the automatic boot-up for later, as it is a low priority. Instead, over the next few weeks I will focus on the signal datapath, since the optics are almost complete.
DAC control and upconversion
Over the last week I have studied the DAC registers in detail, and worked out a way in which I could achieve both baseband and 80 MHz output on both channels simultaneously. First I describe the DAC's settings in a way that is hopefully easier for a non-expert to understand than the datasheet (which has caused me no end of grief!). Analog Devices have a very useful simulator applet.
The DAC data inputs can handle input frequencies from <50 MHz to 125 MHz. I am using 100 MHz. Internally, the DAC has a PLL, interpolation filters and modulators; the PLL can be set to output an internal clock rate of either 200 or 400 MHz. When the DAC's interpolation registers are set to 2x or 4x (or 8x, but this applies only to clocks of <50 MHz), the PLL automatically outputs either 200 or 400 MHz. At the same time, the interpolation filters insert intermediate samples into the input data stream, essentially broadening the bandwidth of the DAC outputs but keeping a baseband response.
For instance, I feed a (digital) input stream of [8, 4, 0, -4, -8] into the DAC at 100 MHz; thus the inputs arrive at intervals of 10 ns. If the DAC is on 1x interpolation with the PLL off, the (analogue) output will follow the input signal faithfully, updating every 10 ns.
If I now set the interpolation to 4x, the DAC will internally alter the input stream to [8, (7,6,5,) 4, (3,2,1,) 0, etc], increasing the data rate by a factor of 4. Because the internal clock is 4x faster, the stream updates every 2.5 ns - thus the analogue output remains the same as before. In short, baseband signals remain baseband when the DAC is set to interpolate.
Let us examine interpolation more closely. Say we have an input signal at 20 MHz, with interpolation/modulation turned off. The Nyquist frequency is 50 MHz, so there is an alias at 80 MHz in the analogue output (which must be filtered out using an analogue lowpass filter if it is not desired - but in our case it is desired!). Now we turn on 2x interpolation. The Nyquist frequency of the signal is now 100 MHz, and a digital filter can be used (and is used internally by the DAC) to remove undesired images and spurs from 50 MHz to 150 MHz. A new alias is created at 180 MHz, however - this may or may not be desirable, but cannot be removed with digital filtering. With 4x interpolation, the Nyquist frequency is 200 MHz - the signal is clean from 50 MHz to 350 MHz, but has a (weak) alias at 380 MHz.
Next I discuss the DAC modulation settings. Modulation is accomplished by multiplying the input stream by the vector [1,-1] for fs/2 interpolation, and [1 0 -1 0] for fs/4 interpolation. If we set the DAC for 1x interpolation, fs/2 modulation, then an input stream of [8 8 8 8 8 8 8 8] (DC in the frequency domain) becomes [8 -8 8 -8 8 -8 8 -8] on the analogue output. This has a period of 20 ns (since the samples are again 10 ns apart) so there is an output peak at 50 MHz. For fs/4 modulation, the input stream becomes [8 0 -8 0 8 0 -8 0]; a period of 40 ns means there is an output peak at 25 MHz (and an alias at 75 MHz). I currently think of modulation as upconverting the first Nyquist band of the signal (0-50 MHz for no interpolation, 0-100 MHz for 2x interpolation, etc.) by odd ratios (1x, 3x, etc) of the modulation frequency, and downconverting the next Nyquist band (50-100 MHz for no interpolation, 100-200 MHz for 2x interpolation, etc.) by the same amounts. This process is easiest to appreciate using the previously mentioned applet.
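The fs/2 and fs/4 modulation modes are easy to reproduce numerically: multiplying a DC stream by the repeating patterns [1,-1] and [1,0,-1,0] moves the output peak to fs/2 and fs/4 respectively. A small numpy sketch (illustrative only; the real AD9777 also interpolates and filters):

```python
import numpy as np

FS = 100e6  # DAC input sample rate in this example

def modulate(stream, pattern):
    """Multiply the sample stream by a repeating modulation pattern,
    as the DAC does for fs/2 ([1,-1]) and fs/4 ([1,0,-1,0]) modes."""
    pat = np.resize(pattern, len(stream))
    return stream * pat

x = np.full(1024, 8.0)                          # DC input stream
for pattern, name in ([1, -1], "fs/2"), ([1, 0, -1, 0], "fs/4"):
    y = modulate(x, np.array(pattern, float))
    spec = np.abs(np.fft.rfft(y))
    f_peak = np.fft.rfftfreq(len(y), 1 / FS)[np.argmax(spec)]
    print(name, f_peak / 1e6, "MHz")            # 50 MHz and 25 MHz respectively
```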
Previously I had been achieving an 80 MHz signal using 2x interpolation and fs/2 modulation of a 20 MHz signal. The interpolation shifted the alias from 80 MHz to 180 MHz, and the modulation shifted it down to 80 MHz with little attenuation. I had realised that there should be an 80 MHz alias without any interpolation/modulation, but whenever I tried to put the DAC into this mode, I obtained seemingly white noise. My problem was leaving the PLL on - I had no idea that it was the cause of the problem until I tried disabling it!
In summary, the DAC can output both baseband and 80 MHz with the correct settings. The 80 MHz output is attenuated by around 15 dB, and the simultaneous 20 MHz output drains power from the 80 MHz signal - despite this, these settings are a good compromise to allow the second DAC channel to output a DC control signal.
Simple in retrospect, but it took me far too long to trace the issue to a simple PLL control bit!
I have designed and implemented a 4th-order IIR filter in Verilog. Xilinx whitepaper wp330 gives a good overview of IIR filters. Mine is quite basic, consisting of two biquadratic (biquad) stages separated by pipeline registers.
My first step was selecting two transfer functions in the z-domain for the two biquad stages, of the form
H(z) = (a + b*z^-1 + c*z^-2) / (1 + d*z^-1 + e*z^-2).
My main criterion was a pair of complex-conjugate zeros at each of f_m and 2f_m, where f_m is the modulation frequency of the FM source (and thus of the laser light), 3.125 MHz. Thus I needed zeros on the unit circle at z = e^(+/-j*2*pi*f_m/f_s) and z = e^(+/-j*4*pi*f_m/f_s), where f_s is the sampling frequency (the 100 MHz system clock). I also needed 20 dB attenuation in the stopband above 3 MHz, and a reasonably smooth passband with little phase rolloff until 2 MHz. The design was carried out in MATLAB using the filter design toolbox GUI (fdatool in the command window). I chose a 4th-order elliptical filter, and shifted the higher zero to where I required it. The poles were placed automatically, but I had to tweak their locations slightly to reduce the peaking near the edge of the passband. Given the pole and zero locations in the z-plane, MATLAB produced the coefficients required:
a = c = 1, b = -1.84777832, d = -1.8180542, e = 0.8372192382 for the first stage,
a = c = 1, b = -1.9616089, d = -1.9494018, e = 0.98010253 for the second stage.
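Evaluating the two-stage cascade on the unit circle with the quoted coefficients (and assuming the standard biquad form with unity leading coefficients) confirms deep notches at the modulation frequency and its second harmonic:

```python
import numpy as np

FS = 100e6
stages = [  # (a, b, c) numerator and (1, d, e) denominator, from the post
    ((1.0, -1.84777832, 1.0), (1.0, -1.8180542, 0.8372192382)),
    ((1.0, -1.9616089, 1.0), (1.0, -1.9494018, 0.98010253)),
]

def cascade_response(f):
    """|H| of the two-biquad cascade at frequency f, assuming the
    standard form H(z) = (a + b/z + c/z^2) / (1 + d/z + e/z^2)."""
    z = np.exp(2j * np.pi * f / FS)
    h = 1.0
    for (a, b, c), (_, d, e) in stages:
        h *= (a + b / z + c / z**2) / (1 + d / z + e / z**2)
    return abs(h)

for f in (0.0, 1e6, 3.125e6, 6.25e6, 10e6):
    print(f"{f/1e6:6.3f} MHz: {cascade_response(f):.4g}")
# The zeros null the residual modulation frequency (3.125 MHz) and its
# second harmonic (6.25 MHz) after demodulation.
```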
My next step was quantising the filter transfer function and implementing it in Verilog. MATLAB has a quantisation feature inside the filter design GUI which simulates how the filter would operate under fixed-point arithmetic inside a digital system. First I set the filter topology to Direct Form I, which can be represented by rearranging the transfer function to
Y(z)*(1 + d*z^-1 + e*z^-2) = X(z)*(a + b*z^-1 + c*z^-2),
which, when inverse z-transformed, gives the simple difference equation
y[n] = a*x[n] + b*x[n-1] + c*x[n-2] - d*y[n-1] - e*y[n-2].
I quantised the filter to 16-bit input and output (as required by the system) and chose the number of bits for fractions etc. I picked standard options in the HDL generation menu, but enabled pipelining and also chose 'CSD' (canonic signed digit) multipliers instead of ordinary multipliers. After compiling the Verilog in Xilinx ISE, the estimated propagation delay through the filter was around 30-40 ns, and the design could not be implemented due to the stringent maximum latency of 10 ns (to suit the 100 MHz clock). After trying different filter topologies and options, all of which failed, I began writing the filter from scratch.
Several design choices were made; in particular the rounding and word lengths were chosen to suit the 18-bit x 18-bit hardware multipliers on the Spartan 3 FPGA. My first attempt still failed to meet timing, with a latency of 12 ns (already a factor-of-3 improvement), so I optimised my HDL by replacing the a*U and c*U multipliers with simple bit shifts. I will not discuss the details of the IIR algorithm and the truncations/scaling chosen, but after several hours of debugging (mainly the scaling and bit-shifting; I had to compare step responses between my MATLAB and Verilog simulations) I succeeded in getting a stage to simulate and implement correctly. After a pipeline register, the next stage also implemented correctly.
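A floating-point behavioural model of one such stage, with a = c = 1 implemented as plain adds and the remaining coefficients quantised to a fixed-point fraction (the 16-bit fraction length is my assumption; the real design targets the 18x18 multipliers and its own truncation scheme):

```python
F_BITS = 16  # assumed fraction length of the coefficient words

def q(x, bits=F_BITS):
    """Quantise a coefficient to a signed fixed-point fraction."""
    return round(x * (1 << bits)) / (1 << bits)

def biquad_fixed(x, b, d, e):
    """Direct Form I biquad with a = c = 1, so the a*U and c*U products
    are plain adds; only b, d and e need hardware multipliers.
    y[n] = x[n] + b x[n-1] + x[n-2] - d y[n-1] - e y[n-2]."""
    bq, dq, eq = q(b), q(d), q(e)
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = xn + bq * x1 + x2 - dq * y1 - eq * y2
        x1, x2, y1, y2 = xn, x1, yn, y1
        y.append(yn)
    return y

# Step response settles towards the stage's DC gain (1+b+1)/(1+d+e):
step = biquad_fixed([1.0] * 2000, b=-1.9616089, d=-1.9494018, e=0.98010253)
print(step[-1])  # ~1.25 for the second-stage coefficients
```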
The MATLAB-simulated transfer function is shown below. Note that the phase will be slightly worse with the addition of extra pipeline registers, which will be necessary in the final design. Some compensation for this may be made.
I will put up an actual measurement once I connect the VNA to the USRP2. This may take several weeks, but I have observed the predicted attenuation of signals on the spectrum analyser.
I am quite confident that the filter is doing its job, though the passband amplitude near the band edge may need adjustment to achieve a shallower rolloff suitable for use in a control system.
It has been quite a while since my last update, due to exams/conference/holidays. I have two current priorities: to get a fully implemented version of the system in Xilinx ISE, including FM generation, CPU and demodulation filters (all with suitable debugging) and to set up the laser optics needed for implementing my final system. Currently I am testing out the CPU and DAC, putting a test 20-MHz output to the DAC channels, and will attempt to up-convert the signal using the CPU to program the DAC.
I have tested out upconversion on the DAC. I am writing to registers 3, 2 and 1 with the following 32-bit word: 0x43000054. The default (non-upconverting) word is 0x43000084. The DAC outputs the sine wave exactly as required, but for some reason it is only at -20 dBm. This is most likely an issue with the DAC settings or the BasicTX daughterboard configuration. I am currently investigating this.
-- VladimirNegnevitski - 04 Aug 2010
Possible DACs
Maybe using a less-than-bleeding-edge DAC will make life easier.
Rather than fabricating a board, it seems that some older chips like the AD9754: 14-Bit, 100 MSPS+ TxDAC® D/A Converter (circa 1999) might be a good idea. This chip is fast enough and has an "output propagation delay" of 1 ns. Whether this is the actual latency or not is another question!
There is a full-featured evaluation board. The EVB is documented thoroughly in AD Appnote AN-420. It has digital input headers, and might be an easy way to attach a DAC to the USRP2, especially as the whole thing is going to go into a 19" rack enclosure anyway. -- LincolnTurner - 08 Jun 2010
It's been quite a while since my last update, chiefly due to uni commitments. I have not made much real progress, however much of the remainder of my project has been planned out.
I have submitted the 'analogue MTS' box to the electronics workshop (in the School of Physics); it will hopefully be ready in a few weeks. Then I will need to wire it up and align the optics, testing them out and paving the way for a digital MTS setup.
Bad news on the DAC front: I have realised that the DAC can either be in modulation mode for both channels or neither, but I cannot set it up to output DC on one and 80 or 160 MHz on the other. Thus I will probably need to set up a custom DAC board, which will sit on top of the BasicTX and allow me to output at DC, hopefully with a much lower latency than the 24 clock cycles of the AD9777 DAC.
I have been learning how to use SVN today, which I will use for the remainder of my project. It is quite annoying to use it in conjunction with the Xilinx ISE package, since ISE makes literally hundreds of irrelevant files in a project folder - this is compounded by the huge volume of files churned out by the EDK. I need to find out from Lincoln/Russ how to remove particular revisions, otherwise my setup will rapidly get clogged up with files.
I have solved the ISE 12.1 issue with Microblaze CPUs. The trick is to first set the CPU up fully, then generate its netlist and libraries, and only then export it to the SDK. In the SDK, it's a straightforward process to create a board description and a new project - then everything appears to work properly. I will be saving the fundamental C files separately from the SVN - otherwise it is too difficult to keep track of them.
Currently I am designing yet another Microblaze project in the EDK. The following peripheral components are being utilised:
I have made a carrier + 2-sideband 'FM' generator. The carrier and modulation frequencies are adjustable, as are the three gains and relative phases. When the sideband amplitudes are below 10% of the carrier, the signal closely resembles true FM; this makes sense, as the missing higher-order sidebands present in 'real' FM are very weak for this carrier-to-sideband ratio.
Let us describe the output wave as
s(t) = A_c*cos(2*pi*f_c*t) + A_+*cos(2*pi*(f_c + f_m)*t + phi_+) + A_-*cos(2*pi*(f_c - f_m)*t + phi_-).
I modify the phases phi_+ and phi_- of the two sidebands relative to the carrier by presetting a phase upon initialisation. When all three phases are set to 0, the signal appears strongly amplitude-modulated; when the sideband phases are set to differ by pi (the lower sideband inverted relative to the pure-AM case), AM appears to be minimised and the time-domain signal resembles FM. The following are two example images; the carrier frequency is 25 MHz and the modulation frequency is 780 kHz. The amplitude ratios A_+/A_c and A_-/A_c are 0.1.
No parameters other than phase were changed between images.
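This behaviour can be reproduced numerically: with sideband ratios of 0.1, in-phase sidebands give a strongly modulated envelope, while inverting the lower sideband leaves the envelope nearly flat. A numpy sketch (my own construction of the three-tone signal, not the FPGA code):

```python
import numpy as np

fc, fm, r = 25e6, 780e3, 0.1      # carrier, modulation, sideband ratio
t = np.arange(0, 200e-6, 1e-9)    # window holds whole cycles of all tones

def three_tone(phi_plus, phi_minus):
    """Carrier plus two sidebands with adjustable phases relative to it."""
    return (np.cos(2 * np.pi * fc * t)
            + r * np.cos(2 * np.pi * (fc + fm) * t + phi_plus)
            + r * np.cos(2 * np.pi * (fc - fm) * t + phi_minus))

def envelope_ripple(s):
    """Peak-to-peak envelope variation via an FFT-based analytic signal."""
    n = len(s)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    env = np.abs(np.fft.ifft(np.fft.fft(s) * h))
    return env.max() - env.min()

print(envelope_ripple(three_tone(0.0, 0.0)))    # in-phase: strong AM (~0.4)
print(envelope_ripple(three_tone(0.0, np.pi)))  # inverted lower sideband:
                                                # nearly flat envelope (~0.02)
```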
I have also run some experiments (read: had some fun) using a hardware multiplier as a demodulator; in lieu of having an MTS signal to demodulate I sent in a sine wave from an external function generator and multiplied it by a DDS signal whose frequency was deliberately matched. On the oscilloscope the DC and frequency-doubled components of the output signal were clearly distinguishable. By fiddling with the function generator I was able to get the demodulated signal very close to DC (with a frequency below 0.1 Hz). By hovering my finger over the USRP2's crystal oscillator, I was able to make the demodulated signal increase in frequency - a very sensitive test of the relative oscillator frequencies. The frequency gradually decreased again once I removed my finger.
Though not a particularly indicative test (I varied the function generator amplitude from 10 mVpp to 4 Vpp), it is a nice 'proof-of-concept' for digital demodulation.
-- VladimirNegnevitski - 14 May 2010
Several issues have been resolved in the last week. I have found a differential buffer in the original Ettus code that had clk_fpga_n and clk_fpga_p as inputs, with a single clk_fpga as the output. Having copied this, and used the buffered clock as a global clock, it seems as if my timing issues are vastly reduced if not gone altogether.
I have written a Verilog module for generating FM. A DDS runs at the modulation frequency; its output is added to the phase advance word of a consecutive DDS, which is thus frequency modulated. By adjusting the amplitude and frequency of the first DDS, the modulation index and frequency may be easily altered.
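A behavioural model of this two-DDS scheme (the accumulator width, scaling and word values are my assumptions, not the actual HDL):

```python
import numpy as np

ACC_BITS = 16   # phase accumulator width (assumed)
F_CLK = 100e6

def fm_dds(fc_word, fm_word, dev_word, n):
    """Two cascaded DDSes: the first runs at the modulation frequency, and
    its scaled sine output is added to the phase-increment word of the
    second, which is thereby frequency modulated."""
    mod = 2**ACC_BITS
    acc1 = acc2 = 0
    out = np.empty(n)
    for i in range(n):
        acc1 = (acc1 + fm_word) % mod
        mod_out = int(dev_word * np.sin(2 * np.pi * acc1 / mod))
        acc2 = (acc2 + fc_word + mod_out) % mod
        out[i] = np.sin(2 * np.pi * acc2 / mod)
    return out

# ~20 MHz carrier, 3.125 MHz modulation, small deviation:
sig = fm_dds(fc_word=0x3333, fm_word=0x0800, dev_word=200, n=4096)
spec = np.abs(np.fft.rfft(sig * np.hanning(4096)))
# The spectrum shows the carrier near 20 MHz with first-order sidebands
# spaced 3.125 MHz (128 FFT bins) either side of it.
```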
I have put the Verilog SPI development and solving the Flash issue temporarily on hold; since the timing issues seem much diminished it is probably easiest just to use the Microblaze again. It'll also make it easy to vary the FM parameters and loop filters directly. The question remains whether the Microblaze can cope with a 25 MHz clock temporarily until it programs the clock manager to output the standard 100 MHz; I hope very much that it can.
Now that the 'obvious' FM generation method works, I will have a go at creating the carrier + 2 sidebands manually. This shouldn't be much of a problem.
-- VladimirNegnevitski - 10 May 2010
I've just gotten the SD card programming to work with a very basic program. The process was quite involved. I wasn't able to find any information whatsoever about doing the process in Windows, so I did the following steps:
from inside the shared folder, where I stored both the utility and the bitstream. A verification was done using
sudo ./u2_flash_tool --dev=/dev/sdb -t fpga u2_rev3.bin -v
The rest was straightforward. I have tested several bitstreams, and unfortunately they work inconsistently - it's most likely the same timing issue cropping up as before.
-- VladimirNegnevitski - 03 May 2010
Progress in the last week has been slower than ideal, due to other commitments! Since measuring the basic latency, I have made several incremental improvements:
Several goals have been reached in the last few days (chiefly due to the Easter break!):
Top light-blue trace: function generator.
Middle green trace: MSB of DAC input.
Bottom dark-blue trace: DAC output.
The previously-mentioned link suggests the DAC latency for default settings is much lower; the discrepancy brings me to an overarching issue: bootstrapping the USRP2 with my own bitstream/firmware once the project is complete.
Over the last few weeks, I have been uploading my bitstreams by first putting in the supplied SD card, supplying power to the USRP2, waiting for the LEDs to blink, then connecting to the FPGA through JTAG Boundary Scan and uploading my own bitstream. This method is fine for rapid development, but eventually I want my setup to be completely standalone.
It is annoying to admit, but my early focus on programming the clock manager over SPI was partially misguided. Each time the USRP2 had been booted from the SD card bitstream, it had immediately set the clock manager and DAC to a set of non-default settings, rather than the defaults I thought were being set up. Thus the FPGA clock was already at 100 MHz, the ADC and DAC clocks were already turned on, and the logic levels were set up correctly. I had unwittingly been relying on this for my own bitstreams; a preset 100-MHz clock enabled my MicroBlaze CPU to run predictably, and I could avoid the issue of 'bootstrapping' the clock manager from 25 MHz to 100 MHz without a running CPU to utilise. In future when I am ready to complete the project, this will be an essential step; I am not yet sure how I'll carry it out. One possibility is returning to my old Verilog SPI controller idea; I can quite easily write one that would simply read data from a block RAM and write it out through the control pins. This will hopefully be independent of clock rate changes, as the CPU clearly is not.
I have noticed several other mysterious issues. One is the strange behaviour of a debugging LED, and sporadic disturbances in various inputs and outputs. Another is the odd pattern of failures that affect the CPU debugger. A third is a very strange pattern of glitches on the DAC when it is fed with a sine from a DDS. A fourth is an occasional set of data from ChipScope that seems to be garbled or filled with random glitches. A fifth is the occasional failure of the FPGA to accept a programming configuration over JTAG, which is an issue that cannot be explained by a fault in the FPGA bitstream.
It is almost as if something is 'glitching', affecting the entire FPGA - I need to gather more information on this, otherwise I will never have confidence in the reliable operation of my design.
Other than that, progress has been made, and I have in principle achieved my first milestone: measuring the USRP2 latency. This is tempered by the latency being so high; if I can find the DAC setting responsible, I have high hopes of reducing it.
-- VladimirNegnevitski - 06 Apr 2010
Getting closer to working SPI. So far I've gotten the entire system to work, and am able to read from and write to the clock manager's buffer registers (see the AD9510 datasheet). Resetting the clock manager (writing 0x30 to register 0x00) works properly. However, when I try to transfer the buffers into the actual control registers (by writing 0x01 to register 0x5A), nothing seems to change. The problem may be in what I'm writing, though, as reading the buffers shows my updates are being stored. I see no reason why writing to 0x5A isn't working - maybe I'm not writing a valid combination of settings.
Below is an example of the FPGA writing 0x804A00 to the AD9510's SPI input, and reading 0x000022 on its output pin. I have previously programmed the 4A register (the Output 1 dividers) with the values 0x22, so all is working as expected.
Port 0 is the SPI clock (driven by the FPGA, acting as SPI master), 1 is MOSI (FPGA output), 2 is MISO (FPGA input), and 3 is the clock manager slave select (active low). ChipScope triggers when port 3 goes low, so the slave select begins the capture high. See the AD9510 datasheet for details.
As the AD9510 is definitely getting updates, I'm sure this problem will be solved sooner rather than later.
-- VladimirNegnevitski - 01 Apr 2010
Made a bit of progress with the MicroBlaze and ChipScope in the last few days. I have reached several minor milestones:
-- VladimirNegnevitski - 26 Mar 2010
I've made my first important design choice: I'm going to use a Xilinx MicroBlaze CPU to read and write the SPI bus, rather than writing my own custom logic. If I run into severe problems, I'll return to the 'Verilog from scratch' option - but I hope it won't cause too much trouble once I learn the SPI driver.
I have instantiated a CPU project, given it a few peripherals, and had a go at following the workflow. The CPU is definitely running (I can set debugging breakpoints, view variables, etc.), but so far I've been unsuccessful in getting a Hello World program to work. I think the problem lies in the STDOUT/STDIN settings of the 'software platform' (an automatically generated set of C libraries and drivers for my custom CPU design); hopefully I'll solve it before too long.
-- VladimirNegnevitski - 21 Mar 2010
Over the last week I have not been spending as much time as I should have on my project. Still, I have gotten a basic PWM module working, and learnt the development and simulation procedure for Xilinx ISim.
I have learnt the basics of Simulink, and will be using it to model my control system (including ADC, DAC, and various internal settings). A tricky challenge is making an 'MTS' block and a laser noise block - these are required for properly modelling the MTS system. It should not be too difficult to model the USRP2 itself however.
-- VladimirNegnevitski - 19 Mar 2010
Yesterday I successfully got a ChipScope implementation working on the board. It allows me to change the on-board LEDs remotely; ChipScope is slightly annoying to use but performs well. The project consists of two ChipScope IP cores: an ICON (ChipScope master controller) and a VIO (virtual input/output). The ICON interfaces directly with the FPGA over JTAG boundary scan, and controls the VIO.
One annoying problem was that my main development computer has non-functional ChipScope drivers. For a long time I thought I was doing something wrong in ISE; I only discovered the real problem when testing ChipScope from my other computer (a laptop running Vista). For now I'm running ChipScope through the laptop.
-- VladimirNegnevitski - 11 Mar 2010
My main goal today is to figure out the clocking system on the USRP2 board, identify what I have to write to the clock control pins, and attempt to set up flashing LEDs.
The USRP2 Rev 4.0 schematic is here. The clock circuitry is described on page 5.
I will now work backwards through the FPGA schematic, and try to figure out the Verilog files responsible for the clocking control.
My project will be a digital control system, controlling the frequency of a 780 nm diode laser to be used in Bose-Einstein Condensate (BEC) research in the School of Physics. My supervisors are Assoc Prof Lindsay Kleeman (ECSE) and Dr Lincoln Turner (Physics).
To implement the control system, I will be using an Ettus Research USRP2 software-defined radio peripheral. This is essentially a Xilinx Spartan 3 FPGA and some good-quality ADCs and DACs, nicely mounted on a PCB in a box. Currently the FPGA hosts a softcore CPU and several hardware maths blocks. The basic principles of the USRP2 are as follows:
I plan to develop a new FPGA bitstream for the USRP2, to transform it into a flexible, low-latency diode laser controller using the technique of modulation transfer spectroscopy. This is a slightly tricky problem: the equipment that creates the control error signal involves an 80 MHz FM signal source with several MHz of bandwidth, plus various demodulators. I will have to implement these using direct digital synthesis (DDS) on the USRP2 itself. (The design of this equipment was my physics honours project last year, so I am already quite familiar with what is needed. The challenges will lie in the digital implementation.)
Thus my project consists of two fairly well-separated elements: the design of the digital modulation/demodulation hardware, and the design of the laser controller. Some of my project milestones in the modulation/demodulation will be:
-- VladimirNegnevitski - 07 Mar 2010