The third week of GSoC comes to an end, and I have made further progress. Most of my time went into debugging the Signal Separator block, which produced unintended outputs in the unit tests. I also changed the format of the RF map and implemented a single-pole IIR averaging filter in the Signal Detector to average the PSD and make detection more reliable. To be able to test the Signal Separator, I created a prototype of the Signal Extractor block, which extracts the samples of a selected signal from the messages of the Signal Separator. For the Signal Detector, I implemented some callbacks to change parameters during runtime; this is most useful for adjusting the threshold after seeing the actual spectrum. Last but not least, I started to implement the QT GUI Inspector Sink.
The RF map’s format has changed from (start frequency, stop frequency) to (center frequency, bandwidth), which makes it possible to quantize the bandwidth. This reduces how often the filter taps in the Signal Separator have to be recalculated. The quantization step can be passed as a block parameter of the Signal Detector, relative to the sampling rate.
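To make the idea concrete, here is a small sketch of such a quantization step. The helper name and parameters are my own for illustration and not gr-inspector’s actual API; the bandwidth is rounded up so the separation filter never cuts into the signal.

```python
import math

def quantize_rf_map(signals, samp_rate, q):
    """Quantize (center, bandwidth) tuples to multiples of q * samp_rate.

    Hypothetical helper illustrating the idea; the real quantization
    happens inside the Signal Detector block.
    """
    step = q * samp_rate
    quantized = []
    for center, bw in signals:
        # round the bandwidth UP to the next quantization step so the
        # extraction filter always covers the whole detected signal
        bw_q = step * math.ceil(bw / step)
        quantized.append((center, bw_q))
    return quantized

# two detected signals, quantized to steps of 0.25 * samp_rate = 250 Hz
rf_map = quantize_rf_map([(500.0, 230.0), (100.0, 600.0)], 1000.0, 0.25)
```

With a coarse step like this, small fluctuations of the estimated bandwidth map to the same quantized value, so the Signal Separator can keep its existing filter taps.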
To average the PSD efficiently, a single-pole IIR filter was implemented in the Signal Detector. The following equation describes its averaging behaviour:

avg[n] = alpha * psd[n] + (1 - alpha) * avg[n-1]

The parameter alpha can be set as a block parameter.
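As a minimal sketch, a single-pole IIR average over successive PSD vectors can be written like this (function and argument names are my own; the real filter runs inside the block):

```python
import numpy as np

def iir_average(psd_frames, alpha):
    """Average successive PSD vectors with a single-pole IIR filter:

        avg = alpha * new_frame + (1 - alpha) * avg

    A small alpha gives heavy smoothing (slow reaction), an alpha
    near 1 follows the newest PSD almost directly.
    """
    avg = np.array(psd_frames[0], dtype=float)
    for frame in psd_frames[1:]:
        avg = alpha * np.asarray(frame, dtype=float) + (1.0 - alpha) * avg
    return avg

# two PSD frames, alpha = 0.5: result is halfway between old and new
smoothed = iir_average([[1.0], [3.0]], 0.5)
```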
In order to QA test the Signal Separator, I needed to implement a basic Signal Extractor block, which takes a signal index as a block parameter and passes the samples of that signal on as a complex stream. I could also have done this directly in the Python QA code, but since I will need the block later anyway, I chose to implement it properly. While writing the QA test, it became clear that the Signal Separator did not output what was expected. I spent most of the week debugging this block and could finally achieve a behavior similar to the Frequency Xlating FIR Filter block:
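In essence, the prototype only has to pick one signal’s samples out of each incoming message and append them to its output stream. The message layout below (a list of per-signal complex arrays) is a simplification for illustration, not the actual PMT format used between the blocks:

```python
class SignalExtractor:
    """Minimal stand-in for the Signal Extractor prototype.

    Collects the samples of one selected signal from incoming
    Signal Separator messages and exposes them as a single
    complex stream.
    """

    def __init__(self, signal_index):
        self.signal_index = signal_index
        self.stream = []  # the complex output stream

    def handle_msg(self, msg):
        # msg: [samples_of_signal0, samples_of_signal1, ...]
        self.stream.extend(msg[self.signal_index])

# extract signal 1 from a message carrying two signals
ex = SignalExtractor(1)
ex.handle_msg([[1 + 0j], [2 + 1j, 3 + 0j]])
```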
For now, the block works by managing its own history of down-mixed input samples. A history vector holds one array per signal with the rotated input samples. Each buffer is then filtered by its corresponding FIR filter, and the decimated output samples are stored in a temporary buffer before being appended to the output message. I chose not to use the GNU Radio history here, because that would have meant extra CPU load to mix down the history samples again, load I would rather spare for the FIR filter calculations. At the end, the last (ntaps - 1) samples get memcopied to the beginning of the history buffer.
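The per-signal processing described above can be sketched as one work iteration in NumPy. This is a simplification under my own names and bookkeeping (the real block does this in C++ and per message), but it shows the rotate, filter, decimate, and history steps:

```python
import numpy as np

def separate_signal(history, new_samples, taps, f_rel, phase, decim):
    """One work iteration for a single detected signal (sketch).

    history     : rotated samples carried over from the last call,
                  length len(taps) - 1
    new_samples : fresh complex input samples
    taps        : low-pass FIR taps matching this signal's bandwidth
    f_rel       : signal center frequency relative to the sample rate
    phase       : rotator phase carried between calls
    decim       : decimation factor

    Returns (output, new_history, new_phase).
    """
    n = np.arange(len(new_samples))
    # mix the signal of interest down to baseband
    rotated = new_samples * np.exp(-2j * np.pi * f_rel * n + 1j * phase)
    new_phase = phase - 2 * np.pi * f_rel * len(new_samples)
    # prepend the history so the FIR filter sees a continuous stream
    buf = np.concatenate([history, rotated])
    # filter and decimate (np.convolve flips the taps, which is fine
    # for the symmetric low-pass designs used here)
    out = np.convolve(buf, taps, mode='valid')[::decim]
    # keep the last ntaps-1 rotated samples for the next call
    new_history = buf[-(len(taps) - 1):]
    return out, new_history, new_phase

out, hist, ph = separate_signal(
    np.array([0j]), np.array([1 + 0j, 2 + 0j]),
    np.array([1.0, 0.0]), 0.0, 0.0, 1)
```

Carrying the already-rotated samples in the history is exactly what avoids mixing them down a second time on the next call.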
The Signal Detector now has callbacks to change parameters during runtime. This is most useful for changing the threshold, or the sensitivity of the auto-threshold, to adapt the algorithm to the current spectrum. Tests already worked nicely.
I started with the GUI Sink, but mostly spent the time reading through existing source files and learning by doing. So far there is nothing to show (only a window pops up with an empty plot). I will try to embed my GUI in existing QT windows (like gr-qtgui does).
For the next week, the following tasks are waiting:
- Continue programming the GUI
- Finish QA tests for Signal Separator
- Write documentation for all blocks
- Think of a more efficient filter rebuild logic in the Signal Separator (currently all filters are re-calculated when a new RF Map message is received)