As described in my last blog post, my first task within GSoC was to implement a signal detection block. This week, I prototyped the Signal Detector in Python and did over-the-air testing to verify its functionality. I have also started on the Signal Separator block, which will separate all detected signals from the input signal and pass them serialized within a message (or possibly a tagged stream). As a first shot, I chose the second approach from the last blog post, where the signal frequency information is not stored in a central RF map but passed between blocks as messages (see figure below). This way, I don't have to deal with coordinating read/write access to a shared object. Also, if a GUI is used, the detected frequency bands can be altered by manual band selection before the signals get separated.
Signal Detector Block
Here is how the Signal Detector works. The PSD of the input spectrum is estimated with a periodogram (which will eventually be replaced by Welch's method). The PSD is then normalized to its maximum, so that the threshold value is defined between 0 and 1 for any signal.
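This first step can be sketched with NumPy and SciPy. The helper name and its guard for an all-zero PSD are my own illustration, not the block's actual code:

```python
import numpy as np
from scipy.signal import periodogram

def normalized_psd(samples, samp_rate):
    """Estimate the PSD with a simple periodogram and normalize it to its
    maximum, so a threshold between 0 and 1 applies to any input signal.
    Hypothetical helper mirroring the step described in the post."""
    freqs, psd = periodogram(samples, fs=samp_rate, return_onesided=False)
    peak = psd.max()
    if peak == 0:  # guard against an all-zero PSD (see ToDo list below)
        return freqs, psd
    return freqs, psd / peak
```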
The threshold can be entered manually (between 0 and 1), or the automatic threshold function can be used. If it is activated, a sensitivity must be entered instead. The threshold is then calculated by ordering the frequency bins by their absolute value and searching for a difference between two succeeding samples greater than 1 - sensitivity (since high sensitivity means low threshold). As the maximum jump (= difference) can be 1, the sensitivity is also defined between 0 and 1. The value at the jump position is then used as the threshold.
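The automatic threshold rule could look roughly like this. Taking the upper value at the first sufficiently large jump is my assumption; the function name is illustrative:

```python
import numpy as np

def auto_threshold(psd_norm, sensitivity):
    """Sort the normalized PSD bins and look for the first gap between
    succeeding values larger than 1 - sensitivity; the value at that jump
    becomes the threshold. Sketch of the rule described in the post."""
    sorted_bins = np.sort(psd_norm)
    diffs = np.diff(sorted_bins)
    jumps = np.where(diffs > 1.0 - sensitivity)[0]
    if len(jumps) == 0:
        return 1.0  # no jump found: nothing will exceed the threshold
    # assumption: use the value just above the jump as the threshold
    return sorted_bins[jumps[0] + 1]
```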
Next, adjacent frequency bins above the threshold are interpreted as one signal by searching for runs of succeeding bins and writing their start and stop positions into an array.
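A minimal sketch of this grouping step, working on bin indices (the conversion to actual frequencies is omitted; the function name is mine):

```python
import numpy as np

def find_bands(psd_norm, threshold):
    """Group adjacent bins at or above the threshold into (start, stop)
    index pairs. Sketch of the grouping step described in the post."""
    above = psd_norm >= threshold
    bands = []
    start = None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i          # a new run of signal bins begins
        elif not flag and start is not None:
            bands.append((start, i - 1))  # run ended at the previous bin
            start = None
    if start is not None:      # run extends to the last bin
        bands.append((start, len(above) - 1))
    return bands
```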
Last, it is checked whether the frequency bands have changed since the last calculation (within a 1% tolerance, because of jiggling). If so, the array is converted to a pmt vector and passed as a message. This way, all succeeding blocks know the detection has changed only when a message is received.
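The change check could be sketched like this. Measuring the 1% tolerance relative to the previous band's width is my assumption, as is the function name:

```python
def bands_changed(new_bands, old_bands, tol=0.01):
    """Return True if the detected bands differ from the previous ones by
    more than the relative tolerance (1% by default), so that a message is
    only emitted on actual detection changes. Hypothetical sketch."""
    if len(new_bands) != len(old_bands):
        return True
    for (f0_new, f1_new), (f0_old, f1_old) in zip(new_bands, old_bands):
        # assumption: tolerance is taken relative to the old band's width
        width = max(abs(f1_old - f0_old), 1e-12)
        if abs(f0_new - f0_old) > tol * width or abs(f1_new - f1_old) > tol * width:
            return True
    return False
```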
Simulation with three cosines works well (see figure below). An over-the-air test will soon be provided.
- Noise is still a problem. If noise is dominant in the input signal, the false detection rate is way too high. Maybe I will abstain from normalizing the PSD, because then pure noise also gets mapped to values between 0 and 1. My target is to have no detections when only noise is present.
- Error handling (e.g. if the PSD is all zeros) needs to be improved
- Set input length as parameter
- To avoid jiggling, the frequency values can be quantized
- Output PSD estimation as stream
- C++ implementation
Signal Separator Block
The second block I have dealt with this week is the Signal Separator. I plan to make it work with a Xlating FIR filter for every detected signal. For this purpose, the block contains a vector with all the filters for the currently detected signals. In the work() function, the input signal is filtered by every filter in this vector, and each output is appended to a result vector that will be passed as a message (along with the signal number and frequency information).
The filters get rebuilt every time a message with band information is received, which only happens on actual detection changes. I use firdes to generate the low-pass taps and pass them to the new filters.
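What a frequency-xlating FIR filter does per detected band can be sketched in plain NumPy/SciPy, with `firwin` standing in for GNU Radio's firdes; the function and parameter names are illustrative, not the block's actual interface:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def separate_signal(samples, samp_rate, center_freq, bandwidth):
    """Mix one detected band down to baseband and low-pass filter it,
    mimicking a frequency-xlating FIR filter for a single signal.
    Hypothetical sketch; firdes.low_pass would generate the taps in GRC."""
    n = np.arange(len(samples))
    # shift the band of interest to 0 Hz
    shifted = samples * np.exp(-2j * np.pi * center_freq / samp_rate * n)
    # low-pass taps with the cutoff at half the detected bandwidth
    taps = firwin(101, bandwidth / 2, fs=samp_rate)
    return lfilter(taps, 1.0, shifted)
```

In the block itself, one such filter per detected signal would live in the filter vector, and each output would be appended to the result vector.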
- Use the FIR filter kernel instead of the Xlating FIR filter block and implement the xlating part myself for performance reasons
- Optimize filter building
- Check whether a signal was already present before and only a new signal appeared, and reuse the previous filters
- Build a dictionary of taps for different cutoff frequencies and transition widths so they can be reused (possible with quantized frequency steps)
- Write unit tests
For the next week, I will deal with the ToDos mentioned in the two sections above. I also hope to get started with the GUI in the third week.
As always, please feel free to give feedback here and on my previous post as well.