A couple of years ago I did some GPS-related work, taking data from the analogue-to-digital converters in the radio receivers all the way to the binary data stream. I'm going to (very slowly) tweet a series on the large amount of pretty mathematics involved at each stage.
I used a box like this: ettus.com/product-catego…
It measures the varying electric (I think) field E(t) in the antenna. Challenge: we want a 1.5GHz signal. The Nyquist sampling theorem says we need samples at 3GHz. How do we do that on a PC? en.wikipedia.org/wiki/Nyquist%E…
Here's the trick: the full Nyquist theorem says you can reconstruct a signal from samples if all its frequencies lie in the range [f₁,f₂] and you sample at rate 2(f₂-f₁). If analogue hardware filters the signal to a narrow range before sampling we're fine, e.g. 1.5GHz±20MHz. en.wikipedia.org/wiki/Nyquist–S…
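Here's a quick numpy sketch of that trick (my illustration; the tone and rates are made up): a tone near 1.5GHz sampled at only 80MHz, twice the 40MHz bandwidth, lands at a predictable alias.

```python
import numpy as np

f_tone = 1.501e9   # a tone inside the band 1.5GHz ± 20MHz
fs = 80e6          # sample rate: twice the 40MHz bandwidth, far below 3GHz

n = np.arange(4096)
x = np.cos(2 * np.pi * f_tone * n / fs)   # sample the RF field directly

# Find the strongest frequency in the sampled signal.
spectrum = np.abs(np.fft.rfft(x))
f_alias = np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spectrum)]
print(f"tone appears at {f_alias / 1e6:.1f} MHz")  # 19.0 = |1.501GHz - 19×80MHz|
```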
If the field at the antenna is E(t) you probably expect that the samples given by the hardware are E(t₁), E(t₂),... at equally spaced intervals. In fact the data produced by SDRs are typically complex-valued samples...
What actually happens is this: if hardware receives the signal cos(2πft), you can think of this as (exp(2πift)+exp(-2πift))/2, the sum of two frequencies. The hardware throws away the negative frequency part and gives you samples from exp(2πift). I should say how it does this...
Interlude: I've mentioned a couple of theorems so far. If you hadn't noticed, this is my primary focus: the amazing array of theorems involved in getting GPS info from satellites to us.
If we start with real valued E(t) and remove the negative frequency parts we end up with E(t)+iF(t). For example, if E(t)=cos(2πft) then F(t)=sin(2πft) and E(t)+iF(t) is exp(2πift). The operation that gives us F from E is called the Hilbert transform.
en.wikipedia.org/wiki/Hilbert_t…
So from a functional perspective a software defined radio gives us samples of the electric field E and also samples of its Hilbert transform F. As a developer this was my starting point, receiving these samples. But why do we want exp(2πift) instead of cos(2πft)?
(BTW I don't know how an SDR physically computes the Hilbert transform. In principle it's just a convolution and engineers have a long history of building circuits to compute convolutions. But it's a particularly hard convolution to compute.)
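In software, though, it's easy: FFT, zero the negative frequencies, inverse FFT. A minimal numpy sketch of mine (scipy.signal.hilbert does essentially this):

```python
import numpy as np

def analytic(x):
    """Return E + iF for real samples E, by killing negative frequencies."""
    n = len(x)                 # assumed even here
    X = np.fft.fft(x)
    X[1:n // 2] *= 2           # keep the positive frequencies (doubled)...
    X[n // 2 + 1:] = 0         # ...throw away the negative ones
    return np.fft.ifft(X)      # DC and Nyquist bins left untouched

t = np.arange(1024) / 1024
z = analytic(np.cos(2 * np.pi * 50 * t))
# z is (numerically) exp(2πi·50t): real part cos, imaginary part sin.
print(np.allclose(z.real, np.cos(2 * np.pi * 50 * t)),
      np.allclose(z.imag, np.sin(2 * np.pi * 50 * t)))
```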
Suppose we want to transmit some music a(t). An AM radio station does this by broadcasting the field a(t)cos(2πft). The cos(2πft) is called the carrier (f might be 100MHz, say) and the music is slowly varying by comparison (maybe it has components up to 20kHz).
So if I receive E(t)=a(t)cos(2πft) at my antenna, how can I recover a(t) to play through the speaker?
Old school way: it's easy (*) to build a circuit to compute |E(t)| from E(t) and another to smooth out the high frequency oscillations, leaving us with a constant multiple of a(t).
(*) easy=a clumsy kid of age 10 could build one from discrete components by following instructions in a kit
In an SDR the input field cos(2πft) is converted to samples of exp(2πift). Because the system is linear, for a slowly varying a(t), E(t)=a(t)cos(2πft) is converted to E(t)+iF(t)=a(t)exp(2πift).
(Here's a theorem about what "slowly varying" means:
en.wikipedia.org/wiki/Hilbert_t…)
This means that in principle you can decode AM radio in a few lines of code by asking the SDR for its complex valued samples E(t)+iF(t)=a(t)exp(2πift), finding the absolute value |E(t)+iF(t)|=|a(t)|, and interpreting that as audio. In the absence of noise that might actually work.
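Here's roughly what those few lines could look like, as a noise-free toy of my own (broadcast AM adds a DC offset so a(t) stays positive, which I do here too):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                         # one second of samples
a = 1.0 + 0.5 * np.sin(2 * np.pi * 440 * t)    # message, kept positive
f = 10000.0                                    # carrier frequency
iq = a * np.exp(2j * np.pi * f * t)            # what the SDR hands us

audio = np.abs(iq)                             # |a(t)exp(2πift)| = a(t) since a(t) > 0
print(np.allclose(audio, a))                   # True: message recovered
```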
(Aside: FM encodes the audio in the phase, not the absolute value. You can look at actual code for a low-cost SDR here: github.com/dpiponi/mini_fm By convention the real and imaginary parts are called I and Q and those names are used in that code.)
If your SDR gives you samples u(t)=a(t)exp(2πift) then another way to recover a(t) is to compute u(t)exp(-2πift). Because we have complex samples removing the carrier is just complex multiplication! But there's a catch. The signal we actually receive is u(t)=a(t)exp(2πift)+noise
So multiplying by exp(-2πift) gives us a(t)+other noise. We've turned our problem into another: detecting a slowly varying message a(t) in the presence of additive noise.
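Continuing the toy example above (my sketch, now with additive complex noise): removing the carrier really is just one complex multiplication per sample.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
a = 1.0 + 0.5 * np.sin(2 * np.pi * 440 * t)              # the message
f = 10000.0                                              # the carrier
noise = 0.1 * (rng.standard_normal(fs) + 1j * rng.standard_normal(fs))
u = a * np.exp(2j * np.pi * f * t) + noise               # received samples

baseband = u * np.exp(-2j * np.pi * f * t)               # a(t) + shifted noise
```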
Suppose we can solve this problem. Here's something really cool: if you don't know the exact carrier...
...we can try looking for a message in u(t)exp(-2πift) for different values of f. In other words, after we've collected our samples we can easily try many different carriers just by multiplication. For GPS this turns out to be really useful. The satellites are moving at km/s...
...so there's significant Doppler shift. If we don't know the velocity we need to try many different frequencies. Having complex valued samples makes this easy.
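Here's a toy version of that search (my sketch; all the numbers are made up): a tone at an offset the receiver doesn't know, buried in noise. We try a grid of candidate frequencies after the fact and keep the one with the biggest response.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100000.0
t = np.arange(500) / fs                     # 5ms of samples
doppler = 2320.0                            # offset unknown to the receiver
u = (np.exp(2j * np.pi * doppler * t)
     + 0.5 * (rng.standard_normal(500) + 1j * rng.standard_normal(500)))

# One complex multiply-and-sum per candidate carrier; biggest response wins.
trials = np.arange(-5000.0, 5000.0, 100.0)
response = [abs(np.sum(u * np.exp(-2j * np.pi * f * t))) for f in trials]
print(trials[np.argmax(response)])          # 2300.0, nearest the true offset
```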
But now I need to talk about that noise term. OMG "noise" is an understatement!
The GPS satellites (~20000 km away) transmit an electromagnetic field which ultimately causes electrons in your antenna to oscillate. This is what a radio receiver detects. But these oscillations are so tiny they're smaller than the motion due to the heat of the antenna!
And it gets worse. The GPS satellites all transmit using the same carrier frequency. We need to disentangle multiple signals that have been added together and buried under thermal noise. To do this we'll need some probability theory...
Pick N numbers independently and uniformly at random from the set {-1,1}. Call them (a(1),...,a(N)). Pick N more called (b(1),...,b(N)). Use ∑ to mean the sum from i=1 to N. Let E be expectation. You can show (there's a quick numerical check after this list):
E[∑a(i)a(i)]=N (trivial :)
E[∑a(i)b(i)]=0
Standard deviation of ∑a(i)b(i) = √N
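Here's that quick check, as a numpy sketch of mine (N and the number of trials are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 1000, 2000
a = rng.choice([-1, 1], size=(trials, N))
b = rng.choice([-1, 1], size=(trials, N))

print((a * a).sum(axis=1).mean())    # exactly N = 1000
z = (a * b).sum(axis=1)
print(z.mean())                      # ≈ 0
print(z.std(), np.sqrt(N))           # both ≈ 31.6
```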
In an ideal scenario, if you know a, and receive the sequence u(i)=Aa(i)+Bb(i) with unknown A and B, you compute Z=∑a(i)u(i). You can show E[Z]=NA. Also, if N is large enough, N >> √N, so the contribution to Z from B∑a(i)b(i) is relatively small and Z/N is a good estimate for A.
Less ideal: suppose a(i) is being transmitted on a repeating loop but we don't know when the loop starts. What we actually receive now is u(i)=Aa(i+r mod N)+Bb(i) where we don't know r.
Because I said the a(i) are independent, when r≠0 mod N, E[∑a(i)a(i+r)] = 0 with std dev √N. When we compute Z=∑a(i)u(i) we get E[Z]=AN for r=0 mod N, otherwise 0. So if N >> √N and we keep computing Z over the last N numbers we've received, we get a peak whenever the a(i) restart.
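Here's a sketch of mine of that running correlation: a length-N ±1 code repeating in a noisy stream, and Z spiking at the unknown offset.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
a = rng.choice([-1.0, 1.0], size=N)      # the known code
r = 137                                  # unknown start offset
A = 1.0
stream = A * np.roll(np.tile(a, 4), r) + rng.standard_normal(4 * N)

# Z at each position k: correlate the code against the next N samples.
Z = np.array([np.dot(a, stream[k:k + N]) for k in range(3 * N)])
print(np.argmax(Z) % N)                  # 137: the peaks reveal r
```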
Here I've used a discrete variable i to index a as if we receive a(i) neatly one at a time, but in practice the EM field is a function of time. The signal being sent is a periodic "squarish" shaped wave g(t)=a(floor(ft) mod N) where f is some frequency...
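For concreteness, here's one way to sample that squarish wave (my sketch; the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
a = rng.choice([-1.0, 1.0], size=N)       # the repeating code

f = 1000.0                                # code values per second
fs = 16000.0                              # sample rate: several samples per value
t = np.arange(1024) / fs
g = a[np.floor(f * t).astype(int) % N]    # g(t) = a(floor(ft) mod N)
```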
(BTW we can do better than picking a(i) randomly, but I don't want to talk about that yet as it deserves its own separate section.)
Now with pictures. Let's just look at an example "random" function g(t) and a copy of this function offset by some number r, g(t+r). The animated GIF shows what happens as you vary r. I've illustrated g(t) and g(t+r) in the top left.
At the top right is the product of these two functions, and at the bottom left is a graph of the sum of g(t)·g(t+r) over one period of g, as a function of r. The vertical line marks the value of r currently shown in the upper graphs.
I hope you can see that the graph at the bottom left shows how well g(t) and g(t+r) match up. There are some "accidental" peaks where g(t) looks similar to itself, but the big peak happens when r=0 mod 20 (20 being the period of g in this example).
