[Ground-station] half-precision floats?

Phil Karn karn at ka9q.net
Wed Jul 25 23:43:16 PDT 2018


Does anybody have experience with (or opinions about) using
half-precision 16-bit floating point? The format was developed for
computer graphics, but I think it might be useful in SDRs.

It's a fairly recent addition to IEEE-754 floating point (the binary16
interchange format), and newer x86 processors have instructions to
convert between half precision (which you'd use only as an interchange
or storage format) and 32- and 64-bit floats (which you'd still use
for computation). There's a sign bit, a 5-bit binary exponent and a
10-bit mantissa with an implicit leading '1' (when normalized).
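
For the curious, those conversions are exposed as the F16C intrinsics
in <immintrin.h>. A minimal round-trip looks something like this
(untested sketch; compile with -mf16c on gcc/clang):

/* Round-trip a value through binary16 using the x86 F16C scalar
 * conversion intrinsics. Storage is a plain uint16_t; all
 * arithmetic stays in 32-bit float. */
#include <stdio.h>
#include <stdint.h>
#include <immintrin.h>

int main(void)
{
    float x = 0.123456f;

    /* float -> binary16, round to nearest even */
    uint16_t h = _cvtss_sh(x, _MM_FROUND_TO_NEAREST_INT);

    /* binary16 -> float for computation */
    float y = _cvtsh_ss(h);

    printf("in %.8f  out %.8f  err %.2e\n", x, y, (double)(y - x));
    return 0;
}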

Ordinary 16-bit integers have a dynamic range of about 100 dB, which is
plenty for most RF A/Ds. But most radio receivers need more than 100 dB
of dynamic range, so you need an analog AGC ahead of the A/D.

That raises the question of how to pass the analog gain settings
downstream with the (scaled) sampled data. This is useful for displaying
(roughly) absolute signal levels to the user and avoiding unnecessary
pumping of the digital AGC operating on the final, filtered signal.

I currently pass the analog gain settings as metadata with each packet
of 16-bit integer I/Q samples on the IP multicast stream from front end
module to receiver module. The latter converts to single precision float
for processing, applying those gain settings so I can work with 32-bit
floating point samples that reflect absolute signal levels irrespective
of the analog AGC settings (assuming those gains are accurate, of course
-- but that's another topic). All this is working very well.
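
The receive-side conversion is conceptually something like the sketch
below. The function name, the dB convention and the scaling constant
are illustrative, not my actual code:

/* Convert a packet of interleaved 16-bit integer I/Q samples to
 * 32-bit complex floats, dividing out the analog gain carried in
 * the packet metadata so the result is on an absolute scale. */
#include <stdint.h>
#include <math.h>
#include <complex.h>

void int16_iq_to_float(complex float *out, const int16_t *in,
                       int nsamples, float analog_gain_db)
{
    /* Undo the analog gain; 32768 maps full scale to +/-1.0 */
    float scale = powf(10.0f, -analog_gain_db / 20.0f) / 32768.0f;

    for (int i = 0; i < nsamples; i++)
        out[i] = scale * CMPLXF(in[2*i], in[2*i + 1]);
}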

If I used floating point as my I/Q exchange format, I could apply those
gain settings directly to each sample and eliminate the explicit
metadata. But 32-bit single-precision floats would double the network
bit rate, and that's a significant penalty. Hence my interest in 16-bit
floats. (In either case there's a hardware-dependent timing skew between
analog gain changes and when they actually take effect on the A/D
output. Compensating for that skew seems best done close to the A/D,
i.e., on the sending end of the I/Q multicast stream. Incorporating
the gain settings into the floating point samples would allow gain
changes on other than packet boundaries.)
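
On the sending end that would look roughly like the sketch below
(again illustrative, not real code). Since the scale factor multiplies
every sample individually, nothing ties gain changes to packet
boundaries anymore:

/* Fold the current gain correction into each sample and emit
 * binary16 for the wire (F16C scalar intrinsic, -mf16c). Here
 * 'scale' is constant per call, but the loop could just as well
 * consult a time-varying gain to track changes mid-packet. */
#include <stdint.h>
#include <immintrin.h>

void pack_iq_fp16(uint16_t *out, const int16_t *in, int n, float scale)
{
    for (int i = 0; i < n; i++)
        out[i] = _cvtss_sh((float)in[i] * scale,
                           _MM_FROUND_TO_NEAREST_INT);
}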

I see only one drawback: a 16-bit half-precision float obviously
cannot have the same 100 dB *instantaneous* (spur-free) dynamic range
as a 16-bit integer. If you consider only normalized mantissas with 11
effective bits, the spur-free range would be about 68 dB. But floating
point numbers can also be subnormal (denormalized), so I'm not sure
how to compute the ENOB (effective number of bits).
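
To put numbers on that, using the usual 6.02N + 1.76 dB
quantization-SNR rule for a full-scale sine (which applies to the
11-bit mantissa only while the exponent sits still):

/* Back-of-envelope dynamic range comparison. The exponent plus
 * subnormals extend the *total* binary16 range far beyond the
 * instantaneous mantissa SNR, at the cost of precision near the
 * bottom. Needs -lm. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("16-bit integer SNR:     %.1f dB\n", 6.02 * 16 + 1.76);
    printf("binary16 mantissa SNR:  %.1f dB\n", 6.02 * 11 + 1.76);

    double max_normal  = 65504.0;  /* largest finite binary16 */
    double min_subnorm = 0x1p-24;  /* smallest binary16 subnormal */
    printf("binary16 total range:   %.0f dB\n",
           20.0 * log10(max_normal / min_subnorm));
    return 0;
}

which prints roughly 98 dB, 68 dB and 241 dB. So a half float gives up
about 30 dB of instantaneous SNR versus a 16-bit integer, but the
exponent and subnormals slide that 68 dB window over an enormous total
range.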

Few (if any) A/Ds have an ENOB anywhere close to 16 bits anyway. OTOH,
to keep network traffic down I will often transfer some highly decimated
sample streams with much higher ENOBs than the A/Ds producing them.
Hence my dilemma.
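
For what it's worth, the ENOB growth from decimation is just
processing gain: assuming white quantization noise and proper
filtering, decimating by N buys 10*log10(N) dB of SNR, about half a
bit per factor of two:

/* Rough processing-gain estimate for a few decimation ratios. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    for (int n = 4; n <= 4096; n *= 4)
        printf("decimate by %4d: +%4.1f dB SNR, +%.1f bits ENOB\n",
               n, 10.0 * log10(n), 0.5 * log2(n));
    return 0;
}

A 4096:1 decimation adds about six bits, which is how a decimated
stream can outrun both the A/D's ENOB and the 11-bit binary16
mantissa.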

So, are 16-bit floats usable for SDR I/Q sample exchange?

Phil



