Beamforming Explained – a generalist approach to array signal processing

Beamforming has long been a topic for physicists and signal processing researchers. In this talk, Tobias Dahl aims to open up the principles of beamforming for researchers with backgrounds in multiple quantitative disciplines: partial differential equations, statistics, machine learning and data analysis, chemometrics, psychometrics, cybernetics, and others who feel they could understand the basics without taking on a (new) master's degree in physics or digital signal processing.

[Image: by Davidjessop (Own work), CC BY-SA 4.0, via Wikimedia Commons]

'Beamforming' can be seen as a way of processing signals in audio, acoustic and radar imaging applications. When an array of sensors receives multiple 'copies' of signals emanating from sources in all directions, so-called 'array focusing' methods are used to 'home in' on signals or reflectors of interest while suppressing the effect of interfering signals. These methods are often grounded in physical wave propagation and physical 'sampling' principles, not too different from a layman's understanding of how a parabolic antenna or a lens in optics works.
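To make the 'array focusing' idea concrete, here is a minimal delay-and-sum sketch (my own illustration, not material from the talk): each channel is delayed so that a wave arriving from the chosen direction adds up coherently, while waves from other directions partially cancel. The uniform linear array geometry, spacing and sample rate are illustrative assumptions.

```python
import numpy as np

# Minimal delay-and-sum sketch for a uniform linear array of M microphones.
# A wave from direction 'steer_deg' reaches sensor m a little later than
# sensor 0; undoing those delays makes the copies add up coherently.

fs = 16_000      # sample rate [Hz] (illustrative)
c = 343.0        # speed of sound [m/s]
M = 8            # number of microphones
spacing = 0.04   # inter-microphone distance [m]

def delay_and_sum(x, steer_deg):
    """x: (M, n) array of sensor signals; returns the focused signal."""
    tau = np.arange(M) * spacing * np.sin(np.deg2rad(steer_deg)) / c  # delays [s]
    n = x.shape[1]
    freqs = np.fft.rfftfreq(n)                 # frequencies in cycles per sample
    X = np.fft.rfft(x, axis=1)
    # Apply fractional-sample advances as phase shifts in the frequency domain
    X *= np.exp(2j * np.pi * freqs[None, :] * (tau[:, None] * fs))
    return np.fft.irfft(X, n=n, axis=1).mean(axis=0)

# A 1 kHz tone arriving from 20 degrees, simulated per sensor
t = np.arange(2048) / fs
tau = np.arange(M) * spacing * np.sin(np.deg2rad(20)) / c
x = np.stack([np.sin(2 * np.pi * 1000 * (t - d)) for d in tau])
focused = delay_and_sum(x, steer_deg=20)  # copies aligned: strong output
```

Steering the same data toward a different angle leaves the copies misaligned and the averaged output drops; this is the array analogue of pointing a parabolic dish.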

However, once a set of signals has been sampled into a computer, the problem of estimating sound sources or reflectors can also be seen as a statistical estimation problem. Using a very general approach that can be understood with only high-school physics, we can derive a compact model that invites statistical estimation. Interestingly, the (ill-posed) estimation procedures resulting from this model differ from the typical beamforming approaches and algorithms that represent the current state of the art.
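As a deliberately simplified reading of this view (my own framing, not necessarily the speaker's model): stack one snapshot of sensor readings into a vector y, let each column of a matrix A describe how a unit source at one candidate position appears across the array, and estimate the source amplitudes s in y ≈ As. With more candidate positions than sensors the problem is ill-posed, so some regularization (here Tikhonov/ridge) is needed:

```python
import numpy as np

# Toy illustration of the estimation view: y = A s + noise, where column j
# of A describes how a unit source at candidate position j appears across
# the sensors. With more candidates than sensors, regularization is needed.

rng = np.random.default_rng(0)
n_sensors, n_candidates = 8, 40

A = rng.standard_normal((n_sensors, n_candidates))  # stand-in for a propagation model
s_true = np.zeros(n_candidates)
s_true[[5, 23]] = [1.0, -0.7]                       # two active sources
y = A @ s_true + 0.05 * rng.standard_normal(n_sensors)

lam = 0.1                                           # regularization strength
s_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_candidates), A.T @ y)
```

The point is that nothing here is beamforming-specific: it is ordinary linear-model estimation, open to the whole toolbox of statistics and machine learning.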

Soon, many of us will own a 'microphone array' at home, as smart speakers such as Amazon Echo and Google Home become popular (1 in 6 Americans now own such a smart speaker). Speech recognition with microphone arrays is therefore used as an example throughout the lecture. In fact, array signal processing is a skill in such demand that big companies like Google and Microsoft now recruit beamforming experts from France and Germany to serve their internal engineering needs.

[Image: by Chetvorno (Own work), CC0, via Wikimedia Commons]

This talk is educational in nature, and it aims to provide a good understanding of array-based estimation for researchers who are not familiar with digital signal processing. The model and approach are chosen so that we avoid introducing plane-wave physics, extensive use of complex numbers, Fourier transforms and sampling theory. I hope to open up this space for researchers from the many quantitative disciplines mentioned above, without requiring a (new) master's degree in physics or digital signal processing.

Finally, the statistical estimation approach also serves as a 'lens' through which one can examine and criticize current beamforming practice. In this lecture we look at how the evolved tricks of the beamforming trade can be traced back to the days of analog signal processing and to assumptions of stationarity (which does not always hold). We also consider how estimation could be improved as more computing power becomes available.
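As one illustration of where stationarity enters, consider a standard covariance-based beamformer such as MVDR/Capon, named here as a representative of common practice rather than as the talk's specific target: its weights are built from a spatial covariance matrix averaged over many snapshots, which only makes sense if the scene is statistically stationary over that window. A minimal sketch, with illustrative parameters:

```python
import numpy as np

# Sketch of a covariance-based beamformer (MVDR/Capon). The sample
# covariance R is averaged over many snapshots, which implicitly assumes
# the scene is statistically stationary over the averaging window.

rng = np.random.default_rng(1)
M, n_snapshots = 8, 200
# Steering vector toward 20 degrees, half-wavelength sensor spacing
a = np.exp(-1j * np.pi * np.arange(M) * np.sin(np.deg2rad(20)))

# Simulated snapshots: one source along 'a' plus sensor noise
sig = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.1 * (rng.standard_normal((M, n_snapshots))
               + 1j * rng.standard_normal((M, n_snapshots)))
snapshots = a[:, None] * sig + noise

R = snapshots @ snapshots.conj().T / n_snapshots  # the stationarity assumption
w = np.linalg.solve(R, a)
w /= a.conj() @ w                # MVDR: minimum power, unit gain toward 'a'
output = w.conj() @ snapshots    # beamformed snapshots
```

When the scene changes faster than the averaging window, R blurs over several 'states' of the world; with more compute, one can entertain estimators that work per snapshot instead.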

This presentation is a result of my research in progress. Feel free to give input, ask questions and challenge!
