Multispectral imaging is an attractive sensing modality for small unmanned aerial vehicles (UAVs) in numerous military and civilian applications such as reconnaissance, target detection, and precision agriculture. Cameras based on patterned filters in the focal plane, such as conventional colour cameras, represent the most compact architecture for spectral imaging, but image reconstruction becomes challenging at higher band counts. We consider a camera configuration where six bandpass filters are arranged in a periodically repeating pattern in the focal plane. In addition, a large unfiltered region permits conventional monochromatic video imaging that can be used for situational awareness (SA), including estimating the camera motion and the 3D structure of the ground surface. As the platform moves, the filters are scanned over the scene, capturing an irregular pattern of spectral samples of the ground surface. Through estimation of the camera trajectory and 3D scene structure, it is still possible to assemble a spectral image by fusing all measurements in software. The repeated sampling of bands enables spectral consistency testing, which can improve spectral integrity significantly. The result is a truly multimodal camera sensor system able to produce a range of image products. Here, we investigate its application in tactical reconnaissance by pushing towards on-board real-time spectral reconstruction based on visual odometry (VO) and full 3D reconstruction of the scene. The results are compared with offline processing based on estimates from visual simultaneous localisation and mapping (VSLAM) and indicate that the multimodal sensing concept has a clear potential for use in tactical reconnaissance scenarios.
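The spectral consistency testing mentioned above can be illustrated with a minimal sketch: a ground point is sampled several times in the same band as the filters sweep over it, so inconsistent measurements (from misregistration, occlusion, or moving objects) can be rejected before fusion. The function name, the MAD-based outlier rule, and the threshold value are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fuse_band_samples(samples, max_dev=2.5):
    """Fuse repeated spectral samples of one band for one ground point.

    `samples` is a 1-D array of radiance values for the same band and
    ground point; `max_dev` is an assumed outlier threshold expressed
    in robust standard deviations.
    """
    samples = np.asarray(samples, dtype=float)
    med = np.median(samples)
    # Median absolute deviation, scaled to be comparable to a std. dev.
    mad = 1.4826 * np.median(np.abs(samples - med))
    if mad == 0.0:
        return float(med)
    # Keep only measurements consistent with the robust estimate.
    keep = np.abs(samples - med) <= max_dev * mad
    return float(samples[keep].mean())
```

With three consistent samples and one outlier, e.g. `fuse_band_samples([1.0, 1.1, 0.9, 5.0])`, the outlier is rejected and the remaining samples are averaged.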
We propose a method for jointly estimating intrinsic calibration and internal clock synchronisation for a pan-tilt-zoom (PTZ) camera using only data that can be acquired in the field during normal operation. Results show that this method is a promising starting point towards using software to replace costly timing hardware in such cameras. Through experiments we provide calibration and clock synchronisation for an off-the-shelf low-cost PTZ camera, and observe greatly improved directional accuracy, even during mild manoeuvres.
Morrison, Aiden; Sokolova, Nadezda; Haavardsholm, Trym Vegard; Hagen, Ove Kent; Opsahl, Thomas Olsvik & Ånonsen, Kjetil Bergh
(2017).
Collaborative indoor navigation for emergency services personnel.
IEEE Aerospace Conference. Proceedings.
ISSN 1095-323X.
2017-June.
doi: 10.1109/AERO.2017.7943729.
First responders and other emergency services personnel must often enter buildings which prevent the use of GPS or other satellite navigation signals for positioning. Loss of navigation capability, combined with the fact that the buildings are often unknown to the personnel in question, makes it more difficult for individual team members to coordinate with one another, and difficult or impossible for the team leader to monitor and direct the actions of each team member. While inertial navigation or pedestrian dead reckoning provide some degree of navigation in GPS-denied environments, these solutions degrade with time and may require prohibitively large and expensive inertial systems to navigate over extended periods, while also allowing each individual user to accumulate independent positioning errors and thereby appear to ‘drift away’ from the others. This paper presents an implementation of a collaborative navigation system utilizing user-to-user radio links, Global Navigation Satellite Systems (GNSS) when available, inertial navigation, pedestrian dead reckoning, and camera-based Simultaneous Localization and Mapping (SLAM) to provide a team of users with absolute and relative situational awareness for themselves and their team. The application of collaborative navigation to such a team yields three benefits: improved absolute navigation accuracy, improved relative navigation accuracy, and greatly enhanced situational awareness for all cooperating team members.
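The core idea of collaborative navigation — combining one user's drifting dead-reckoning estimate with a measurement relative to a teammate — can be sketched as a variance-weighted fusion of two independent position estimates. This 1-D sketch is purely illustrative (the function name and scalar state are assumptions); real systems fuse full navigation states in a filter.

```python
def fuse_positions(x1, var1, x2, var2):
    """Variance-weighted fusion of two independent 1-D position estimates,
    e.g. a user's own dead-reckoning fix (x1) and a fix derived from a
    teammate's position plus a radio range measurement (x2).

    Returns the fused estimate and its (reduced) variance.
    """
    w = var2 / (var1 + var2)          # weight the more certain estimate higher
    fused = w * x1 + (1.0 - w) * x2
    fused_var = var1 * var2 / (var1 + var2)  # always <= min(var1, var2)
    return fused, fused_var
```

For equally uncertain estimates the result is the midpoint with half the variance, which is why cooperating users stop ‘drifting away’ from one another: every exchange tightens both absolute and relative accuracy.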
Haavardsholm, Trym Vegard; Smestad, Ragnar; Larsen, Martin Vonheim; Thoresen, Marius & Dyrdal, Idar
(2016).
Scene Understanding for Autonomous Steering.
STO-MP-IST-127: Intelligence and Autonomy In Robotics.
NATO Science and Technology Organization.
ISBN 978-92-837-2068-3.
Ringaby, Erik; Friman, Ola; Forssén, Per-Erik; Opsahl, Thomas Olsvik; Haavardsholm, Trym Vegard & Kåsen, Ingebjørg
(2014).
Anisotropic scattered data interpolation for pushbroom image rectification.
IEEE Transactions on Image Processing.
ISSN 1057-7149.
23(5),
pp. 2302–2314.
doi: 10.1109/TIP.2014.2316377.
(2014).
Compact camera for multispectral and conventional imaging based on patterned filters.
Applied Optics.
ISSN 1559-128X.
pp. C64–C71.
doi: 10.1364/AO.53.000C64.
A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development.
In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient. The architecture is module based, so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations.
A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
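The exchangeable-module idea described above can be sketched in a few lines: a pipeline holds an ordered list of stages sharing a common interface, so one segmentation algorithm can be swapped for another without touching the rest of the system. This is an illustrative Python sketch (the actual FFI framework is a multi-threaded C++ implementation); all class, function, and stage names are assumptions.

```python
import numpy as np

class Pipeline:
    """Minimal exchangeable-module pipeline: each stage is a callable that
    takes a frame and returns a frame."""
    def __init__(self):
        self.stages = []

    def add(self, stage):
        self.stages.append(stage)
        return self  # allow chained configuration

    def run(self, frame):
        for stage in self.stages:
            frame = stage(frame)
        return frame

def normalize(img):
    # Stretch intensities to [0, 1]; a stand-in for radiometric calibration.
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

# Two interchangeable segmentation modules with the same interface:
def threshold_segmentation(img):
    return img > 0.5          # fixed global threshold

def adaptive_segmentation(img):
    return img > img.mean()   # alternative module, drop-in replacement

pipe = Pipeline().add(normalize).add(threshold_segmentation)
mask = pipe.run(np.array([[0, 10], [200, 255]]))
```

Swapping `threshold_segmentation` for `adaptive_segmentation` changes one line of configuration, which is the property that lets alternative ATR algorithms be tested without rebuilding the system.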
The paper describes the georeferencing part of an airborne hyperspectral imaging system based on pushbroom scanning. Using ray-tracing methods from computer graphics and a highly efficient representation of the digital elevation model (DEM), georeferencing of high resolution pushbroom images runs in real time by a large margin. By adapting the georeferencing to match the DEM resolution, the camera field of view and the flight altitude, the method has potential to provide real time georeferencing, even for HD video on a high resolution DEM when a graphics processing unit (GPU) is used for processing.
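The georeferencing step above boils down to intersecting each camera ray with a terrain model. A minimal sketch is a ray march over a gridded DEM; the paper's efficient DEM representation lets rays skip large regions, whereas this naive version (all names and the step-based scheme are illustrative assumptions) simply steps until it falls below the terrain.

```python
import numpy as np

def georeference_ray(origin, direction, dem, cell_size,
                     step=1.0, max_range=5000.0):
    """Intersect a camera ray with a gridded DEM by marching along the ray.

    `dem[i, j]` holds terrain height for the cell at (j * cell_size,
    i * cell_size); `origin` and `direction` are in the same local frame.
    Returns the first 3D point at or below the terrain, or None.
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    p = np.asarray(origin, dtype=float)
    t = 0.0
    while t < max_range:
        x, y, z = p + t * d
        i, j = int(y // cell_size), int(x // cell_size)
        if 0 <= i < dem.shape[0] and 0 <= j < dem.shape[1] and z <= dem[i, j]:
            return p + t * d  # ground intersection of this pixel's ray
        t += step
    return None  # ray left the DEM without hitting terrain
```

For a nadir-pointing ray over flat terrain at height zero, the intersection is directly below the camera; hierarchical min/max structures over the DEM make the same query fast enough for real-time use.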
An airborne system for hyperspectral target detection is described. The main sensor is a HySpex pushbroom hyperspectral imager for the visible and near-infrared spectral range with 1600 pixels across track, supplemented by a panchromatic line imager. An optional third sensor can be added, either a SWIR hyperspectral camera or a thermal camera. In real time, the system performs radiometric calibration and georeferencing of the images, followed by image processing for target detection and visualization. The current version of the system implements only spectral anomaly detection, based on normal mixture models. Image processing runs on a PC with a multicore Intel processor and an Nvidia graphics processing unit (GPU). The processing runs in a software framework optimized for large sustained data rates. The platform is a Cessna 172 aircraft based close to FFI, modified with a camera port in the floor.
We have developed and tested a standoff biological aerosol detection demonstrator employing ultraviolet laser-induced fluorescence. It is based on commercially available components, including a pulsed 355-nm laser and an intensified charge-coupled device camera. Biological warfare simulants and interferents were released and measured in open-air field and closed-chamber laboratory tests. We analyzed the experimental data at different spectral resolutions, using statistics-based anomaly detection and spectral angle mapping algorithms. The results show that fewer than 20 spectral channels in the 350-700-nm spectral region are sufficient to discriminate between the released agents using these methods. This corresponds to sacrificing high spectral resolution for the benefit of more photons in each channel and reduced computation time.
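Spectral angle mapping, one of the two classification approaches mentioned above, compares the shape of a measured spectrum against a reference while ignoring overall intensity. A minimal sketch (the function name is an assumption; the metric itself is the standard spectral angle):

```python
import numpy as np

def spectral_angle(s, r):
    """Spectral angle (radians) between a measured spectrum `s` and a
    reference spectrum `r`.

    Small angles mean similar spectral shape; because the angle depends
    only on direction, it is insensitive to overall signal intensity,
    e.g. aerosol concentration or range."""
    s = np.asarray(s, dtype=float)
    r = np.asarray(r, dtype=float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    # Clip to guard against floating-point values slightly outside [-1, 1].
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

A spectrum scaled by any positive constant has zero angle to itself, which is why this metric works with coarse channels: only the relative band ratios matter, so photons can be pooled into fewer, wider channels.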
Tarabalka, Yulija; Haavardsholm, Trym Vegard; Kåsen, Ingebjørg & Skauli, Torbjørn
(2009).
Real-time anomaly detection in hyperspectral images using multivariate normal mixture models and GPU processing.
Journal of Real-Time Image Processing.
ISSN 1861-8200.
4(3),
pp. 287–300.
doi: 10.1007/s11554-008-0105-x.
Hyperspectral imaging, which records a detailed spectrum of light arriving in each pixel, has many potential uses in remote sensing as well as other application areas. Practical applications will typically require real-time processing of large data volumes recorded by a hyperspectral imager. This paper investigates the use of graphics processing units (GPU) for such real-time processing. In particular, the paper studies a hyperspectral anomaly detection algorithm based on normal mixture modelling of the background spectral distribution, a computationally demanding task relevant to military target detection and numerous other applications. The algorithm parts are analysed with respect to complexity and potential for parallelization. The computationally dominating parts are implemented on an Nvidia GeForce 8800 GPU using the Compute Unified Device Architecture programming interface. GPU computing performance is compared to a multicore central processing unit implementation. Overall, the GPU implementation runs significantly faster, particularly for highly data-parallelizable and arithmetically intensive algorithm parts. For the parts related to covariance computation, the speed gain is less pronounced, probably due to a smaller ratio of arithmetic to memory access. Detection results on an actual data set demonstrate that the total speedup provided by the GPU is sufficient to enable real-time anomaly detection with normal mixture models even for an airborne hyperspectral imager with high spatial and spectral resolution.
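The computational core of normal-mixture anomaly detection is evaluating, for every pixel, its likelihood under a Gaussian mixture fitted to the background, and flagging low-likelihood pixels as anomalies. The NumPy sketch below shows the per-pixel computation (function and parameter names are assumptions); it is exactly the kind of independent, arithmetic-heavy per-pixel work that maps well to a GPU.

```python
import numpy as np

def gmm_anomaly_scores(pixels, means, covs, weights):
    """Anomaly score = negative log-likelihood under a normal mixture
    model of the background.

    pixels:  (N, B) array of N spectra with B bands
    means:   (K, B) component means
    covs:    (K, B, B) component covariances
    weights: length-K mixture weights
    """
    N, B = pixels.shape
    lik = np.zeros(N)
    for w, m, C in zip(weights, means, covs):
        Ci = np.linalg.inv(C)
        _, logdet = np.linalg.slogdet(C)
        diff = pixels - m
        # Squared Mahalanobis distance of every pixel to this component.
        maha = np.einsum('nb,bc,nc->n', diff, Ci, diff)
        log_norm = -0.5 * (B * np.log(2 * np.pi) + logdet)
        lik += w * np.exp(log_norm - 0.5 * maha)
    return -np.log(lik + 1e-300)  # higher score = more anomalous
```

Each pixel's score depends only on its own spectrum and the shared mixture parameters, so pixels can be processed fully in parallel; the covariance inversions, by contrast, are small shared computations, consistent with the smaller GPU gain reported for the covariance-related parts.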
We have performed a field trial to evaluate technologies for stand-off detection of biological aerosols, both in daytime and at night. Several lidar (light detection and ranging) systems were tested in parallel. We present the results from three different lidar systems: one system for detection and localization of aerosol clouds using elastic backscattering at 1.57 μm, and two systems for detection and classification of aerosol using spectral detection of ultraviolet laser-induced fluorescence (UV LIF) excited at 355 nm. The UV lidar systems used different technologies for the spectral detection, a photomultiplier tube (PMT) array and an intensified charge-coupled device (ICCD), respectively. During the first week of the field trial, the lidar systems were measuring towards a semi-closed chamber at a distance of 230 m. The chamber was built from two docked standard 20-foot containers with air curtains in the short sides to contain the aerosol inside the chamber. Aerosol was generated inside the semi-closed chamber and monitored by reference equipment, e.g. a slit sampler and particle counters. Signatures from several biological warfare agent simulants and interferents were measured at different aerosol concentrations. During the second week the aerosol was released in the air and the reference equipment was located in the centre of the test site. The lidar systems were measuring towards the test site centre at distances of either 230 m or approximately 1 km. In this paper we present results and preliminary signal processing for discrimination between different types of simulants and interference aerosols.
Haavardsholm, Trym Vegard; Opsahl, Thomas Olsvik; Skauli, Torbjørn & Stahl, Annette
(2022).
Compact multimodal multispectral sensor system for tactical reconnaissance.
(2018).
Compact multispectral multi-camera imaging system for small UAVs.
Morrison, Aiden; Sokolova, Nadezda; Haavardsholm, Trym Vegard; Hagen, Ove Kent; Opsahl, Thomas Olsvik & Ånonsen, Kjetil Bergh
(2017).
Collaborative indoor navigation for emergency services personnel.
Morrison, Aiden; Haavardsholm, Trym Vegard; Hagen, Ove Kent; Opsahl, Thomas Olsvik; Ånonsen, Kjetil Bergh & Eriksen, Erik Holthe
(2017).
Collaborative Indoor Navigation for Emergency Services Personnel.
(2014).
The NFU pod: An airborne research platform for algorithm development and testing.
In the Norwegian Captive Carry program (Nytt flymåleutstyr, or NFU, in Norwegian) a pod to be carried by a jet aircraft is under construction. The NFU pod is an airborne research platform incorporating infrared cameras, a high-quality navigation system, and an onboard real-time processing system. Data from the various sensors can also be recorded for later processing. This system provides a realistic environment, with real hardware conditions, sensors and scenes for analysis.
An important activity in the NFU program is to develop and test algorithms for automatic target recognition. To achieve this, we have designed a software architecture for image processing that allows real-time control of the camera platform. The system is built around a pipeline structure. Each module in the pipeline represents an algorithm, which can easily be exchanged with alternative modules for flexible testing and development. In this way, alternative algorithms can be tested, even in the air.
The presentation will focus on the software architecture for image processing. As an example, we will present an algorithm for segmentation of objects in the scene.
(2013).
Multispectral and conventional imaging combined in a compact camera by using patterned filters in the focal plane.
(2012).
Improving anomaly detection with Multinormal Mixture Models in shadow.
Denstedt, Martin Anders F.; Haavardsholm, Trym Vegard; Skauli, Torbjørn & Randeberg, Lise Lyngsnes
(2011).
A hyperspectral imaging system for real-time analysis of skin – capabilities and prospects.
Resta, Salvatore; Acito, Nicola; Diani, Marco; Corsini, Giovanni; Opsahl, Thomas Olsvik & Haavardsholm, Trym Vegard
(2011).
Detection of Small Changes in Airborne Hyperspectral Imagery: Experimental Results over Urban Areas.
(2011).
Hyperspectral imaging technology and systems, exemplified by airborne real-time target detection.
(2010).
Architecture of the real-time target detection processing in an airborne hyperspectral demonstrator system.
An airborne demonstrator for real-time hyperspectral target detection has been developed at FFI. The real-time image processing is challenging, not only due to the computational complexity of the algorithms, but also due to the sustained high data rate. A software framework has been designed in C++ to handle large data flows in a nonlinear pipeline architecture. The cross-platform framework enables full exploitation of multicore processors and graphics processing units (GPU), and even distribution among multiple computers. Object oriented design enables flexible reconfiguration of the pipeline. Tests demonstrate sustained real-time performance of complex anomaly detection processing.
The Norwegian Defense Research Establishment (FFI) is developing a technology demonstrator for airborne real-time hyperspectral target detection. The system includes two nadir-pointing line scan cameras. The line-scanned images are georeferenced in real time by intersecting rays cast from the cameras with a 3D model of the terrain underneath. The georeferenced images may then easily be ortho-rectified (e.g. by using texture mapping in OpenGL) and overlaid on digital maps. This poster presents the performance of a CUDA implementation of the georeferencing method.
(2010).
Real time direct georeferencing and orthorectification of images from airborne pushbroom cameras.