

Objective
Leverage advances in artificial intelligence and other image processing algorithms to generate higher-quality longwave thermal and fused thermal/near-infrared imagery suitable for embedded hardware systems in Soldier-borne applications.
Description
This topic seeks to leverage advances in algorithms, processing techniques, and embedded hardware to improve the quality, for human consumption, of longwave infrared (LWIR) imagery and of LWIR imagery fused with near-infrared (NIR) imagery. The primary objectives of this work are to reduce cognitive burden during long-duration missions and to improve user acceptance of systems that employ LWIR and NIR sensors.
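As an illustration of the kind of LWIR/NIR fusion this topic contemplates, the sketch below fuses two co-registered, 8-bit, single-channel frames with a Laplacian-pyramid, max-detail rule. This is only one common baseline, not a prescribed method; the function names, pyramid depth, and 8-bit output range are assumptions made for the example.

# Illustrative only: Laplacian-pyramid fusion of co-registered LWIR and NIR frames.
# Assumes 8-bit, single-channel inputs of identical size; not a prescribed method.
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=4):
    """Return a list of band-pass images plus the final low-pass residual."""
    pyramid = []
    current = img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # band-pass detail at this scale
        current = down
    pyramid.append(current)            # low-pass residual
    return pyramid

def fuse_lwir_nir(lwir, nir, levels=4):
    """Fuse two frames by keeping the stronger band-pass detail at each pixel."""
    p_lwir = build_laplacian_pyramid(lwir, levels)
    p_nir = build_laplacian_pyramid(nir, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)            # max-detail selection rule
             for a, b in zip(p_lwir[:-1], p_nir[:-1])]
    fused.append(0.5 * (p_lwir[-1] + p_nir[-1]))               # average the low-pass bases
    out = fused[-1]
    for band in reversed(fused[:-1]):                          # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)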
A secondary objective is to provide information to the higher-level system, for example an augmented reality display, regarding detections of certain classes of targets or events of interest so that they can be brought to the Soldier’s attention as needed.
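The interface to the higher-level system is not defined by this topic; the sketch below shows one hypothetical shape a per-frame detection report could take. All field names are assumptions for illustration only.

# Illustrative only: a hypothetical per-frame detection report that an image pipeline
# could pass to a host system such as an augmented reality display.
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int        # index of the source video frame
    class_label: str     # e.g., "person" or "vehicle"
    confidence: float    # detector score in [0.0, 1.0]
    bbox_xywh: tuple     # bounding box in pixel coordinates (x, y, width, height)
    timestamp_s: float   # capture time in seconds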
The algorithms should be capable of generating the best possible imagery under a range of illumination and ambient conditions, and potentially provide feedback to the system to change camera settings to match the scene and usage conditions (e.g., presence or absence of ego motion, dynamic scene content, lighting conditions). It should be assumed that the Soldier is utilizing the sensors as part of a mobile system with possible rapid motion, though such processing is anticipated to also be useful for static camera emplacements.
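As a sketch of the kind of scene-driven feedback described above, the fragment below derives crude motion and contrast statistics from consecutive frames and turns them into acquisition hints. The statistics, thresholds, and hint names are hypothetical and would need to be tuned to the actual camera and system.

# Illustrative only: crude per-frame scene analysis that could feed camera-control
# hints back to the host system; thresholds and hint names are hypothetical.
import cv2
import numpy as np

def scene_hints(prev_frame, frame):
    """Return coarse scene statistics and suggested acquisition hints for one frame pair."""
    # Global motion proxy: mean absolute frame-to-frame difference (0-255 scale).
    motion = float(np.mean(cv2.absdiff(prev_frame, frame)))
    # Contrast proxy: interquartile range of pixel intensities.
    p25, p75 = np.percentile(frame, [25, 75])
    return {
        "motion_level": motion,
        "contrast": float(p75 - p25),
        # Strong motion: favor shorter integration time / higher frame rate to limit blur.
        "suggest_high_frame_rate": motion > 8.0,
        # Flat scene: favor stronger local contrast enhancement (e.g., CLAHE).
        "suggest_local_contrast_boost": (p75 - p25) < 20.0,
    }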
It is imperative that the proposed processing scheme be capable of running on very low size, weight, power, and cost (SWAP-C) embedded hardware. While the hardware is not the focus of this topic, performers must at a minimum show, through system design and analysis, that their solution can run on actual hardware.
Phase I
Generate a detailed description of the proposed solution and describe its applicability to the Soldier-borne system and the benefits of its use, including a high-level imaging pipeline architecture, training data requirements, and other information as applicable. Estimate the computational power required and suggest example hardware that could potentially run the proposed algorithms. Show proof of principle of the various algorithm components on example image frames.
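A back-of-envelope estimate of the kind Phase I asks for might look like the sketch below; the ops-per-pixel figure is an assumed placeholder standing in for a profile of the actual pipeline.

# Illustrative only: rough throughput sizing for candidate embedded hardware.
# The ops-per-pixel value is an assumption, not a measured figure.
def estimate_gops(width, height, fps, ops_per_pixel):
    """Approximate processing load in giga-operations per second."""
    return width * height * fps * ops_per_pixel / 1e9

# Phase II minimum format (640x480 at 30 Hz) with an assumed 500 ops/pixel pipeline:
print(estimate_gops(640, 480, 30, 500))     # ~4.6 GOPS
# Preferred format (1280x1024 at 120 Hz) with the same assumed pipeline:
print(estimate_gops(1280, 1024, 120, 500))  # ~78.6 GOPS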
Phase II
Leveraging the high-level design and demonstrated components from Phase I, complete the image processing pipeline, including any necessary training. Surrogate hardware with pre-recorded video feeds is acceptable, but a design including potential end-use embedded hardware with real-time video is preferred. At a minimum, the pipeline must process video from an uncooled microbolometer-type camera at 640×480 or greater resolution and at least 30 Hz (1280×1024 at 120 Hz preferred); a fused longwave/near-infrared solution is also preferred. Discuss ways in which the algorithm or pipeline might fail when used in a military system and potential mitigations.
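One simple way to demonstrate the frame-rate requirement against pre-recorded video is a timing harness such as the sketch below; the video path and the process_frame() callable are hypothetical placeholders for the performer's own pipeline.

# Illustrative only: timing harness for checking whether a candidate pipeline sustains
# the 30 Hz minimum on pre-recorded video. process_frame() is a hypothetical placeholder.
import time
import cv2

def benchmark(video_path, process_frame, target_hz=30.0):
    """Run the pipeline over a recorded clip and report the achieved frame rate."""
    cap = cv2.VideoCapture(video_path)
    per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t0 = time.perf_counter()
        process_frame(frame)                       # the image-processing pipeline under test
        per_frame.append(time.perf_counter() - t0)
    cap.release()
    if not per_frame:
        return None
    mean_s = sum(per_frame) / len(per_frame)
    print(f"mean per-frame time: {1000 * mean_s:.2f} ms ({1.0 / mean_s:.1f} Hz "
          f"vs. {target_hz:.1f} Hz target)")
    return (1.0 / mean_s) >= target_hz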
Phase III
As part of an Army applied research program, instantiate the image pipeline on relevant low SWAP-C embedded hardware to perform real-time image processing with the sensors of interest. The choice of hardware and the optimization of the image pipeline should reflect the target system of interest; this will likely require close collaboration with the sensor vendor. Transition the resulting sensor and image pipeline to the Prime performer for the system of interest. Such uncooled thermal and near-infrared sensors are typically dual-use items; explore commercial applications of the sensor and pipeline combination in parallel.
Submission Information
For more information, and to submit your full proposal package, visit the DSIP Portal.
SBIR|STTR Help Desk: usarmy.sbirsttr@army.mil