Sensors, ASA(ALT), Phase I

Autonomous Optical Sensors

Release Date: 03/05/2024
Solicitation: 24.4
Open Date: 03/20/2024
Topic Number: A244-016
Application Due Date: 04/23/2024
Duration: 6 months
Close Date: 04/23/2024
Amount Up To: $250,000

Objective

This project will develop a portable optical sensor that can capture high-quality, real-time imagery data during missile tests. The sensor will sit near a missile launcher during the launch or near the target to analyze the terminal phase of the flight. The missile tests will occur in remote locations that lack proper test infrastructure. The Autonomous Optical Sensor (AOS) system will also incorporate several high-speed imaging cameras with advanced artificial intelligence and machine learning capabilities.

These features will enable the sensor to calibrate and manage itself and assist in accurate positioning. The system will operate autonomously for an extended period on either a battery or a renewable energy source. The sensor will wirelessly receive setup and calibration data from a centralized command and control center. The command center will provide guidance or cueing data for the AOS to initiate its track of a System Under Test (SUT). The AOS system’s innovative technology will make it possible to collect accurate and reliable data, even in the most challenging test conditions.
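
As an illustration of how such a cue might be handled, the following is a minimal Python sketch; the message fields, units, and transport (JSON here) are assumptions made for illustration, not a format defined by this topic.

  import json
  from dataclasses import dataclass

  @dataclass
  class CueMessage:
      """Hypothetical cue from the command center; field names are assumptions."""
      sut_id: str           # identifier of the System Under Test
      azimuth_deg: float    # initial pointing azimuth
      elevation_deg: float  # initial pointing elevation
      t_launch: float       # expected launch time, Unix epoch seconds

  def parse_cue(raw: bytes) -> CueMessage:
      """Decode a JSON cue received over the wireless C2 link."""
      msg = json.loads(raw)
      return CueMessage(
          sut_id=msg["sut_id"],
          azimuth_deg=float(msg["azimuth_deg"]),
          elevation_deg=float(msg["elevation_deg"]),
          t_launch=float(msg["t_launch"]),
      )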

Description

The sensor will work with minimal or no intervention from an operator. Once deployed, it will capture imagery data of an SUT using advanced geospatial and optical sensor auto-calibration technologies. The sensor will offer organic computing, distributed networking, and power systems to manage the positioning, collection, processing and transport of real-time imaging data.

This eliminates the need to transport raw data to a centralized location for processing and analysis. Furthermore, the sensor will require minimal setup and calibration, since it will self-align and self-calibrate before test operations. The sensor will transmit the results of the computing work done at the edge, such as real-time imagery, sensor calibration updates or other actionable information, to the main data center for review and analysis after the test.
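
One way to picture this edge pattern is the following Python sketch; the camera, tracker, and uplink interfaces are hypothetical placeholders, not components specified by the topic.

  def edge_cycle(camera, tracker, uplink):
      """One edge-processing cycle: capture locally, send only derived products."""
      frame = camera.capture()           # raw high-speed frame stays on the sensor
      detection = tracker.update(frame)  # on-board detection and tracking
      if detection is not None:
          # Only the small, actionable product crosses the link, never raw video.
          uplink.send({
              "timestamp": detection.timestamp,
              "pixel_xy": detection.centroid,      # image-plane centroid
              "confidence": detection.confidence,
          })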

Phase I

In Phase I of this project, vendors will research and define an integrated AOS configuration that includes several types of optical sensors, such as visible and electro-optical/infrared, as well as data processing, networking and power systems. Additionally, the Army will help analyze how an AI framework that employs specialized algorithms and techniques can manage the system.

These algorithms will facilitate positioning, calibration, and real-time management and control of the overall design. Moreover, the awardee will define the control method, including the sensor’s feasibility for learning different support configurations through adaptive learning. This will require designing processes that prioritize training the algorithms to adapt to changing conditions or new datasets. By the end of Phase I, the awardee will have defined the optimal configuration of the AOS and the AI framework necessary to satisfy AOS requirements.
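
One plausible reading of this requirement is a monitored retraining loop. The Python sketch below assumes scikit-learn-style fit/score semantics and an already-fitted model; the 0.8 performance floor is an illustrative assumption, not a topic requirement.

  def adaptive_update(model, X_new, y_new, score_floor=0.8):
      """Retrain when performance on newly collected data drifts below a floor.

      `model` is assumed to follow scikit-learn fit/score conventions;
      the floor value and full-refit strategy are illustrative only.
      """
      if model.score(X_new, y_new) < score_floor:
          model.fit(X_new, y_new)  # adapt to the changed conditions or new dataset
      return model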

Phase II

In Phase II of the project, the awardee will create a prototype of the AOS based on the analysis conducted during Phase I. However, integrating AI-enabled or cognitive projects into existing operations can be challenging. Adapting the AOS to current T&E infrastructures may require refining an integrated system design (AI software/hardware) to achieve optimal performance, accuracy and reliability. The Army expects the AI will need iterative refinement and optimization based on the Phase I designs.

Functional testing in an operational context is a crucial part of system development. This will facilitate the AI-optimization process for this type of system, since it involves an ongoing learning approach to development. The prototype should be able to achieve self-localization and alignment, obtain cueing or positioning data on an SUT from an external sensor, and maintain track of the SUT.
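
For "maintain track," even a classical alpha-beta filter conveys the idea. The Python sketch below handles a single coordinate; the gain values are illustrative, and a fielded AOS would likely use a more capable tracker.

  def alpha_beta_step(z, x, v, dt, alpha=0.85, beta=0.005):
      """One alpha-beta filter update for a single tracked coordinate.

      z: new measurement; x, v: current position and velocity estimates;
      dt: seconds since the last update. Gains are illustrative.
      """
      x_pred = x + v * dt          # predict the state forward
      r = z - x_pred               # innovation (measurement residual)
      x_new = x_pred + alpha * r   # correct the position estimate
      v_new = v + (beta / dt) * r  # correct the velocity estimate
      return x_new, v_new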

Both self-localization and alignment are critical for AI-enabled systems to understand and navigate within their environment effectively. By accurately determining their position and aligning their measurements and actions with a common reference frame, these systems can interact with other devices, objects, or entities and perform tasks such as mapping, object recognition, navigation, or coordination.
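
As a concrete instance of a common reference frame, the sketch below converts a geodetic fix to local East-North-Up coordinates about the sensor site. The math is standard WGS-84 geometry; its use here, written with numpy, is illustrative rather than prescribed by the topic.

  import numpy as np

  A = 6378137.0          # WGS-84 semi-major axis, meters
  E2 = 6.69437999014e-3  # WGS-84 first eccentricity squared

  def geodetic_to_ecef(lat_deg, lon_deg, h):
      """Convert a geodetic fix to Earth-centered, Earth-fixed coordinates (m)."""
      lat, lon = np.radians(lat_deg), np.radians(lon_deg)
      n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime vertical radius
      return np.array([
          (n + h) * np.cos(lat) * np.cos(lon),
          (n + h) * np.cos(lat) * np.sin(lon),
          (n * (1.0 - E2) + h) * np.sin(lat),
      ])

  def ecef_to_enu(target_ecef, ref_lat_deg, ref_lon_deg, ref_h):
      """Express an ECEF point in the East-North-Up frame of a reference site."""
      lat, lon = np.radians(ref_lat_deg), np.radians(ref_lon_deg)
      delta = target_ecef - geodetic_to_ecef(ref_lat_deg, ref_lon_deg, ref_h)
      rot = np.array([
          [-np.sin(lon),                np.cos(lon),               0.0],
          [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
          [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
      ])
      return rot @ delta  # [east, north, up] in meters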

Phase III

The primary commercial dual-use potential relates to collecting real-time imagery supporting air traffic management at airports or surveillance of defined sensitive areas.

  • Monitoring and managing air traffic flow: Track flights in real time using radar data or other surveillance systems, primarily to identify incursions by small unmanned aircraft systems (UAS).
  • Assisting in airspace coordination: Provide information about airspace restrictions, temporary flight restrictions, and other limitations in the defined sensitive areas. This can help ensure aircraft stay within designated airspace and avoid potential conflicts.
  • Alerting operators of potential safety or security concerns: Notify operators of any unusual behavior, deviations from flight plans, or potential security threats; a minimal geometric sketch follows this list. This can help maintain the safety and security of the defined sensitive areas.
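
At its simplest, the alerting case reduces to geometry. The Python sketch below flags a track inside a circular restricted zone using an equirectangular approximation, which is adequate for small zones; the zone definition and radius are hypothetical.

  import math

  def incursion_alert(track_lat, track_lon, zone_lat, zone_lon, radius_m):
      """Return True when a track lies inside a circular restricted zone.

      Uses an equirectangular distance approximation (fine for small zones);
      all parameters are hypothetical, not requirements of this topic.
      """
      r_earth = 6371000.0  # mean Earth radius, meters
      dlat = math.radians(track_lat - zone_lat)
      dlon = math.radians(track_lon - zone_lon) * math.cos(math.radians(zone_lat))
      return r_earth * math.hypot(dlat, dlon) <= radius_m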

Submission information

All eligible businesses must submit proposals by 12:00 p.m. (noon) ET.

To view full solicitation details and to submit your full proposal package, visit the DSIP Portal.

SBIR|STTR Help Desk: usarmy.sbirsttr@army.mil

References:

Trajectory Analysis and Optimization Software (TAOS) by Sandia National Laboratories: Describes a tool for three-degree-of-freedom or six-degree-of-freedom trajectory analysis, with possible application to sensor placement and calibration.

https://www.sandia.gov/taos/

Reinforcement Learning Applications in Unmanned Vehicle Control: A Comprehensive Overview, by Kiumarsi, B. et al. (2019): This paper surveys reinforcement learning techniques in control systems, providing insights into their potential applications and challenges.

https://www.researchgate.net/publication/361572362_Reinforcement_Learning_Applications_in_Unmanned_Vehicle_Control_A_Comprehensive_Overview

How to train your robot with deep reinforcement learning: lessons we have learned by Levine, S. et al. (2021): This research paper delves into applying deep learning algorithms for control tasks, showcasing their capabilities and discussing their limitations.

https://journals.sagepub.com/doi/epub/10.1177/0278364920987859  

Model Predictive Control with Artificial Neural Networks by Scokaert, P. O., et al. (2005): This paper investigates the integration of artificial neural networks with model predictive control techniques, presenting a novel approach for control system design.

https://link.springer.com/chapter/10.1007/978-3-642-04170-9_2  

Keywords: Artificial Intelligence; Adaptive Learning; Autonomous Control; Self-Alignment and Localization; Intelligent Instrumentation
