Objective
The Explosive Ordnance Disposal Visual Ordnance Identification Database (EODVOID) will develop an automated photogrammetry method to greatly increase the speed of scanning and creating 3D models for thousands of ordnance samples. This would enable the development of a much-needed authoritative ordnance database and serve as a baseline standard for training and developing AI/ML detection and classification algorithms.
Description
platforms (ground or air). Currently, all identification of ordnance is done visually and relies on the expertise of EOD operators, who typically consult printed reference materials containing photographs and line drawings of potential explosive hazards.
The problem of identifying threats is further complicated by the fact that there are tens of thousands of different types of threat items worldwide, and some may be made to appear like other ordnance but function differently. To add to the issue, once ordnance is fired, its physical characteristics (shape) may change, and key identifying features such as markings may be altered or destroyed.
If ordnance has been left in certain environments for extended periods of time, its appearance may be significantly degraded by rust and damage. Once the ordnance is properly identified, the EOD Technician can proceed to the next phase of the mission.
The EODVOID will utilize photogrammetry methods developed under this SBIR, with cameras mounted on a robot to capture high-resolution files of ordnance and their components. We will develop an automated system that scans, photographs, and creates 3D models in one complete action.
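As an illustration only (the topic does not specify the scan geometry), the automated scan-and-photograph action described above could be planned as a simple grid of camera poses, with a turntable stepping through a fixed angular increment at each of several camera elevations. All parameters below are assumptions for the sketch, not requirements of this topic:

```python
# Illustrative sketch of an automated capture plan for a turntable scan.
# Step size and elevation angles are assumed values, not topic requirements.

def capture_plan(step_deg=15, elevations_deg=(0, 30, 60)):
    """Return the (turntable_angle, camera_elevation) poses for one full scan.

    step_deg: turntable rotation between consecutive photos.
    elevations_deg: camera elevations at which a full revolution is shot.
    """
    if 360 % step_deg != 0:
        raise ValueError("step must divide 360 evenly for uniform coverage")
    poses = []
    for elev in elevations_deg:
        for angle in range(0, 360, step_deg):
            poses.append((angle, elev))
    return poses

plan = capture_plan()
print(f"{len(plan)} photos per item")  # 3 elevations x 24 stops = 72 photos
```

In a real system each pose would trigger a camera exposure and log the pose into the image metadata, so the photogrammetry solver can use known geometry rather than estimating it from scratch.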
Once captured, these images will be stored, along with all the metadata for each item, in a database that can be geographically tailored into subset databases for any regional deployment. High-resolution images are essential for training the CV algorithms. We will be generating (NOT SIMULATING) high-resolution images of ordnance that has been aged, rusted, dented, broken, and placed in different environments, captured from all aspect angles.
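A minimal sketch of such a metadata store, using SQLite and a hypothetical schema (the actual EODVOID metadata standard is not defined in this topic), shows how a geographically tailored subset could be extracted for a regional deployment:

```python
import sqlite3

# Hypothetical schema for illustration only -- the real EODVOID
# metadata standard would be defined during the effort.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ordnance (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        region TEXT NOT NULL,     -- region where the item is encountered
        condition TEXT,           -- e.g. 'pristine', 'rusted', 'fired'
        model_path TEXT           -- path to the generated 3D model
    )
""")
rows = [
    (1, "81mm mortar", "EUCOM", "rusted", "models/81mm_rusted.obj"),
    (2, "122mm rocket", "CENTCOM", "fired", "models/122mm_fired.obj"),
    (3, "60mm mortar", "EUCOM", "pristine", "models/60mm.obj"),
]
conn.executemany("INSERT INTO ordnance VALUES (?, ?, ?, ?, ?)", rows)

# Geographically tailored subset database for one regional deployment:
subset = conn.execute(
    "SELECT name, condition FROM ordnance WHERE region = ? ORDER BY id",
    ("EUCOM",),
).fetchall()
print(subset)  # [('81mm mortar', 'rusted'), ('60mm mortar', 'pristine')]
```

The same filter could be applied at export time to build a deployable subset database containing only the items relevant to a given theater.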
Phase I
This topic is only accepting Direct to Phase II (DP2) proposals, at a cost of up to $2,000,000 for a 24-month period of performance. Proposers interested in submitting a DP2 proposal must provide documentation substantiating that scientific and technical merit and feasibility equivalent to a Phase I project have been demonstrated. Documentation can include data, reports, specific measurements, success criteria of a prototype, etc.
Phase II
Companies are expected to develop a fully automated photogrammetry scanning program for ordnance that creates 3D models that can be populated into a database. This SBIR will create a controlled, automated environment using high-end DSLR cameras, computers, turntables, and proper lighting to ensure that all shadowing and detail on the ordnance is captured correctly.
Through this process we will be able to generate (NOT SIMULATE) high-resolution 3D images, as well as age, dent, rust, and break apart items to show what actual fired ordnance would look like on the battlefield. This process will make training-data generation significantly faster and more comprehensive for the purpose of training the CV algorithms.
This proposal will leverage highly mature computer vision approaches that have not previously been applied to EOD applications. Once the process to establish a database has been created, transfer learning methods will be employed as a first step towards achieving the required level of detection and classification.
Convolutional Neural Networks (CNNs) are now an important tool within the industrial base and have automated image and video recognition tasks with a high degree of effectiveness and efficiency across multiple sectors, including retail, automotive, healthcare, and manufacturing.
CNNs can be used in medical imaging applications and in manufacturing for monitoring and ensuring quality. Automotive manufacturers use related methods for the design of autopilot capabilities and autonomous driving applications. The development of the EODVOID and corresponding metadata standard for training deep learning algorithms is fundamental to the development of AI/ML detection algorithms.
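Transfer learning, as referenced above, reuses a feature extractor trained on one task and fits only a small new classification head for the target task. The sketch below illustrates the idea in plain Python with a toy, hand-fixed "frozen" extractor standing in for a pretrained CNN backbone, and a logistic-regression head trained on invented data; a real system would fine-tune an actual pretrained CNN on the EODVOID imagery:

```python
import math

# Toy 2-D inputs, separable by the sign of x0 + x1 (invented data).
data = [((1.0, 1.0), 1), ((2.0, 0.0), 1), ((0.5, 1.5), 1),
        ((-1.0, -1.0), 0), ((0.0, -2.0), 0), ((-2.0, 1.0), 0)]

def frozen_features(x):
    """Stand-in for a pretrained CNN backbone: fixed, never updated."""
    return (x[0] + x[1], x[0] - x[1])

# Trainable "head": logistic regression on the frozen features.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(500):
    for x, y in data:
        f = frozen_features(x)
        z = w[0] * f[0] + w[1] * f[1] + b
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - y                     # gradient of log-loss w.r.t. z
        w[0] -= lr * err * f[0]         # only the head's weights move;
        w[1] -= lr * err * f[1]         # the extractor stays frozen
        b -= lr * err

def predict(x):
    f = frozen_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

print(all(predict(x) == y for x, y in data))  # True
```

The point of the split is data efficiency: the expensive representation is inherited, so the small labeled dataset only has to train the head, which is why transfer learning is a sensible first step once the EODVOID database exists.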
The primary enabling technology for the autonomous recognition of ordnance is an automated scanning solution to populate the database. Such technology has been proven to a TRL 6 at Picatinny Arsenal and the DEVCOM AC EOD in 2024.
Companies must have extensive knowledge of photogrammetry methods and related automation software. They must be able to speed up the overall process of photographing ordnance and automatically converting the images into complete 3D models. Currently, around 20-25 complete models can be produced per day; the goal is to raise that throughput to around 50-75 or more.
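As a rough feasibility check (the single 8-hour shift and back-to-back processing are our assumptions, not the topic's), the stated throughput figures translate into a required per-model cycle time:

```python
# Assumed: one 8-hour shift, models produced back-to-back with no overlap.
shift_minutes = 8 * 60  # 480 minutes

current_low, current_high = 20, 25   # models/day today
target_low, target_high = 50, 75     # models/day goal

print(shift_minutes / current_low)   # 24.0 min per model today (worst case)
print(shift_minutes / target_high)   # 6.4 min per model at the upper target
```

Under these assumptions, hitting the upper target means cutting the end-to-end cycle from roughly 20-24 minutes per model to well under 10, which is why the capture, transfer, and reconstruction steps all need to be automated rather than just the photography.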
Companies must also know how to work with Convolutional Neural Networks (CNNs) so that all the data captured through the photogrammetry method can eventually be used in a database that leads to ordnance identification on the battlefield.
Phase III
Submission Information
For more information, and to submit your full proposal package, visit the DSIP Portal.
SBIR|STTR Help Desk: usarmy.sbirsttr@army.mil
References: