Machine Learning Assisted Object Detection and Identification for Video: Defense and public safety organizations seek to shorten the sensor-to-threat engagement timeline. Traditional ML techniques require a large training image database comprising a diverse set of specific targets, each captured under a comprehensive range of conditions (e.g., background terrain, target pose, lighting, partial occlusion). This limits the ability to detect new targets, or trained targets under new or untrained conditions. What is needed are solutions with optimized ML-assisted object detection and recognition capabilities (including algorithms) for the robust (i.e., reliable, intuitive, and adaptive) detection of both trained and untrained threats and conditions. This approach deliberately prioritizes the reliable and timely detection of classes of threats over the reliable identification of specific targets, which risks missing valid threats when training data is insufficient.
Patriot Labs is interested in exploring customer and end-user opportunities requiring video-based automatic detection and alerting of both moving and stationary target vehicles or human signatures. Preferred requirements will include support for multiple sensor feeds (e.g., color and/or thermal live video) from ground and/or aerial platforms. This may include video analytics products or technologies that are commercially available or in advanced stages of development.
Requirements must include cybersecurity compliance to safeguard against potential threats, and performance standards for stationary and moving platforms operating under day, night, and/or limited-visibility conditions. Attributes may also include: (i) the reliable detection of objects in open terrain with a false alarm rate that matches or exceeds the minimum standard for human performance; (ii) output data to populate object locations on command-and-control and/or targeting systems (e.g., a ten-digit grid of the target's location); and (iii) compliance with MOSA and/or GCIA standards for system interoperability and adaptability.
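As a rough illustration of attribute (ii), the sketch below assembles a detection message carrying a ten-digit (1 m precision) MGRS grid reference for a consuming system. All field names, the `detection_report` function, and the example grid string are hypothetical; a real interface would follow the consuming C2 or targeting system's interface control document.

```python
import json

def detection_report(track_id, obj_class, confidence, mgrs_grid, timestamp):
    """Assemble a hypothetical detection message for a C2/targeting feed.

    Field names are illustrative only, not a real message standard.
    """
    # A ten-digit grid reference is grid zone + 100 km square + 10 digits,
    # e.g. '18SUJ2337106519' (1 m precision).
    if len(mgrs_grid.replace(" ", "")) < 15:
        raise ValueError("expected a ten-digit (1 m precision) grid reference")
    return json.dumps({
        "track_id": track_id,
        "class": obj_class,
        "confidence": round(confidence, 2),
        "location_mgrs": mgrs_grid,
        "time": timestamp,
    })

msg = detection_report(7, "vehicle", 0.913, "18SUJ2337106519",
                       "2024-05-01T12:00:00Z")
```

JSON is used here only for readability; fielded systems typically exchange such reports over standardized tactical data formats rather than ad hoc JSON.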
Approaches sought could include: (i) support for a range of video data types, parameters, streaming protocols, container formats, and compression codecs; (ii) supervised and semi-supervised methods for object detection; (iii) integrated detection, classification, tracking, and re-identification capabilities; (iv) minimal reliance on pre-annotated training data; (v) support for switching between various concurrent video streams, and the dynamic integration of new feeds during operation; and (vi) flexible, functional scaling on various hardware profiles. Special consideration will be given to requirements that include the integration of mission video metadata for training new or future machine learning or autonomous capabilities.
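To make item (iii) concrete, the following is a minimal sketch of the association step that links detections across frames into persistent tracks, using greedy intersection-over-union (IoU) matching. The threshold value and greedy strategy are illustrative assumptions; a production tracker would typically add Hungarian assignment, motion prediction, and appearance-based re-identification.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match existing tracks to new-frame detections by IoU.

    tracks: dict of track_id -> last known box; detections: list of boxes.
    Returns (matches, unmatched_detection_indices); unmatched detections
    would seed new tracks. The 0.3 threshold is an illustrative choice.
    """
    matches, used = [], set()
    for t_id, t_box in tracks.items():
        best, best_j = threshold, None
        for j, d_box in enumerate(detections):
            if j in used:
                continue
            score = iou(t_box, d_box)
            if score > best:
                best, best_j = score, j
        if best_j is not None:
            matches.append((t_id, best_j))
            used.add(best_j)
    unmatched = [j for j in range(len(detections)) if j not in used]
    return matches, unmatched
```

For example, a track last seen at (0, 0, 10, 10) would be matched to a new detection at (1, 1, 11, 11), while a distant detection would be returned as unmatched and could initialize a new track.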