While several definitions exist for situational awareness and situational understanding, they all share the ideas expressed in the Reconnaissance, Surveillance, and Target Acquisition (RSTA) squadron definitions provided by Major Brad C. Dostal, a military analyst at the U.S. Center for Army Lessons Learned (CALL). Situational awareness is “the ability to maintain a constant, clear mental picture of relevant information and the tactical situation, including friendly and threat situations as well as terrain.”
According to Dostal, situational understanding allows leaders to avoid surprises, make rapid decisions, and choose when and where to conduct engagements, as well as achieve decisive outcomes. It’s “the product of applying analysis and judgment to the unit’s situational awareness to determine the relationships of the factors present and form logical conclusions concerning threats to the force or mission accomplishment, opportunities for mission accomplishment, and gaps in information.”
Raw data becomes valuable information that leads to knowledge and understanding (Fig. 1).
Situational awareness is also a key component in the Observe, Orient, Decide, Act (OODA) loop developed in the mid-20th century by U.S. Air Force Colonel John Boyd, a military strategist. The OODA loop is a four-step approach to decision-making that focuses on filtering available information, putting it in context, and quickly making the most appropriate decision while also understanding that changes can be made as more data becomes available.
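As a conceptual sketch only, the OODA loop can be modeled as one pass of a repeating four-step cycle. All function names and data structures below are hypothetical illustrations, not any fielded system's logic:

```python
# Illustrative sketch of the OODA loop as a four-step decision cycle.
# Each pass filters raw data, puts it in context, decides, and acts;
# the cycle repeats as more data becomes available.

def observe(sensor_feed):
    """Filter the raw feed down to relevant readings."""
    return [r for r in sensor_feed if r.get("relevant")]

def orient(observations, context):
    """Put observations in context to form a picture of the situation."""
    return {"threats": [o for o in observations if o["kind"] == "threat"],
            "terrain": context.get("terrain")}

def decide(picture):
    """Choose the most appropriate action given the current picture."""
    return "engage" if picture["threats"] else "continue_patrol"

def act(decision):
    return decision  # a real system would command the platform here

def ooda_cycle(sensor_feed, context):
    # One pass of the loop; later passes may revise the decision.
    picture = orient(observe(sensor_feed), context)
    return act(decide(picture))
```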
Situational Awareness Tools and Technologies Are Always Evolving
The tools and technologies that provide situational awareness are continuously evolving. In the earliest days of battle, it was a huge advantage to have a map. Later, sextants became an important situational awareness tool. And on it goes through the invention of still cameras, video cameras, global positioning systems (GPS), and other innovations.
As technologies continue to advance, increasingly sophisticated onboard sensors and mission systems have become the eyes and ears for everyone on ground and air platforms. And cutting-edge video solutions have become the best way to get all of this information to warfighters in a fast and effective way (Fig. 2).
A look at the different levels of video systems available for ground vehicles illustrates the effect that increasingly advanced technologies have on situational awareness:
- The most basic video systems allow warfighters who are in the vehicle or operating it remotely to view the images from one vehicle-mounted camera at a time, providing a rudimentary level of visibility.
- Multi-display and picture-in-picture solutions enable warfighters to simultaneously view images from multiple vehicle-mounted cameras so that they can consider their surroundings on all sides of the vehicle at all times. With potentially more than a dozen camera views to choose from, warfighters can easily access the optimal combination of views for the task or maneuver they’re executing.
- 360-degree video systems give warfighters an even higher level of situational awareness. These systems blend accurate, fully stitched images from all vehicle-mounted cameras into a seamless, panoramic image that most closely resembles what the human eye sees.
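As a simplified illustration of the stitching step in such 360-degree systems, the sketch below feather-blends pre-aligned camera frames that overlap by a known number of pixel columns. The function names and fixed-overlap assumption are ours; real systems also warp, calibrate, and color-match each frame:

```python
import numpy as np

def blend_pair(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Feather-blend two horizontally adjacent frames of equal height."""
    assert right.shape[0] == left.shape[0]
    # Linear weights that fade from the left frame to the right frame
    # across the overlapping columns, hiding the seam.
    w = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)

def stitch_panorama(frames, overlap):
    """Fold a list of aligned frames into one seamless panorama."""
    pano = frames[0]
    for frame in frames[1:]:
        pano = blend_pair(pano, frame, overlap)
    return pano
```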
The next step is to move this crucial insight and visibility right into warfighters’ helmets or visors. Within the next five years or so, we can expect to see augmented-reality (AR) solutions that put the video feeds, maps, GPS coordinates, speed indicators, and other information warfighters need directly in their field of view. They will be able to use intuitive gestures to control the information they see at each phase of task execution, much as science-fiction films have long depicted.
Requirements and Considerations for Enhanced Situational Awareness Solutions
The goal of every enhanced situational awareness solution is to get the right information to the right person at the right time in a form that can be rapidly assimilated and used. The only way to achieve this goal is to deploy end-to-end video solutions where all system components are seamlessly integrated.
Any other approach introduces significant risks, including interoperability issues, increased latency, increased complexity, and more challenging maintenance and upgrades. These shortcomings will make it far too difficult to acquire, distribute, and intuitively present the extremely high volumes of data that will be available in the battlespace of the future.
The end-to-end architecture for enhanced situational awareness varies depending on the specific application and platform. However, most solutions require sensors and cameras, a mission computer, a video distribution system (VDS), and mission displays.
As technologies evolve, architectures will also evolve. As a result, each system component must be:
- Easy to deploy, use, and upgrade
- Low in size, weight, and power (SWaP)
Extremely low video-system latency is also essential to give warfighters complete confidence that what they’re seeing reflects reality at that moment. If information is delayed and warfighters are uncertain about their situation, they’re far more likely to hesitate when responding to threats, collide with obstacles or people, or unknowingly enter dangerous territory.
To reduce latency end to end, each video system component must function in as close to real time as possible:
- A United Kingdom Ministry of Defence study found that military drivers can safely operate a vehicle using only a visual display when overall video-system latency is 40 ms or lower.
- A study looking into the effects of video latency on general situational awareness found that warfighters remain adequately aware of their surroundings when overall video-system latency is 160 ms or lower.
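As a rough illustration of how an end-to-end budget adds up against these thresholds, the sketch below sums hypothetical per-component latencies. The individual component figures are assumptions for illustration, not measured values; only the 40-ms and 160-ms limits come from the studies cited above:

```python
# Thresholds from the studies cited above.
DRIVING_LIMIT_MS = 40     # safe indirect-vision driving
AWARENESS_LIMIT_MS = 160  # adequate general situational awareness

# Hypothetical per-component latency budget, in milliseconds.
budget_ms = {
    "camera_capture":  8,  # sensor exposure + readout (assumed)
    "compression":     5,
    "distribution":    4,  # VDS switching/processing (assumed)
    "decompression":   5,
    "display_refresh": 8,
}

total_ms = sum(budget_ms.values())
ok_for_driving = total_ms <= DRIVING_LIMIT_MS
ok_for_awareness = total_ms <= AWARENESS_LIMIT_MS
```

With these illustrative numbers the chain totals 30 ms, leaving only 10 ms of headroom against the driving threshold; any single slow component can break the budget.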
Sensors

The sensors that capture information range from high-definition, thermal, and infrared cameras to GPS, radar, speed indicators, and other platform systems. While the data from video cameras is typically compressed before it’s sent to the mission computer for processing, data from other systems may be sent in raw form.
With increasingly sophisticated adversarial threats and advances in video technologies, many enhanced situational awareness solutions now require computer-processing capabilities.
When computer processing is added, video systems can manipulate video feeds to enable video blending and video stitching. They can layer regular camera feeds with infrared and thermal-imaging feeds. And, they can display mapping and telemetry data and image metadata along with video streams.
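As a minimal sketch of the layering step, the following blends a thermal frame over a visible-light frame with a fixed alpha. The function name, the pre-registration assumption (same resolution and field of view), and the blend weight are all illustrative:

```python
import numpy as np

def layer_feeds(visible: np.ndarray, thermal: np.ndarray,
                alpha: float = 0.4) -> np.ndarray:
    """Blend two registered uint8 frames: out = (1 - alpha)*visible + alpha*thermal."""
    # Compute in float to avoid uint8 overflow, then clamp back.
    blended = (1.0 - alpha) * visible.astype(np.float32) \
              + alpha * thermal.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```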
When these advances first came along, it seemed logical to add computer-processing capabilities to the back of the mission display. Since then, technology has continued to evolve, and all-in-one video-display and computing solutions no longer have an obvious advantage over simple, standalone displays. In many cases, a simple video display connected to a separate computing component is the better choice because it:
- Reduces the cost of computer processing compared to smart displays
- Simplifies upgrades to computer processing capabilities
- Increases deployment flexibility and improves thermal management in SWaP-constrained spaces
Video Distribution System
The video distribution system, or VDS, is the central “brain” of the video system. These video gateways, switches, or multiplexers manipulate the image and information feeds before outputting the results to the mission displays.
To increase situational awareness, many video-distribution solutions provide a range of image configuration options. The most sophisticated solutions can simultaneously deliver multiple video streams to multiple displays. They also allow warfighters to quickly and easily manipulate views with the touch of a button. Key viewing features to look for include:
- Real-time views with zoom capabilities
- Simultaneous camera views using picture-in-picture technology and window overlays
- Video streaming and blending capabilities
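The picture-in-picture feature above can be sketched as a simple compositing step, assuming both views share a resolution. The naive pixel-skip downscale below stands in for the proper filtered scaling a real VDS would use:

```python
import numpy as np

def picture_in_picture(primary: np.ndarray, secondary: np.ndarray,
                       scale: int = 4, margin: int = 8) -> np.ndarray:
    """Inset `secondary`, downscaled by `scale`, at the top-right corner."""
    out = primary.copy()
    inset = secondary[::scale, ::scale]  # naive downscale by pixel skipping
    h, w = inset.shape[:2]
    # Paste the inset view with a margin from the top and right edges.
    out[margin:margin + h, -(margin + w):-margin] = inset
    return out
```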
The VDS must also provide high-performance processing and extensive I/O to support connections to multiple sensors and displays in an extremely SWaP-friendly form factor. And it must require minimal wiring. For example, it’s mandatory that all video streams sent to a single display travel over a single connection between the distribution technology and the display.
Finally, the VDS must be able to synchronize the video streams from all video cameras and other sensors; add diagnostic data, text, and graphics to put the visuals in context for warfighters; and then deliver the results to mission displays with extremely low latency.
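The synchronization step can be sketched as picking, at each display refresh, the most recent frame from every stream. The stream layout and names below are hypothetical:

```python
import bisect

def latest_frame_at(stream, tick_ms):
    """Most recent (timestamp_ms, frame) at or before tick_ms, else None.

    `stream` is a list of (timestamp_ms, frame) pairs sorted by timestamp.
    """
    times = [t for t, _ in stream]
    i = bisect.bisect_right(times, tick_ms)
    return stream[i - 1] if i else None

def synchronize(streams, tick_ms):
    """Return one time-aligned frame per stream for a single display refresh."""
    return {name: latest_frame_at(s, tick_ms) for name, s in streams.items()}
```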
Mission Displays

Mission displays must be ruggedized, lightweight, high-resolution, and extremely responsive. There have been numerous advances in display technologies over the last decade, and each one plays an important role in helping warfighters quickly and easily assimilate mission-critical information:
- High-brightness LED backlights keep displays easy to read even in bright sunlight.
- Fully bonded displays are highly reliable, reduce reflections, and provide superior clarity and contrast compared to legacy non-bonded displays.
- Projected capacitive (PCAP) touchscreens support intuitive, multitouch gestures, don’t require a stylus, and can be used while wearing gloves. They’re also lighter weight and more durable than resistive touchscreens.
- Wide-viewing-angle LCD technology allows warfighters to easily see the information on the screen from the side as well as from the front.
- Anti-reflection and anti-glare coatings provide better visibility in bright light conditions.
End-To-End Video Solution for Enhanced Situational Awareness
Curtiss-Wright’s end-to-end video solutions help improve situational awareness today and pave the way to even more enhanced situational awareness solutions of tomorrow. To maximize flexibility and simplify evolution, the company takes a building-block approach to video-system design, providing ruggedized mission displays, video distribution systems and video recorders, as well as complete video management systems.
Examples of situational awareness system building blocks include:
- Ground vehicle display units (GVDUs) (Fig. 3) that feature ruggedized PCAP touchscreen displays designed for the unique requirements of ground vehicles, where there’s less light, more sand and water in the air, and a greater need to operate the display while wearing gloves.
- The RVG-MS1 Multi-Sensor Rugged Video Gateway (Fig. 4), which provides 25 inputs and 20 outputs in a unit that weighs only 3.25 kg (7.17 lb.) and draws a maximum of only 80 W. This SWaP-optimized video gateway supports single, dual, triple, and quad views in a variety of layouts.
- Advanced video display units (AVDUs) and single video display units (SVDUs) that are ideal for large- and medium-sized video solutions on airborne platforms.
- The VRDV7000, a lightweight, small-form-factor, dual-channel HD video recorder that allows warfighters to easily review recently captured video footage to verify situations.
If critical computer processing power needs to be added to these video solutions, small-form-factor, modular mission computers like the ruggedized Parvus DuraCOR 8041 are ideal.
These ruggedized video solutions are designed and built to withstand extreme temperatures, shock, vibration, sand, water, and other challenging environmental conditions to ensure long-term, reliable operation on any terrain and in any weather conditions. They meet key industry standards, including:
- MIL-STD-461 for radiated emissions and electromagnetic compatibility
- MIL-STD-1275E for power and electrostatic discharge
- MIL-STD-810 for environmental engineering design and testing
- RTCA/DO-160G for environmental conditions and EMI/EMC
In addition, these solutions address the key technical challenges associated with reducing latency in individual solution components and across the end-to-end solution.
Val Chrysostomau is Video Display System Product Marketing Manager at Curtiss-Wright Defense Solutions.