Degraded Visual Environments (DVE) can significantly restrict rotorcraft operations during their most common mission profiles: terrain flight and off-airfield operations. The user community has been seeking solutions that allow pilotage in DVE and mitigate the additional risks and limitations imposed by the degraded visual scene. Reaching such a solution requires a common understanding of the DVE problem, the history of solutions to date, and the full range of approaches that may enable future rotorcraft pilotage in DVE.

Three major technologies contribute to rotorcraft operations in DVE: flight control, cueing, and sensing; all three must be addressed for an optimal solution. Increasing aircraft stability through flight control improvements reduces pilot workload and facilitates operations in both Degraded Visual Environments and Good Visual Environments (GVE), and must therefore be a major piece of any DVE solution. Sensing and cueing improvements are required to provide a level of situational awareness that permits low-level flight and off-airfield landings while avoiding contact with terrain or obstacles the flight crew cannot visually detect.

How this sensor information is presented to the pilot is a subject of debate among those working to solve the DVE problem. Two major philosophies dominate the field of DVE sensor and cueing implementation. The first holds that the sensor should display an image that allows the pilot to perform all pilotage tasks as they would under visual flight rules (VFR). The second holds that the pilot should follow an algorithm-derived, sensor-cleared, precision flight path, presented as cues to fly as they would under instrument flight rules (IFR).
There are also combinations of these two methods that offer differing levels of assistance to the pilot, ranging from aircraft flight symbology overlaid on the sensor image, to symbols that augment the displayed image and help the pilot interpret the scene, to a complete virtual reality that presents a display of the sensed world without any "see-through" capability. These options can use two primary means of transmitting a sensor image and cueing information to the pilot: a helmet-mounted display (HMD) or a panel-mounted display (PMD). This paper explores the trade space between DVE systems that depend on an image and those that utilize guidance algorithms, for both the PMD and the HMD, as recently demonstrated during the 2016 and 2017 NATO flight trials in the United States, Germany, and Switzerland.