The Perseverance — Part 2 (THE LANDER VISION SYSTEM FOR MARS 2020)

For the background of the Mars 2020 mission, I strongly suggest reading part 1 of this series. This post covers the technical side of the mission.

In January 2016, the Mars 2020 project added Terrain Relative Navigation to the project baseline. This new capability helps the mission avoid large hazards in the landing ellipse, which enables the consideration of landing sites that are more geologically diverse than before. This diversity should improve the quality of the samples collected by Mars 2020 for possible future return to Earth. The Lander Vision System (LVS) is the sensor that provides the position fix used to determine where to land among the hazards identified in orbital data prior to landing.

Mars 2020 uses the Mars Science Laboratory (MSL) Entry, Descent and Landing (EDL) system. This system generates an inertial position by propagating IMU measurements from the ground navigation position fix obtained prior to entry; the error on this position can be as large as 3.2 km.
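To see why inertial-only propagation drifts, here is a minimal dead-reckoning sketch in Python (not the flight software; the state, sample rate and bias value below are illustrative assumptions):

```python
import numpy as np

def propagate_state(pos, vel, accels, dt):
    """Dead-reckon position and velocity by integrating IMU accelerations.

    pos, vel : (3,) arrays, state at the last ground-based fix
    accels   : (N, 3) array of gravity-compensated accelerometer samples
    dt       : sample period in seconds
    """
    for a in accels:
        vel = vel + a * dt   # integrate acceleration -> velocity
        pos = pos + vel * dt # integrate velocity -> position
    return pos, vel

# A tiny constant accelerometer bias of 1 mm/s^2, integrated over
# 540 s, grows quadratically (~0.5*b*t^2) into roughly 146 m of
# position error - small unmodeled errors compound into the km-scale
# drift the LVS must correct.
pos, vel = propagate_state(np.zeros(3), np.zeros(3),
                           np.full((5400, 3), 1e-3), dt=0.1)
```

The quadratic growth of a constant bias is the key point: doubling the coast time quadruples the drift, which is why a late position fix from imagery is so valuable.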

What is Terrain Relative Navigation?

Numerous navigation methods exist on Earth because satellites continuously monitor the planet from orbit, producing data on factors such as temperature, pressure, terrain type and soil. No such infrastructure exists in space: we do not have enough data to predict the natural conditions there, which makes navigation difficult. To overcome this problem, the scientists at NASA came up with Terrain Relative Navigation, a process that compares an onboard map with images captured in real time by a camera mounted on the vehicle during descent. The comparison involves matching stages in which particular features are extracted from each image and matched against the onboard map. The process is iterative: as the vehicle descends closer to the surface, image quality improves and the set of matched features grows denser.
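The core operation of matching an image patch against a map can be sketched with normalized cross-correlation. This is a stand-in for the actual landmark-matching algorithm (which is not detailed in this post), but it shows the idea of locating a descent-image feature inside the onboard map:

```python
import numpy as np

def ncc_match(map_img, patch):
    """Find the map location whose window best matches `patch`
    using normalized cross-correlation (NCC). Brute-force search,
    for illustration only."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    best, best_rc = -2.0, (0, 0)
    H, W = map_img.shape
    for r in range(H - ph + 1):
        for c in range(W - pw + 1):
            w = map_img[r:r+ph, c:c+pw]
            wn = (w - w.mean()) / (w.std() + 1e-9)
            score = float((p * wn).mean())  # NCC score in [-1, 1]
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

# Toy demo: cut a patch out of a synthetic "map" and re-locate it.
rng = np.random.default_rng(0)
m = rng.random((64, 64))
loc, score = ncc_match(m, m[20:28, 30:38])
```

Here `loc` recovers the patch's true origin `(20, 30)`. A real system would use a faster correlation (e.g. FFT-based) and be robust to illumination and viewpoint differences between map and descent image.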

The Lander Vision System

The Lander Vision System is being added to Mars 2020 to decrease this position error to 40 m relative to a map of the landing site. Using this position, the Guidance, Navigation, and Control (GNC) system selects a landing point that is reachable with the fuel onboard and that also avoids hazards identified a priori in the map. With this approach, sites that were previously considered too hazardous for landing, but are very desirable scientifically, are now viable candidates for Mars 2020 and future missions.
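The selection step can be sketched as a constrained nearest-safe-cell search. This is a hypothetical simplification (the real GNC divert logic considers fuel, dynamics and timing, not just a fixed radius), assuming a boolean hazard grid and a fuel-limited divert distance:

```python
import numpy as np

def select_landing_point(hazard, target, reach_m, cell_m):
    """Pick the safe map cell closest to the nominal target that is
    within the fuel-limited divert radius. Illustrative sketch only.

    hazard : 2-D bool array, True where landing is unsafe
    target : (row, col) of the nominal target cell
    reach_m: maximum divert distance in metres
    cell_m : metres per map cell
    """
    rows, cols = np.indices(hazard.shape)
    dist = np.hypot(rows - target[0], cols - target[1]) * cell_m
    candidates = (~hazard) & (dist <= reach_m)
    if not candidates.any():
        return None  # no safe, reachable site
    flat = np.where(candidates, dist, np.inf).argmin()
    return np.unravel_index(flat, hazard.shape)

# Toy demo: a hazard at the nominal target forces a short divert
# to the nearest safe neighbouring cell.
h = np.zeros((10, 10), dtype=bool)
h[5, 5] = True
site = select_landing_point(h, (5, 5), reach_m=300.0, cell_m=100.0)
```

The important structural point matches the text: the chosen point must satisfy both constraints at once, reachability (fuel) and safety (hazard map).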

The sole purpose of the LVS is to estimate position relative to a map during Mars EDL. The LVS needs to reduce an initial 3.2 km position error to 40 m, and to do so within 10 seconds. There is also a requirement to provide a less accurate answer (54 m) within 6 seconds, to deal with off-nominal conditions in which the LVS is started late in the EDL timeline, or to recover quickly from a reboot of the LVS during its operational time window.

Map Relative Localisation

The Map Relative Localisation (MRL) algorithms that run inside the LVS estimate position by fusing landmark matches between descent images and a map with inertial measurement unit (IMU) data. The process occurs in a coarse-to-fine fashion. First, the LVS processing is seeded with an initial estimate of the position, attitude, velocity and altitude of the vehicle, obtained from the IMU and carrying an error of up to 3.2 km; this is used as the starting point for inertial propagation of the LVS state. The position estimate is also used to crop the onboard map of the landing site for subsequent image processing. The cropped map accounts for the initial position error, the image footprint, off-nadir viewing and drift during LVS operation. Computational constraints limit processing to maps of at most 1024x1024 pixels.
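The cropping step might look like the following sketch. The padding terms and their values are my assumptions for illustration; only the 1024x1024 processing limit comes from the text:

```python
import numpy as np

def crop_map(full_map, est_rc, err_px, footprint_px, drift_px,
             max_size=1024):
    """Crop the onboard map around the propagated position estimate.

    The crop half-width is padded by the initial position error, the
    image footprint and the expected drift, then clamped so the crop
    never exceeds the 1024x1024 processing limit.
    """
    half = min(err_px + footprint_px // 2 + drift_px, max_size // 2)
    r, c = est_rc
    H, W = full_map.shape
    r0, r1 = max(0, r - half), min(H, r + half)
    c0, c1 = max(0, c - half), min(W, c + half)
    return full_map[r0:r1, c0:c1]

# 3.2 km of position error at ~12 m/pixel is roughly 267 pixels.
m = np.zeros((4000, 4000))
crop = crop_map(m, (2000, 2000), err_px=267, footprint_px=300,
                drift_px=50)
```

Clamping to the processing limit is the design point: a bigger uncertainty budget cannot buy a bigger crop, so the coarse map must use a coarser resolution instead.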

The Lander Vision System breakdown

Coarse landmark matching: Once the lander reaches an altitude of 4200 m above ground level (AGL), a coarse landmark matching phase begins. Five large patches in each of three descent images are matched to a coarse onboard map. These landmark matches are fused with IMU data propagated between images to compute a horizontal position correction for the LVS state. Because the coarse map is around 12 m/pixel, this coarse horizontal position does not meet the 40 m position error requirement, so the coarse position correction is used to crop the onboard map to one that is 6 km on a side at 6 m/pixel, and a fine matching phase begins.

(Figure: coarse matching)
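A stripped-down version of turning those coarse matches into a horizontal correction (omitting the IMU fusion between images, which the real system performs) could look like this:

```python
import numpy as np

def coarse_correction(matches, map_res_m=12.0):
    """Compute a horizontal position correction from coarse matches.

    Each match pairs the predicted pixel location of a patch (from
    the propagated state) with where it was actually found in the
    map; the mean pixel offset, scaled by the ~12 m/pixel coarse map
    resolution, corrects the state. Simplified sketch only.
    """
    predicted = np.array([p for p, _ in matches], dtype=float)
    found = np.array([f for _, f in matches], dtype=float)
    return (found - predicted).mean(axis=0) * map_res_m

# Five patches all found 20 pixels east of where the propagated
# state predicted them -> a ~240 m eastward correction.
dx = coarse_correction([((r, 100), (r, 120)) for r in range(5)])
```

Averaging over five patches spread across the image makes the correction robust to an occasional bad match; at 12 m/pixel even a one-pixel matching error contributes 12 m, which is why the fine 6 m/pixel phase is needed to reach 40 m accuracy.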

Fine matching: In fine matching, up to 150 small patches from each descent image are matched to the fine map. These matches are input as measurements to an Extended Kalman Filter (EKF) that estimates LVS position, velocity, attitude and IMU biases. The EKF continuously propagates the IMU data so that the best LVS state is available for the next image exposure and landmark match update. LVS landmark matching achieves the 40 m position error after three fine images and continues processing until 2000 m AGL, or until the spacecraft tells the LVS that backshell separation has occurred.

The LVS must operate correctly under all possible EDL conditions. These include vertical velocities between 65 and 115 m/s and horizontal velocities up to 70 m/s. Attitude rates can be as high as 50˚/s, and off-nadir angles can reach 45˚. The impact of terrain is bounded by the terrain properties within a coarse landmark footprint of 1500 m x 1500 m. Within this footprint, the mean slope of a best-fit plane can be up to 15˚, and the standard deviation of the elevation residual from this plane (terrain relief) can be as high as 150 m. Much of the potential landing terrain is rough and has high contrast, but there are bland regions whose appearance has entropy as low as 3.0 (roughly a contrast of 8 DN within a 256 DN image).

Although the landing occurred in mid-afternoon, variation in the landing date, time of day and latitude induces variability in the sun vector at the time of landing. Sun elevations can be between 25˚ and 55˚ above the horizon, and sun azimuths between 240˚ and 310˚ clockwise from north. These illumination variations introduce differences between the map image and the descent image that must be handled robustly during landmark matching.

(Figure: fine matching)
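The measurement-update half of the filter can be sketched in a few lines. This is a plain linear Kalman update on a toy 2-D position state, not the flight EKF (which carries attitude, velocity and IMU biases and linearizes a nonlinear camera model), but the fusion mechanics are the same:

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """One Kalman measurement update: fuse a landmark measurement z
    into state mean x and covariance P, given measurement matrix H
    and measurement noise covariance R."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # corrected state
    P = (np.eye(len(x)) - K @ H) @ P    # reduced uncertainty
    return x, P

# Toy demo: a landmark fix pulls an uncertain 2-D position estimate
# toward the measured position, weighted by relative uncertainties.
x0 = np.array([0.0, 0.0])
P0 = np.eye(2) * 100.0                  # prior variance
z = np.array([30.0, -10.0])             # landmark-derived position
x1, P1 = ekf_update(x0, P0, z, np.eye(2), np.eye(2) * 25.0)
```

With the prior four times noisier than the measurement, the gain is 0.8, so the estimate moves 80% of the way to the measurement and the variance drops from 100 to 20. Each of the up-to-150 patch matches per image tightens the state this way, which is how three fine images suffice to reach the 40 m requirement.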

So, this was the gist of the navigation system used on Mars. The rover, however, has quite a complex architecture, involving numerous sensors and methods to check and overcome their sensitivities. I might publish a post on the hardware and analysis of the mission later as part of this series. Stay tuned!

Loves Linux, practices Computer Vision and VR, researches on ADAS. MS student at IIITB.
