
NASA technology enables precise landing and avoidance of danger without a pilot




New Shepard Booster Lands

The New Shepard (NS) booster lands during NS-11, the vehicle’s fifth flight, on May 2, 2019. Photo credit: Blue Origin

Some of the most interesting places to study in our solar system lie in the most inhospitable environments, and landing on any planetary body is already a risky endeavor. With NASA planning robotic and crewed missions to new locations on the Moon and Mars, avoiding landing on the steep slope of a crater or in a boulder field is critical to a safe touchdown for exploring other worlds. To improve landing safety, NASA is developing and testing a suite of precise landing and hazard avoidance technologies.

A combination of laser sensors, a camera, a high-speed computer and sophisticated algorithms gives spacecraft the artificial eyes and analytical capability to find a designated landing area, identify potential hazards and adjust course to the safest touchdown site. The technologies, developed under the SPLICE (Safe and Precise Landing – Integrated Capabilities Evolution) project within the Space Technology Mission Directorate’s Game Changing Development program, will eventually enable spacecraft to avoid boulders, craters and more within landing areas half the size of a soccer field that are already targeted as relatively safe.


A new suite of lunar landing technologies called Safe and Precise Landing – Integrated Capabilities Evolution (SPLICE) will enable safer and more accurate lunar landings than ever before. Future Moon missions could use NASA’s advanced SPLICE algorithms and sensors to target landing sites that weren’t possible during the Apollo missions, such as regions with hazardous boulders and nearby shadowed craters. SPLICE technologies could also help land humans on Mars. Photo credit: NASA

Three of SPLICE’s four main subsystems will have their first integrated test flight on a Blue Origin New Shepard rocket during an upcoming mission. As the rocket’s booster returns to the ground after reaching the boundary between Earth’s atmosphere and space, SPLICE’s terrain relative navigation, navigation Doppler lidar, and descent and landing computer will run onboard the booster. Each will operate in the same way it would when approaching the lunar surface.

The fourth major SPLICE component, a hazard detection lidar, will be evaluated in the future through ground and flight tests.

Following breadcrumbs

When choosing a location to explore, part of the consideration is ensuring enough room for the spacecraft to land. The size of that area, known as the landing ellipse, reveals the imprecision of legacy landing technology. The target landing area for Apollo 11 in 1969 was approximately 11 miles by 3 miles, and astronauts piloted the lander. Subsequent robotic missions to Mars were designed for autonomous landings. Viking arrived at the Red Planet seven years later with a target ellipse of 174 miles by 62 miles.

Apollo 11 landing ellipse

The Apollo 11 landing ellipse shown here was 11 miles by 3 miles. Precision landing technology drastically reduces the landing area so that multiple missions can land in the same region. Photo credit: NASA

Technology has improved, and the size of autonomous landing zones has decreased accordingly. In 2012, the Curiosity rover’s landing ellipse was down to 12 miles by 4 miles.
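
For a rough sense of scale, the ellipse dimensions quoted above can be turned into areas with a quick back-of-the-envelope calculation (treating each landing zone as an ideal ellipse, so the numbers are illustrative only):

```python
import math

def ellipse_area_sq_miles(major_mi, minor_mi):
    """Area of an ellipse given its full major and minor axes, in square miles."""
    return math.pi * (major_mi / 2) * (minor_mi / 2)

# Landing ellipse dimensions cited in the article.
for name, major, minor in [("Apollo 11 (1969)", 11, 3),
                           ("Viking (1976)", 174, 62),
                           ("Curiosity (2012)", 12, 4)]:
    print(f"{name}: ~{ellipse_area_sq_miles(major, minor):,.0f} square miles")
```

By that measure, Viking’s target zone covered thousands of square miles, while Apollo 11’s and Curiosity’s covered only a few dozen.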

The ability to pinpoint a landing site will help future missions target areas for new scientific exploration in locations previously deemed too dangerous for an unpiloted landing. It will also enable advanced supply missions to send cargo and supplies to a single location rather than spreading them out over miles.

Each planetary body has its own unique conditions. That is why “SPLICE is designed to integrate with any spacecraft landing on a planet or a moon,” said project manager Ron Sostaric. Sostaric, based at NASA’s Johnson Space Center in Houston, explained that the project spans multiple centers across the agency.

NASA Terrain Relative Navigation

Terrain relative navigation provides a navigation measurement by comparing real-time images to known maps of surface features during descent. Photo credit: NASA

“What we’re building is a complete descent and landing system that will work for future Artemis missions to the moon and that can be customized for Mars,” he said. “Our job is to put the individual components together and ensure that they work as a functioning system.”

Atmospheric conditions may vary, but the descent and landing process is the same. The SPLICE computer is programmed to activate terrain relative navigation several miles above the ground. The onboard camera photographs the surface, taking up to 10 images every second. Those are continuously fed into the computer, which is preloaded with satellite images of the landing site and a database of known landmarks.

Algorithms search the real-time imagery for those known features to determine the spacecraft’s location and navigate the vehicle safely to its expected landing point. It’s similar to navigating by landmarks, such as buildings, rather than street names.
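
As an illustration of the idea only (the article does not describe SPLICE’s actual algorithms), each landmark the camera recognizes implies an offset between where the spacecraft thinks it is and where it really is, and combining those offsets yields a position correction. The coordinates below are hypothetical:

```python
import numpy as np

# Hypothetical map coordinates (east, north, meters) of landmarks recognized in
# the latest camera frame, versus where those same landmarks would appear if the
# spacecraft were exactly on its predicted position.
known_map_positions = np.array([[1200.0, 450.0],
                                [ 980.0, 710.0],
                                [1430.0, 620.0]])
observed_positions  = np.array([[1188.0, 463.0],
                                [ 969.0, 722.0],
                                [1419.0, 633.0]])

# Every matched landmark implies roughly the same position error, so a
# least-squares estimate of a pure translation is the mean of the offsets.
position_correction = (known_map_positions - observed_positions).mean(axis=0)
print("Estimated position correction (east, north), meters:", position_correction)
```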

In this way, terrain relative navigation identifies where the spacecraft is and sends that information to the guidance and control computer, which is responsible for executing the flight path to the surface. The computer knows approximately when the spacecraft should be nearing its target, almost as if it had laid breadcrumbs and were following them to the final destination.

This process continues until approximately four miles above the surface.
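
Continuing the breadcrumb analogy, one can picture the guidance computer comparing each navigation fix against a pre-planned string of checkpoints and flagging when a correction is needed. The sketch below is purely illustrative, with made-up waypoints and tolerances, not SPLICE’s actual logic:

```python
import numpy as np

# Hypothetical "breadcrumbs": expected (east, north, altitude) points in meters
# along the nominal descent path, keyed by the altitude at which each applies.
breadcrumbs = [
    (6000.0, np.array([   0.0,   0.0, 6000.0])),
    (4000.0, np.array([ 350.0, 120.0, 4000.0])),
    (2000.0, np.array([ 600.0, 200.0, 2000.0])),
]

def course_correction(current_state, tolerance_m=50.0):
    """Compare the navigated state to the nearest breadcrumb (by altitude) and
    return the offset if it exceeds a tolerance, otherwise None."""
    altitude = current_state[2]
    _, expected = min(breadcrumbs, key=lambda b: abs(b[0] - altitude))
    offset = expected - current_state
    return offset if np.linalg.norm(offset[:2]) > tolerance_m else None

fix = np.array([420.0, 95.0, 3980.0])   # latest terrain relative navigation fix
print(course_correction(fix))           # ~74 m horizontal error -> correction issued
```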

Laser navigation

Knowing the spacecraft’s precise location is essential for the calculations required to plan and execute a powered descent to a precise landing. Midway through the descent, the computer activates the navigation Doppler lidar to measure velocity and range, further adding to the precise navigation information coming from terrain relative navigation. Lidar (light detection and ranging) works in much the same way as radar but uses light waves instead of radio waves. Three laser beams, each as narrow as a pencil, are pointed toward the ground. The light from these beams bounces off the surface and reflects back to the spacecraft.

NASA Navigation Doppler Lidar Instrument

NASA’s navigation Doppler lidar instrument consists of a chassis with electro-optical and electronic components and an optical head with three telescopes. Photo credit: NASA

The travel time and wavelength of this reflected light are used to calculate how far the vehicle is from the ground, in which direction it is going, and how fast it is moving. These calculations are carried out 20 times per second for all three laser beams and fed into the control computer.
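Using just the two relationships described above (round-trip travel time for range, Doppler shift of the returned light for line-of-sight speed), here is a hedged sketch of that arithmetic for three beams. The beam geometry, laser wavelength and measured values are illustrative assumptions, not the instrument’s actual parameters:

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s
WAVELENGTH = 1.55e-6       # assumed near-infrared laser wavelength, m

# Unit vectors of the three beams in the vehicle frame (illustrative geometry,
# each canted away from straight down), plus made-up measurements per beam.
beam_dirs = np.array([[ 0.35,  0.00, -0.94],
                      [-0.17,  0.30, -0.94],
                      [-0.17, -0.30, -0.94]])
round_trip_times = np.array([2.10e-5, 2.14e-5, 2.12e-5])   # seconds
doppler_shifts   = np.array([6.20e7,  6.50e7,  6.35e7])    # hertz

# Range along each beam: the light covers the distance out and back.
ranges = C * round_trip_times / 2.0

# Closing speed along each beam from the Doppler shift: f_d = 2 * v_los / lambda.
los_speeds = doppler_shifts * WAVELENGTH / 2.0

# Three line-of-sight speeds along three known directions give the full
# velocity vector by solving a small linear system.
velocity = np.linalg.solve(beam_dirs, los_speeds)

print("Slant ranges (m):", ranges.round(1))
print("Velocity estimate (m/s):", velocity.round(2))
```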

Doppler lidar works successfully on Earth. However, Farzin Amzajerdian, the technology’s co-inventor and principal investigator at NASA’s Langley Research Center in Hampton, Virginia, is responsible for addressing the challenges of using it in space.

“There are still some unknowns about how much signal will come from the surface of the Moon and Mars,” he said. If the material on the ground is not very reflective, the signal back to the sensors will be weaker. But Amzajerdian is confident the lidar will outperform radar technology, because the laser frequency is orders of magnitude higher than that of radio waves, allowing far greater precision and more efficient sensing.
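
To put that frequency argument in rough numbers (the wavelengths here are illustrative assumptions: about 1.55 micrometers for a near-infrared lidar versus about 8.6 millimeters for a Ka-band radar), the Doppler shift produced by the same 1 m/s of line-of-sight motion differs by several orders of magnitude:

```python
def doppler_shift_hz(speed_m_s, wavelength_m):
    """Doppler shift of a signal reflected from a target closing at speed_m_s."""
    return 2.0 * speed_m_s / wavelength_m

lidar_wavelength = 1.55e-6    # assumed near-infrared lidar wavelength, m
radar_wavelength = 8.6e-3     # assumed Ka-band radar wavelength, m

print(f"Lidar shift at 1 m/s: {doppler_shift_hz(1.0, lidar_wavelength):,.0f} Hz")
print(f"Radar shift at 1 m/s: {doppler_shift_hz(1.0, radar_wavelength):,.0f} Hz")
```

The same speed produces a shift of roughly a megahertz for the lidar but only a few hundred hertz for the radar, which is why the laser-based instrument can resolve much smaller changes in velocity.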

Langley engineer John Savage

Langley engineer John Savage inspects a section of the navigation Doppler lidar unit after its fabrication from a block of metal. Photo credit: NASA / David C. Bowman

The workhorse responsible for managing all of this data is the descent and landing computer. Navigation data from the sensor systems are fed into on-board algorithms that calculate new paths for a precise landing.

Computer powerhouse

The descent and landing computer synchronizes the functions and data management of the individual SPLICE components. It must also integrate seamlessly with the other systems on any spacecraft. This small but powerful computer keeps the precision landing technologies from overloading the primary flight computer.
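
As a purely illustrative sketch of that division of labor (the article does not describe SPLICE’s actual software interfaces), the dedicated computer can be pictured as a loop that gathers the sensor outputs, fuses them into a single state estimate, and hands only that compact result to guidance. All names and structures below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class NavState:
    """Fused navigation state handed to guidance (hypothetical structure)."""
    position: tuple      # meters, landing-site frame
    velocity: tuple      # meters per second
    valid: bool

def descent_and_landing_cycle(trn_fix, ndl_ranges, ndl_velocity):
    """One hypothetical processing cycle: combine terrain relative navigation and
    navigation Doppler lidar outputs into a single state for guidance, so the
    primary flight computer never has to touch the raw sensor data."""
    if trn_fix is None or ndl_velocity is None:
        return NavState((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), valid=False)
    altitude = min(ndl_ranges)                       # crude altitude proxy
    position = (trn_fix[0], trn_fix[1], altitude)
    return NavState(position, tuple(ndl_velocity), valid=True)

state = descent_and_landing_cycle((612.0, 205.0),
                                  [1820.0, 1855.0, 1838.0],
                                  (-1.2, 0.4, -31.5))
print(state)
```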

Computing needs identified early on made it clear that existing computers were inadequate. NASA’s high-performance spaceflight computing processor would meet the demand but is still several years from completion. An interim solution was needed to get SPLICE ready for its first suborbital flight test with Blue Origin on its New Shepard rocket. Data on the new computer’s performance will help shape its eventual replacement.

SPLICE hardware vacuum chamber test

SPLICE hardware is prepared for a vacuum chamber test. Three of SPLICE’s four main subsystems will have their first integrated test flight on a Blue Origin New Shepard rocket. Photo credit: NASA

“The interim computer has very similar processing technology, which will inform both future high-speed computer design and future descent and landing computer integration efforts,” said John Carson, the technical integration manager for precision landing.

Looking ahead, test missions like these will help shape safe landing systems for NASA and commercial providers’ missions to the surface of the Moon and other solar system bodies.

“Safely and precisely landing on another world still has many challenges,” said Carson. “There’s no commercial technology yet that you can go out and buy for this. Every future surface mission could use this precision landing capability, so NASA needs to do this now. And we are fostering its transfer and use with our industry partners.”



