
Quantitative approach gives a robot child's facial expressions rich nuance – ScienceDaily



Japan's love for robots is no secret. But is the feeling mutual in the country's amazing androids? Roboticists may now be a step closer to giving androids more expressive faces to communicate with.

While robots have seen advances in health care, industry, and other fields in Japan, capturing humanlike expression in a robotic face remains an elusive challenge. Although android systems' properties have been generally addressed, androids' facial expressions have not been examined in detail. This is due to factors such as the enormous range and asymmetry of natural human facial movements, the restrictions of the materials used for android skin and, of course, the intricate engineering and mathematics driving robotic movements.

A trio of researchers at Osaka University has now found a method for identifying and quantitatively evaluating facial movements on their child android robot head. Named Affetto, the android's first-generation model was reported in 2011. The researchers have now found a system to make the second-generation Affetto more expressive. Their findings offer a path for androids to express greater ranges of emotion and, ultimately, to interact more deeply with humans.

The researchers reported their findings in the journal Frontiers in Robotics and AI. "Surface deformations are a key issue in controlling android faces," explains co-author Minoru Asada. "Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with. We sought a better way to measure and control it."

The researchers investigated 116 different facial points on Affetto to measure its three-dimensional movement. The facial points were underpinned by so-called "deformation units." Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as lowering or raising part of a lip or eyelid. Measurements from these were then subjected to a mathematical model to quantify their surface motion patterns.
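To make the idea concrete, here is a minimal sketch in Python of how one such point's measured 3-D displacement could be regressed on deformation-unit commands. It assumes a simple linear relation; the unit counts, variable names, and simulated data are all illustrative, not the authors' published formulation.

```python
import numpy as np

# Minimal sketch, assuming a linear relation between deformation-unit
# commands and one facial point's 3-D displacement. All counts, names,
# and data below are hypothetical, not from the study.

rng = np.random.default_rng(0)

n_units = 5      # deformation units driving this facial region (assumed)
n_poses = 200    # number of measured facial poses (assumed)

# Commands sent to the deformation units, one row per measured pose.
U = rng.uniform(0.0, 1.0, size=(n_poses, n_units))

# Simulated 3-D displacement of one of the 116 facial points per pose;
# in the real experiment this would come from surface measurements.
true_gain = rng.normal(size=(n_units, 3))
X = U @ true_gain + 0.01 * rng.normal(size=(n_poses, 3))

# Least-squares fit of displacement ≈ U @ G: the gain matrix G then
# quantifies how strongly each unit moves this surface point.
G, _, _, _ = np.linalg.lstsq(U, X, rcond=None)

print("Fitted gain matrix (units x xyz):")
print(np.round(G, 3))
```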

Although the applied forces and the behavior of the synthetic skin posed challenges, the researchers were able to use their system of adjusting the deformation units to precisely control Affetto's facial surface movements.
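Framed this way, precise control becomes an inverse problem: choosing unit commands that produce a desired skin displacement. The sketch below illustrates the idea using a least-squares pseudo-inverse of a fitted gain matrix; this is a hypothetical stand-in, not the authors' published adjustment method, and the numbers are made up.

```python
import numpy as np

# Minimal sketch of the inverse problem: given a fitted gain matrix G
# mapping unit commands to a point's 3-D displacement, solve for the
# commands that realize a target displacement. Values are illustrative.
G = np.array([
    [0.8, 0.1, 0.0],
    [0.2, 0.9, 0.1],
    [0.0, 0.3, 0.7],
    [0.5, 0.0, 0.4],
    [0.1, 0.6, 0.2],
])

# Desired displacement of the facial point, e.g. raising a lip corner.
target = np.array([0.4, 0.5, 0.1])

# Solve target ≈ u @ G in the least-squares sense, then clip to the
# actuators' valid command range.
u, *_ = np.linalg.lstsq(G.T, target, rcond=None)
u = np.clip(u, 0.0, 1.0)

print("Commands per deformation unit:", np.round(u, 3))
print("Predicted displacement:", np.round(u @ G, 3))
```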

"Android robot faces have proven to be a black box problem: they have been implemented, but only vague and general judged, "says the first author of the study, Hisashi Ishihara. "Our accurate results will allow us to effectively control Android facial movements to introduce more nuanced expressions such as smile and frown."

Story Source:

Materials provided by Osaka University. Note: Content may be edited for style and length.

