WHY DO ROBOTS NEED TO LEARN LANGUAGE?

Source: analyticsinsight.net

Could giving robots a voice help them learn human commands?

Robots have become an integral part of humans’ daily lives. They help us in numerous ways, from performing complex tasks and lifting heavy loads to assisting the elderly, playing with children, and entertaining people at events, and they can interact with people in a wide range of scenarios. However, interpreting human language is still a challenge for robotic systems. Training them with real-world experiences and knowledge about the world could help robots understand natural language.

People use language to express emotions, direct behavior, ask and answer questions, provide information, and ask for help. Language-based interfaces for robots require minimal user training and let users express a wide variety of complex tasks.

In a paper, researchers from MIT describe a new way to train machines. They note that children learn language by observing their environment, listening to the people around them, and connecting what they see with what they hear. With that in mind, the researchers created a tool called a semantic parser that mimics a child’s experience of learning a language. Parsers are already used for web searches, natural-language database querying, and voice assistants. The new system watches captioned videos and links the words that speakers say with the objects and actions recorded on screen.

Conventional parsers are trained on sentences annotated by humans, so a parser that learns through observation instead could make interaction between humans and robots more natural. According to the paper, a robot equipped with the parser could observe its environment to reinforce its understanding of spoken commands, even when the spoken sentences are not fully grammatical or clear.
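To make the idea concrete, here is a minimal, hypothetical Python sketch of this style of weakly supervised grounding: instead of relying on human-annotated sentences, it pairs each caption with the object and action labels a vision system is assumed to have detected in the same clip, then learns word-label associations from co-occurrence. This is only an illustration of the general technique, not the MIT parser itself; the captions, detector labels, and scoring below are invented for the example.

from collections import Counter, defaultdict

# Toy "captioned video" data: (spoken caption, labels detected in the clip).
# Both the captions and the detector labels are illustrative assumptions.
clips = [
    ("the woman picks up the cup",   {"woman", "cup", "pick_up"}),
    ("the woman puts down the book", {"woman", "book", "put_down"}),
    ("the man picks up the book",    {"man", "book", "pick_up"}),
    ("the man puts down the cup",    {"man", "cup", "put_down"}),
]

stopwords = {"the", "a", "up", "down"}

word_freq = Counter()        # number of clips containing each word
label_freq = Counter()       # number of clips containing each label
cooc = defaultdict(Counter)  # word -> label -> joint count

for caption, labels in clips:
    words = {w for w in caption.split() if w not in stopwords}
    word_freq.update(words)
    label_freq.update(labels)
    for w in words:
        cooc[w].update(labels)

n = len(clips)

# Ground each word in the label with the highest pointwise mutual
# information, which favors labels that appear specifically alongside
# this word over labels that appear in almost every clip.
for word, counts in sorted(cooc.items()):
    best = max(counts, key=lambda lab: counts[lab] * n / (word_freq[word] * label_freq[lab]))
    print(f"{word!r} is grounded in {best!r}")

Run on the toy data, the sketch maps "picks" to pick_up, "puts" to put_down, and each noun to the matching detector label, with no per-sentence annotation involved; a real system would of course need far richer video understanding and a full parser on top.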

Earlier, Analytics Insight reported on how giving robots a voice in healthcare influences human perception. Robots are already delivering a wide range of healthcare services, creating opportunities for medical personnel, and advancing patient care. In that article, we noted how researchers at the University of Auckland and the Singapore University of Technology & Design have been using speech synthesis techniques to create robots that sound more empathetic. As part of their study, the researchers tested how a robot’s voice can affect users’ understanding by conducting a simple experiment with a robot called Healthbot. A professional voice artist recorded the robot’s dialogs in two variations: a flat monotone and an empathetic voice.

More broadly, teaching machines to speak and to recognize the human voice is a crucial step, since spoken language is the most intuitive form of interaction for humans. In 2018, it was reported that researchers in Japan were working to bring audition, or the power of listening, to robots. “Robot Audition” is a research area proposed in 2000 by Tokyo Institute of Technology Professor Kazuhiro Nakadai and Professor Hiroshi G. Okuno of Waseda University. They made their research public as open-source software, which helped generate interest and diversify the field, and Robot Audition was officially registered as a research area with the IEEE Robotics and Automation Society.

So, when robots and robotic systems are able to learn and recognize human language, they will have an even greater impact on people’s lives.
