Purdue University researchers developing autonomous robot capable of interacting with humans

Researchers at Purdue University's School of Electrical and Computer Engineering are developing integrative language and vision software that could enable an autonomous robot not only to interact with people in different environments but also to accomplish navigational goals.

Led by Associate Professor Jeffrey Mark Siskind, the research team, which also includes doctoral candidates Thomas Ilyevsky and Jared Johansen, is developing a robot named Hosh that can integrate visual and language data into its navigational process in order to locate a specific place or person.

Hosh is being developed with a grant from the National Science Foundation's National Robotics Initiative.

“The project’s overall goal is to tell the robot to find a particular person, room or building and have the robot interact with ordinary, untrained people to ask in natural language for directions toward a particular place,” Siskind explains.

“To accomplish this task, the robot must operate safely in people’s presence, encourage them to provide directions and use their information to find the goal.”

Among many possibilities, the researchers believe that Hosh could help self-driving cars communicate with passengers and pedestrians, or complete small-scale tasks in an office setting, such as delivering mail.

After receiving a task to locate a specific room, building or individual in a known or unknown location, Hosh will combine novel language and visual processing so that it can navigate the environment, ask for directions, request that doors be opened or elevator buttons be pushed, and reach its goal.

In order to give the robot “common sense knowledge,” the researchers are developing high-level software. Common sense knowledge, the researchers note, is the ability to understand objects and environments with human-level intuition. With this knowledge, Hosh would be able to recognize navigational conventions.

For example, Hosh will incorporate both spoken statements and physical gestures into its navigation process.

“The robot needs human-level intuition in order to understand navigational conventions,” Ilyevsky says. “This is where common sense knowledge comes in. The robot should know that odd- and even-numbered rooms sit across from each other in a hallway or that Room 317 should be on the building’s third floor.”
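The article does not describe how the Purdue team encodes such conventions in software; as a rough illustration only, the following Python sketch shows how the two rules Ilyevsky mentions might be expressed as simple common-sense functions. The function names and the assumption of three- or four-digit room numbers are hypothetical.

```python
# Illustrative sketch of common-sense navigational conventions,
# not the Purdue team's actual software.

def expected_floor(room_number: int) -> int:
    """Assume the last two digits index the room on its floor,
    so Room 317 is expected on the third floor."""
    return int(str(room_number)[:-2])

def same_side_of_hallway(room_a: int, room_b: int) -> bool:
    """Assume odd- and even-numbered rooms sit on opposite sides of a hallway,
    so rooms with the same parity share a side."""
    return room_a % 2 == room_b % 2

if __name__ == "__main__":
    print(expected_floor(317))             # -> 3
    print(same_side_of_hallway(317, 319))  # -> True  (same side)
    print(same_side_of_hallway(317, 318))  # -> False (across the hallway)
```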

The researchers will build the robot’s common sense knowledge with integrative natural language processing and computer vision software. Typically, the researchers say, natural language processing enables a robot to communicate with people, while computer vision software enables it to navigate its environment; in this project, however, they are advancing the two so that they inform each other as the robot moves.

“The robot needs to understand language in a visual context and vision in a language context,” Siskind says. “For example, while locating a specific person, the robot might receive information in a comment or physical gesture and must understand both within the context of its navigational goals.”
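As a hedged sketch of what "language in a visual context" could mean in code, the example below grounds a landmark mentioned in a spoken comment against the robot's current visual detections to pick a heading. The data structures, field names and matching rule are assumptions made for illustration, not the project's actual design.

```python
# Minimal sketch: a spoken cue and visual detections informing each other.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SpokenCue:
    text: str      # raw utterance, e.g. "She's just past the elevator"
    landmark: str  # landmark a language parser would extract from the utterance

@dataclass
class Detection:
    label: str          # object class reported by the vision system
    bearing_deg: float  # direction of the detection relative to the robot's heading

def choose_heading(cue: SpokenCue, detections: List[Detection]) -> Optional[float]:
    """Ground the spoken landmark in the current detections: head toward the
    detection whose label matches the landmark, if one is visible."""
    for det in detections:
        if det.label == cue.landmark:
            return det.bearing_deg
    return None  # landmark not visible yet: keep exploring or ask again

cue = SpokenCue(text="She's in the office just past the elevator", landmark="elevator")
seen = [Detection("door", -30.0), Detection("elevator", 15.0)]
print(choose_heading(cue, seen))  # -> 15.0
```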

As the technology advances, the researchers expect to send the robot on autonomous missions of increasing difficulty. Initially, the robot will learn to navigate indoors on a single floor; then, in order to move to other floors and buildings, it will ask people to operate the elevator or open doors for it.

The researchers hope to begin conducting outdoor missions in the spring.

“We expect this technology to be really big, because the industry of autonomous robots and self-driving cars is becoming very big,” Siskind says. “The technology could be adapted into self-driving cars, allowing the cars to ask for directions or passengers to request a specific destination, just like human drivers do.”