A robot learns to distinguish what it touches

Credit: © Andrey Popov / Adobe Stock

A person can identify an object by touching it, and can anticipate how an object will feel just by looking at it. But can robots do the same? Can they connect the senses of sight and touch, and use one to reason about the other?

"Touch comes before sight, before speech. It is the first language and the last, and it always tells the truth."
Canadian author Margaret Atwood, in her novel The Blind Assassin
Touch lets us feel the physical world, while our eyes help us understand the full picture of these tactile signals.

To better bridge this sensory gap, researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a predictive artificial intelligence that can learn to imagine what an object feels like from visual input alone, and to visualize what an object looks like from tactile input alone.

Robots programmed only to see, or only to feel, cannot use these two kinds of signal interchangeably.
The team's system, by contrast, can create realistic tactile signals from visual inputs, and can predict which object is being touched, and which part of it, directly from tactile inputs.

Haptic feedback on the HaptX gloves / Image Credit: HaptX

They used a KUKA robot arm fitted with a tactile sensor called GelSight, designed by another group at MIT.
Using a simple web camera, they recorded nearly 200 objects, such as tools, household products, and fabrics, being touched more than 12,000 times. The 12,000 video clips were then broken down into static frames, forming VisGel, a dataset of more than 3 million paired visual/tactile images that captures information about shape and texture.
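To make the pairing idea concrete, here is a minimal sketch, under the assumption (not stated in the article) that each recorded touch yields two synchronized clips of equal length: one from the webcam and one from the GelSight sensor. The clip contents here are random stand-in arrays, and the `stride` subsampling is a hypothetical choice to thin out near-duplicate frames.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for one recorded touch: a synchronized webcam clip and a
# GelSight clip of the same length (random arrays in place of real frames).
camera_clip = rng.random((30, 64, 64, 3))    # 30 visual frames
gelsight_clip = rng.random((30, 32, 32, 3))  # 30 tactile frames

def to_pairs(visual, tactile, stride=5):
    """Slice two synchronized clips into (visual, tactile) training pairs,
    keeping every `stride`-th frame to reduce near-duplicates."""
    assert len(visual) == len(tactile)
    return [(visual[i], tactile[i]) for i in range(0, len(visual), stride)]

pairs = to_pairs(camera_clip, gelsight_clip)
print(len(pairs))  # 6 pairs from a 30-frame clip with stride 5
```

Repeating this over thousands of clips is how a paired dataset on the scale of VisGel would be assembled.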

"By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge," explains Yunzhu Li, a PhD student at CSAIL and lead author of the paper. "By touch alone, it can predict the shape of the object and its interaction with the environment. Bringing these two senses together could empower the robot and reduce the amount of data it needs for tasks that involve manipulating and grasping objects."

Recent work has aimed to equip robots with more human-like physical senses, such as a 2016 MIT project that used deep learning to visually indicate sounds, or a model that predicts how objects respond to physical forces. But both drew on large datasets that are not available for understanding the interactions between vision and touch.

The team's technique uses the VisGel dataset together with a type of artificial neural network called a generative adversarial network (GAN).
GANs take visual or tactile images and generate images in the other modality. They work by pitting two networks against each other: a generator and a discriminator. The generator aims to create realistic images that fool the discriminator. Each time the discriminator "catches" the generator, it exposes the reasoning behind its decision, which lets the generator improve itself again and again.
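The adversarial loop can be illustrated with a deliberately tiny sketch. This is not the paper's architecture: the "images" here are single numbers, the generator is one affine map, and the discriminator is logistic regression, but the alternating updates are the same idea: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: scalars clustered around 3.0 (stand-in for real tactile frames).
def sample_real(n):
    return rng.normal(3.0, 0.1, size=(n, 1))

# Generator: maps noise z to a sample via one learned affine map.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression scoring "real vs. generated".
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr, n = 0.05, 32
for step in range(2000):
    z = rng.normal(size=(n, 1))
    fake = z @ g_w + g_b
    real = sample_real(n)

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    p_real = sigmoid(real @ d_w + d_b)
    p_fake = sigmoid(fake @ d_w + d_b)
    grad_real = p_real - 1.0            # d(BCE)/d(logit), label 1
    grad_fake = p_fake                  # d(BCE)/d(logit), label 0
    d_w -= lr * (real.T @ grad_real + fake.T @ grad_fake) / n
    d_b -= lr * (grad_real + grad_fake).mean()

    # Generator update: push D(fake) -> 1, i.e. fool the discriminator.
    p_fake = sigmoid(fake @ d_w + d_b)
    g_grad_fake = (p_fake - 1.0) @ d_w.T    # chain rule through D
    g_w -= lr * (z.T @ g_grad_fake) / n
    g_b -= lr * g_grad_fake.mean()

# After training, generated samples should have drifted toward the real data.
samples = rng.normal(size=(500, 1)) @ g_w + g_b
print(float(samples.mean()))
```

In the actual work, both networks are convolutional and the generator emits full images rather than scalars, but the catch-and-improve dynamic between the two players is exactly this loop.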

From Vision to Touch:
Humans can infer how something feels just by seeing it. To give machines this ability, the system first had to determine the position of the touch, and then deduce information about the shape and texture of the touched region.
Reference images of the scene, taken without any robot interaction, helped the system encode details about the objects and the environment. Then, while the robot arm was operating, the model could simply compare the current frame with the reference image and easily identify the location and scale of the touch.
This might look like feeding the system an image of a computer mouse, and then "seeing" the area where the model predicts the object should be touched in order to pick it up, which could greatly help machines plan safer and more efficient actions in their environment.
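The reference-image comparison described above can be sketched very simply. Assuming (for illustration only) grayscale frames as float arrays in [0, 1], the touch region is wherever the current frame differs noticeably from the reference; the real system learns this comparison rather than hard-coding a threshold.

```python
import numpy as np

def locate_touch(reference, current, threshold=0.2):
    """Find where the current frame differs from the reference frame.

    Returns the bounding box (top, left, bottom, right) of changed
    pixels, or None when nothing in the scene has changed.
    """
    diff = np.abs(current - reference)
    mask = diff > threshold
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return (int(rows[0]), int(cols[0]), int(rows[-1]) + 1, int(cols[-1]) + 1)

# Synthetic example: a blank 8x8 scene, then the "arm" touches a 2x2 patch.
ref = np.zeros((8, 8))
cur = ref.copy()
cur[3:5, 2:4] = 1.0
print(locate_touch(ref, cur))  # (3, 2, 5, 4)
```

Once the touch location and scale are known, the generator only has to predict the tactile signal for that region rather than for the whole scene.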

From Touch to Vision:
Here, the goal was for the robot to produce a visual image from a tactile signal alone. The model analyzes the information from the touch, detects the shape of the object it is touching and the nature of its material, and then refers back to the reference image to imagine that interaction within the environment.
For example, if during testing the model was fed tactile data from an object such as a shoe, it could produce an image of where that object was most likely being touched.
This capability could be useful for performing tasks where there is no visual data, such as when the lights are off, or when reaching blindly into a box or an unfamiliar area.

Future vision:
The current dataset contains only examples of interactions in a known, controlled environment.
The team hopes to improve on this by collecting data in less structured settings, or by using MIT's new tactile glove to increase the size and diversity of the dataset.
There are still details that can be tricky to infer when switching from one modality to the other, such as telling the color of an object just by touching it, or telling how soft a sofa is without pressing on it.
The researchers say this could be improved by building more robust models that represent a wider distribution of possible outcomes.

In the future, systems of this kind could foster a more harmonious relationship between vision and touch in robotics, particularly for recognizing and grasping objects, and could help integrate robots into assistive or laboratory settings.
"This is the first method that can convincingly translate between visual and touch signals," says Andrew Owens, a researcher at the University of California, Berkeley. "Methods like this have the potential to be very useful for robotics, where you need to answer questions like 'Is this object hard or soft?' or 'If I lift this mug by its handle, how good will my grip be?' This is a very challenging problem, since the signals are so different, and this model has demonstrated great capability."

In your opinion, will we someday be able to shake hands with a robot that recognizes us just from the handshake and a glance?
Share your comments.


