By Markus Schmidt, Swisslog
Machine learning is key to the future of robotic item picking. But how will it overcome the three V’s of volume, velocity and veracity? The key may be in mimicking how babies learn.
In one of our previous posts, we outlined the challenges of robotic picking by highlighting the amazing capabilities of the human hand and its connection to the eyes, which makes it simple for us to reach into a bin and select a specific object from a group of random products.
This is not so easy for robotic pickers. They must break down what to us seems like one task into multiple discrete tasks and decisions. We also shared some of the incredible progress taking place in vision and gripping technology that is allowing robots to close the gap with the human hand-eye combination.
But there is another piece to this puzzle: hand-eye coordination is made possible by the human brain. The brain is an excellent platform for performing complex tasks without the need for explicit programming. One of the key characteristics of the brain is its adaptability. Humans can learn from experience and improve their performance based on that experience. What is even more impressive is that humans can make decisions, based on experience, about how to proceed in situations we haven't encountered before. Can robots be given this same ability?
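To make that idea of "deciding from experience" concrete, here is a deliberately simple toy sketch (not Swisslog's system, and the object features and grip names are hypothetical): a nearest-neighbor learner that, after observing a few handled objects, chooses a grip for an object it has never seen before.

```python
# Toy illustration of learning from experience: a 1-nearest-neighbor
# "grip chooser". After observing (object features -> grip) examples,
# it picks a grip for a novel object by analogy to the closest past case.
# Features are hypothetical: (width in cm, rigidity from 0 to 1).

def nearest_grip(experience, new_object):
    """Return the grip used on the past object most similar to new_object."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    features, grip = min(experience, key=lambda ex: dist(ex[0], new_object))
    return grip

# "Experience": objects the robot has already handled successfully.
experience = [
    ((2.0, 0.9), "pinch"),     # small rigid item -> pinch grip
    ((15.0, 0.8), "wrap"),     # large rigid item -> wrap grip
    ((10.0, 0.1), "suction"),  # soft deformable item -> suction cup
]

# An object never encountered before: medium-sized, fairly rigid.
print(nearest_grip(experience, (13.0, 0.7)))  # -> wrap
```

Real picking systems use far richer models, but the principle is the same: past experience, not hand-coded rules for every product, drives the decision for the unfamiliar case.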