Amazon Robotics builds mobile automation technology that has helped millions of people order goods from Amazon. In a typical distribution center, packages move along a conveyor line or are carried by human-operated machines such as forklifts. Under Amazon Robotics' approach, items instead sit on mobile robotic platforms. As soon as an order enters the database, the software finds the automated robotic unit nearest to the item and dispatches it to pick up the goods. Hundreds of orders are processed daily at the fulfillment centers, and a growing portion of these packages is picked, sorted, and distributed by Amazon's Robin robotic picker.
Robin lifts parcels from a conveyor belt with a suction gripper, scans them, and transfers them to a mobile drive robot that carries them to the appropriate loading unit. The task is particularly challenging because Robin's surroundings change rapidly. Unlike many robotic arms, Robin is not limited to a fixed sequence of programmed movements; it responds to its environment in real time. It recognizes the objects around it - boxes of different sizes, soft packs, envelopes lying on top of one another - and decides which one to pick, all without human assistance.
Amazon's specialists devised an unusual method to train Robin to recognize packages coming down the conveyor belt. Before introducing computer vision algorithms that segment a scene into its elements, the developers first let the model search for objects in an image on its own. Whenever the model detected an object, experts reported back how accurately it had done so.
At first, the team used pre-trained models that could already recognize simple visual features such as edges and planes, then trained Robin to identify the variety of packages it would have to process. To refine the system further, the specialists annotated several thousand images, drawing outlines around the different packages. After all, packaging changes over time, and in some cases even a human can barely tell one package from another. Such images are used to retrain Robin continually, but this is far from the only method the developers use to push the robot's accuracy higher.
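The continual-retraining loop described above - newly annotated images flowing into the training set, with relabeled edge cases replacing their stale labels - can be sketched in a few lines of Python. All names here are hypothetical; Amazon has not published Robin's training code:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LabeledImage:
    """A conveyor image plus its human-drawn package outlines."""
    image_id: str
    annotations: tuple  # e.g. one outline label per package in the frame

@dataclass
class TrainingSet:
    """Accumulates annotated images for periodic retraining runs."""
    examples: dict = field(default_factory=dict)

    def add_batch(self, batch):
        # A newly annotated image replaces any stale labels for the
        # same image, so relabeled edge cases are not duplicated.
        for ex in batch:
            self.examples[ex.image_id] = ex

    def snapshot(self):
        # A frozen copy handed to the next retraining run.
        return list(self.examples.values())

ts = TrainingSet()
ts.add_batch([LabeledImage("img-001", ("box",)),
              LabeledImage("img-002", ("envelope",))])
ts.add_batch([LabeledImage("img-002", ("envelope", "box"))])  # relabeled
print(len(ts.snapshot()))  # 2 unique images, latest labels kept
```

The dictionary keyed by image ID is one simple way to get the "retrain on the latest labels" behavior; a production system would track annotation versions and provenance as well.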
The robot can also report how confident it is in the decisions it makes. Images that Robin flags as insufficiently certain are immediately sent out for annotation and then added to the team's training database.
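This confidence-based routing is a standard active-learning pattern: detections below a confidence cutoff go to human annotators, and the resulting labels feed the training database. A minimal sketch, with a hypothetical threshold value:

```python
CONFIDENCE_THRESHOLD = 0.8  # hypothetical cutoff, not Amazon's actual value

def route_detection(image_id: str, confidence: float,
                    annotation_queue: list) -> str:
    """Accept confident detections; queue uncertain frames for labeling."""
    if confidence < CONFIDENCE_THRESHOLD:
        annotation_queue.append(image_id)  # labeled later, then used in retraining
        return "annotate"
    return "accept"

queue: list = []
decisions = [route_detection(i, c, queue)
             for i, c in [("img-1", 0.95), ("img-2", 0.42), ("img-3", 0.81)]]
print(decisions, queue)  # only the low-confidence img-2 goes to annotators
```

Spending human labeling effort only on the frames the model is unsure about is what lets a small annotation team keep improving a model that processes a huge volume of packages.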
Robin also recognizes when it has made a mistake, for example by dropping a package or inadvertently placing two boxes on a sorting robot, and it will try to correct the error. If it can't, a human is called in to intervene. The robotic arm has so far been deployed in small numbers, but thanks to the team's focus on increasing efficiency, it is approaching large-scale deployment.
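The recover-then-escalate behavior can be summarized as a retry loop with a human fallback. This is an illustrative sketch, not Amazon's control logic:

```python
def handle_pick(attempt_pick, max_retries: int = 2) -> str:
    """Retry a failed pick a few times, then escalate to a human."""
    for _ in range(max_retries + 1):
        if attempt_pick():  # callable returns True on a successful pick
            return "success"
    return "human_intervention"

# Simulated gripper that drops the package on every attempt.
print(handle_pick(lambda: False))  # -> human_intervention

# Simulated gripper that succeeds on the second try.
tries = iter([False, True])
print(handle_pick(lambda: next(tries)))  # -> success
```

Keeping the escalation path explicit means the fleet degrades gracefully: a single failed pick stalls one station for seconds rather than stopping the line.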
Nevertheless, the robot still has a lot to learn. Robin is retrained every couple of days on new data from across the fleet, and the developers hope to update it several times a week in the future.