Initial objective and goals:
Objective
By the end of this project, I will have investigated whether a robot system is able to pick an object, based on a spoken instruction and vision, and then place the picked item onto the speaker's palm within 15 seconds.
Goals:
Investigate if:
The robot arm is able to identify an object based on speech and vision.
The robot is able to automatically grasp objects through vision.
The robot is able to grasp objects of different shapes and sizes.
The robot is able to grasp an object in different orientations.
The robot is able to place the object onto a person's palm.
However, after discussing with my supervisors, I decided to narrow the objective and goals:
Objective
By the end of this project, I will have investigated whether a robot system is able to pick an object through vision and then place the picked item onto a person's palm within 15 seconds.
Goals:
Investigate if:
The robot arm is able to identify an object through vision.
The robot is able to automatically grasp objects through vision.
The robot is able to grasp an object in different orientations.
The robot is able to place the object onto a person's palm.
Literature Summary
I read three papers and wrote a summary of each:
- A Vision-Based Robot Grasping System
- Robotic Object Recognition and Grasping with a Natural Background
- Vision-Based Robotic Arm Control Algorithm using Deep Reinforcement Learning for Autonomous Objects Grasping
Title: A vision-based robot grasping system
Link: https://ieeexplore.ieee.org/abstract/document/9745523
Objective/Aim:
Increase grasp pose detection accuracy for a variety of everyday household objects using visual sensing alone.
Setup:
A Franka Panda robot arm equipped with a parallel gripper. An Intel RealSense D435 camera is attached to the arm, just above the gripper.
Method:
For the grasp detector, the system uses a densely connected Feature Pyramid Network (FPN) feature extractor.
For the robot system, the vision measurement algorithm generates the grasp pose directly from a single input modality: the depth image.
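The paper's detector is a trained FPN, but the core idea of mapping a depth image straight to a planar grasp pose can be illustrated with a much simpler classical stand-in. The table-plane threshold and the PCA centroid-plus-orientation heuristic below are my own illustrative assumptions, not the paper's method:

```python
import numpy as np

def grasp_from_depth(depth, table_depth, margin=0.005):
    """Estimate a top-down grasp pose (x, y, angle) from a depth image.

    Pixels closer to the camera than the table plane (by `margin` metres)
    are treated as the object; the grasp centre is the object centroid,
    and the gripper closes perpendicular to the object's principal axis.
    """
    mask = depth < (table_depth - margin)      # object = raised above table
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                            # nothing to grasp
    cx, cy = xs.mean(), ys.mean()              # grasp centre (pixels)
    pts = np.stack([xs - cx, ys - cy])         # centred 2xN point set
    cov = pts @ pts.T / xs.size                # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]     # object's long axis
    angle = np.arctan2(major[1], major[0])
    return cx, cy, angle + np.pi / 2           # close across the long axis

# a synthetic scene: a wide flat bar lying on a table 0.5 m away
depth = np.full((100, 100), 0.50)
depth[40:60, 20:80] = 0.45
pose = grasp_from_depth(depth, table_depth=0.50)
```

For the horizontal bar above, the centroid lands mid-bar and the grasp angle comes out vertical, i.e. across the bar's narrow dimension, which is what a parallel gripper needs.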
Computer system/software:
TensorFlow deep learning library, written in Python. PC running Ubuntu 18.04, equipped with an NVIDIA GeForce GTX 1080 Ti GPU, an Intel Core i7-6700K CPU @ 4.00 GHz × 8, and 32 GB of memory.
Evaluation:
Two public datasets were used for validation: Cornell Grasp Dataset and Jacquard Dataset.
Three types of experiments were done: i) grasping a single object at different poses; ii) grasping 51 different objects not included in the public datasets; iii) grasping multiple objects at one time.
Conclusion:
The three experiments showed that the model is able to grasp a wide range of everyday objects in various poses.
Future works/considerations:
Incorporate tactile sensing into the grasping system to achieve a higher grasping success rate.
Incorporate an RGB-D visual servoing controller into the parallel-gripper grasp system to eliminate execution error.
Title: Vision-Based Robotic Arm Control Algorithm Using Deep Reinforcement Learning for Autonomous Objects Grasping
Link: https://www.mdpi.com/2076-3417/11/17/7917
Objective/Aim:
To learn the grasping task of reaching the intended object using deep reinforcement learning.
To compute the robot arm kinematics to grasp a specific object.
Setup:
A 5-DOF robotic arm equipped with an 8 MP camera.
Method:
Use YOLOv5 for object detection and localisation
Use back-projection to extract the object's 3D position
Use inverse kinematics to compute the joint angles at the detected position
Employ the Deep Deterministic Policy Gradient (DDPG) algorithm to teach the arm to autonomously reach the target object
Computer System/Software:
Intel Core i7 8th generation processor CPU, 16GB RAM
GPU - NVIDIA GeForce GTX 960
Ubuntu operating system
Anaconda Python distribution with Jupyter, TensorFlow, Keras and Matplotlib
Evaluation:
Evaluate the ability of the 5-DOF robot arm to grasp a target object.
Train the model for 400 episodes and record accuracy and error results.
Conclusion:
Despite some error, every joint angle can be calculated and the end-effector can reach the target location.
The decreasing error range across training episodes showed that the reinforcement learning algorithm can reach a target object using the robot arm's inverse kinematics.
Future works/considerations:
Expand the model by integrating the pick-and-place task.
Title: Robotic Object Recognition and Grasping with a Natural Background
Link: https://journals.sagepub.com/doi/full/10.1177/1729881420921102
Objective/Aim:
Introduce a novel, efficient grasp synthesis method that can be used for closed-loop robotic grasping.
Setup:
Computer connected to robot control cabinet
6-DOF SD700E Industrial Robot Arm with EFG20 electric gripper
Logitech C310 camera mounted on the end flange of the arm
Method:
Edge detection, superpixel segmentation, shape matching
Using the relative distance between the object centroid and the gripper, the algorithm guides the robot to move the gripper to the object and form a proper grasping posture to complete the task.
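The relative-distance guidance can be sketched as a simple proportional servo loop in image space: each step moves the gripper a fraction of the remaining offset to the object centroid until it is close enough to grasp. The gain, tolerance, and pixel coordinates below are illustrative assumptions, not values from the paper:

```python
def servo_step(gripper_px, object_px, gain=0.3, tol=2.0):
    """One step of image-space servoing toward the object centroid.

    Returns the new gripper position and a flag that is True once the
    remaining pixel distance is within `tol` (close enough to grasp).
    """
    dx = object_px[0] - gripper_px[0]
    dy = object_px[1] - gripper_px[1]
    if (dx * dx + dy * dy) ** 0.5 <= tol:
        return gripper_px, True                 # converged: close gripper
    # move a fraction `gain` of the remaining offset each step
    return (gripper_px[0] + gain * dx, gripper_px[1] + gain * dy), False

# drive the gripper from the image origin toward an assumed object centroid
pos, done = (0.0, 0.0), False
while not done:
    pos, done = servo_step(pos, (120.0, 80.0))
```

Because each step shrinks the remaining distance by a fixed factor, the loop converges geometrically, which is the appeal of this kind of closed-loop guidance over a single open-loop move.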
Computer System/Software:
-
Evaluation:
Changing the state of the table
Changing the relative order of objects
Changing the postures and positions
Conclusion: -
Future works/considerations: -