This week, I continued writing literature summaries for the following papers:
- A Mobile Robotic Arm Grasping System with Autonomous Navigation and Object Detection
- Robotic Arm Grasping and Placing Using Edge Visual Detection System
Title: A Mobile Robotic Arm Grasping System with Autonomous Navigation and Object Detection
Objective/Aim:
Develop a mobile grasping system for a robotic arm.
Method:
First, the YOLOv4 algorithm is used to identify and locate the individual objects. Second, the output bounding box is used as input to the GrabCut algorithm, which segments each object from the background so that the grasping orientation can be calculated.
Third, a conversion between coordinate systems is performed to obtain the positions of the objects, and this information is sent to the robotic arm to complete the grasping task on the Robot Operating System (ROS). A minimal sketch of the first two steps is shown below.
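The following is a small Python/OpenCV sketch of the detection-then-segmentation idea, not the authors' code: the YOLOv4 bounding box initialises GrabCut, and a grasp angle is read off the segmented blob. The file names, thresholds, and the minimum-area-rectangle step are my assumptions.

```python
import cv2
import numpy as np

def detect_and_orient(frame):
    # YOLOv4 via OpenCV's DNN module (assumed weights/config file names).
    model = cv2.dnn_DetectionModel("yolov4.weights", "yolov4.cfg")
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)
    classes, scores, boxes = model.detect(frame, confThreshold=0.5)

    results = []
    for cls, box in zip(np.ravel(classes), boxes):
        x, y, w, h = box
        # GrabCut initialised with the detector's bounding box.
        mask = np.zeros(frame.shape[:2], np.uint8)
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(frame, mask, (x, y, w, h), bgd, fgd, 5,
                    cv2.GC_INIT_WITH_RECT)
        fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                      255, 0).astype(np.uint8)
        # One plausible grasp orientation: the angle of the blob's
        # minimum-area rectangle (an assumption, not the paper's formula).
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            rect = cv2.minAreaRect(max(contours, key=cv2.contourArea))
            results.append((int(cls), rect[0], rect[2]))  # class, centre, angle
    return results
```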
Process flow:
1. The robot performs SLAM with its lidar to navigate to the desired position.
2. The system turns on the camera, identifies and locates the objects, and computes the pose of each object.
3. The user enters the object category to be grasped, and the system judges whether that category is present before grasping. (A coordinate-transform sketch follows this list.)
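Since the grasp target must end up in the robot's frame, here is a small hedged sketch of the coordinate conversion using ROS tf2. The frame names ("camera_link", "base_link") and the point coordinates are assumptions, not values from the paper.

```python
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers transforms for geometry_msgs types
from geometry_msgs.msg import PointStamped

rospy.init_node("object_locator")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)  # keeps the buffer filled

pt = PointStamped()
pt.header.frame_id = "camera_link"
pt.header.stamp = rospy.Time(0)  # use the latest available transform
pt.point.x, pt.point.y, pt.point.z = 0.1, 0.0, 0.6  # from the depth camera

# Transform the detected object's position into the robot base frame.
pt_base = buf.transform(pt, "base_link", rospy.Duration(1.0))
rospy.loginfo("Object in base frame: %s", pt_base.point)
```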
Computer System/Software:
The system is implemented on Ubuntu 16.04 on a Lenovo machine with a 2.6 GHz processor, an RTX 2060 GPU, 16 GB of RAM, and 8 GB of video memory.
Setup:
The robot is equipped with a Hokuyo lidar and a RealSense D435 camera.
Evaluation:
The experiment is performed on 7 objects of different colours and shapes, placed in the scene at different positions and angles. Three measurements are recorded: the number of grasp attempts, the number of successful grasps, and the success rate.
Conclusion:
The results show that the proposed method can navigate the robot to the target position and grasp the specified object.
Future works/considerations:
Increase the accuracy of the system by improving the moving object extraction algorithm.
Title: Robotic Arm Grasping and Placing Using Edge Visual Detection System
Link: https://ieeexplore.ieee.org/document/5899202
Objective/Aim:
To develop an object visual detection system that can be applied to robotic arm grasping and placing.
Method:
Accurately measure the relative distance between the object and the robot arm using an edge-detection algorithm with a camera.
Image processing and shape matching are used to identify the object.
Placement of the object is done using the same image-processing method; a sketch of the idea follows.
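A rough Python/OpenCV sketch of the edge-detection-plus-shape-matching idea is below (the paper's own implementation was in Borland C++ Builder 6). The focal length, ball diameter, and Canny thresholds are assumed calibration values, not figures from the paper.

```python
import cv2

FOCAL_PX = 300.0    # assumed focal length in pixels for the 320x240 webcam
BALL_DIAM_M = 0.06  # assumed real ball diameter in metres

def find_ball(frame, template_contour):
    # Edge detection, then contour extraction from the edge map.
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        # Hu-moment shape matching: lower score means a closer shape match.
        score = cv2.matchShapes(c, template_contour, cv2.CONTOURS_MATCH_I1, 0)
        if best is None or score < best[0]:
            best = (score, c)
    if best is None:
        return None
    (cx, cy), radius_px = cv2.minEnclosingCircle(best[1])
    # Pinhole model: distance = focal_length * real_size / pixel_size.
    distance_m = FOCAL_PX * BALL_DIAM_M / (2.0 * radius_px)
    return (cx, cy), distance_m
```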
Computer System/Software:
Borland C++ Builder 6 was used for the image processing.
Setup:
The robotic arm in the study uses seven RX-64 servomotors and a linear moving rail.
A webcam with a resolution of 320 × 240 pixels is installed above the arm's clip.
Evaluation:
Grasping experiments were carried out with balls of four colours (red, yellow, green, and blue), using one, two, and three balls of each colour in turn, in two environments: indoors and in a hall.
Conclusion:
The system accomplished three tasks: "find the ball", "clip the ball", and "put the ball".
More importantly, it is a streamlined and inexpensive system.
Future works/considerations:
Focus on finding, grasping and placement of objects in different shapes and colors.
I also worked on literature summaries for the following papers:
- Predicting Stable Configurations for Semantic Placement of Novel Objects (https://arxiv.org/abs/2108.12062)
- ReorientBot: Learning Object Reorientation for Specific-Posed Placement (https://arxiv.org/abs/2202.11092)
Title: ReorientBot: Learning Object Reorientation for Specific-Posed Placement
Link: https://ieeexplore.ieee.org/abstract/document/9811881
Objective/Aim:
To present a robotic system that can rearrange objects to a specific goal state, including reorientation and regrasping for final placement.
Method:
The system runs detection, pose estimation, and motion planning to rearrange objects (a sketch of step ii follows the list):
i) 6D pose estimation and volumetric reconstruction
ii) Motion waypoint selection that pairs start and end waypoints via learned filtering
iii) Trajectory generation by motion planning using the selected waypoints.
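As a very rough illustration of step (ii) only: a learned scorer ranks candidate (start, end) waypoint pairs, and the best pair is handed to the motion planner. The feature encoding and network architecture below are illustrative assumptions, not the paper's model, which conditions on richer scene and grasp information.

```python
import torch
import torch.nn as nn

class WaypointPairScorer(nn.Module):
    def __init__(self, pose_dim=7):  # position (3) + quaternion (4)
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * pose_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted feasibility/success score
        )

    def forward(self, start_poses, end_poses):
        pairs = torch.cat([start_poses, end_poses], dim=-1)
        return self.net(pairs).squeeze(-1)

scorer = WaypointPairScorer()
starts = torch.randn(64, 7)  # candidate pre-reorientation waypoints
ends = torch.randn(64, 7)    # candidate post-reorientation waypoints
with torch.no_grad():
    scores = scorer(starts, ends)
best = torch.argmax(scores)  # pair passed on to motion planning (step iii)
```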
Computer System/Software:
ROS and PyTorch are used to implement the learned models, with PyBullet as the physics engine to simulate the behaviour of objects; a minimal usage sketch follows.
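For context, here is a minimal sketch of how PyBullet can simulate object behaviour, e.g. checking whether a placed object stays put. The URDF assets and settling time are placeholders, not the paper's setup.

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # headless physics, no GUI
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")
obj = p.loadURDF("duck_vhacd.urdf", basePosition=[0, 0, 0.1])

start_pos, _ = p.getBasePositionAndOrientation(obj)
for _ in range(240):  # simulate one second at the default 240 Hz
    p.stepSimulation()
end_pos, _ = p.getBasePositionAndOrientation(obj)

drift = sum((a - b) ** 2 for a, b in zip(start_pos, end_pos)) ** 0.5
print("object drifted %.3f m after settling" % drift)
```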
Setup:
Franka Emika Panda robot, with a RealSense D435 camera mounted on the robotic arm.
Two types of suction gripper: I-shape and L-shape.
Evaluation:
Evaluation was done in both simulation and the real world. Six large/medium-sized objects were used in both sets of experiments: a drill, a cracker box, a sugar box, a mustard bottle, a pitcher, and detergent.
Conclusion:
The authors' system improves both efficiency and success rate, and demonstrates dynamic reorientation for significant rotations and precise placement in various target configurations.
Future works/considerations:
Explore combining learned models with traditional motion planning.