11/10/2024
Submitted report, codes and demo video:
This week was spent reading journal articles (as references) and writing the report.
The referencing style chosen is IEEE.
22/09/2024
Discovered why the end effector moves erratically sometimes during hand detection. This happens when the system somehow captures the hand coordinate as x: 0.00, y: 0.00 and z: 0.00.
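A simple guard can reject these degenerate (0, 0, 0) readings before any move command is sent — a minimal sketch, assuming the detector reports a zeroed coordinate when it loses the hand (the `move_to` call mentioned in the comment is hypothetical):

```python
def is_valid_hand_coord(x, y, z, eps=1e-6):
    """Reject the degenerate (0, 0, 0) reading that the hand detector
    occasionally emits when it loses track of the hand."""
    return abs(x) > eps or abs(y) > eps or abs(z) > eps

# Usage: skip the motion command for a bad frame instead of sending
# the end effector towards the origin.
# if is_valid_hand_coord(x, y, z):
#     move_to(x, y, z)   # hypothetical robot command
```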
15/9/2024
Spent the whole day adjusting the x, y, z values and integrating the object detection module with the arm motion, so that the arm moves to the detected object.
During one of the experiments, the end effector twisted in an unexpected way. The camera cable got yanked out and the tip broke, ending my session prematurely.
Thought of ways to avoid this from happening again:
1. Unplug cable while testing robot's movement
2. Loosen the cable/ give it more leeway.
16/9/2024
Managed to complete the entire process of object detection and picking up the object (single object). Based on my observation, when the object is nearer to the camera, the UR10 is able to pick it up (1st position to 5th position), but the further it is from the camera, the more the accuracy drops.
I had a discussion with my supervisor about testing and evaluation. After further deliberation, I decided to test the accuracy in the following manner:
Compare the object position based on the camera against the object position based on the robot, and measure the difference between the actual position and the position obtained by the robot.
At the moment, I am only working on the vertical position, not the horizontal position.
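The evaluation described above can be scripted as a per-axis difference — a minimal sketch (the positions below are illustrative, not measured data):

```python
def position_error(actual, obtained):
    """Per-axis difference (in the robot's units, e.g. metres) between
    the actual object position and the one the robot derived."""
    return tuple(a - o for a, o in zip(actual, obtained))

actual = (0.50, 0.20, 0.10)    # illustrative ground-truth (x, y, z)
obtained = (0.48, 0.23, 0.10)  # illustrative robot-derived position
err = position_error(actual, obtained)
```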
18/9/2024
Worked on the grippers. The system can now move to the object's location, pick it up, and place it onto a person's palm. However, placement accuracy seems to depend on whether the whole palm is inside the camera frame. When it is not, the x and y values become completely inaccurate (sometimes even pointing in the opposite direction from the palm).
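One way to avoid these wildly wrong x/y values is to act only when the palm's bounding box lies fully inside the camera frame — a minimal sketch, assuming a pixel-space box of the form (x_min, y_min, x_max, y_max) and an arbitrary frame size:

```python
def box_fully_in_frame(box, frame_w, frame_h, margin=0):
    """Return True only if the whole bounding box (plus an optional
    safety margin, in pixels) lies inside the image, so partially
    visible palms are skipped instead of producing bogus coordinates."""
    x_min, y_min, x_max, y_max = box
    return (x_min >= margin and y_min >= margin
            and x_max <= frame_w - margin and y_max <= frame_h - margin)
```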
20/9/2024
Today, I worked on getting the gripper to rotate according to the orientation of the object. I have tried getting the system to recognise the orientation of the object and then calculating the rx, ry and rz based on the camera coordinates, but the end effector ended up twisting away from the object.
Therefore, I came up with a simpler solution: For now, the system will only determine if the object was placed horizontally or vertically (based on the bounding box) and then the gripper would rotate at a fixed value depending on the orientation of the object.
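The horizontal/vertical heuristic above can be sketched as follows (the fixed rotation values are placeholders, not the ones actually used on the UR10):

```python
def gripper_rz_for_box(width, height, rz_horizontal=0.0, rz_vertical=1.571):
    """Classify the object's bounding box as horizontal (wider than tall)
    or vertical, and return a fixed wrist rotation (radians) for each case.
    1.571 rad is roughly 90 degrees; real values depend on the robot setup."""
    return rz_horizontal if width >= height else rz_vertical
```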
21/9/2024
For rotating the object to be placed on the palm, I discovered that I could use the MediaPipe Hands library to estimate 3D hand poses.
(Ref: https://mediapipe.readthedocs.io/en/latest/solutions/hands.html)
The idea was to have the object rotate 90 degrees away from the middle finger. However, this meant having dynamic rx, ry and rz values for the end effector. The result was as shown below.
Since I am unable to rectify the issue of having dynamic rx, ry and rz without the robot flipping in the above manner, and recognising that there is not enough time to fully understand and explore the MediaPipe library, I decided to forgo using this method.
Instead, the same method used for object detection and orientation was applied to the palm: based on the orientation of the bounding box (rectangle), the end effector rotates by 90 degrees (either horizontal or vertical).
Still encountering issues with calibration. The robot is still not moving to the object's location. Attempting to fix the issues by reading up and trying different ways. Here are some of the blogs and articles referred to, and excerpts worth paying attention to:
https://blog.zivid.com/the-practical-guide-to-3d-hand-eye-calibration-with-zivid-one
https://stackoverflow.com/questions/67072289/eye-in-hand-calibration-opencv
"I want to calibrate the camera and find the transformation from camera to end-effector. I have already calibrated the camera using this OpenCV guide, Camera Calibration, with a checkerboard where the undistorted images are obtained.
My problem is about finding the transformation from camera to end-effector. I can see that OpenCV has a function, calibrateHandEye(), which supposely should achieve this. I already have the "gripper2base" vectors and are missing the "target2cam" vectors. Should this be based on the size of the checkerboard squares or what am I missing? Any guidance in the right direction will be appreciated."
The checkerboard was downloaded from the following site:
https://medium.com/@chaitalibh.cb/camera-calibration-with-checkerboard-54f93af742a0
Consulted with the supervisor on the issues faced, i.e. even after calibrating, the robot arm moves away from the object rather than towards it. After the conversation, I realised that I had missed a step: determining the distance of the object from the robot base, which should be done after object detection.
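The missing step — expressing the detected object's camera-frame position in the robot-base frame — amounts to chaining homogeneous transforms (base→gripper from the robot, gripper→camera from the hand-eye calibration). A minimal pure-Python sketch, using illustrative pure-translation matrices rather than the real calibrated ones:

```python
def mat_mul(a, b):
    """Multiply two 4x4 homogeneous transforms (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(t, p):
    """Apply a 4x4 homogeneous transform t to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(t[i][k] * v[k] for k in range(4)) for i in range(3))

# Illustrative transforms: pure translations, for clarity only.
base_T_gripper = [[1, 0, 0, 0.4], [0, 1, 0, 0.0], [0, 0, 1, 0.5], [0, 0, 0, 1]]
gripper_T_cam  = [[1, 0, 0, 0.0], [0, 1, 0, 0.05], [0, 0, 1, 0.1], [0, 0, 0, 1]]

base_T_cam = mat_mul(base_T_gripper, gripper_T_cam)
# Object detected 0.3 m in front of the camera, expressed in the base frame.
p_base = apply(base_T_cam, (0.0, 0.0, 0.3))
```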
After fixing the issue, the robot arm finally moved towards the object. However, it was not accurate in terms of the x, y and z location. I then took a few readings to see if the difference was consistent - it wasn't at all.
The next couple of days were spent trying to get a consistent, if not accurate reading of the x, y, and z position of the detected object.
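Checking whether the offset is consistent only takes a few lines over repeated readings — a sketch with illustrative numbers, not the actual measurements:

```python
import statistics

# Illustrative repeated x-offset readings (metres) between the actual
# and robot-derived object position across several trials.
x_offsets = [0.021, 0.034, -0.008, 0.052, 0.017]

mean_off = statistics.mean(x_offsets)
spread = statistics.stdev(x_offsets)
# A small stdev would indicate a consistent (and therefore correctable)
# offset; a large one means a constant shift cannot fix the error.
```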
14/9/2024
Discovered that the poses captured at the beginning affect the accuracy and consistency of the detected object's x, y, z position.
Finally decided to use the transformation matrix computed from 20 poses for the pick-and-place project. Although there were differences between the various obtained positions and adjusted positions, it was the most consistent of the different sets of poses I have tried.
The table above shows the obtained position, the adjusted position needed to reach the object, and the differences. The z and y values differ a lot, depending on the position of the object in the camera frame. Nevertheless, these are the best results I have obtained so far.
The videos below show how data is captured for 20 different poses for calibration and for calculating the transformation matrix: