Friday, October 11, 2024

Week 06/10/2024 - 12/10/2024

11/10/2024

Submitted report, codes and demo video:


Updated Gantt Chart:


Reflection:
When it comes to technical research and experiments, things do not always go as planned. A lot of troubleshooting was involved just to get the system to work. 










Tuesday, October 8, 2024

Week 29/09/2024 - 05/10/2024

This week was spent reading journals (as reference) and writing the report. 


The referencing method chosen is the IEEE style. 




Monday, September 23, 2024

Week 22/09/2024 - 28/09/2024

22/09/2024
Discovered why the end effector sometimes moves erratically during hand detection: it happens when the system somehow captures the hand coordinate as x: 0.00, y: 0.00 and z: 0.00.



After adding a condition that rejects readings where the x, y and z values are all 0, the issue was resolved. 
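A minimal sketch of that guard condition, assuming the hand coordinate arrives as three floats (function names here are mine, not from the actual code):

```python
def is_valid_hand_coordinate(x: float, y: float, z: float) -> bool:
    """Reject the spurious (0, 0, 0) reading that made the end effector
    move erratically during hand detection."""
    return not (x == 0.0 and y == 0.0 and z == 0.0)


def on_hand_detected(x: float, y: float, z: float):
    """Only act on coordinates that pass the guard; otherwise skip the frame."""
    if not is_valid_hand_coordinate(x, y, z):
        return None  # keep the last known target instead
    return (x, y, z)
```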

I started testing the performance of the system by checking whether it can complete the entire process of picking and placing. Different comparisons can be made with the collected test data, for example the percentage of successful process completions, successful rotations of the end effector, etc.


Below is a video of a successful completion of the pick and place process:




Below is a video of the end effector delivering the object to the palm; however, as it was placed at the fingertips, the object fell off:




Another test was done to check the accuracy of the end effector moving to the object during the pick process. 




A pointed apparatus was attached to the end effector. The system was then tested to check whether the end effector could move to the centre of the object. 




The camera position, the obtained position, the adjusted position of the end effector and the actual position of the object were then recorded to compare the distance between the adjusted position and the actual position. As the z value is fixed, only the x and y values were compared.
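Since z is fixed, the positional error reduces to a 2-D distance between the adjusted and actual positions. A small sketch of that comparison (variable names are mine):

```python
import math


def xy_error(adjusted: tuple, actual: tuple) -> float:
    """Euclidean distance in the x-y plane between the adjusted
    end-effector position and the actual object position (z is fixed,
    so it is excluded from the comparison)."""
    dx = adjusted[0] - actual[0]
    dy = adjusted[1] - actual[1]
    return math.hypot(dx, dy)
```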




23/9/2024
Drafted the journal report now that the test data has been collected.

24/9/2024
Continued drafting the report. Analysed the test data and generated graphs and charts.

3D Scatter Plot



Showed supervisor the draft.

25/9/2024
Continued writing the report in full. 









Monday, September 16, 2024

Week 15/09/2024 - 21/09/2024

 

15/9/2024

Spent the whole day adjusting the x, y, z values and combining the object detection module with the code that moves the arm to the detected object. 


During one of the experiments, the end effector twisted in an unexpected way. The camera cable got yanked out and the tip broke, ending my session prematurely. 


Thought of ways to avoid this from happening again: 

1. Unplug the cable while testing the robot's movement

2. Loosen the cable / give it more slack. 

16/9/2024

Managed to complete the entire process of object detection and picking up the object (single object). Based on my observation, when the object is nearer to the camera, the UR10 is able to pick it up (1st position to 5th position), but the further the object is from the camera, the more the accuracy drops.




I had a discussion with my supervisor about testing and evaluating. After further deliberation, I have decided to test the accuracy in the following manner:

Object position based on the camera vs object position based on the robot, and the difference between the actual position and the position obtained by the robot. 

At the moment, I am just working on vertical position and not horizontal position. 

18/9/2024

Worked on the grippers. The system can now move to the object's location, pick it up and place it on a person's palm. However, accuracy in placing it on a person's palm seems to depend on whether the whole palm is in the camera frame. When it isn't, the x and y values become completely inaccurate (even pointing in the opposite direction of the palm). 
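One way to guard against those inaccurate x and y values would be to reject any detection whose bounding box touches the edge of the frame, on the assumption that the palm is then cut off. A sketch of that idea (not the code actually used):

```python
def palm_fully_in_frame(box, frame_w, frame_h, margin=2):
    """box = (x_min, y_min, x_max, y_max) in pixels.
    Returns False if the palm's bounding box touches the frame border,
    i.e. the palm is likely cut off and its x/y estimate untrustworthy."""
    x_min, y_min, x_max, y_max = box
    return (x_min >= margin and y_min >= margin
            and x_max <= frame_w - margin and y_max <= frame_h - margin)
```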



20/9/2024

Today, I worked on getting the gripper to rotate according to the orientation of the object. I tried getting the system to recognise the orientation of the object and then calculate rx, ry and rz based on the camera coordinates, but the end effector ended up twisting away from the object. 

Therefore, I came up with a simpler solution: for now, the system will only determine whether the object was placed horizontally or vertically (based on the bounding box), and the gripper will then rotate by a fixed value depending on the orientation of the object. 
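The simpler solution boils down to comparing the bounding box's width and height and mapping the result to one of two fixed wrist rotations. A sketch of that logic (the rx, ry, rz values below are placeholders, not the values used on the UR10):

```python
def classify_orientation(box):
    """box = (x_min, y_min, x_max, y_max). Wider than tall -> horizontal."""
    x_min, y_min, x_max, y_max = box
    width, height = x_max - x_min, y_max - y_min
    return "horizontal" if width >= height else "vertical"


# Fixed end-effector rotations per orientation (placeholder values).
FIXED_ROTATIONS = {
    "horizontal": (0.0, 3.14, 0.0),
    "vertical":   (0.0, 3.14, 1.57),
}


def gripper_rotation(box):
    """Pick the fixed rotation matching the detected object orientation."""
    return FIXED_ROTATIONS[classify_orientation(box)]
```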



21/9/2024

For the rotation of the object to be placed on the palm, I discovered that I could use the MediaPipe Hands library to estimate 3D hand poses.

(Ref: https://mediapipe.readthedocs.io/en/latest/solutions/hands.html)


The idea was to have the object rotate 90 degrees away from the middle finger. However, it meant having a dynamic rx, ry and rz for the end effector. The result was as below:



Since I am unable to rectify the issue of having dynamic rx, ry and rz without the robot flipping in the above manner, and recognising that there is not enough time to fully understand and explore the MediaPipe library, I decided to forgo using this method. 
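For the record, the abandoned landmark idea reduces to geometry on two of MediaPipe Hands' landmarks: the wrist (index 0) and the middle-finger MCP (index 9), both given in normalized image coordinates. Computing the in-plane angle itself is straightforward; mapping it to a stable dynamic rx/ry/rz was the part that failed. A sketch of the geometry only:

```python
import math


def palm_rotation_deg(wrist_xy, middle_mcp_xy):
    """Angle of the wrist -> middle-finger-MCP direction in the image
    plane, plus 90 degrees so the object is rotated away from the middle
    finger. Inputs are (x, y) pairs, e.g. normalized MediaPipe landmarks."""
    dx = middle_mcp_xy[0] - wrist_xy[0]
    dy = middle_mcp_xy[1] - wrist_xy[1]
    finger_angle = math.degrees(math.atan2(dy, dx))
    return (finger_angle + 90.0) % 360.0
```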

Instead, the same method used for the object's detection and orientation was applied to the palm: by recognising the orientation of the bounding box (rectangle), the end effector rotates by 90 degrees (either horizontal or vertical). 


Started the data collection for analysis by testing the number of times the system is able to complete the entire process of picking up and placing the object. 10 different coordinates and 2 types of orientation (horizontal and vertical) were tested for both the object and the palm:




The number of tests will be increased to 20 tomorrow to provide a better understanding of the system.
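The completion data can be summarized as a success percentage per orientation. A sketch of that tally (the example results below are made up, not the data actually collected):

```python
def success_rates(results):
    """results: list of (orientation, succeeded) pairs, e.g.
    [("horizontal", True), ("vertical", False), ...].
    Returns {orientation: percentage of successful completions}."""
    totals, successes = {}, {}
    for orientation, ok in results:
        totals[orientation] = totals.get(orientation, 0) + 1
        if ok:
            successes[orientation] = successes.get(orientation, 0) + 1
    return {o: 100.0 * successes.get(o, 0) / totals[o] for o in totals}
```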




Saturday, September 14, 2024

Week 08/09/2024 - 14/09/2024


Still encountering issues with calibration. The robot is still not moving to the object's location. Attempting to fix the issues by reading up and trying different approaches. Here are some of the blogs and articles referred to, and excerpts worth paying attention to:


https://blog.zivid.com/the-practical-guide-to-3d-hand-eye-calibration-with-zivid-one 



https://stackoverflow.com/questions/67072289/eye-in-hand-calibration-opencv

"I want to calibrate the camera and find the transformation from camera to end-effector. I have already calibrated the camera using this OpenCV guide, Camera Calibration, with a checkerboard where the undistorted images are obtained.

My problem is about finding the transformation from camera to end-effector. I can see that OpenCV has a function, calibrateHandEye(), which supposely should achieve this. I already have the "gripper2base" vectors and are missing the "target2cam" vectors. Should this be based on the size of the checkerboard squares or what am I missing? Any guidance in the right direction will be appreciated."


The checkerboard was downloaded from the following site:

https://medium.com/@chaitalibh.cb/camera-calibration-with-checkerboard-54f93af742a0


Consulted with the supervisor on the issues faced, i.e. even after calibrating, the robot arm moves away from the object and not towards it. After the conversation, I realised that I had missed a step: determining the distance of the object from the robot base, which should be done after object detection.
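The missing step amounts to one homogeneous transform: expressing the detected object's camera-frame position in the robot-base frame using the camera-to-base matrix from calibration. A sketch of applying it (the 4x4 matrix in the test is illustrative, not the calibrated one):

```python
def camera_to_base(T_base_cam, p_cam):
    """T_base_cam: 4x4 homogeneous transform (list of 4 rows) from the
    camera frame to the robot base frame, obtained from calibration.
    p_cam: (x, y, z) of the detected object in the camera frame.
    Returns the object's (x, y, z) in the robot base frame."""
    x, y, z = p_cam
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(T_base_cam[r][c] * p[c] for c in range(4))
                 for r in range(3))
```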


After fixing the issue, the robot arm finally moved towards the object. However, it was not accurate in terms of the x, y and z location. I then took a few readings to see if the difference was consistent - it wasn't at all. 


The next couple of days were spent trying to get a consistent, if not accurate reading of the x, y, and z position of the detected object. 


14/9/2024

Discovered that the data (poses) captured at the beginning affects the accuracy and consistency of the detected object's x, y, z position. 


Finally decided to use the 20-pose transformation matrix for the pick and place project. Although there were differences between the various obtained positions and adjusted positions, it was the most consistent of the different sets of poses I tried. 




The table above shows the obtained position, the adjusted position to reach the object, and the differences. The z and y values differ a lot, depending on the position of the object in the camera frame. Nevertheless, these are the best results I have obtained so far. 


The videos below show how data is captured for 20 different poses, used for calibration and for calculating the transformation matrix:









