Saturday, September 14, 2024

Week 08/09/2024 - 14/09/2024


Still encountering issues with calibration: the robot is still not moving to the object's location. I am attempting to fix this by reading up and trying different approaches. Here are some of the blogs and articles I referred to, with excerpts worth paying attention to:


https://blog.zivid.com/the-practical-guide-to-3d-hand-eye-calibration-with-zivid-one 



https://stackoverflow.com/questions/67072289/eye-in-hand-calibration-opencv

"I want to calibrate the camera and find the transformation from camera to end-effector. I have already calibrated the camera using this OpenCV guide, Camera Calibration, with a checkerboard where the undistorted images are obtained.

My problem is about finding the transformation from camera to end-effector. I can see that OpenCV has a function, calibrateHandEye(), which supposedly should achieve this. I already have the "gripper2base" vectors and am missing the "target2cam" vectors. Should this be based on the size of the checkerboard squares or what am I missing? Any guidance in the right direction will be appreciated."


The checkerboard was downloaded from the following site:

https://medium.com/@chaitalibh.cb/camera-calibration-with-checkerboard-54f93af742a0


Consulted my supervisor on the issues faced, i.e. even after calibrating, the robot arm moves away from the object rather than towards it. After the conversation, I realised that I had missed a step: determining the distance of the object from the robot base, which should be done after object detection.
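That missed step amounts to chaining transforms: the detected position is in the camera frame, and it has to be mapped through the hand-eye result and the robot's current pose to get coordinates relative to the base. A minimal sketch of that chain, where the 4x4 matrices and the numbers are made up for illustration:

```python
# Sketch: express a detected object's camera-frame coordinates in the
# robot-base frame. T_gripper2base comes from the robot's current pose,
# T_cam2gripper from the hand-eye calibration.
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.ravel(t)
    return T

def object_in_base(p_cam, T_gripper2base, T_cam2gripper):
    """Map a 3D point from the camera frame to the robot-base frame."""
    p = np.append(np.asarray(p_cam, float), 1.0)   # homogeneous point
    return (T_gripper2base @ T_cam2gripper @ p)[:3]

# Example with identity rotations and simple offsets (made-up numbers):
T_cam2gripper = to_homogeneous(np.eye(3), [0.0, 0.0, 0.05])
T_gripper2base = to_homogeneous(np.eye(3), [0.3, 0.0, 0.4])
p_base = object_in_base([0.0, 0.0, 0.5], T_gripper2base, T_cam2gripper)
# the object ends up at (0.3, 0.0, 0.95) in the base frame
```

Skipping this mapping explains the earlier symptom: a point expressed in the camera frame but commanded in the base frame sends the arm off in an unrelated direction.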


After fixing the issue, the robot arm finally moved towards the object. However, the x, y and z position it reached was not accurate. I then took a few readings to see if the error was at least consistent - it wasn't at all. 


The next couple of days were spent trying to get a consistent, if not accurate, reading of the x, y, and z position of the detected object. 
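The consistency check above can be done with a few lines of NumPy: take repeated readings of a stationary object and look at the per-axis spread. The readings array below is made up for illustration, not my actual data:

```python
# Sketch: quantify how consistent repeated x, y, z detections are.
import numpy as np

readings = np.array([            # repeated detections of one fixed object (m)
    [0.312, 0.105, 0.098],
    [0.298, 0.121, 0.084],
    [0.305, 0.093, 0.110],
])

mean = readings.mean(axis=0)     # average detected position per axis
std = readings.std(axis=0)       # spread per axis; a large std = inconsistent
print("mean xyz:", mean)
print("std  xyz:", std)
```

A large standard deviation relative to the gripper's tolerance means the calibration (or the pose data behind it) needs to be redone before accuracy is even worth chasing.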


14/9/2024

Discovered that the data (poses) captured at the beginning affects the accuracy and consistency of the detected object's x, y, z position. 


Finally decided to use the 20-pose transformation matrix for the pick-and-place project. Although there were differences between the obtained positions and the adjusted positions, this set was the most consistent of the different sets of poses I tried. 




The table above shows the obtained position, the adjusted position needed to reach the object, and the differences between them. The y and z values differ a lot, depending on the position of the object in the camera frame. Nevertheless, these are the best results I have obtained so far. 


The videos below show how data is captured for 20 different poses, used for calibration and to calculate the transformation matrix:










