Date: 10-06-12
Duration: 7½ hours
Group members attending: Tore, Troels & Kristian
Goal
Further testing to fix the calibration errors found last lab session.
We need to expand the program to handle data transmission between the pc and the robot, and we need the program to produce a graphical representation.
We need to mount the ultrasonic sensors and come up with an idea as to how the recalibration part should be done.
Plan
1. Correct the calibration errors, both when turning and when driving straight forward.
2. Expand the program, so we are able to send data to the robot.
3. Testing. If the robot ends up close enough to the starting position, we will draw a larger image with more chains and see how that goes.
4. If testing goes well, we will mount the ultrasonic sensors.
5. Come up with some sort of calibration algorithm.
6. Further modifications to the robot. We had a new idea for the lower/raise pen mechanism.
7. Expand the program to make a graphical representation of the image on the pc. This image is made from the data we feed the robot. Each chain has its own colour to distinguish them from one another.
Execution
1. The most important of the calibration errors is that the robot turns too much. This means that the distance we have measured between the wheels is wrong, and since the robot over-turns, our present value for this diameter is too large. We wrote a simple test that made the robot drive forward 50 cm, turn 1620 degrees (4½ full turns, which amplifies any error), and drive forward 50 cm again. We drew lines to see how far off the robot was driving. After a few tests, the line going out and the line going back were close enough to parallel that we were satisfied.
The calibration can be seen in this video (*1).
Later it occurred to us that the wheels might not turn equally fast in forward and in backward drive. Testing proved this to be correct. With the wheel diameter set to 17.23 and turning one way, the marker would stop in the exact same spot it started; turning the robot the other way around, it would be almost 5 cm off. In the end we found the wheel size for turning each way and took the average. The final diameter became 18.7.
This just proves the need for recalibration between each line drawing. The testing was done on a whiteboard; the results looked like this:
The two tests at the top are with the same wheel diameter, but with the robot turning different ways.
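The geometry behind this calibration can be sketched in plain Java. The track widths and diameters below are illustrative, not our measured values: for a differential drive, the wheel rotation needed for a turn scales with the wheel-to-wheel distance divided by the wheel diameter, so an error in either constant scales every turn by the same ratio, which a 1620-degree turn amplifies.

```java
// Sketch of the differential-drive turn geometry behind our calibration.
// All numeric values here are illustrative; the real constants live in the
// robot code (*2/*3).
public class TurnCalibration {
    // Wheel rotation (degrees) needed for the robot to turn robotDeg:
    // each wheel travels an arc of (trackWidth / 2) * angle, so
    // wheelDeg = robotDeg * trackWidth / wheelDiameter.
    static double wheelDegreesForTurn(double robotDeg, double trackWidth, double wheelDiameter) {
        return robotDeg * trackWidth / wheelDiameter;
    }

    // If the configured track width is larger than the true one, the robot
    // over-turns by the same ratio.
    static double actualTurn(double commandedDeg, double configuredTrack, double trueTrack) {
        return commandedDeg * configuredTrack / trueTrack;
    }

    public static void main(String[] args) {
        // A 1620-degree turn amplifies even a small constant error:
        double overTurn = actualTurn(1620, 12.2, 12.0) - 1620; // hypothetical widths
        System.out.println("over-turn after 1620 deg: " + overTurn + " deg");

        // Turning clockwise and counter-clockwise gave different effective
        // constants, so we averaged them (values here are hypothetical):
        double cw = 18.0, ccw = 19.0;
        System.out.println("averaged constant: " + (cw + ccw) / 2.0);
    }
}
```

The 27-degree over-turn in the example shows why we turned 4½ full revolutions in the test rather than a single turn: the error per turn is small, but it accumulates linearly.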
2. We already had the ability to send data from the robot to the pc. The other direction was done like this (*2) on the pc side and like this (*3) on the robot side.
3. The above result was finally somewhat satisfactory. The next step was to change the code so that we could send the 2D array of an image to the robot. Code seen here (*3).
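A minimal sketch of how such a 2D array can be framed over a data stream: on the real robot the streams come from the Bluetooth connection, but here a byte array stands in for them so the round trip can be checked on the pc alone. The class and method names are ours, not from the code linked above.

```java
import java.io.*;

// Sketch: frame a 2D image array over a DataOutputStream (dimensions first,
// then the cells row by row) and read it back on the other end.
public class ImageTransfer {
    static void send(int[][] image, DataOutputStream out) throws IOException {
        out.writeInt(image.length);     // number of rows
        out.writeInt(image[0].length);  // number of columns
        for (int[] row : image)
            for (int cell : row)
                out.writeInt(cell);
        out.flush();
    }

    static int[][] receive(DataInputStream in) throws IOException {
        int rows = in.readInt();
        int cols = in.readInt();
        int[][] image = new int[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                image[r][c] = in.readInt();
        return image;
    }

    public static void main(String[] args) throws IOException {
        int[][] image = { {0, 1, 0}, {1, 1, 1} };
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        send(image, new DataOutputStream(buf));
        int[][] copy = receive(new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(java.util.Arrays.deepToString(copy));
    }
}
```

Sending the dimensions first lets the receiver allocate the array before reading, so neither side needs a fixed image size compiled in.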
4. First idea for the recalibration: we only need one ultrasonic sensor. It was mounted like this:
5. The first draft of the recalibration algorithm. What we need:
- Two “walls” along the x and y axis.
- A precalibration phase.
- A recalibration phase.
We need to either (preferably) cut out two pieces of plank to use as walls, or place the whiteboard in a corner of the room.
We place the robot on the whiteboard facing the direction we want to be the x-axis. The robot should then measure the distance to the x-axis and then to the y-axis. We imagine it could be done in the following way. This is the precalibration phase.
The robot turns 90 degrees and measures the distance to the wall.
Then the following steps are taken: the robot turns one degree and measures the distance to the wall. If the distance is smaller, it repeats the step. If the distance is greater, it turns one degree in the other direction, measures the distance, and repeats this step. This way we find the smallest distance to the wall. When this is done, the robot turns 90 degrees to face the y-axis and repeats the steps above to determine that distance. Now we have the starting conditions.
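The one-degree sweep above can be sketched as a small hill-climb. The sensor here is simulated: against a flat wall, the reading grows as 1/cos of the heading error, so the minimum reading is exactly the perpendicular. The interface and the start offset are our own inventions for the sketch, not code from the robot.

```java
// Sketch of the one-degree sweep that finds the perpendicular to a wall by
// minimizing the ultrasonic reading.
public class WallSweep {
    interface Sensor {
        double distanceAt(double headingDeg);
    }

    // Hill-climb: step one degree while the reading shrinks; on the first
    // worse reading, reverse direction once; stop when neither direction
    // improves. Returns the heading with the smallest reading.
    static double findPerpendicularHeading(Sensor s, double heading) {
        int dir = 1;
        double best = s.distanceAt(heading);
        boolean reversed = false;
        while (true) {
            double next = s.distanceAt(heading + dir);
            if (next < best) {
                heading += dir;
                best = next;
                reversed = false;
            } else if (!reversed) {
                dir = -dir;
                reversed = true;
            } else {
                return heading;
            }
        }
    }

    public static void main(String[] args) {
        // Flat wall 50 cm away; the robot starts 7 degrees off perpendicular.
        Sensor wall = deg -> 50.0 / Math.cos(Math.toRadians(deg));
        System.out.println(findPerpendicularHeading(wall, 7.0));
    }
}
```

Because the sweep only ever compares neighbouring readings, it needs no absolute angle reference, which is the point: the robot's own heading estimate is exactly what has drifted.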
The recalibration could be done like this:
After the robot has drawn a chain, it returns to the starting position. From here the robot will, in the same way as mentioned above, find the shortest distance from its current position to the x-axis. When this is found, the heading of the robot should be perpendicular to the x-axis, and the robot should drive forward or backward until the original distance to the x-axis is reached. This should then be repeated with the y-axis, after which the robot should end up in the original position, ready to draw the next line. This will be implemented next time.
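The arithmetic of the correction step is simple: once the robot faces a wall perpendicularly, the travel needed to restore the precalibrated distance is just the difference between the current and the original reading. A sketch (names and numbers are ours, not from the robot code):

```java
// Sketch of the per-axis distance correction in the recalibration phase.
public class Recalibration {
    // Positive result: drive toward the wall; negative: back away from it.
    static double axisCorrection(double originalDist, double measuredDist) {
        return measuredDist - originalDist;
    }

    public static void main(String[] args) {
        double dx = axisCorrection(30.0, 33.5); // drifted 3.5 cm away from the x wall
        double dy = axisCorrection(45.0, 44.0); // drifted 1.0 cm toward the y wall
        System.out.println(dx + " " + dy);
    }
}
```

Doing the x correction first and the y correction second works because the two walls are perpendicular: driving along one wall's normal does not change the distance to the other wall.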
6. We made further modifications to the raise/lower mechanism. The pen is now held more firmly in place. Shown in the following image:
It is still slightly off center, but we think this is the best we can do.
7. We expanded the code to draw a graphical representation of the image the robot would draw. We used the javax.swing library and simply used the same method to draw the lines as the robot does. Code can be seen here (*4).
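A minimal sketch of this preview idea: each chain of points gets its own colour and is drawn as connected line segments, just as the robot would draw them. Rendering into a BufferedImage keeps the sketch headless; the real program shows the result in a javax.swing window instead. Class name and palette are our assumptions.

```java
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.List;

// Sketch of the PC-side preview: draw each point chain in its own colour.
public class ChainPreview {
    static final Color[] PALETTE = { Color.RED, Color.BLUE, Color.GREEN, Color.ORANGE };

    // Each chain is an array of {x, y} points; consecutive points become lines.
    static BufferedImage render(List<int[][]> chains, int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w, h);
        int i = 0;
        for (int[][] chain : chains) {
            g.setColor(PALETTE[i++ % PALETTE.length]); // one colour per chain
            for (int p = 1; p < chain.length; p++)
                g.drawLine(chain[p - 1][0], chain[p - 1][1],
                           chain[p][0], chain[p][1]);
        }
        g.dispose();
        return img;
    }

    public static void main(String[] args) {
        List<int[][]> chains = List.of(
                new int[][] { {10, 10}, {50, 10} },  // first chain: red
                new int[][] { {10, 30}, {50, 30} }); // second chain: blue
        BufferedImage img = render(chains, 64, 64);
        System.out.println(img.getRGB(30, 10) == Color.RED.getRGB());
    }
}
```

Cycling through a fixed palette keeps the chains distinguishable without the pc and the robot having to agree on colours; the robot only ever sees the point data.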
Status
We are finally satisfied with the mechanism that raises and lowers the pen.
We got the data transmission to work, along with the rest of today's goals regarding expansion of the program. That is, we are now able to draw an image and feed it to the program, which transforms it and sends it to the robot, and the robot then draws the image. We have also finished the code that gives a graphical representation on the pc. Only minor modifications are needed; for example, when we build the point chain, we need to prioritize horizontal and vertical directions over diagonal ones, otherwise the algorithm will often skip pixels and twist the image slightly.
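The planned fix can be sketched as a neighbour ordering: when extending a point chain, try the four orthogonal neighbours before the diagonals, so a diagonal step is only taken when no orthogonal pixel is available. The names and the grid layout below are our own, not from the program.

```java
// Sketch of orthogonal-first neighbour ordering for building point chains.
public class NeighbourOrder {
    // Offsets listed in priority order: orthogonal first, diagonals last.
    static final int[][] OFFSETS = {
        { 0, -1}, { 1, 0}, { 0, 1}, {-1, 0},
        { 1, -1}, { 1, 1}, {-1, 1}, {-1, -1}
    };

    // Return the first unvisited set pixel around (x, y), or null if none.
    static int[] next(boolean[][] set, boolean[][] visited, int x, int y) {
        for (int[] d : OFFSETS) {
            int nx = x + d[0], ny = y + d[1];
            if (nx >= 0 && ny >= 0 && ny < set.length && nx < set[0].length
                    && set[ny][nx] && !visited[ny][nx])
                return new int[] { nx, ny };
        }
        return null;
    }

    public static void main(String[] args) {
        // Both the east and the south-east neighbour of (1,1) are set;
        // the east neighbour wins because orthogonal steps come first.
        boolean[][] set = new boolean[3][3];
        set[1][1] = true;
        set[1][2] = true; // east
        set[2][2] = true; // south-east
        int[] n = next(set, new boolean[3][3], 1, 1);
        System.out.println(n[0] + "," + n[1]);
    }
}
```

With diagonal-first ordering the chain would jump straight to (2,2) and leave (2,1)-style pixels stranded, which is exactly the pixel skipping described above.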
The robot is still not precise enough, so the next step is to use the ultrasonic sensors to recalibrate the position between the lines.
References
(*1) http://youtu.be/MsozQXSidjo
(*2) http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html
(*3) http://troelskristiantore.blogspot.dk/2012/05/legolab-code.html
(*4) http://troelskristiantore.blogspot.dk/2012/06/legolab-code-pcdraw.html