Date: 05-06-12
Duration: 3½ hours
Group members attending: Tore, Troels & Kristian
Goal
We want to be able to convert the bitmap photo into paths and hand them to the robot, which should then be able to follow these paths.
We also need to finish our robot. Finally, we need to think about a graphical representation of the paths.
Plan
1. Get the path planning algorithms sorted out, so that we have something to feed the robot.
2. Find a proper solution for mounting the pen on the robot. That is, get it fixed firmly enough to draw, yet loose enough to raise and lower.
3. Figure out a way to graphically represent the lines that the robot has to follow.
Execution
1. We got our line algorithm to work. We feed the image to the program, which transforms it into a two-dimensional bit array and then searches for lines as described in the previous lab report. The result is shown below, step by step. The left 2D array represents the initial image. In the second 2D array, the line of 1s on the left has been processed and removed. In the third 2D array, the line at the bottom has been processed and removed. Finally, the line in the middle is processed. These lines are then transferred to an array of lines which the robot will use to do the drawing.
As seen in the fourth 2D array below, we still have a slight error in the algorithm, since some corners are skipped: there are two 1s left around the middle of the array.
Code seen here (*1).
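The search step above can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual ChainCalculator code linked at (*1): it repeatedly finds the longest horizontal or vertical run of 1s, records its endpoints, and clears it from the bit array, just like the 2D-array steps shown above (the order in which lines are found may differ from our actual program).

```java
import java.util.ArrayList;
import java.util.List;

public class LineExtractor {

    // A line is stored as {x1, y1, x2, y2} in grid coordinates.
    public static List<int[]> extract(int[][] grid) {
        List<int[]> lines = new ArrayList<>();
        int[] run;
        while ((run = longestRun(grid)) != null) {
            lines.add(run);
            clear(grid, run); // remove the processed line, as in the 2D-array steps
        }
        return lines;
    }

    // Find the longest horizontal or vertical run of 1s (length >= 2).
    private static int[] longestRun(int[][] g) {
        int[] best = null;
        int bestLen = 1;
        int rows = g.length, cols = g[0].length;
        for (int y = 0; y < rows; y++) {          // horizontal runs
            int start = -1;
            for (int x = 0; x <= cols; x++) {
                if (x < cols && g[y][x] == 1) {
                    if (start < 0) start = x;
                } else if (start >= 0) {
                    if (x - start > bestLen) { bestLen = x - start; best = new int[]{start, y, x - 1, y}; }
                    start = -1;
                }
            }
        }
        for (int x = 0; x < cols; x++) {          // vertical runs
            int start = -1;
            for (int y = 0; y <= rows; y++) {
                if (y < rows && g[y][x] == 1) {
                    if (start < 0) start = y;
                } else if (start >= 0) {
                    if (y - start > bestLen) { bestLen = y - start; best = new int[]{x, start, x, y - 1}; }
                    start = -1;
                }
            }
        }
        return best;
    }

    // Zero out the cells covered by a processed line.
    private static void clear(int[][] g, int[] r) {
        for (int y = r[1]; y <= r[3]; y++)
            for (int x = r[0]; x <= r[2]; x++)
                g[y][x] = 0;
    }

    public static void main(String[] args) {
        int[][] grid = {
            {1, 0, 0, 0},
            {1, 0, 1, 0},
            {1, 0, 1, 0},
            {1, 1, 1, 1},
        };
        for (int[] l : extract(grid))
            System.out.println("(" + l[0] + "," + l[1] + ") -> (" + l[2] + "," + l[3] + ")");
    }
}
```

Note that clearing a processed line also wipes the cells where other lines touch it, which is one plausible source of the skipped-corner bug described above.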
2. We completed the first version of our robot: Drawbot 1.0. Image below:
We finally got the pen mounted as close to the center between the wheels as we think is possible. It is still a bit loose, but that is the cost of being able to raise and lower the pen. The pen is attached to a few lego bricks and slides up and down between other bricks.
The motor seen in the top right of the picture is used to raise the pen. We plan on adding two ultrasonic sensors later.
3. We want people to be able to see how our point-line algorithm works. At the moment, running the algorithm creates a list of point lines, which is handed off to the robot so it has something to follow. We also want a graphical representation of this on the computer, so we create a canvas and draw the lines (as the robot will do in real life). People will then be able to see on the screen the different lines the robot will follow. This is partly a “gimmick”, but also a way to see how well the robot actually draws the image compared to a computer rendering of it.
Code seen here (*2).
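A minimal sketch of such a preview is shown below. This is not the actual PCDraw code linked at (*2): it renders into an offscreen image rather than an on-screen canvas, and the SCALE factor and the {x1, y1, x2, y2} line format are assumptions for illustration. The idea is the same: scale each point line from grid coordinates up to pixels and draw it.

```java
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PathPreview {

    static final int SCALE = 20; // assumed number of pixels per grid cell

    // Draw each line {x1, y1, x2, y2} scaled up from grid to pixel coordinates.
    public static BufferedImage render(int[][] lines, int gridW, int gridH) {
        BufferedImage img = new BufferedImage(gridW * SCALE, gridH * SCALE,
                BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, img.getWidth(), img.getHeight());
        g.setColor(Color.BLACK);
        g.setStroke(new BasicStroke(2));
        for (int[] l : lines)
            g.drawLine(l[0] * SCALE, l[1] * SCALE, l[2] * SCALE, l[3] * SCALE);
        g.dispose();
        return img;
    }

    public static void main(String[] args) throws Exception {
        // The three lines from the 2D-array example in step 1.
        int[][] lines = { {0, 0, 0, 3}, {0, 3, 3, 3}, {2, 1, 2, 3} };
        ImageIO.write(render(lines, 4, 4), "png", new File("preview.png"));
    }
}
```

The same rendering could later be compared pixel-wise against a photo of the robot's drawing to judge its accuracy.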
Status
We completed the first part of the robot; that is, it is now able to draw, drive, and raise and lower the pen. The next step is to mount the ultrasonic sensors.
We also got the line algorithm to work, so we are now able to create the chains that the robot will follow.
References
(*1) http://troelskristiantore.blogspot.dk/2012/06/legolab-code-chaincalculator.html
(*2) http://troelskristiantore.blogspot.dk/2012/06/legolab-code-pcdraw.html