Duration: 2½ hours
Attended by: Troels, Kristian & Tore.
Results
What we built:
A robot that takes a picture as input and draws it on a whiteboard. The robot translates the picture into vectors in a coordinate system and creates the drawing from these vectors. The robot keeps track of its own location, and its position is periodically recalibrated.
Functionality:
Calibration mechanism:
This step took a while to figure out. We originally tried with one ultrasonic sensor, but this was hard to get to work. We also considered using a compass, but these were imprecise, especially when we drew on a metal whiteboard.
We ended up using two ultrasonic sensors to measure the distance from the robot's starting location to the edges of the whiteboard. The robot measures two distances: one perpendicular to the X-axis and one perpendicular to the Y-axis. These are used when the robot, between lines, returns to the starting location. It can then measure the distances again, compare them to the initial measurements and adjust its position accordingly. This recalibration seemed to work rather well and was an important step towards good images. Before this part, the robot would drift more and more, distorting the image further with every line drawn.
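The position adjustment described above can be sketched as a small helper. The class and method names below are our own illustration, not the actual code from the Drive class:

```java
// Sketch of the recalibration idea: store the two ultrasonic readings taken
// at start-up, and compare fresh readings against them whenever the robot
// returns to the origin. Names here are illustrative, not the project's own.
public class Recalibration {
    // Distances (in cm) to the whiteboard edges, measured at start-up.
    private final double initialX, initialY;

    public Recalibration(double initialX, double initialY) {
        this.initialX = initialX;
        this.initialY = initialY;
    }

    // Given fresh readings taken after returning to the origin, return the
    // (dx, dy) offset to drive by so the robot is back at its true start.
    public double[] correction(double measuredX, double measuredY) {
        return new double[] { initialX - measuredX, initialY - measuredY };
    }
}
```

For example, if the robot started 30 cm and 40 cm from the edges but now measures 32.5 cm and 39 cm, it should drive -2.5 cm along X and +1 cm along Y before drawing the next line.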
Pen-lifting mechanism:
We built a pen holder centred between the wheels. The robot can lift and lower the pen during the drawing. The architecture of this mechanism went through several iterations, and we are quite happy with the result: the pen is held firmly in place while still being able to move up and down.
Point-chain algorithm
This algorithm analyses a bitmap picture and turns it into an array of vectors, each representing one pixel of the picture, and the robot draws the picture from these vectors. This worked rather well, although the vectors could only make the robot move horizontally, vertically and diagonally.
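Since each vector corresponds to a one-pixel step, every step can be encoded as one of eight directions (horizontal, vertical, diagonal). The numbering below is a hypothetical encoding of our own; the actual ChainCalculator class may number directions differently:

```java
// Hypothetical 8-direction encoding for one-pixel steps between
// neighbouring pixels. The real ChainCalculator may use another scheme.
public class Direction {
    // The eight unit steps, indexed counter-clockwise from "east".
    private static final int[][] STEPS = {
        { 1, 0 }, { 1, 1 }, { 0, 1 }, { -1, 1 },
        { -1, 0 }, { -1, -1 }, { 0, -1 }, { 1, -1 }
    };

    // dx, dy in {-1, 0, 1}, not both zero; returns a code 0..7.
    public static int code(int dx, int dy) {
        for (int i = 0; i < STEPS.length; i++) {
            if (STEPS[i][0] == dx && STEPS[i][1] == dy) return i;
        }
        throw new IllegalArgumentException("not a unit step");
    }
}
```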
Drawing program
A computer interface that imitates the robot's behaviour and draws the picture on the computer using the same algorithm. We used this to validate the robot's output against a “perfect” drawing.
Improvements:
Continuous drawing:
When our robot faces multiple vectors with the same value, it treats each vector as a separate point. This results in the robot starting and stopping multiple times when drawing a straight line.
This can be handled by concatenating vectors with the same value, creating one long vector instead of several small ones. This would make the drawing smoother, more precise and faster to draw. The challenge in implementing this is that we currently weight every vector with the same distance, so we cannot handle vectors of different lengths.
An example:
How we do it now: 3-3-3-2-1-1-4-6
After our optimization proposal: (3,3)-(2,1)-(1,2)-(4,1)-(6,1)
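The proposed concatenation is essentially run-length encoding of the direction codes. A minimal sketch, with names of our own choosing (not from the project code):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed optimisation: collapse runs of identical
// direction codes into (direction, length) pairs.
public class VectorConcat {
    public static List<int[]> concatenate(int[] directions) {
        List<int[]> runs = new ArrayList<>();
        for (int d : directions) {
            int[] last = runs.isEmpty() ? null : runs.get(runs.size() - 1);
            if (last != null && last[0] == d) {
                last[1]++;                    // extend the current run
            } else {
                runs.add(new int[] { d, 1 }); // start a new run
            }
        }
        return runs;
    }
}
```

Feeding in the sequence 3-3-3-2-1-1-4-6 from the example above yields (3,3)-(2,1)-(1,2)-(4,1)-(6,1), and the robot could then draw each run as a single continuous movement.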
The points were only able to make the robot go horizontally, vertically and diagonally. We would also have liked to improve the algorithm so that it was able to analyze the point chains and create other directions, such as arcs.
At its present state, the algorithm has a hard time handling pixels with more than two neighbours, as would occur in e.g. a cross. To implement this functionality, we thought about labelling each pixel with its number of neighbouring pixels. Whenever a chain passes through a pixel, we would subtract 1 from its number; when it reaches 0, the pixel is not part of any further chains.
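The counting step of that proposal could be sketched as follows, using 8-connectivity; this is our own illustration of the idea, not code from ImageToArray or ChainCalculator:

```java
// Sketch of the proposed bookkeeping for junction pixels: count each set
// pixel's set 8-neighbours up front. The chain-building code would then
// decrement a pixel's counter each time a chain passes through it; at 0
// the pixel belongs to no further chains.
public class NeighbourCount {
    public static int[][] count(boolean[][] bitmap) {
        int h = bitmap.length, w = bitmap[0].length;
        int[][] counts = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!bitmap[y][x]) continue;   // only count set pixels
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w
                                && bitmap[ny][nx]) {
                            counts[y][x]++;
                        }
                    }
                }
            }
        }
        return counts;
    }
}
```

On a small cross, the centre pixel gets a count of 4, which flags it as a junction that several chains will pass through.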
Optimize calibration:
Right now we base our calibration solely on the ultrasonic sensors. We use them to calibrate the origin point as well as the 0° angle, so that the robot faces along the x-axis. While this gives us a pretty good origin calibration, it failed to deliver a precise 0° angle, because the ultrasonic sensors were not precise enough: they only deliver centimetre precision, which was enough to give a slightly wrong angle, and it would show on the drawing. We also had a problem with inconsistent measurements: sometimes the sensors would report a distance that was way off.
These errors could be minimized by using more than one type of sensor. For instance, two light sensors pointed at the ground could be used to calibrate the angle against a straight black line drawn parallel to the y-axis. The robot could then turn until both sensors are on the black line. This approach should increase the precision of the heading at the origin.
Improve the GUI
This point was given a low priority, but we would have liked a more extensive interface with a few commands such as start, pause and stop, as well as buttons to control the robot directly in order to draw “freehand” pictures.
Conclusion
The goal of this project was to build a robot that would be able to draw images. There were several initial aspects to consider: how to transform the image, how to build the robot, and how to draw the image. Along the way, other problems of various sizes arose. One of the more important ones was the imprecision of the motors and sensors, which led us to include the recalibration phase.
We are rather satisfied with our final result, even though there is room for improvement. This would have been a project for the future, had we had more time.
The code for our project can be found in these five classes:
Drive: http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html
Initiator: http://troelskristiantore.blogspot.dk/2012/05/legolab-code.html
PCDraw: http://troelskristiantore.blogspot.dk/2012/06/legolab-code-pcdraw.html
ImageToArray: http://troelskristiantore.blogspot.dk/2012/06/legolab-code-image-to-array.html
ChainCalculator: http://troelskristiantore.blogspot.dk/2012/06/legolab-code-chaincalculator.html