Thursday, 28 June 2012

Legolab 19 - Conclusion

Legolab 19, 28-06-12
Duration: 2½ hours
Attended by: Troels, Kristian & Tore.

Results


What we built:

A robot that can take a picture as input and draw it on a whiteboard. The robot translates the picture into vectors in a coordinate system and creates the drawing from these vectors. It keeps track of its own location and periodically recalibrates its position.

Functionality:

Calibration mechanism:
This step took a while to figure out. We originally tried with one ultrasonic sensor, but this was hard to get to work. We also considered using a compass, but these were imprecise, especially when we drew on a metal whiteboard.
We ended up using two ultrasonic sensors to measure the distance from the robot's starting location to the edges of the whiteboard. It measures two distances: one perpendicular to the x-axis and one perpendicular to the y-axis. These are used when the robot, between lines, returns to the starting location. It can then measure the distances again, compare them to the initial measurements and adjust its position accordingly.
This recalibration seemed to work rather well and was an important step towards good images. Before this part, the robot would drive more and more awry, distorting the image further with every line drawn.
Pen-lifting-mechanism:
    We built a pen holder that sits in the center between the wheels. The robot is able to lift and lower the pen during the drawing. The architecture of this mechanism went through several iterations, and we are quite happy with the result. The pen is fixed firmly in place while still being able to move up and down.
Point-chain algorithm
    This algorithm analyses a bitmap picture and turns it into an array of vectors. Each vector represents one pixel step in the picture, and the robot draws the picture from these vectors. This worked rather well, although the vectors could only make the robot go horizontally, vertically and diagonally.

Drawing program
    A computer interface that imitates the robot's behaviour and draws the picture on the computer using the same algorithm. We used this to compare the robot's output to a “perfect” drawing.

Improvements:

Continuous drawing:
    When our robot faces multiple vectors with the same value, it treats each vector as a separate point. This results in the robot starting and stopping multiple times when drawing a straight line.
    This could be handled by concatenating the vectors with the same value, creating one long vector instead of several small ones. This would make the drawing smoother, more precise and faster to draw. The challenge in implementing this is that we currently give every vector the same length, so we cannot yet handle vectors of different lengths.

An example:
How we do it now: 3-3-3-2-1-1-4-6
After our optimization proposal: (3,3)-(2,1)-(1,2)-(4,1)-(6,1)
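The proposed concatenation is essentially a run-length encoding of the direction codes. A minimal sketch in plain Java (not the actual robot code; the direction codes follow the example above):

```java
import java.util.ArrayList;
import java.util.List;

public class ChainCompressor {
    // Collapse consecutive identical direction codes into (direction, count) pairs.
    public static List<int[]> compress(int[] directions) {
        List<int[]> runs = new ArrayList<>();
        for (int d : directions) {
            int[] last = runs.isEmpty() ? null : runs.get(runs.size() - 1);
            if (last != null && last[0] == d) {
                last[1]++;                     // extend the current run
            } else {
                runs.add(new int[]{d, 1});     // start a new run
            }
        }
        return runs;
    }

    public static void main(String[] args) {
        // The example from the text: 3-3-3-2-1-1-4-6
        for (int[] run : compress(new int[]{3, 3, 3, 2, 1, 1, 4, 6})) {
            System.out.print("(" + run[0] + "," + run[1] + ")");
        }
        // prints (3,3)(2,1)(1,2)(4,1)(6,1)
    }
}
```

The robot would then travel count × step-length in one go instead of count separate moves.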

The vectors were only able to make the robot go horizontally, vertically and diagonally. We would also have liked to improve the algorithm so that it was able to analyze the point chains and create other shapes, like arcs.
At its present state, the algorithm has a hard time handling pixels with more than two neighbours, as would occur in e.g. a cross. To implement this functionality we thought about numbering the pixels according to the number of neighbouring pixels. Whenever a pixel is analyzed, we would subtract 1 from its number; when it reaches 0, the pixel is not part of any other chains.
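The neighbour numbering above needs a neighbour count per pixel. A small sketch of that counting step (our own illustration, not the ChainCalculator code; it assumes 8-connectivity, since the robot also draws diagonals):

```java
public class NeighbourCount {
    // Count the 8-connected set neighbours of pixel (x, y) in a 0/1 bitmap.
    public static int neighbours(int[][] bitmap, int x, int y) {
        int count = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue;          // skip the pixel itself
                int nx = x + dx, ny = y + dy;
                if (ny >= 0 && ny < bitmap.length
                        && nx >= 0 && nx < bitmap[0].length
                        && bitmap[ny][nx] == 1) {
                    count++;
                }
            }
        }
        return count;
    }
}
```

In a cross, the centre pixel gets the count 4; each chain passing through it would decrement that number, and the pixel is only cleared once it reaches 0.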


Optimize calibration:
    Right now we base our calibration solely on the ultrasonic sensors. We use them to calibrate the 0° angle, so that the robot faces along the x-axis, as well as the origin point. While this gives us a pretty good origin point, it fails to deliver a precise 0° angle, because the ultrasonic sensors are not precise enough. They only deliver centimetre precision, which is enough to give a slightly wrong angle, and it shows on the drawing. We also had a problem with inconsistent measurements: sometimes the sensors would report a distance that was way off.
    These errors could be minimized by using more than one type of sensor. For instance, two light sensors could calibrate the angle by pointing them at the ground above a straight black line parallel to the y-axis. The robot could then turn until both sensors are on the black line. This approach should increase the precision of the heading at the origin.

Improve the GUI
This point was given a low priority, but we would have liked a more extensive interface with commands to start, pause and stop the robot, as well as buttons to steer the robot directly and draw “free hand” pictures.

Conclusion

The goal of this project was to build a robot that would be able to draw images. There were several initial aspects to consider: how to transform the image, how to build the robot, and how to draw the image. Along the way, other problems of various sizes arose. One of the more important ones was the imprecision of the motors and sensors, which caused us to include the recalibration phase.
We are rather satisfied with our final result, even though there is room for improvement. That would have been a project for the future, had we had more time.


The code for our project can be found in these five classes:
Drive: http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html
Initiator: http://troelskristiantore.blogspot.dk/2012/05/legolab-code.html
PCDraw: http://troelskristiantore.blogspot.dk/2012/06/legolab-code-pcdraw.html
ImageToArray: http://troelskristiantore.blogspot.dk/2012/06/legolab-code-image-to-array.html
ChainCalculator: http://troelskristiantore.blogspot.dk/2012/06/legolab-code-chaincalculator.html

Thursday, 21 June 2012

Legolab 18

Legolab 18
Date: 13-06-12
Duration: 5½ hours
Group members attending: Tore, Troels & Kristian

Goal
We need to improve the algorithm that makes the point chains, and we will look for other ways to improve the code.

Plan   
1. The algorithm making point chains has a flaw: one line may be divided in two with a small gap.
2. Find other improvements
3. Final tests.

Execution
1. The earlier version of the algorithm had a flaw in the way we found the beginnings of lines: if a line went straight north-west for a few pixels and then straight north-east for a few pixels, the algorithm would start a line at the bend instead of at the end, dividing the original line in two. This bug was fixed.

2. The robot still drives a little skewed. We found out that we can set the wheel diameter for each wheel individually and in this way compensate for the error. This proved rather effective.
The robot constantly makes very small calculation errors, mostly in the angles, so the angle the robot thinks it is heading is not the angle it is really heading. This small error grows the longer the robot drives. We changed this so the error is reset after the robot has recalibrated (*1).
We had problems with the robot not turning the correct angles. We had used our own heading counter, which was not effective because the robot would not always turn the correct amount. We now use the heading found in the Odometry class. It did not work too well earlier, when the calibration of the wheels was a bit off, but with further calibration it has proved more precise.

3. We drew the same image four times, with four different colours, on top of each other, to see how precise the robot was, or how consistent its errors were. Here is the image:


We measured the angle between the right blue and the right red eye. They are off from each other by about 11 cm. The difference in angle between the headings at the origin, when the robot drove out to draw the eyes, is about 6 degrees. Since the ultrasonic sensors are no more precise than whole centimetres, this angle difference is unavoidable and must be considered acceptable in our case.
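A back-of-the-envelope check of why whole-centimetre readings limit the heading precision: the two sensor readings must differ by at least 1 cm before a tilt is visible at all, and the smallest detectable angle follows from that difference across the sensor baseline. (The 10 cm baseline below is a hypothetical value for illustration, not a measured one.)

```java
public class AngleResolution {
    // Smallest heading change detectable when each sensor reads whole
    // centimetres: the two readings must differ by at least 1 cm across
    // the baseline (the spacing between the two ultrasonic sensors).
    public static double minDetectableAngleDeg(double baselineCm) {
        return Math.toDegrees(Math.atan(1.0 / baselineCm));
    }

    public static void main(String[] args) {
        // With a hypothetical 10 cm baseline, tilts below ~5.7 degrees
        // produce readings that round to the same centimetre values.
        System.out.println(minDetectableAngleDeg(10.0));
    }
}
```

With a baseline around 10 cm this comes out at roughly 5.7 degrees, which is consistent with the ~6 degree spread we observed.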
It is also worth noticing how close the recalibration dots are to each other (most of them, at least). Shown here:
We have had problems with the right ultrasonic sensor sometimes measuring strange distances. Even though the robot hardly turns and the left sensor only measures a difference of about 1 cm, the right sensor could measure up to a 10 cm difference. This caused the robot to sometimes turn large angles, even though it should only have turned 1 or 2 degrees. We set a limit so that it could turn at most 5 degrees, and this seemed to help.

Status
We have spent a great amount of time on getting the wheel distance and diameter correct, and so far have not been very successful because two motors might not turn equally. We finally found a way to fix this effectively by setting the wheel sizes individually.
We came up with a new idea to increase the precision of the heading after recalibration, but we do not have time for it: We could mount two light sensors on the robot. When they are both above a strip of tape on the board, the robot could be considered to be at the correct angle.

References
(*1) http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html

Legolab 17

Legolab 17
Date: 12-06-12
Duration: 6 hours
Group members attending: Tore, Troels & Kristian

Goal
We need to test the robot and get rid of some bugs.

Plan   
1. Further testing of the calibration phase, the last try yesterday failed.
2. We need to correct several bugs, e.g. some strange turns, and we will probably find more during today’s testing.


Execution
1. Yesterday the robot measured several centimetres wrong. We need to draw the image again and print output to the screen to see what the robot is doing.
The robot would sometimes oscillate between two angles when trying to face a wall. Before, it would turn the difference between the two values measured by the ultrasonic sensors. We changed it to only turn half that value, and we put in a constraint so that it would make at most 10 turns.
Turning half the value did not work, since the robot would try to turn 0.5 degrees, and that was not possible. We instead increased the distance between the two ultrasonic sensors. This way, if the robot is turned slightly, the measured difference would be greater, which should make the robot better at turning in the right direction. This, however, did not change anything, and we moved the ultrasonic sensors back to their original positions.
We ended up with the only change being the constraint of 10 turns.
Sometimes the right ultrasonic sensor would measure very wrong distances. We tried another sensor, but that one differed from the first by 3-4 cm, so we changed it back and accepted the random measurement errors.


2. The robot makes several unnecessary turns, e.g. first turning to head along the x-axis, then turning 180 degrees, then turning towards (0,0). This is done every time the robot has finished a line, and it could be combined into a single turn.
Another example: if the robot has just driven south-west and the next step is north-west, it would turn 245 degrees instead of -45 degrees. We changed this to always take the shortest turn, as seen in the code (*1).
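The shortest-turn fix boils down to normalising every requested rotation into the range (-180°, 180°]. A simplified sketch in plain Java (our illustration; the actual fix lives in the Drive class linked above):

```java
public class Turns {
    // Normalise a turn to the equivalent shortest rotation in (-180, 180].
    // E.g. a requested 315-degree turn becomes a -45-degree turn.
    public static double shortestTurn(double degrees) {
        double t = degrees % 360.0;
        if (t > 180.0) t -= 360.0;
        if (t <= -180.0) t += 360.0;
        return t;
    }
}
```

Applying this to every computed heading change means the robot never rotates more than half a revolution.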

Status
One of the bigger problems today is that even at full power, the robot is not able to drive straight ahead, and it was hard to find two motors that would drive exactly the same.
We also had trouble with a dysfunctional ultrasonic sensor. It would measure strange distances, so we borrowed one from another group. Two ultrasonic sensors tested at the same distance would vary by 2-3 cm.
Today was a frustrating day, with too much trouble with sensors and motors. But it mostly worked.

References
(*1) http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html

Legolab 16

Legolab 16
Date: 11-06-12
Duration: 4 hours
Group members attending: Tore, Troels & Kristian

Goal
Cut out wooden walls. Begin implementing the recalibration phase. Fix minor bugs in the existing code.

Plan   
1. We need to cut out two wooden walls. These will be placed along the x- and y-axis on the whiteboard and will be used to recalibrate the robot and return it to the starting position.
2. Implement the calibration algorithm described in the last lab session and test out how well it works.
3. Make minor changes to the drawing algorithm; currently we cannot draw closed circles.

Execution
1. We could not find any of the wood we were shown at an earlier lab session. For today’s testing purposes, we turned two tables over and used them as walls:

We used some Lego wheels to lift the tables a little. This was done to make sure that the tables formed a 90-degree angle with the floor and with each other.

2. We changed the algorithm a bit and used two ultrasonic sensors instead. They are mounted as seen below:

Then the calibration phase was actually rather simple. We still have two steps: pre- and recalibration.

Precalibration:
The robot is placed on the whiteboard. This position is now (0,0) and it is headed along the x-axis.
It turns -90 degrees, measures the distance to the wall on the x-axis, turns another -90 degrees, measures the distance to the wall on the y-axis, and then turns -180 degrees to face the original heading. These two values give the position of the origin relative to the walls.

Recalibration:
This happens after the robot has drawn a line and has returned to the origin. The robot will almost always be off by a few centimetres. It is programmed to face the original direction along the x-axis after the return. The recalibration goes as follows:
The robot turns -90 degrees and faces the wall on the x-axis. With the two ultrasonic sensors we get two values. If the robot is perpendicular to the x-axis, the two values are equal. If the values are not equal, the robot turns a little and takes new measurements. This is repeated until the two values are equal. Depending on which of the values is greater, we turn either clockwise or counter-clockwise.
The robot then drives the difference between the original value from the precalibration phase and the newly measured value. The robot is now reset with regard to the x-axis. It then turns -90 degrees and does the same with the y-axis, and finally turns -180 degrees. It is now back in the original position.

Here is a video; the recalibration can be seen between each line drawn (*1).
And a picture of the result:
The five dots at the bottom right are the positions of the marker after each recalibration. The distance between the upper and lower dot is ~4 cm. This margin of error, we think, is permissible for our purpose. The real gain of the recalibration is that the robot aligns pretty well with the x-axis after each line.
The code can be seen in the Drive class (*2).

3.
Our previous drawing algorithm had a problem when drawing circles, since the starting pixel of the circle should be connected to the endpoint pixel. This did not happen, because the starting pixel is set to zero when the robot starts, so it would not be seen as a neighbour to the endpoint pixel. We changed the algorithm so it saves the starting point. We then use this to check whether the starting pixel and end pixel are neighbours, and if so, draw the last line to close the circle.
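The final check reduces to asking whether two saved pixel coordinates are 8-connected neighbours. A minimal sketch of that test (our illustration of the idea, not the actual ChainCalculator code):

```java
public class ChainCloser {
    // Two pixels are 8-connected neighbours if they differ by at most one
    // in each axis and are not the same pixel.
    public static boolean areNeighbours(int x1, int y1, int x2, int y2) {
        int dx = Math.abs(x1 - x2), dy = Math.abs(y1 - y2);
        return dx <= 1 && dy <= 1 && dx + dy > 0;
    }
}
```

If the saved starting pixel and the final pixel of a chain pass this test, one extra line segment is appended to close the circle.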

Status
We completed the calibration part. It seems to work rather well, although we need to test it a little further.
We made slight changes to the point chain program, and we are now able to draw circles. In the previous version, there would be a small gap between where the robot ends the line and the start of the line.



References
(*1) http://youtu.be/h-GW8L1HLqo
(*2) http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html

Legolab 15

Legolab 15
Date: 10-06-12
Duration: 7½ hours
Group members attending: Tore, Troels & Kristian

Goal
Further testing to fix the calibration errors found last lab session.
We need to expand the program to handle data transmission between pc and robot, and we need to have the program make a graphical representation.
We need to mount the ultrasonic sensors and come up with an idea as to how the recalibration part should be done.

Plan   
1. Correct the calibration errors, both when turning as well as driving straight forward.
2. Expand the program, so we are able to send data to the robot.
3. Testing. If the robot hits close enough to the starting position, we will draw a larger image with more chains and see how that goes.
4. If testing goes well, we will mount the ultrasonic sensors.
5. Come up with some sort of calibration algorithm.
6. Further modifications to the robot. We had a new idea for the lower/raise pen mechanism.
7. Expand the program to make a graphical representation of the image on the pc. This image is made from the data we feed the robot. Each chain has its own colour to distinguish them from one another.

Execution
1. The most important of the calibration errors is that the robot turns too much. This means that the distance we have measured between the wheels is wrong, and since it turns too much, our present diameter is too large. We wrote a simple test that made the robot drive forward 50 cm, turn 1620 degrees, and drive forward 50 cm again. We drew lines to see how far off the robot was driving. After a few tests, the line going out and the line going back were close enough to parallel that we were satisfied.
The calibration can be seen in this video (*1).

Later we came to think that the wheels might not turn equally fast driving forward and backward. After testing, this proved to be correct. We set the wheel diameter to 17.23 and turned one way, and the marker would stop in the exact same spot as it started. Then we turned the robot the other way around, and it would be almost 5 cm off. In the end we found the wheel size for turning each way and took the average. The final diameter became 18.7.
This just proves the need for recalibration between each line drawing. The testing was done on a whiteboard, the results looked like this:

The two tests at the top are with the same wheel diameter, but with the robot turning different ways.

2. We already had the possibility of sending data from the robot to the pc. The other direction was done like this (*2) from the pc and like this (*3) from the robot.


3. The above result was finally somewhat satisfactory. Next step was to change the code, so that we could send the 2d array of an image to the robot. Code seen here (*3).


4. For the first recalibration idea, we only need one ultrasonic sensor. It was mounted like this:
5. The first draft of the recalibration algorithm. What we need:
- Two “walls” along the x and y axis.
- A precalibration phase.
- A recalibration phase.
We need to either (preferably) cut out two planks to use as walls, or place the whiteboard in a corner of the room.
We place the robot on the whiteboard facing the direction we want to be the x-axis. Then the robot should measure the distance to the x-axis and then to the y-axis. We imagine it could be done in the following way. This is the precalibration phase.
The robot turns 90 degrees and measures the distance to the wall.
Then the following steps are taken: the robot turns one degree and measures the distance to the wall. If the distance is smaller, it repeats the step. If the distance is greater, it turns one degree in the other direction, measures the distance, and repeats that step. This way we find the smallest distance from the robot to the wall. When this is done, the robot turns 90 degrees to face the y-axis and repeats the steps above to determine that distance. Now we have the starting conditions.
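The one-degree search above is a simple hill descent on the measured distance. A sketch of the idea with a modelled sensor (our illustration only: it assumes the reading grows as d0/cos(offset) for a flat wall at perpendicular distance d0, and steps in whole degrees like the robot would):

```java
public class PerpendicularFinder {
    // Model of a flat wall: perpendicular distance d0, reading grows
    // as d0 / cos(offset) when the robot is tilted by offset degrees.
    static double reading(double d0, double offsetDeg) {
        return d0 / Math.cos(Math.toRadians(offsetDeg));
    }

    // Sweep in 1-degree steps until the measured distance stops shrinking,
    // mirroring the precalibration search described above.
    public static double findMinimumDistance(double d0, double startOffsetDeg) {
        double offset = startOffsetDeg;
        int step = offset > 0 ? -1 : 1;       // turn towards the perpendicular
        double best = reading(d0, offset);
        while (true) {
            double next = reading(d0, offset + step);
            if (next >= best) return best;    // passed the minimum
            offset += step;
            best = next;
        }
    }
}
```

The minimum found is the perpendicular distance, which is exactly the value the precalibration phase needs to record.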
The recalibration could be done like this:
After the robot has drawn a chain, it returns to the starting position. From here the robot will, in the same way as mentioned above, find the shortest distance from its current position to the x-axis. When this is found, the heading of the robot should be perpendicular to the x-axis, and the robot should drive forward or backward until the original distance to the x-axis is reached. This is then repeated with the y-axis, and the robot should end up in the original position, ready to draw the next line. This will be implemented next time.

6. We made further modifications to the raise/lower mechanism. The pen is now held more firmly in place. Shown in the following image:
It is still slightly off center, but we think this is the best we can do.

7. We expanded the code to draw a graphical representation of the image the robot will draw. We used the javax.swing library and the same method to draw the lines as the robot does. Code can be seen here (*4).

Status
We are finally satisfied with the mechanism that raises and lowers the pen.
We got the data transmission to work, along with the rest of the goals we set today regarding expansion of the program. That is, we are now able to draw an image and feed it to the program, which transforms it and sends it to the robot, and the robot then draws the image. We have also finished the code that gives a graphical representation on the pc. Only minor modifications are needed; e.g. when we build the point chain, we need to prioritize horizontal and vertical directions over diagonal ones. Otherwise the algorithm will often skip pixels and twist the image slightly.
The robot is still not precise enough, so the next step is to use the ultrasonic sensors to recalibrate the position between the lines.


References
(*1) http://youtu.be/MsozQXSidjo
(*2) http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html
(*3) http://troelskristiantore.blogspot.dk/2012/05/legolab-code.html
(*4) http://troelskristiantore.blogspot.dk/2012/06/legolab-code-pcdraw.html

Legolab 14

Legolab 14
Date: 07-06-12
Duration: 7½ hours
Group members attending: Tore, Troels & Kristian

Goal
Write a program for the robot that will allow it to follow the point-chain arrays created last time.
Find a solution for a live link between the computer program and the robot’s program.

Plan   
1. Get the robot to drive along the point lines created last time: Have the robot drive to the starting point, get it to follow/draw the line and at the end, the robot should return to the starting position and heading. Then do the same for the next line.
2. Create a Bluetooth connection to the NXT. Be able to write between the NXT and the PC.
3. Small changes to the robot.

Execution
1.
We have the algorithm to convert images. This program will run on the computer, so we need the counterpart for the NXT. We modified a previous program that used the navigation class “DifferentialPilot”. The run method now takes the 2d array of point chains and moves through each chain step by step. For each step we calculate the angle to turn as well as the distance to travel. When the various human calculation errors were corrected, it seemed to work; mostly these were errors in calculating the correct angle, that is, the angle between the current heading of the robot and the next heading.
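The per-step geometry is standard: the heading to the next point comes from atan2 and the distance from the Euclidean norm. A sketch in plain Java (the real code drives a DifferentialPilot; these helper names are ours, for illustration):

```java
public class Steering {
    // Heading (degrees, standard math convention: 0 along +x, counter-clockwise
    // positive) from point (x1, y1) to point (x2, y2).
    public static double headingTo(double x1, double y1, double x2, double y2) {
        return Math.toDegrees(Math.atan2(y2 - y1, x2 - x1));
    }

    // Straight-line distance between the two points.
    public static double distanceTo(double x1, double y1, double x2, double y2) {
        return Math.hypot(x2 - x1, y2 - y1);
    }
}
```

The turn command for each step is then the difference between this heading and the robot's current heading, which is where our calculation errors crept in.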
Another problem was that the whiteboard was too smooth. The wheels did not have a good enough grip and would spin in place. To fix this we changed the speed and the acceleration of the robot, using the built-in setAcceleration() and setTravelSpeed() in the DifferentialPilot.
The line drawn looked as expected, but we had a lot of trouble getting the robot to return to the origin. After a long test session, the conclusion is that we either have a wrong diameter between the wheels, or the program does not make the robot turn the correct number of degrees. This will be tested next lab session, and hopefully corrected.

2.
We created a Bluetooth connection between the robot and the PC (*1). We need this to transfer our point chains and make the robot follow the lines. Right now we mostly use it for debugging purposes: we write out states from the robot, making it easier to track bugs.

We had a little trouble with the Bluetooth connection, since the leJOS documentation is not the best in the world. We got some strange exceptions but eventually figured it out. We also had problems writing strings from the NXT to the computer, because we were only able to transfer a single char at a time. We ended up writing each string as a char array, and then decoded the received chars using the “%” sign as a line separator (*2). This was done because the library cannot tell where line endings occur.
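The PC-side decoding can be sketched like this (our illustration of the separator idea, not the code from the linked post; the class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

public class MessageDecoder {
    // Split a stream of received chars into messages, using '%' as the
    // separator, mimicking our workaround for the missing line endings.
    public static List<String> decode(char[] received) {
        List<String> messages = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : received) {
            if (c == '%') {
                messages.add(current.toString()); // separator ends a message
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) {
            messages.add(current.toString());     // trailing partial message
        }
        return messages;
    }
}
```

The NXT side simply appends a '%' after each debug string before sending the chars one at a time.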

The code is rather simple. The robot starts out by waiting for an inbound connection. When this connection is established, the robot starts driving, and while driving it sends strings to the computer.

3.  Last time, the robot looked like this:
It seemed like the right idea, but we forgot to put a cable in the motor, and the motor was too close to the wheels. We turned the motor 90 degrees and changed the lifting mechanism to use gears and a sort of wire. This is shown here:
The pen is still slightly loose, but the lifting mechanism has been improved.


Status
The great success of today was getting a Bluetooth connection established. This has made it a lot easier to debug, as we are now able to get values printed on the computer.
The first version of the program that gets the robot to draw is almost finished. The big problem today was either a too small diameter between the wheels, or imprecision in the DifferentialPilot class. This will be tested and hopefully corrected next time.

References
(*1) http://troelskristiantore.blogspot.dk/2012/05/legolab-code.html
(*2) http://troelskristiantore.blogspot.dk/2012/05/legolab-code-drive.html

Legolab 13

Legolab 13
Date: 05-06-12
Duration: 3½ hours
Group members attending: Tore, Troels & Kristian

Goal
We want to be able to convert the bitmap photo into paths and hand them to the robot. The robot should then be able to follow these paths.
We need to finish our robot. Finally, we need to think about some graphical representation of the paths.

Plan   
1. Get the path planning algorithms sorted out, so that we have something to feed the robot.
2. Find a proper solution for the marker on the robot. That is, get it fixed firmly enough to draw, yet loose enough to lift up and down.
3. Figure out a way to graphically represent the lines that the robot has to follow.

Execution

1. We got our line algorithm to work. We feed the image to the program, it is transformed into a 2-dimensional bit array, and then it searches for lines as described in the previous lab report. The result of this is shown below, step by step. The left 2d array represents the initial image. In the second 2d array, the line of 1s to the left has been processed and removed. In the third 2d array, the line at the bottom has been processed and removed. Finally, the line in the middle is processed. These lines are then transferred to an array of such lines, which the robot will use to do the drawing.
As seen in the fourth 2d array below, we still have a slight error in the algorithm, since some corners are skipped: there are two 1s left around the middle of the array.
Code seen here (*1).



2. We completed the first version of our robot: Drawbot 1.0. Image below:
We finally got the pen mounted as close to the center between the wheels as we think possible. It is still a bit loose, but that is the cost of being able to raise and lower the pen. The pen is attached to a few Lego bricks and slides up and down between other bricks.
The motor seen in the top right of the picture is used to raise the pen. We plan on adding two ultrasonic sensors later.

3. We want people to be able to see how our point line algorithm works. When we run the algorithm, it creates a list of point lines. This list is handed to the robot so it has something to follow. We also want a graphical representation of this on the computer, so we create a canvas and draw the lines (as the robot will do in real life). People can then see on the screen the different lines the robot will follow. This is both a “gimmick” and a way to see how well the robot actually draws the image, compared to a computer drawing of it.
Code seen here (*2).

Status
We completed the first part of the robot; it is now able to draw, drive, and raise and lower the pen. The next step is to mount the ultrasonic sensors.
We also got the line algorithm to work, so we are now able to create the chains that the robot will follow.

References
(*1) http://troelskristiantore.blogspot.dk/2012/06/legolab-code-chaincalculator.html
(*2) http://troelskristiantore.blogspot.dk/2012/06/legolab-code-pcdraw.html