Thursday, 23 February 2012

legolab 4 - code

import lejos.nxt.*;

public class PIDFollower
{
    public static void main(String[] aArg) throws Exception
    {
        // Proportional gain and target power (base speed), found by testing.
        final int kp = 1;
        final int tp = 68;
        BlackWhiteSensor sensor = new BlackWhiteSensor(SensorPort.S1);

        // Calibrate on black and white; the midpoint is the edge we follow.
        sensor.calibrate();
        int offset = sensor.getThreshold();

        LCD.clear();
        LCD.drawString("Light: ", 0, 2);

        while (! Button.ESCAPE.isPressed())
        {
            // Turn proportionally to the deviation from the calibrated threshold.
            int lightValue = sensor.light();
            int error = lightValue - offset;
            int turn = kp * error;
            Car.forward(tp + turn, tp - turn);
        }

        Car.stop();
        LCD.clear();
        LCD.drawString("Program stopped", 0, 0);
        LCD.refresh();
    }
}












import lejos.nxt.*;

public class BWSensorTest {
    public static void main(String[] args) throws Exception
    {
        ColorSensor s = new ColorSensor(SensorPort.S1);
        s.calibrate();

        while (! Button.ESCAPE.isPressed())
        {
            int lightValue = s.light();
            LCD.drawInt(lightValue, 3, 7, 0);
            if (s.black()) {
                LCD.clear();
                LCD.drawInt(lightValue, 3, 7, 0);
                LCD.drawString("black", 0, 0);
                LCD.refresh();
            }
            if (s.white()) {
                LCD.clear();
                LCD.drawInt(lightValue, 3, 7, 0);
                LCD.drawString("white", 0, 0);
                LCD.refresh();
            }
            if (s.green()) {
                LCD.clear();
                LCD.drawInt(lightValue, 3, 7, 0);
                LCD.drawString("green", 0, 0);
                LCD.refresh();
            }
            Thread.sleep(5);
        }
        LCD.clear();
        LCD.drawString("Program stopped", 0, 0);
        Thread.sleep(2000);
    }
}

legolab 4 - video






Legolab 4

Legolab 4
Date: 23-02-12
Duration: 3 hours
Group members attending: Tore, Troels

Goal
Today's goal was to complete this week's lab exercises, which were about using the light sensor to control a car. This was accomplished by following the plan below.

Plan
1. Use the test program (*1) to test the black/white light sensor
2. Test the line follower with calibration program (*2)
3. Make a color-sensitive program based on (*2) that can detect three colors: black, blue and white
4. Use the above experience to make a program that has the car follow a black line and stop in a blue end zone
5. Make a PID line follower


Execution
1. We wrote this (*3) program to test the BlackWhiteSensor program. The sensor gives a black value of around 35, while white is around 60. We also tested the blue color, which gave a value of around 45. Based on this, it should be easy to make a program that recognizes three colors. By calibrating the colors before testing, we take the light conditions in the room into account, which in the end will make for a better line-follower program.

2. We tested the line follower with calibration program (*2). On the straight line it did well; however, on the curved line it was not always able to follow. It seemed like the program slept for too long between corrections. We could also change the amount of power to the motors to make it go a little slower or turn less.

3. We copied the program we wrote for part 1, but changed it so we had two thresholds instead of only the blackWhiteThreshold:
    blackBlueThreshold = (black value + blue value) / 2
and
    blueWhiteThreshold = (blue value + white value) / 2
Now we could use these two thresholds to distinguish the three colors in the same manner as with only black and white (in the code the blue color goes by the name "green"):

public boolean black() {
    return (ls.readValue() < blackGreenThreshold);
}

public boolean white() {
    return (ls.readValue() > greenWhiteThreshold);
}

public boolean green() {
    return (ls.readValue() > blackGreenThreshold && ls.readValue() < greenWhiteThreshold);
}
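The calibration side of this change can be sketched as a small stand-alone class (hypothetical; in our real sensor class the three values come from light-sensor readings during calibration, and as in the methods above blue is named "green"):

```java
public class ThreeColorThresholds {
    int blackGreenThreshold;
    int greenWhiteThreshold;

    // Compute the two thresholds as midpoints between adjacent colors.
    // Note: "green" is the name used in our code for the blue color.
    void calibrate(int blackValue, int greenValue, int whiteValue) {
        blackGreenThreshold = (blackValue + greenValue) / 2;
        greenWhiteThreshold = (greenValue + whiteValue) / 2;
    }

    public static void main(String[] args) {
        ThreeColorThresholds t = new ThreeColorThresholds();
        // Readings from our tests: black ~35, blue ~45, white ~60
        t.calibrate(35, 45, 60);
        System.out.println(t.blackGreenThreshold + " " + t.greenWhiteThreshold); // 40 52
    }
}
```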


4. We used the program from part 2 and added a stop condition to make it stop in the end zone. The control while loop now looks like this:
if (sensor.black()) {
    Car.forward(power, 0);
} else {
    Car.forward(0, power);
}
if (sensor.green() && sideSensor.green()) {
    Car.stop();
}
We also changed the power to 60. It didn't work: a power of 60 was almost too low to make the car move, and the thresholds did not seem to work correctly. After a few tests we found out that at the transition between the black tape and the white background, the sensor would sometimes measure a mean value, which would fall in the blue interval and make the car stop.
Our solution was to use the extra light sensor we were given. We used the middle sensor to follow the line, i.e. test for black and white, and the side sensor to test for the blue color and make the car stop. The extra sensor was mounted like this:




5. The next step was to use a PID controller to make a program that uses one sensor but would still stop halfway through. We basically implemented the pseudocode given in the article (*4) and wrote this control loop:

final int kp = 1;
final int tp = 68;
int offset = sensor.getThreshold(); // The black/white threshold from calibration

while (! Button.ESCAPE.isPressed())
{
    int lightValue = sensor.light();
    int error = lightValue - offset;
    int turn = kp * error;
    Car.forward(tp + turn, tp - turn);
}

kp is the proportional gain controlling how hard the car turns, and tp is the target power, i.e. the basic speed.

Test 1: On the first try we realized our kp value of 10 was much too high; the car would just oscillate in place without moving forward. We then changed kp to 2.

Test 2: The tp value of 50 was too low; the car was not able to drive forward with a power of only 50 to both wheels. We changed it to 75.

Test 3: A tp of 75 was too much and the car turned too much, so we lowered tp as well as the sleep time, from 10 to 5 ms, making the car react faster.

Test 4: The turns were still too large, so we lowered kp and tp again.

Test X: We have settled on kp = 1, tp = 65 and no sleep time. If we increase tp just a little, the car becomes less reliable because it does not turn fast enough at the sharp turns; at the same time it also has trouble driving straight. We think that instead of having the turn follow a linear function of the error, one could use a quadratic one, still making small corrections when driving straight while also being able to make sharp turns.
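The quadratic idea can be sketched as a small helper (hypothetical; the scale constant would need tuning on the robot). Small errors give tiny corrections while large errors give disproportionately sharp turns:

```java
public class QuadraticTurn {
    // Signed quadratic: keeps the sign of the error but scales
    // the magnitude by |error|, so the response curve bends upward.
    static int computeTurn(int error, int scale) {
        return (error * Math.abs(error)) / scale;
    }

    public static void main(String[] args) {
        // With scale = 10: error 3 -> turn 0 (int division), error 15 -> turn 22
        for (int e : new int[] {3, 10, 15, 25}) {
            System.out.println("error " + e + " -> turn " + computeTurn(e, 10));
        }
    }
}
```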



Status
We almost completed today's exercises. We tested the light sensor with the given test programs combined with our own program (*3), and got the car to follow a line and stop in the blue end zone. This was a bit complicated, and we did not know how to do it with only one black/white light sensor, but it was solved with two sensors.
The PID program worked rather well, although we actually only implemented the "P" in PID; the "I" and "D" will have to wait for another day.
A quick thought about our PID implementation: when the robot drives straight, it slows down to a speed of 65. When it turns, one of the wheels increases speed and the robot drives faster. If we add something like 1/error to the final speed, this could increase the speed while driving straight but decrease it when turning, so that the robot would still be able to follow the line. Alternatively, we could increase tp to maybe 75 and then slow the wheels down to turn instead of speeding them up.
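That speed idea can be sketched as a small helper (hypothetical names and constants, untested on the robot): keep a higher base power and subtract a term that grows with the error, clamped to a minimum, so the car slows into turns instead of speeding up:

```java
public class AdaptiveSpeed {
    // Hypothetical: base power minus a penalty proportional to |error|,
    // clamped so the wheels always get some minimum power.
    static int basePower(int error, int tp, int slowdown, int minPower) {
        int p = tp - slowdown * Math.abs(error);
        return Math.max(p, minPower);
    }

    public static void main(String[] args) {
        // Driving straight (error 0) uses the full tp = 75;
        // larger errors reduce the base speed before the turn term is added.
        System.out.println(basePower(0, 75, 2, 40));   // 75
        System.out.println(basePower(10, 75, 2, 40));  // 55
        System.out.println(basePower(30, 75, 2, 40));  // clamped to 40
    }
}
```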
We also thought about other ways to control the robot besides PID, for example using a quadratic function instead of a linear one, as described earlier. This, however, is a project for another time.


Videos can be seen here (*5). The first shows the PID car following the curved line. In the second video the car follows the line on the landscape table. In the latter video we see our point about the car not turning sharply enough: it almost drives out of the first turn and does not complete the second.
The code for our two programs, PIDfollower and the sensortest program, can be seen here (*3).
(*3) and (*5) are actually just other posts on the blog.


References
(*1) http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson4.dir/BlackWhiteSensor.java
(*2) http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson4.dir/LineFollowerCal.java
(*3) http://troelskristiantore.blogspot.com/2012/02/legolab-4-code.html
(*4) http://www.inpharmix.com/jps/PID_Controller_For_Lego_Mindstorms_Robots.html
(*5) http://troelskristiantore.blogspot.com/2012/02/blog-post.html

Thursday, 16 February 2012

Legolab 3 - soundsensor

Legolab 3

Date:16-02-12
Duration: 3 hours
Group members participating: Tore, Kristian, Troels


Goal
- Mount the sound sensor.
- Test the sound sensor.
- Use the sound sensor to control the robot.


Plan
1. Mount the sound sensor.
2. Compile and upload a rewritten version of the SonicSensor (*2) program to test the sound sensor.
3. Run the Data Logger program (*3) and analyse the output from the sensor and sample.txt
4. Run the Sound Controlled Car program (*4).
5. Examine the sound graph of a clap


Results
1. The sound sensor was mounted according to the book (*1), as seen in the picture below. We also have the light sensor and ultrasonic sensor mounted.


2. 
We rewrote the SonicSensor program slightly, so it used a sound sensor instead:

      LCD.drawString("Sound level ", 0, 0);
      while (! Button.ESCAPE.isPressed())
      {
          LCD.drawInt(sound.readValue(), 3, 13, 0);
          Thread.sleep(300);
      }

The readings are on a percentage scale. Loud sounds close to the device give a high percentage, close to 100, while quieter sounds give lower values. The reading seems to be very dependent on the direction of the sensor relative to the sound source: if the sound originates in a cone in front of the microphone, it registers as louder than sounds coming from the sides or from behind the machine.

3. 
We modified the code of the program so the output data was written one sample per line. This was done simply by changing the value:

private int itemsPerLine = 1;

The reason for this was to make it easier to analyse later on. We converted the .txt file to a .csv file, loaded it in Excel and used it to create the graph below. The graph shows our experiments with sound: the different sound sources we used and the different positions of the source. The closer the source was to the microphone, the bigger the spike on the graph, and the further away, the smaller. Likewise, the spike was smaller if the source was to the side of or behind the microphone, which means the sensor has a cone-like reading area in front of it where it works best. We made the same sound in different places relative to the sensor, and the spikes on the graph correspond to these sounds. The spike at ~800 is right behind the sensor. The next spike is 10 cm in front of the sensor, and the two spikes at 2097 and 2359 are at the same distance, moved 10 cm to the left and to the right. The three spikes at 3145, 3407 and 3669 are at a distance of 50 cm. The last three spikes at 4193, 4455 and 4800 are at 100 cm: front, left and right. We see that each time we increase the distance, the fluctuation decreases. It appears we did not move far enough to either side to really see the directional effect.




4.
We ran the sound controlled car program (*4) on the NXT. The car reacts to loud sounds, which is reflected in the following line:

private static int soundThreshold = 90;

If the sound level exceeds 90, the program reacts and takes the next order in line. It starts by moving forward, then turns right, then left, and finally stops, in that order.
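The order-taking logic can be modeled as a small state machine, separate from the motor calls (a hypothetical sketch; the real SoundCtrCar program of course reads the sensor and drives the motors in each state):

```java
public class ClapStates {
    static final String[] STATES = {"idle", "forward", "right", "left", "stopped"};

    // Advance to the next state when the sound level exceeds the threshold.
    static int step(int state, int soundLevel, int threshold) {
        if (soundLevel > threshold && state < STATES.length - 1) {
            return state + 1;
        }
        return state;
    }

    public static void main(String[] args) {
        int s = 0;
        // Four readings above the threshold of 90 walk through all the states.
        for (int level : new int[] {40, 95, 92, 30, 99, 94}) {
            s = step(s, level, 90);
        }
        System.out.println(STATES[s]); // "stopped"
    }
}
```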
To stop the car we had to hold down the escape button while clapping at it to make it go through the different states; only after the last state was it possible to stop the program. To change this we inserted a ButtonListener in the main method:

Button.ESCAPE.addButtonListener(new ButtonListener() {
    public void buttonPressed(Button b) {}

    public void buttonReleased(Button b) {
        System.exit(0);
    }
});



This made it possible to stop the program by pressing and releasing the escape button.


5. 
The SoundSampling program takes a sample every 5 ms. The graph below shows our clap.





We see the attack of the clap start at around 19 and within 25 ms it has reached a high amplitude of 85-95. After 175 ms the amplitude starts to decay, returning to low amplitude after ~375 ms. Our clap is a little longer than Sivan Toledo's, but otherwise the theory holds.
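That attack/decay profile suggests what a simple clap filter could key on (a hypothetical sketch over 5 ms samples, with thresholds read off our graph): a jump from low to high amplitude within roughly 25 ms, which steady loud noise never produces.

```java
public class ClapDetector {
    // A clap: a sample at or above the high threshold preceded, within
    // the last 5 samples (~25 ms at 5 ms/sample), by a low-amplitude one.
    static boolean isClap(int[] samples, int low, int high) {
        for (int i = 0; i < samples.length; i++) {
            if (samples[i] >= high) {
                for (int j = Math.max(0, i - 5); j < i; j++) {
                    if (samples[j] <= low) return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] clap = {18, 19, 22, 60, 88, 92, 70, 40, 20};
        int[] steadyNoise = {80, 82, 85, 86, 88, 87, 85};
        System.out.println(isClap(clap, 25, 85));        // true
        System.out.println(isClap(steadyNoise, 25, 85)); // false
    }
}
```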



Status
The sound sensor works as one would expect, with high numbers for high-amplitude sounds, although it seems to cut off at a value of 93, as seen in the first graph and more clearly in the clap sample graph in the interval [55,61].
A next project could be creating an algorithm to filter claps from other loud sounds. After this one could, as suggested in this week's lab exercises, mount two sound sensors and make the robot drive in the direction of a clap.


References
*1  LEGO Mindstorms education
*2 http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson2.dir/SonicSensorTest.java
*3 http://legolab.cs.au.dk/DigitalControl.dir/NXT/src/DataLogger.java
*4 http://legolab.cs.au.dk/DigitalControl.dir/NXT/Lesson3.dir/SoundCtrCar.java

Tuesday, 14 February 2012

week 2, part 2

Legolab week 2, part 2, 14-02-12

Since we’re a bit behind on the lab sessions, I decided to take a look at what we’re missing. Duration of ~3 hours, attended by Troels.

Plan
I just wanted to play around with the ultrasonic sensor, getting a feel of how it worked, so my plan for today was something like:
1. mounting the ultrasonic sensor on the robot according to the book.
2. getting the simple test program transferred.
3. doing tests with the sensor.


Execution
1. This was easy enough, although I did not have the right parts, and no access to the spare parts room.

2. I'm just glad I remembered to write down in detail last time how I used the command prompt to compile and transfer a program to the NXT, so this time it was rather painless.

3. I had no instrument to measure the real distance to the object I tested the program on, but a normal piece of A4 paper is ~297 mm long, so I used that instead.
When I then tried to run the program, I got an error:

Exception: 134
FExec...
 something
 something
 ...


I tried to google it but found no solution. It seemed to be a Java error rather than a leJOS error, though that was not much help.
In the end it was fixed by flashing the NXT, and transferring the program again.

Now back to the tests.
First I tested against the wooden panels dividing the legolab room. One could think that some materials would reflect the sound better while others might absorb it. This, however, did not seem to be the case, as I tried it against the wooden panels, the windows, the door to the spare parts room and the blue fabric-covered noticeboards around the lab. I did get small errors of about 2 cm when the object was closer than 15 cm, but this might just be due to the imprecision of my paper measure. Other minor errors might just as well be because I did not place the sensor at exactly the correct real-world distance.
The results:

Real distance (cm) | NXT ultrasonic distance (cm)
15 | 18
30 | 30
45 | 45
60 | 59
75 | 74
90 | 89


This may not have been the most precise way to do it; preferably I would have had a tape measure or a ruler. Still, the distance measured with the paper was most of the time consistent with the distance measured by the ultrasonic sensor.

Status
I think it went rather well today. The test was a bit imprecise, but I still think I learned from it, and the sensor seems to be reliable. Next lab session, we should be ready to complete (hopefully rather quickly) the last part of this lab exercise.

week 2

legolab 2, 09-02-12
Thursday, a duration of 3 hours attended by the entire group.

The plan
1. The plan for today is getting Kristian and Tore up to speed, that is, installing leJOS on their laptops.
2. When that works, we will proceed with last week's lab exercises, hopefully completing those and then moving on to this week's. This means getting the car to follow the line,
3. testing the light sensor on different colors, and
4. testing different sleep times.

The execution of the plan
1. Just like last week we ran into trouble, this time installing the Eclipse plugin. Tore installed the plugin through Eclipse, which didn't work for Troels and Kristian; Eclipse could not contact the plugin server. Kristian then downloaded Pulse from http://www.poweredbypulse.com/download_win.php and could install the plugin that way. It still didn't work for Troels.

2. That meant first building the car, which actually took longer than expected. Tore then uploaded the line follower program to the NXT, and we got it to follow the edge of a white paper on a blue floor.

3. Color values:
We tested the sensor in the lab's ambient light, about 3-4 cm above the subject. We observed that the percentage rises when the distance increases. Also, different colors seem to have almost the same light percentage, which makes it hard to distinguish them from one another.

White | 54%
Green | 43%
Blue | 44%
Yellow | 53%
Red | 51%
Black | 35%


To get the blackWhiteThreshold you can just take the average of the black and white percentages; in our case (35 + 54) / 2 = 44.5.

4. Sleep cycles.
The default sleep time is 100 ms; that is, every 100 ms the car takes a measurement and adjusts its direction accordingly. If we decrease the sleep time to, say, 50 ms, the car will take measurements more often, thus correcting its direction more often, and it will oscillate faster. On the other hand, increasing the time will make it take larger turns and decrease the car's ability to follow the line.

Status
It is a bit tiresome that we keep having trouble just installing the system, although it now works on Tore's computer, so we might just use his for the rest of the course. Still, it was nice to finally get the robot working.