Thursday, September 30, 2010

NXT Programming, Lesson 5

Black White Detection

The sensor is calibrated with regard to a black and a white value, which are used afterwards to distinguish black from white.
Black is typically calibrated to a value of 39 and white to a value of 60 in the environment where we did the testing.
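
The calibration itself is simple; below is a minimal sketch of the idea, assuming the leJOS LightSensor class and an operator that places the sensor over each color before pressing ENTER (the class name and sensor port are our assumptions, not the exact lab code):

import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.LightSensor;
import lejos.nxt.SensorPort;

public class BlackWhiteCalibration {
    public static void main(String[] args) {
        LightSensor ls = new LightSensor(SensorPort.S3);

        LCD.drawString("Place on black", 0, 0);
        Button.ENTER.waitForPressAndRelease();
        int black = ls.readValue(); // e.g. 39 in our environment

        LCD.drawString("Place on white", 0, 1);
        Button.ENTER.waitForPressAndRelease();
        int white = ls.readValue(); // e.g. 60 in our environment

        // The midpoint between the two is used as the threshold afterwards.
        int threshold = (black + white) / 2;
        LCD.drawString("Threshold:", 0, 3);
        LCD.drawInt(threshold, 11, 3);
        Button.ESCAPE.waitForPressAndRelease();
    }
}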

Line Follower with Calibration

To make the car move more smoothly, we implemented proportional regulation of the speed of the two wheels. This means that we can distribute the power between the wheels instead of giving all the power to one wheel.


We utilize the fact, measured in the first test, that the sensor does not return only 39 or 60: it returns values from 39 to 60, and when it is exactly aligned with the edge of the line it returns a value around 50. So the car should go more to the left when the value is above 50 and more to the right when it is below 50, and the closer the value is to one of the outer values, the faster it should turn.

Our implementation:

int lightValue = sensor.light();
int error = lightValue - offset;
float turn = kp * error;
float powerA = tp + turn;
float powerC = tp - turn;
Car.forward((int)powerA, (int)powerC);
Thread.sleep(10);

We have implemented the pseudocode from [PID] and tweaked it to work with our robot.
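
For completeness, here is a sketch of how the snippet sits in the surrounding control loop. The constant values are assumptions, not our exact numbers: offset is the midpoint between the calibrated black (39) and white (60) values, while kp and tp have to be tuned by hand. sensor is the calibrated sensor from above, and Car is the motor helper class from the first lab session.

final int offset = 50;  // midpoint between black (39) and white (60)
final float kp = 10.0f; // proportional gain, tuned by hand (assumed value)
final int tp = 60;      // base power when driving straight (assumed value)

while (!Button.ESCAPE.isPressed()) {
    int lightValue = sensor.light();
    int error = lightValue - offset; // sign tells which side of the edge we are on
    float turn = kp * error;
    Car.forward((int)(tp + turn), (int)(tp - turn));
    Thread.sleep(10);
}
Car.stop();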

Video of the new car can be found here.

ColorSensor with Calibration

The scheme from the BlackWhiteDetection class is modified so that green now also has to be calibrated before the line follower starts running. However, it is important to notice that green sits right between black and white value-wise. This means that the same strategy as for black and white cannot be used here, since that strategy only checks whether a given light sensor value is above or below the black/white threshold.
The black and white strategy is still used, but it is combined with a new strategy for determining the color green. To detect green, the program is simply calibrated to the color green, and if a value equals the calibrated value, or is off by at most an interval x, then it is considered green.

public boolean green() {
    int interval = 1;
    int value = ls.readValue(); // read once so all comparisons use the same sample
    return Math.abs(value - greenLightValue) <= interval;
}

To test the program we simply modified the line follower robot from above to stop in a "green zone", as described in the next section.

Line Follower that stops in a Goal Zone

In many ways it is a simple task to make a robot stop in a green goal zone: since we have a function for determining what the color green looks like, it is simply a matter of stopping the robot when it detects this color.
However, as mentioned above, the green value is very close to both the black and the white value, which means the line follower risks stopping when it detects one of those colors instead. To solve this problem we added a counter, so the robot only halts after 40 detections of green in a row.

if (sensor.green()) {
    i++;
    if (i >= 40) { // 40 green readings in a row: we are in the goal zone
        return;
    }
} else {
    i = 0; // any other color resets the count
}
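
To show how the counter interacts with the follower, here is a sketch of the surrounding loop (variable names follow the snippets above; we break out of the loop here instead of returning from a method):

int i = 0; // consecutive green readings
while (!Button.ESCAPE.isPressed()) {
    if (sensor.green()) {
        i++;
        if (i >= 40) {
            break; // 40 green readings in a row: we are in the goal zone
        }
    } else {
        i = 0; // a single non-green reading resets the count
    }
    // ... the normal proportional line following goes here ...
    Thread.sleep(10);
}
Car.stop();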

NXT Programming, Lesson 4


Self-balancing two-wheeled robot

Duration of activity: 4 hours.




Participants: Nick Lauritsen, Jens Junge, and Allan Thomassen


The goal of this session was to build and program a two-wheeled robot that is able to balance.

To reach the goal we started out by building the robot that Brian Bagnall [Bagnall] proposes as a balancing two-wheeled robot, and we used his code example as well.
The first problem we encountered was that the code example uses the Motor class directly, which caused the JVM to throw an exception. Examples of code that didn't work: line 20, Motor.B.regulateSpeed(false); and line 61, Motor.B.setPower(power);

We handled this by using the Car class introduced in the very first lab session and calling that instead. This didn't work well at all. One of the reasons the robot didn't balance was, in our opinion, that the center of gravity was shifted a bit forward compared to the center of the robot. Therefore we extended the robot with an "arm" with weights on it. The robot still didn't balance at all.
The result is seen in the video as "The First Attempt". Here is a .mov edition, and here is a .m4v edition.

The next thing we considered was changing the balancing algorithm. The thought came from the fact that the robot didn't seem aggressive enough, and therefore tipped over instead of getting back to the starting position, or close to it. The change made to the code was to add extra power to the motors. Here is a snapshot of the code:

Original: power = 55 + ((power * 45) / 100); // NORMALIZE POWER
New:      power = 65 + ((power * 45) / 100); // NORMALIZE POWER
This attempt, which can be seen as "The Second Attempt" in the video, still didn't balance. The robot appeared more eager to go backwards than forwards. This could, for example, be caused by the fact that the light intensity measurement isn't a linear function but grows quadratically, so the robot's reactions differ and are more aggressive in one direction than in the other. To deal with this we experimented with easing the backward power (in the direction where the light sensor is placed), in the hope that the robot wouldn't tilt too far backwards. In the video this can be seen as "The Third Attempt". The robot actually balances for a while, a very short while, and then falls back to the unwanted behavior of overpowering the motors in the backwards direction.
After considering how we had handled the problems so far, we realized that we needed to teach the robot how to balance. By this we mean that the robot needs to be aware of its environment, just like humans, in order to balance for a longer period of time. Otherwise the robot will blindly trust the calibration point and try to reach that state (even if it is wrong), or react inappropriately if the surrounding light changes just a bit, as we saw in lab 1. It is of course not possible to make the robot balance if the light settings change fast, e.g. by going from a white surface to a black one; the robot simply cannot adapt to such a dramatic change.
The next, and last, phase of building a two-wheeled balancing robot was to make the robot evaluate the changes in light intensity so that it automatically adjusts the balancing point according to the light measurements, rather than relying on a precise calibration. This is done by counting how many times more the robot goes forward than backward (or vice versa). E.g. if the robot goes forward 14 times, then backward once, and then forward again, the balancing point (the light intensity value corresponding to this point) needs to be shifted forward. This makes the robot aim for a state that is more upright than the previous one. In theory this also makes the robot capable of adapting to small changes in the surface color or light settings.
The way we programmed this was to introduce a counter, called "i", that counts how many times more Car.forward() has been called than Car.backward() (or vice versa). Each time the loop runs, we check whether the offset needs to be adjusted. The indicator for a change is that the absolute value of "i" is greater than a predefined value, e.g. 100 (meaning that forward [or backward] has been called 100 more times than backward [or forward]). If this is true, we adjust the offset and reset the counter. The core of the control loop is sketched below.
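
In sketch form the loop looks like this. The threshold of 100 follows the description above; the adjustment step, the power computation, and the sign of the offset change (which depends on how the light sensor is mounted) are assumptions:

int i = 0;              // forward calls minus backward calls
final int limit = 100;  // from the description above
final int step = 1;     // how far the balance point is shifted (assumed)

while (!Button.ESCAPE.isPressed()) {
    int value = ls.readValue();
    // power is computed as in Bagnall's example (omitted here)
    if (value > offset) {                 // leaning one way: drive forward
        Car.forward(power, power);
        i++;
    } else {                              // leaning the other way: drive backward
        Car.backward(power, power);
        i--;
    }
    if (Math.abs(i) >= limit) {           // one direction dominates:
        offset += (i > 0) ? step : -step; // shift the balance point towards it
        i = 0;
    }
    Thread.sleep(10);
}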



The result can be seen as "The Last Attempt" in the video. Unfortunately it is nothing to brag about, but it is clear that this is a better path towards a well-balanced robot.

It is questionable whether it is possible to get the robot to balance with a light sensor setup like the one described above. A much better solution would be to use a gyro sensor, as it is not affected much by changes in light or the underlying surface. One thing we experimented with during the session was using the motors' tachometers instead of the light readings, but the values from the underlying framework were very odd, so we quickly discarded this way of solving the problem.





Thursday, September 16, 2010

NXT Programming, Lesson 3

Test of the Sound Sensor
We wanted to test the sound sensor at different distances and sound levels. We found that clapping was not a very good sound source: a clap is so short that a reading might not catch its maximum sound level.

A continuous sound made by a phone was better suited for this experiment. A small video of the experiment can be found here.


Pictures:


Allan with the phone.


The program used to make the sound.


The robot and the table where the distance is measured.


The somewhat soundproof room where we did our experiment.


The results from the experiment can be seen in the table below.

sound level \ distance    0 cm    40 cm    80 cm    120 cm    160 cm    200 cm
low                       36      7*       -        -         -         -
medium                    93      32       24       16        13        11
high                      93      64       46       30        36        19

*result might be influenced by background noise

The maximum sound level the sensor can measure is 90 dB [Lego], and the value the sensor returns is the percentage thereof; a reading of 64, for example, corresponds to roughly 58 dB.

At 0 cm, medium and high gave the same result, while low gave a significantly lower one. At 40 cm, high had dropped by about a third while medium had dropped by about two thirds. At 200 cm both medium and high still gave results above the background noise, but we decided to stop at this marker because the differences had become very small.

High dropped by roughly a quarter at every measurement marker, although the reading at 160 cm did not fit in with the rest. We suspect that our soundproof room was not that soundproof...

Data logger

The above table was generated using the given data logger; the highest dB value from each logged file was noted. Another way to look at this data is as a graph, e.g. the graph below of the data for "high, 40 cm":

The graph shows that the first couple of seconds of sound should be ignored, since this is the sound of the NXT itself starting up.
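
The data logger was handed out with the lesson; a minimal sketch of what such a logger can look like on leJOS is shown below. The file handling details and the sampling period are our assumptions, not the handed-out code:

import java.io.File;
import java.io.FileOutputStream;

import lejos.nxt.SensorPort;
import lejos.nxt.SoundSensor;

public class SoundLogger {
    public static void main(String[] args) throws Exception {
        SoundSensor sound = new SoundSensor(SensorPort.S2);
        FileOutputStream out = new FileOutputStream(new File("sound.log"));

        // Log one reading every 100 ms for roughly 30 seconds.
        for (int n = 0; n < 300; n++) {
            String line = sound.readValue() + "\n";
            for (int k = 0; k < line.length(); k++) {
                out.write((byte) line.charAt(k));
            }
            Thread.sleep(100);
        }
        out.close();
    }
}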

Sound Controlled Car
The sound controlled car program is given in the assignment. It is a simple program that reacts to claps by blocking until a clap (a loud sound) is registered. Because of this blocking, the escape button cannot simply be polled in the main loop: the robot would not react to a button push while blocked.
The solution is to do the polling in a separate thread, via an event listener:

Button.ESCAPE.addButtonListener(new ButtonListener() {
    public void buttonPressed(Button b) {
        isRunning = false;
        System.exit(0);
    }

    public void buttonReleased(Button b) {
    }
});

Clap Controlled Car

We modified the program to be able to register four types of commands, executed by clapping the right number of times with a predefined maximum amount of time between the claps:
  1. Drive forward
  2. Drive right
  3. Drive left
  4. Stop
The robot is built around our SoundCarListerner, which waits for a clap. When the first clap is registered, the method driveForward is called:


private void driveForward(long time) {
    LCD.drawString("driveForward", 0, 7);
    boolean right = waitForClapSoundTimed(time);
    if (right) {
        driveRight(getTime());
    } else {
        car.driveForward();
        try {
            waitForClapSound(); // block here until the next command
        } catch (Exception e) {
            // ignore: interrupted while waiting for a clap
        }
    }
}


The method waits for the timed method waitForClapSoundTimed(), which returns true if a clap is heard within 300 ms and false if no clap was heard. driveForward() will then either call driveRight(), if true was returned, or make the robot start driving forward. This recursive cycle moves between the methods: driveForward -> driveRight -> driveLeft -> stop.





The problem we had with this implementation was that there was no wait time between our sound.readValue() calls, which meant that a single sound would make the program jump all the way to stop. By making the thread sleep for 100 ms between readings, we can distinguish between separate claps. This also means, however, that the robot will not register faster claps.
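
For reference, here is a sketch of the two waiting methods as described above. The 300 ms window and the 100 ms sleep are the values from the text; the sound field and the clap threshold of 90 are assumptions:

// Blocks until a loud sound (a clap) is heard.
private void waitForClapSound() throws InterruptedException {
    while (sound.readValue() < 90) { // threshold is an assumed value
        Thread.sleep(100);           // so one clap is not read twice
    }
}

// Like waitForClapSound(), but returns false if no clap
// is heard within 300 ms of the given start time.
private boolean waitForClapSoundTimed(long time) {
    while (System.currentTimeMillis() - time < 300) {
        if (sound.readValue() > 90) {
            return true;
        }
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            // ignore and keep listening
        }
    }
    return false;
}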

Thursday, September 9, 2010

NXT Programming, Lesson 2


Duration of activity: 3 hours. Participants: Nick, Allan and Jens Chr.
The goal of this session is to explore how the Ultrasonic sensor works. This will be done by first measuring distances to a static point, e.g. a wall, to see how accurate the sensor is. After this experiment we will make a robot that is able to follow a wall.

The distance experiment (Ultrasonic sensor)
To set up this experiment we put a large ruler in front of a table that was tipped on its side. This way we were able to measure the distance to the table by placing the robot, or rather the ultrasonic sensor, at certain distances from it.
The first set of measurements was made 15 cm apart, i.e. at 15 cm, 30 cm, 45 cm, etc. from the table. The result of these measurements can be seen on the graph as the red line. Almost all readings from this set were 1-3 cm off, which at first seemed fair*. But by looking at the experiment setup, we realized that the table surface wasn't totally vertical but tilting a bit. The implication was that the table wasn't orthogonal to the floor but at an angle of 90+ degrees (seen from the robot's perspective), and hence the distance the ruler showed might not be correct.
This error was corrected by raising the bottom of the table legs, and then we measured the distances once more. This can be seen as the yellow line in the graph. The readings were quite accurate between 0 and 240 cm from the table. Beyond 240 cm the sensor gave readings that were a bit shorter than the actual, physical distance.
The theoretical maximum distance that can be measured is 254 cm, as the value 255 should be interpreted as 255+ cm. The physical distance from the table was 257 cm when the sensor output the value 254. We tried using a different surface, a pillow instead of imitation wood, but without luck: the readings were either the same as before or, for some bizarre reason, sometimes 255 even though the sensor was within the limit of 254 cm (or in this environment 257 cm!).
According to the documentation, the ultrasonic sensor waits approximately 20 milliseconds in continuous mode for the sound wave to return. Taking the speed of sound (about 340 m/s) into account, the wave can travel roughly 6.8 meters in that time; since it has to travel to the object and back, this gives the sensor a theoretical reach of approximately 3.4 meters.
* The documentation for the ultrasonic sensor states that the accuracy of the sensor is +/- 3 cm [Lego].
The wall following robot
To build a wall following robot we used the standard Lego robot/car that we used for following a black line on the floor, but instead of the light sensor we placed an ultrasonic sensor on the side of the robot/car, orthogonal to the driving direction. The main idea of this placement of the sensor was to detect whether the robot is getting too close to, or too far from, the wall.
The software is designed after a sequential principle, like the Tracker Beam project given in the exercise.
For the program to work, the robot must have the wall on its right side relative to the driving direction. At specific intervals, the robot then measures the distance to the wall. If the distance is too small it lowers the speed of the motor driving the left wheel, making the robot turn left and hence away from the wall, and vice versa if the robot is too far from the wall.
The above program was too aggressive: the robot sometimes turned too much and drove into the wall. To remedy this we made the turns less aggressive, so they became smoother. Besides this, we also introduced a clause that keeps the robot from turning at all when the distance to the wall is close to the desired distance. This setup gave a slightly faster and much more gracefully moving robot.
import lejos.nxt.Button;
import lejos.nxt.LCD;
import lejos.nxt.SensorPort;
import lejos.nxt.UltrasonicSensor;

public class FollowWall
{
    public static void main(String[] aArg) throws Exception
    {
        UltrasonicSensor us = new UltrasonicSensor(SensorPort.S1);
        final int noObject = 255;
        int distance,
            desiredDistance = 25, // cm
            power = 80,
            minPower = 70;
        float error, gain = 0.5f; // (unused)
        LCD.drawString("Distance: ", 0, 1);
        LCD.drawString("Power: ", 0, 2);
        while (!Button.ESCAPE.isPressed())
        {
            distance = us.getDistance();
            if (distance != noObject)
            {
                error = distance - desiredDistance;
                // Near the desired distance: drive straight. (The threshold
                // was garbled in the blog formatting; 2 cm is an assumption.)
                if (Math.abs(error) < 2)
                {
                    Car.forward(power, power);
                }
                else if (error > 0) // right - too far from the wall
                {
                    Car.forward(power, minPower);
                    LCD.drawString("right ", 0, 3);
                }
                else // left - too close to the wall
                {
                    Car.forward(minPower, power);
                    LCD.drawString("left ", 0, 3);
                }
                LCD.drawInt(distance, 4, 10, 1);
                LCD.drawInt(power, 4, 10, 2);
            }
            else
                Car.forward(power, 0); // no wall in sight: turn around own axis
            Thread.sleep(100);
        }
        Car.stop();
        LCD.clear();
        LCD.drawString("Program stopped", 0, 0);
        Thread.sleep(2000);
    }
}





Thursday, September 2, 2010

NXT Programming, Lesson 1

Duration of activity: 3 hours. Participants: Nick, Allan and Jens Chr.


The goal of this session was to get the leJOS NXT framework to work on our computer, and build our first robot.


We got the framework working on a single computer (maybe two...).


We followed the description of building the robot from the manual and installed the first example program. The program made the robot follow the outer edge of a black line with curves, and it was able to turn around at the end of the line and follow the line back to its origin.


With the standard program:
The program is sequentially controlled [1], since it follows a procedural series of steps, although there is only one step.


We tried with a black line on a white surface, where the robot worked perfectly, and with a black line on a dark blue surface, where the program did not work: the robot just turned around on itself and never managed to get started anywhere.


With the light sensor's refresh rate at 10, 100 and 500 ms:
We did not see any significant changes in the way the robot responded to the change in color. We expected the 10 ms rate to make a difference and therefore suspected that the program was not properly loaded, so we changed the refresh rate to 5000 ms and saw the robot turn in circles over the black line until a refresh happened to coincide with the black line and the robot changed direction.


Light sensor:
We needed the percentages returned by the sensor in different light conditions. We tried with the lights on, as it normally would be; with the lights turned off and only light from the surroundings; and thirdly with the lights off and light from a Mac screen. The results were the following:

Light condition \ Color       Red    Blue   Green   White   Black   Yellow
Lights on                     62%    50%    51%     64%     45%     63%
Lights off                    44%    35%    42%     45%     26%     45%
Lights off, light from Mac    44%    38%    42%     45%     29%     43%


With the lights on, the sensor was able to distinguish between three levels of input: one level is black; the second is blue and green; and the third is red, white and yellow.


With the lights off, the number of levels stayed the same, but the levels themselves changed: level one is black, level two is blue, and the rest fall in the third level. Although this seems somewhat promising, as soon as the surroundings changed even slightly (someone walking by, etc.) the readings changed, so a program would not be able to distinguish the colors, even if the differences stayed the same.

Memory usage
When printing the free memory during execution of the program, it can be seen that memory falls constantly when text strings are used directly in method calls. This suggests that it is not good practice to insert string literals directly into method calls, since doing so results in a form of "memory leak". Notice that memory leaks are particularly undesirable in an embedded system, where the amount of available memory is typically small.
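
As an illustration of the difference, assuming that leJOS allocates a new String object each time a literal is evaluated (which matches the falling free-memory readings we saw), hoisting the literal out of the loop avoids the repeated allocation:

// Allocates a new String on every iteration; free memory keeps falling:
while (running) {
    LCD.drawString("Distance: ", 0, 1);
}

// Creating the string once and reusing it avoids the leak:
String label = "Distance: ";
while (running) {
    LCD.drawString(label, 0, 1);
}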