MAE 5190 - Fast Robots - Fall 2024
The course focus is on systems level design and implementation of dynamic autonomous robots. With the recent DIY movement, design of kinematic robots is largely becoming a software challenge. In dynamic robots, however, any latency or noise can be detrimental. We will design a fast autonomous car, explore dynamic behaviors, acting forces, sensors, and reactive control on an embedded processor, as well as the benefit of partial off-board computation. Students will learn how to derive design specifications from abstract problem descriptions and gain familiarity with rapid prototyping techniques, system debugging, system evaluation, and online dissemination of work.
In this lab, I set up my board and ran through a few exercises to become familiar with programming it from the Arduino IDE. In the second part of the lab, I established communication between my computer and the Artemis over Bluetooth and set up a framework for sending data to and from a Jupyter notebook.
In this example, I ran premade Arduino code to verify the board works and I can flash code to it from Arduino. The code blinks the LED on and off.
The code from this example prints a message to the Serial line.
The code from this example reads the value from the temperature sensor and prints it to the Serial line.
The code from this example computes the loudest frequency picked up by the microphone and prints it to the Serial line.
I wrote code to turn on the onboard LED when the microphone picks up a musical "A" at 880Hz.
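Onboard, this builds on the PDM example's FFT loop; here is the same peak-bin logic as a minimal Python sketch, with the 15 Hz tolerance being my own illustrative choice.

```python
import numpy as np

def loudest_frequency(samples, sample_rate):
    """Return the dominant frequency (Hz) in a block of audio samples."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

def is_note_a5(samples, sample_rate, tolerance_hz=15.0):
    """True if the loudest frequency is within tolerance of 880 Hz."""
    return abs(loudest_frequency(samples, sample_rate) - 880.0) < tolerance_hz
```

On the Artemis, the result of this check simply drives the LED pin high or low each time a new audio block is processed.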
For the second part of this lab, I had to update the code with the MAC address of my Artemis. Below is a screenshot of the Artemis outputting its MAC address.
The code block below sends an ECHO command to the Artemis, which then sends an augmented string back.
These functions receive strings from the Artemis board. One handler processes time stamps and the other processes time stamps and temperature readings.
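Here is a minimal sketch of the two handlers, assuming replies formatted like "T:&lt;ms&gt;" and "T:&lt;ms&gt;|Temp:&lt;degC&gt;" (the exact delimiters are illustrative); they get registered on the string characteristic via the BLE stencil's start_notify.

```python
times, temps = [], []

def time_handler(uuid, byte_array):
    msg = byte_array.decode()                  # e.g. "T:105823"
    times.append(int(msg.split(":")[1]))

def time_temp_handler(uuid, byte_array):
    msg = byte_array.decode()                  # e.g. "T:105823|Temp:23.5"
    t_str, temp_str = msg.split("|")
    times.append(int(t_str.split(":")[1]))
    temps.append(float(temp_str.split(":")[1]))
```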
The code block below gets the current time in milliseconds from the Artemis and prints it.
I then wrote code to continuously run the GET_TIME_MILLIS command to determine how fast messages could be sent. Ultimately, it seems the fastest this approach can reliably deliver data is about one message every 60ms.
This code then takes a different approach: it batches time stamps from multiple GET_TIME_MILLIS calls into an array, and the SEND_TIME_DATA command then walks through the array and sends each data point as a string. This method records around 15 time stamps per millisecond.
This code uses the same batch approach to get temperature readings from the Artemis, recording around 8 readings per millisecond.
Sending each update immediately gives the computer new information only about every 60ms, far slower than the second method, which records data around 15 times per millisecond. The first method is better when we need to react to new data as soon as possible, and worse if the Artemis can't record data from other sensors while it is busy sending. The second method is better when we want to collect as much data as possible and don't need to process it on the computer right away. Assuming the Artemis has 384kB of RAM and we store 150-byte values, we can hold about 384,000 / 150 = 2,560 values before running out of memory.
To test the data rate when using different sized replies, I wrote code to send replies of lengths between 5 and 120 bytes. I then timed how long it took for the computer to receive a reply back from the Artemis, and then divided the number of bytes sent by the time it took to get a response. I then plotted out the data rate as a function of the number of bytes sent.
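A sketch of that timing harness, assuming the course BLE stencil's send_command / receive_string API; CMD.SEND_N_BYTES is a hypothetical name for my command that replies with a payload of the requested length.

```python
import time

sizes = list(range(5, 121, 5))
rates = []
for n in sizes:
    start = time.time()
    ble.send_command(CMD.SEND_N_BYTES, str(n))
    reply = ble.receive_string(ble.uuid['RX_STRING'])
    rates.append(len(reply) / (time.time() - start))   # bytes per second
```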
When larger packets are sent from the robot to the computer, the round trip takes slightly longer than with smaller packets. The computer was able to read all the data published, though it took noticeably longer at four points near the top of the range, around 82, 90, 100, and 112 bytes.
Set up IMU
AD0_VAL is the value of the last bit of the IMU's I2C address. It defaults to 1, but I had to change it to 0 because the ADR jumper on my board is closed. As I moved the board around, the accelerometer and gyroscope both output noisy signals; when the board was held still against a surface, the noise decreased greatly. I taped the IMU to a box in order to execute 90 degree rotations in both the roll and pitch directions, and found that the accelerometer was surprisingly accurate, to within about ±2 degrees.
Wire and test TOFs off the robot.
I soldered the TOFs to the QWIIC wires, and connected them to my Artemis.
After soldering the TOF sensor to the Artemis, I ran a quick piece of I2C scanner code to check the address of the sensor. While the datasheet lists an address of 0x52, the scan returned 0x29. These two hexadecimal numbers are bit-shifted versions of each other: I2C addresses are 7 bits, and the datasheet's 0x52 is the 8-bit form with the read/write bit appended as the LSB, so dropping that bit gives 0x29. The discrepancy is therefore nothing to worry about.
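A quick sanity check of that shift:

```python
>>> hex(0x52 >> 1)   # drop the R/W bit from the datasheet's 8-bit address
'0x29'
```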
I then set about testing a distance mode of my choice. The TOF sensors have a short, medium, and long distance mode, rated to 1.2m, 3m, and 4m respectively. The short mode provides the best fidelity, whereas the long mode provides the most range at the cost of resolution and accuracy; the medium mode is a middle ground balancing these pros and cons. I went with the short mode, since the robot should ideally react within 1m and benefits from accurate depth measurements when attempting to localize.
I tested the short mode by taking 50 TOF measurements at 100mm increments up to 1200mm. The plots below show the sensor value compared to the actual distance, the error between the two as a function of actual distance, and the ranging time in ms as a function of target distance. I found that the TOF was quite consistent up until 900mm, which is a bit disappointing, since the short mode should be accurate up to 1200mm.
Since we will be using two TOF sensors on the robot, I connected another one to the Artemis, this time adding an extra wire connecting the XSHUT pin on the TOF to pin 1 on the Artemis. This will allow us to drive the pin low to turn off the sensor while we change the address on the other TOF. Since the TOFs store their addresses in volatile memory, we will do this every time we boot up the Artemis. Below is the Serial output of the two sensors reading different values.
As implied by the name of the course, it is important that we obtain sensor values quickly without hanging our code as it waits for the sensors to finish a measurement. I wrote code to output the clock upon every loop execution, and took advantage of the checkForDataReady() routine on the TOFs to return TOF values as fast as possible. As we can see in the picture, our TOFs take around 100ms to return a value, which corresponds closely to the ranging times as found in the previous test.
I then wrote code to record timestamped TOF data and send said data to the computer over Bluetooth. Here are the two TOF sensors measuring an object from different depths, alongside each measurement's time stamp.
One other infrared-based distance sensor I found was the Sharp GP2Y0A41SK0F, which uses triangulation: it measures the angle of the reflected light to calculate the distance to the target object. It is compact and inexpensive, but it uses an analog output and is susceptible to ambient light interference. Another is the Sharp GP2Y0E03, which emits modulated infrared light and measures the intensity of the reflection to determine distance. While less affected by target surface properties, the GP2Y0E03 suffers from limited range and lower accuracy.
The TOF sensors we use are definitely sensitive to different textures, especially extremely reflective or transparent materials. Textured materials that scatter light in many directions make it difficult for the sensor to catch the return of the light pulse as it bounces off the surface. Color is less of an issue, but I found that the sensor works better and more consistently on lighter surfaces than on darker ones.
Wire motor drivers to robot and perform calibration and open loop control.
Here is a diagram of the circuit I constructed to integrate the motor drivers into my robot. I chose pins 5, 6, 11, and 12 since they all have PWM functionality and sit on opposite sides of the board, allowing for easier soldering and cleaner wire routing. The Artemis and motor drivers are powered by separate batteries, so a large current draw from the motors can't brown out and reset the Artemis.
After soldering the motor to the motor driver and to the Artemis, I hooked it all up to the oscilloscope and power supply. To model being connected to the 850mAh battery, I set the power supply to 3.7V, and ran quick test code to send a PWM signal to the motor driver using analogWrite. Here is the code snippet, setup of the connections, oscilloscope output, as well as a video of the motor running forward and backward.
After confirming that one motor driver worked, I soldered up the other motor driver, connected the V_in and grounds to the battery, and secured everything inside the car.
I then tested the motors' lower limit by decreasing the PWM value until the car wouldn't move anymore. To help with this, I made a new command and sent strings from Python containing the PWM duty cycles for the left and right motors as well as how long to run them.
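A one-line sketch of the Python side, assuming the stencil's send_command; CMD.SET_PWM and the "|" packing are my illustrative names for the command and format.

```python
# Left duty cycle, right duty cycle, and run time in ms, split on "|" onboard
ble.send_command(CMD.SET_PWM, "45|50|1000")   # run both motors for one second
```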
Since the two motors are slightly different and have different amounts of friction within them, the car doesn't travel straight when the same PWM duty cycle is sent to both motors. To get around this, I tested to find a calibration factor that would slow down one motor to match the speed of the other. Here is a video of the robot following a line in my kitchen.
Then for fun, I had the robot drive out of my kitchen into the hallway using open loop control and sending PWM signals to make it turn.
The analogWrite calls run at a frequency of ~400-500 Hz, which is plenty fast for our robot and its intended purposes. Manually configuring the timers, however, would let us hold a steadier PWM signal.
Finally, I found the slowest speed the robot could sustain once already in motion. I did this by starting the robot slowly and then decreasing its PWM duty cycle until it was barely moving. I stopped at the last value before the robot stalled ahead of its allotted run time; below that, the motors would whine, indicating they were trying to run but couldn't.
Demonstrate closed loop control in implementing a PID controller to have the robot stop 300 mm from walls.
To collect and send data, I implemented a readSensor function that checks whether the TOFs and IMU have data ready to be pulled. If they do, the Artemis stores their values in onboard arrays; if not, the Artemis extrapolates from existing data to update the arrays. Upon receiving a sendData command, the Artemis sends all the array data, grouped by timestamp, over BLE, where a complementary notification handler receives it. I have included the code for the readSensor, sendData, and PID callback functions below.
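The extrapolation idea in sketch form: when a TOF sample isn't ready, extend the last two stored readings linearly to the current timestamp. Variable names here are illustrative, not the exact onboard ones.

```python
def extrapolate(t_now, t_prev, t_last, d_prev, d_last):
    """Linearly extend the last two (time, distance) samples to t_now."""
    slope = (d_last - d_prev) / (t_last - t_prev)
    return d_last + slope * (t_now - t_last)
```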
When choosing my controller, I decided to go with full PID in order to address the problem of overshoot. I ended up with Kp = 0.5, Ki = 0.1, and Kd = 0.4: the Ki term was critical in eliminating steady-state error, and the relatively high Kd was needed to minimize overshoot. The TOFs were left in short distance mode with the smallest sampling time, so the PID loop received fresh data as often as possible.
Here are two graphs showing the PID performance while tuning. As you can see, the first attempt had overshoot, which was minimized in the second attempt by increasing Kd. We can also see that the robot performs on both hardwood and carpet, but was tuned on hardwood.
I handled integral windup by simply capping the I term in the PID control. This is necessary to keep the system out of saturation when it cannot react to errors faster than the integral term grows. The cap helps the car perform similarly on carpet and on hardwood, since carpet slows the robot down and therefore lets more error accumulate. Here is a video of the robot on carpet without integral windup protection.
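The onboard controller is Arduino C++; this Python sketch shows the same logic, including the windup clamp on the integral term. The gains are the ones quoted above; the cap value itself is illustrative.

```python
KP, KI, KD = 0.5, 0.1, 0.4
I_CAP = 100.0
integral, prev_error = 0.0, 0.0

def pid_step(error, dt):
    global integral, prev_error
    integral += error * dt
    integral = max(-I_CAP, min(I_CAP, integral))          # windup clamp
    derivative = (error - prev_error) / dt
    prev_error = error
    return KP * error + KI * integral + KD * derivative   # motor command
```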
Demonstrate closed loop control in implementing a PID controller to have the robot turn 90 degrees.
To collect and send data, I used the same readSensor function from lab 5, since it already handled IMU data. I removed the extrapolation step because the IMU reliably returned values to the Artemis every loop execution. I modified the PID callback function on the Python side to handle the new rotational PID data, as seen below, and added a SPIN function to have the robot execute spins. I found that the right motor spun slower than the left, requiring an alpha value to scale up its input. Below are the SPIN function, the rotationalPID code, and the PID_ROTATIONAL command.
When choosing my controller, I again went with full PID to address the problem of overshoot. I ended up with Kp = 10, Ki = 0.4, and Kd = 2: Ki was critical in eliminating steady-state error, and the high Kd minimized overshoot. While smaller Kp values worked, a higher value made the turn quicker. Unlike the TOFs, the IMU returned data every loop execution, matching the rate of our PID controller, so no modifications to the sensor's configuration were necessary.
Here is a graph and video of a successful run of the 90 degree turn.
As in the previous lab, I handled integral windup by capping the I term in the PID control, which keeps the system out of saturation when it cannot react to errors faster than the integral term grows.
Implement a Kalman Filter on the robot to supplement slowly sampled TOF values by predicting the state of the robot.
I began by subjecting my robot to a step response, asserting a PWM value that drives the robot close to its maximum speed. The robot was left to reach steady state before hitting a pillow, placed to prevent damage. I saved the TOF and motor input data over Bluetooth and processed it in Python, using the TOF data and its timestamps to estimate the velocity of the car. From these graphs I calculated the 90% rise time and the steady-state speed, which give the parameters for my Kalman Filter matrices. In the end I came away with d = 0.000238 and m = 6.204e-5. These values seem very small, probably because I ran the robot at such a high speed; if my Kalman Filter disagrees with the measured values, I will re-characterize the system at a slower speed.
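A sketch of the parameter extraction, assuming the input u is normalized so the step is u = 1. Working backwards from my final d and m, the underlying measurements were roughly a 4200 mm/s steady-state speed and a 0.6 s rise time, which I use as the inputs below.

```python
import numpy as np

v_ss = 4200.0               # steady-state speed (mm/s)
t_r = 0.6                   # 90% rise time (s)

d = 1.0 / v_ss              # at steady state u = d * v, so d = u / v_ss
m = -d * t_r / np.log(0.1)  # from v(t) = v_ss * (1 - exp(-d * t / m))

A = np.array([[0.0, 1.0], [0.0, -d / m]])   # state: [position, velocity]
B = np.array([[0.0], [1.0 / m]])

print(d, m)   # ~0.000238, ~6.2e-5, matching the values above
```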
Using the provided code as a reference, I defined a KF function that takes the inputs and returns an updated mu and sigma. The covariance matrices were ballparked to begin with, and I adjusted them to favor the sensor data over the model prediction. After doing so, I got back graphs that not only closely matched the raw data, but also provided a much smoother estimate of the state, especially in the velocity graph.
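A minimal sketch of that KF step, in the standard predict/update form the reference code follows. The noise values below are the kind of ballpark numbers I started from, not the final tuned ones, and the sign convention on C depends on how distance to the wall is defined.

```python
import numpy as np

dt = 0.1                         # TOF sampling period (illustrative)
Ad = np.eye(2) + dt * A          # discretized dynamics (A, B from above)
Bd = dt * B
C = np.array([[1.0, 0.0]])       # we measure position only
sig_u = np.diag([100.0, 100.0])  # process noise (ballpark)
sig_z = np.array([[400.0]])      # TOF measurement noise (ballpark)

def kf(mu, sigma, u, y):
    # Predict: propagate the state and its uncertainty
    mu_p = Ad.dot(mu) + Bd.dot(u)
    sigma_p = Ad.dot(sigma).dot(Ad.T) + sig_u
    # Update: blend in the measurement via the Kalman gain
    k = sigma_p.dot(C.T).dot(np.linalg.inv(C.dot(sigma_p).dot(C.T) + sig_z))
    mu = mu_p + k.dot(y - C.dot(mu_p))
    sigma = (np.eye(2) - k.dot(C)).dot(sigma_p)
    return mu, sigma
```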
I then went back and had the Kalman Filter run at twice the frequency of the TOF data by halving my delta t, discretizing my matrices with this new delta t, and running the filter twice per TOF entry: once with the TOF reading and once with the estimated measurement. This resulted in a very noisy graph with high error in the beginning. I believe this can be reconciled by tuning the covariance values more carefully, and I will try increasing my initial sigma matrix, since the system is bound to perform worse at the start, when it is essentially making up data.
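Continuing the sketch above (mu, sigma, and tof_readings assumed initialized), the double-rate variant looks like this, with u held constant for brevity:

```python
dt2 = dt / 2.0
Ad = np.eye(2) + dt2 * A                     # kf() reads Ad/Bd, so rebind them
Bd = dt2 * B
u = np.array([[1.0]])                        # normalized step input

for y in tof_readings:                       # tof_readings: list of 1x1 arrays
    mu, sigma = kf(mu, sigma, u, y)          # pass 1: the real TOF reading
    mu, sigma = kf(mu, sigma, u, C.dot(mu))  # pass 2: the estimated measurement
```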
Issue controls to the robot so that it drives at a wall and performs a 180 drift before driving away from the wall.
I added a new command to tell the robot to perform the stunt, splitting the motion into two phases: driving at the wall, and the 180 degree turn combined with driving away from it. The robot runs orientation control throughout the stunt, but is issued a new target angle once it comes within 3 feet of the wall. To do orientation control while translating, I had to slightly modify my rotationalPID function so the robot could drive forward rather than only spin in place. After making these changes and tuning the PID constants a bit, my robot was able to perform the desired stunt.
Here is a video of a successful run as well as a slow motion video.
Use the robot's TOFs and angular control to map out an environment.
In order to perform the slow controlled spin to take TOF readings, I made a new command called mapSpin that would run the rotational PID controller and increment the angle the robot was pointing at by 15 degrees every second. After adjusting the PID values a bit to get the robot to respond to low angle errors, I was able to get the robot to perform the 15 degree turn in about a quarter of a second, leaving the TOFs 0.75s to take readings. The code for the mapSpin function is below.
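The onboard mapSpin is Arduino C++; this Python sketch captures its logic: hold each heading with the rotational PID, bumping the target 15 degrees once per second until a full revolution is done. run_rotational_pid is a stand-in for the PID-plus-TOF-logging loop body.

```python
import time

def run_rotational_pid(target_deg):
    pass                                  # placeholder for PID + TOF logging

target, last_bump = 0.0, time.time()
while target < 360.0:
    if time.time() - last_bump >= 1.0:    # one second per 15 degree step
        target += 15.0
        last_bump = time.time()
    run_rotational_pid(target)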
I then had the robot perform mapping spins in 4 designated locations in the environment, where it was able to take these readings. The red dots are the readings taken with the forward facing TOF and the blue dots are the ones with the TOF facing off to the left. Using some transformation matrix magic, I was able to convert these readings into the global frame and plot out where the robot believes the walls to be.
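A sketch of the frame conversion behind that "transformation matrix magic": rotate each reading into the global frame by the robot's heading and add the robot's position, with the left-facing TOF getting an extra 90 degree offset. Any lever arm between the sensor and the robot's center is ignored here.

```python
import numpy as np

def to_global(dist_mm, heading_deg, robot_xy, sensor_offset_deg=0.0):
    theta = np.radians(heading_deg + sensor_offset_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    p_sensor = np.array([dist_mm, 0.0])   # reading lies along the sensor axis
    return R.dot(p_sensor) + np.asarray(robot_xy)

# e.g. the left-facing TOF: to_global(612, 45, (305, 305), sensor_offset_deg=90)
```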
I then combined the 4 spins into one big map and overlaid the theoretical walls on top. The robot clearly discerns the walls close to it, but the accuracy of the readings falls off with distance. It is also evident that the assumption that the robot spins in place without drifting is a poor one, since some of the walls appear translated too far in one direction. Nonetheless, the robot did a relatively good job of scanning the environment, giving us a map to use in future labs.
Implement grid localization using a Bayes Filter in a simulation of the robot and its environment.
The environment is split into 12 x 9 x 18 = 1,944 cells over three axes: x, y, and theta, where each cell represents a 1 foot by 1 foot by 20 degree configuration of the robot in its environment. I wrote efficient Bayes filter code that takes the robot's sensor inputs and odometry into account in order to localize the robot within this grid.
This function takes the previous and current pose and recovers the control information, a first rotation, a translation, and a second rotation, that would carry the robot from one pose to the other.
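A sketch of compute_control, assuming the lab stencil's mapper.normalize_angle helper (which wraps angles into [-180, 180)):

```python
import numpy as np

def compute_control(cur_pose, prev_pose):
    dx, dy = cur_pose[0] - prev_pose[0], cur_pose[1] - prev_pose[1]
    delta_rot_1 = mapper.normalize_angle(
        np.degrees(np.arctan2(dy, dx)) - prev_pose[2])
    delta_trans = np.hypot(dx, dy)
    delta_rot_2 = mapper.normalize_angle(
        cur_pose[2] - prev_pose[2] - delta_rot_1)
    return delta_rot_1, delta_trans, delta_rot_2
```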
This function uses the control computed from the odometry along with the actual control input and the motion uncertainties to calculate the probability of the robot's current position given its previous position and control input.
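In sketch form, the transition probability is a product of three 1D Gaussians, one per control component, assuming the stencil's loc.gaussian helper and odometry noise parameters:

```python
def odom_motion_model(cur_pose, prev_pose, u):
    rot1, trans, rot2 = compute_control(cur_pose, prev_pose)
    prob = loc.gaussian(rot1, u[0], loc.odom_rot_sigma)
    prob *= loc.gaussian(trans, u[1], loc.odom_trans_sigma)
    prob *= loc.gaussian(rot2, u[2], loc.odom_rot_sigma)
    return prob
```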
This function iterates through all the cells and uses the probabilities calculated with the odometry motion model to build the predicted belief (bel_bar) over the environment. One key note is that it ignores previous cells whose belief is under a minimum threshold, which makes the code execute much faster.
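A sketch of the prediction step with that belief threshold; 0.0001 is an illustrative cutoff, and loc and mapper are the stencil's localization and map objects. Skipping low-belief cells avoids most of the 1,944 x 1,944 cell pairings.

```python
import numpy as np

def prediction_step(cur_odom, prev_odom):
    u = compute_control(cur_odom, prev_odom)
    bel_bar = np.zeros((12, 9, 18))
    for px in range(12):
        for py in range(9):
            for pa in range(18):
                if loc.bel[px, py, pa] < 0.0001:
                    continue                      # skip negligible cells
                prev_pose = mapper.from_map(px, py, pa)
                for cx in range(12):
                    for cy in range(9):
                        for ca in range(18):
                            cur_pose = mapper.from_map(cx, cy, ca)
                            bel_bar[cx, cy, ca] += (
                                odom_motion_model(cur_pose, prev_pose, u)
                                * loc.bel[px, py, pa])
    loc.bel_bar = bel_bar / np.sum(bel_bar)
```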
The sensor_model function returns the expected distance readings given a specific robot pose in the map. The update_step function updates the probabilities of the cells in the grid map based on the bel_bar and sensor model.
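A sketch of the update step: weight each cell's bel_bar by the likelihood of the 18 observed ranges against the expected ranges precached by the stencil's mapper.get_views, then renormalize.

```python
import numpy as np

def update_step():
    for cx in range(12):
        for cy in range(9):
            for ca in range(18):
                expected = mapper.get_views(cx, cy, ca)
                likelihood = np.prod(loc.gaussian(
                    loc.obs_range_data.flatten(), expected, loc.sensor_sigma))
                loc.bel[cx, cy, ca] = likelihood * loc.bel_bar[cx, cy, ca]
    loc.bel /= np.sum(loc.bel)
```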
Here is a video of the simulation running, with the belief of the robot in blue tracing the ground truth of the robot in green.
Implement grid localization using a Bayes Filter on the real robot in its environment.
I used the provided code and modified the main function with my robot commands. I coded the robot to spin in 20 degree intervals and record 18 time of flight measurements, which the Bayes filter then used to localize the robot. I also ran the provided simulation code to make sure the given Bayes filter worked as expected. Here is the graph of the odometry in red, belief in blue, and ground truth in green.
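A heavily hedged sketch of the observation hook in the Lab 11 stencil's RealRobot class: trigger the spin over BLE, collect the 18 TOF readings, and return them as column vectors with ranges in meters. CMD.MAP_SPIN and wait_for_tof_batch are hypothetical names for my command and BLE receive helper.

```python
import numpy as np

def perform_observation_loop(self, rot_vel):
    ble.send_command(CMD.MAP_SPIN, "")
    ranges_mm = np.array(wait_for_tof_batch())        # 18 readings
    sensor_ranges = ranges_mm[np.newaxis].T / 1000.0  # mm -> m, column vector
    sensor_bearings = np.arange(0, 360, 20)[np.newaxis].T
    return sensor_ranges, sensor_bearings
```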
I ran the robot's spin localization in 4 different spots on the map, and saw how the robot was able to localize within the map.
At this point, the robot was able to localize perfectly.
At this point, the robot localized about a foot off, which was surprising since I expected this corner to be the most accurate because of how well my robot performed in this corner in Lab 9.
At this point, the robot was able to localize perfectly, probably because the obstacle it was next to gave the TOF readings a distinctive signature.
At this point, the robot didn't quite spin in place, drifting toward the bottom right corner. This likely contributed to the poor localization, as the robot thought it was in the corner next to the obstacle.