Philip Maynard

Pictures! · 2006-12-05 by Philip Maynard

Finally, I have some pictures of the robot.

To make battery changes less painful, we have a box accessible from the bottom:

Our CO2 holder. It has a hardened piercing tip and a (rather cleverly, I must say) captured slide for the bottom of the cylinder.

The solenoid:

We made our own wheel encoder boards to save a few pennies. They turned out nicely!

Design file ready for printing:

Finished item (traces are on the other side):

Our header board was made with a more advanced process and greatly simplified wiring by putting pins just where we needed them. It contains power circuitry, a buffer, and debugging LEDs, and it routes every pin on the microcontroller board to convenient groups of header pins.

Our top-mounted IR rangers can only see the walls on the game board (not holes, dominoes, or the softball), and so give us our position.

The front-mounted short-range IR rangers give us enough data to know what kind of object we’re facing.

We added a switch for the motor power to prevent killing batteries. A triumph of UI design, it pulls to the “on” position so that accidental bumps in transport or storage won’t kill the batteries.

Motion and Intelligence · 2006-11-06 by Philip Maynard

With all systems go, it was time to combine them into something useful. We set out to navigate the board and run over dominoes. With some simple code to smoothly accelerate the robot, stop after a specified distance, and turn accurately, we tried a dead reckoning approach.

Obviously, it failed. There are too many variables to use such a navigation system successfully, since errors compound. But it was very useful because we discovered what kind of feedback and correction was needed. The current encoder-based dead reckoning system is fairly accurate, just not robust to environmental variation. On the rough, undulating game board it wanders from its specified path quickly, and will need to “square up” with the walls after just one or two moves. Another possibility is using the IR rangers to position it more accurately during motion. The problem with that approach is that there may not always be a wall nearby to follow.

Another discovery was how big the robot is. Our IR rangers have a “dead zone” where they give false data. For our long-range sensors this is anywhere within about 8” of the sensor. We thus mounted our sensors facing across the robot’s body so that it is physically impossible to get a false reading, eliminating a troublesome error case. The problem is that they had to stick out a bit on either side, making the robot rather wide. This means position is now even more critical, since hitting a wall is a rather bad situation. We may try to help the situation by placing small casters on the edges of the IR mounts to prevent edges from catching.

We have also developed an algorithm to determine our location. Once motion is reliable, we can very quickly and efficiently navigate the board with no guesswork. We feel this algorithm is far superior to the other designs being used, and so I will not publish it for competitive reasons. It approaches an optimal solution, and has been proven to work reliably with minimal robustness requirements imposed on the hardware. The rigorous proof of its correctness is absolutely key, and is the main difference between our algorithm and the guesswork employed by the other teams.

The Robot is Getting Somewhere · 2006-10-17 by Philip Maynard

This week the robot comes alive. We have figured out the inputs from the IR rangers, and the outputs to the motors. Our final step to making it mobile is assembling some wheel encoders. To save our budget, we’ve purchased some sensors and will be etching our own printed circuit boards (PCBs) to mount them on.

The PCB etching system is pretty slick. You design the layout in a program made for circuit layout, or freehand in a drawing program. Using a laser printer, you print the layout to a special sheet of paper. The paper is coated with a plastic material that combines with the toner to make a mask. By pressing the mask sheet onto a blank copper-clad board with heat from an iron, the mask is transferred to the copper board. Etching the board in an acid solution removes all the copper not masked by the toner and plastic. The mask is then scraped off, leaving crisp, clean traces. Holes are drilled for mounting components, and you have a professional-looking board that’s customized for your application!

Robot Update · 2006-10-11 by Philip Maynard

It’s alive!

An off-the-shelf chassis from Budget Robotics was used, complete with servo motors. These motors are used in all sorts of hobby devices, and take a pulse width modulated (PWM) signal as an input. The PWM capabilities of the HC12 make this easy. By defining our clock and changing the duty cycle, we can very accurately modulate the pulse width to control motor speed. It now accelerates, cruises, and stops on demand.
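As a sketch of the idea (the timer resolution and pulse-width range below are illustrative assumptions, not our exact HC12 clock configuration), mapping a speed command to a servo pulse width could look like:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical numbers for illustration: a PWM timer ticking at 1 MHz
 * (1 us per count), driving a hobby servo that treats a 1.5 ms pulse
 * as neutral and +/-0.5 ms as full speed in either direction. */
#define TICKS_PER_US 1
#define NEUTRAL_US   1500  /* 1.5 ms pulse = stopped         */
#define SPAN_US      500   /* +/-0.5 ms    = full speed      */

/* Map a speed command in [-100, 100] percent to a pulse width in
 * timer ticks, clamping out-of-range commands. */
uint16_t servo_pulse_ticks(int speed_pct)
{
    if (speed_pct > 100)  speed_pct = 100;
    if (speed_pct < -100) speed_pct = -100;
    return (uint16_t)((NEUTRAL_US + (SPAN_US * speed_pct) / 100)
                      * TICKS_PER_US);
}
```

The value returned would be loaded into the PWM duty-cycle register each period; the period itself (typically 20 ms for hobby servos) is set once when the timer is configured.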

Next up for locomotion are wheel encoders. Once we know how far the wheels are turning we can use a control program to accurately position the robot on the board.

Since the robot will need to do more than just perform basic motion, we’ll be using some IR range sensors. These are around ten dollars, made by Sharp, and fairly accurate. They give a voltage related to the distance of an object in front of the sensor, and work from about 1.5 inches to 60 inches. Once we get motion accurate, we will implement PID control using both encoders and rangers to position the robot in relation to the environment. I think we’ll be using the compass chip as well, to help the robot figure out where it’s pointing when powered up. At that point the problem solving algorithm will be all that’s left – but that’s the big one.
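A minimal sketch of the PID step we have in mind (the gains and units here are placeholders, not tuned values from our robot):

```c
#include <assert.h>
#include <math.h>

/* Generic PID controller state. The error fed in could be, e.g., the
 * difference between a desired and measured wall distance from an IR
 * ranger, or an encoder-count mismatch between the two wheels. */
typedef struct {
    double kp, ki, kd;   /* gains (placeholder values in the test) */
    double integral;     /* accumulated error                      */
    double prev_error;   /* error at the previous step             */
} Pid;

/* One control step: returns the correction to apply, e.g. a speed
 * offset between the left and right motors. dt is the time since the
 * last step, in seconds. */
double pid_step(Pid *c, double error, double dt)
{
    c->integral += error * dt;
    double derivative = (error - c->prev_error) / dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}
```

With only the proportional gain nonzero this reduces to simple proportional correction; the integral and derivative terms can then be brought in to remove steady-state error and damp oscillation.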

FPGA Project · 2006-10-11 by Philip Maynard

I’m working on getting the project details transferred to this website, but until then you can read all about it here.

Senior Design Project: Autonomous Robot Fun! · 2006-10-10 by Philip Maynard

For a capstone project, I’ve elected to take part in a robot competition. The robot my team constructs will be completely autonomous, relying on sensor arrays and complex algorithms to make decisions about where to move and what to do. The problem we have to solve is quite challenging:

The robot must fit inside a 12” cube.
It must navigate an 8’ square game board without outside intervention or communication.
On the board are three dominoes, at fixed locations.
There are several walls on the board, but these may be moved.
The robot’s starting position and orientation are unknown.
One of the dominoes is red, and that one must be knocked down last.

To accomplish this task, we have the following tools:

A Freescale HC12-based microcontroller.

We’ve elected to purchase a robot chassis including two servo motors, wheel encoders, and mounting hardware to easily bolt up all the components. This will save us the hassle and complication of attempting to make the mechanicals ourselves.

For sensor input, we have a variety of options, and we’re going to use many of them. The more data, the easier it is for the robot to make decisions. For accurate object detection and ranging, Sharp IR sensors will be used. In each direction there will be at least one sensor, with a variety of long-range (8” to 6’) and short-range (1” to 1’) sensors being used. The panels that make up the maze-like walls of the game board are set up on a 12” grid. There are lines on the board in a 9” grid, which at first seems troublesome if we’re to attempt using them for navigation. However, this is a boon! Since the walls and lines don’t line up, our robot can measure the distance from the line it’s following to the closest wall and determine where on the board it is. To assist in determining direction, a compass may be used to provide a rough heading.
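As a toy model of this line-versus-wall trick (assuming, hypothetically, that both grids share the same origin), the signed offset from a floor line to its nearest wall gridline repeats with period four, so a single offset measurement narrows down which line the robot is sitting on:

```c
#include <assert.h>

/* Toy model of the line-vs-wall trick. Hypothetical setup: floor
 * lines every 9" and wall gridlines every 12", measured from a shared
 * origin. The signed offset from a line to its nearest wall gridline
 * then cycles 0, +3, +6, -3 inches, identifying the line's index
 * modulo 4 (i.e., the robot's position modulo 36"). */
int line_to_wall_offset(int line_index)
{
    int line_pos = line_index * 9;           /* line position, inches   */
    int below = (line_pos / 12) * 12;        /* gridline at or below    */
    int above = below + 12;                  /* gridline above          */
    int d_down = line_pos - below;
    int d_up = above - line_pos;

    if (d_down == 0)
        return 0;                            /* line sits on a gridline */
    /* Positive offset: nearest wall gridline is ahead of the line. */
    return (d_down < d_up) ? -d_down : d_up;
}
```

In practice the measurement would come from an IR ranger rather than a lookup, and the remaining modulo-36” ambiguity would be resolved by other observations, but the cycle is what makes the mismatched grids informative rather than troublesome.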