Aqua Phoenix >> Research >> Sentinel
 


2. Phenomena

After a human learns to drive a car, the task becomes second nature, much like riding a bike. Understanding how to drive comes through experience and practice, but soon enough humans develop their own rules for driving safely and correctly (most humans, anyway). Since driving can become nearly automatic for humans, it follows that an algorithm could be designed to control a car.

Wouldn't it be cool to come out of your house and have your car drive up from where you originally parked it? Giving cars more intelligence spares humans the need to drive from place to place. Cars could park themselves, or taxi individuals around, like the automated "Johnny Cab" from the movie "Total Recall." Another classic example is the Knight Industries 2000 car, KITT, from the TV series "Knight Rider": a combination of a computer and a vehicle that can drive itself anywhere (among other capabilities, such as beating up bad guys).

Our goal here is not as ambitious, but it follows along those lines: we wish to have the car drive by itself, and we are interested in interpreting camera images. Roads with lane lines are human constructions, made to regulate the direction of travel, and they are the main algorithmic focus of our proposal -- analyzing snapshots of the road in front of the car.

Cars to date do not have real "vision," but vision can be approximated by attaching a real-time wireless camera to the car, perhaps on the front bumper. The camera provides a live image of the region in front of the vehicle. It is tilted toward the ground at about 30 to 45 degrees, which supplies enough information about the road in the immediate and near-future environment.

The images from the camera are processed by an algorithm that reduces the color information to 2 colors: black and white. Since the vehicle moves along a line that is colored differently than the floor, it is possible to extract a well-defined line that resembles the path on the road.

Once the image has been reduced to 2 colors, the path is analyzed in order to decide on possible turns. For this step, it is necessary to examine the line immediately in front of the vehicle, as well as the line one or two feet ahead. Depending on the complexity of the commands available for turning the vehicle, the algorithm returns a command to turn one step, which may be defined either as an X-degree turn or as an X-second turn in a particular direction. Following a turn, further images from the camera are analyzed to decide whether to continue turning or to revert to a straight drive.
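The two-color reduction and the near/far line comparison described above can be sketched as follows. This is a minimal illustration, not our final implementation: the threshold value, the rows chosen to represent "immediately in front" and "one to two feet ahead," and the 5% dead-band are all assumptions for the sake of the example.

```python
import numpy as np

def binarize(frame, threshold=128):
    """Reduce a grayscale camera frame to 2 colors: line pixels (True)
    vs. floor pixels (False). Assumes the line is darker than the floor;
    invert the comparison if the line is lighter."""
    return frame < threshold

def steering_command(binary):
    """Compare the line's horizontal position in the bottom row (immediately
    in front of the vehicle) with its position in an upper row (roughly one
    to two feet ahead) and return one turn step."""
    h, w = binary.shape
    near = binary[h - 1]    # row just in front of the bumper
    ahead = binary[h // 3]  # row farther up the road (assumed distance)
    if not near.any() or not ahead.any():
        return "straight"   # line not visible in one row: hold heading
    near_center = np.flatnonzero(near).mean()
    ahead_center = np.flatnonzero(ahead).mean()
    offset = ahead_center - near_center
    if abs(offset) < w * 0.05:  # within 5% of frame width: no turn needed
        return "straight"
    return "right" if offset > 0 else "left"
```

For example, a frame with a vertical dark stripe yields `"straight"`, while a stripe slanting toward the upper left yields `"left"`; the magnitude of `offset` could later be mapped to an X-degree or X-second turn step.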

The actual vehicle mechanics will be designed by a separate team of Columbia students, who will construct their vehicle in the MSL lab. While we develop the algorithm to interpret images from the camera, they propose the following:

"We propose a terrestrial vehicle that drives itself based on two types of real-time feedback inputs. A road will be represented by a colored-line drawn on the floor. This road will contain several branches/forks. At the end of one of the forks there is a desired goal. Based on our feedback inputs, we will drive the vehicle along the road and decide which direction to take at each fork. One of these inputs, a signal from a wireless camera, will help in controlling the motion of the vehicle. The other two infrared sensors will aid in making the direction decisions when there is a fork."
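Combining the two kinds of feedback the quoted proposal describes might look like the sketch below. The sensor names, the meaning of a "fork detected" reading, and the default of preferring the left branch are all assumptions for illustration; which branch actually leads to the goal would come from a higher-level plan.

```python
def drive_step(camera_cmd, ir_left, ir_right):
    """One control step combining the two feedback inputs.

    camera_cmd -- the turn step derived from the wireless camera image
                  ("left", "right", or "straight")
    ir_left, ir_right -- True when the corresponding infrared sensor
                  detects a branch of the line (assumed semantics)
    """
    if ir_left and ir_right:
        # A fork: both branches visible. Choosing the goal branch needs
        # higher-level knowledge; this sketch defaults to the left.
        return "left"
    if ir_left:
        return "left"
    if ir_right:
        return "right"
    # No fork detected: follow the camera's line-following command.
    return camera_cmd
```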