Theory of Operation
The principle of operation is based on Todd Danko’s DIY Laser Range Finder. The basic idea is that if we set up the camera and laser pointer just right, we can find the distance to an object from how far the laser dot appears from the center of the image. This is illustrated below.
If we can find θ and measure H (the height between the laser and the camera’s optical axis), then by basic trigonometry we can find the distance, D, using

D = H / tan(θ)
Now, how do we find θ, given that we know H and can measure the laser dot’s pixels from center (pfc)? We can approximate θ as a linear function of pfc:

θ ≈ a1 · pfc + a0
So, how do we find a1 and a0? This is just a matter of fitting a line, which is not too difficult once we know the input (pfc) and the output (θ). During this calibration phase we can find θ by measuring the pfc at a known distance D:

θ = arctan(H / D)
So, if we take some measurements, for example with H = 3.6 cm at fixed, known distances and record the pfc:
| pfc (px) | D (cm) | arctan(H/D) (rad) |
|----------|--------|-------------------|
| 229      | 12     | 0.29145679        |
| 139.5    | 20     | 0.17809294        |
| 92       | 30     | 0.11942893        |
| 34       | 75     | 0.04796319        |
| 22       | 100    | 0.03598446        |
| 11       | 177    | 0.02033618        |
| 2.5      | 296    | 0.01216156        |
After recording the data, we can use MATLAB’s or NumPy’s polyfit to solve for a1 and a0:
import numpy as np

x = [229, 139.5, 92, 34, 22, 11, 2.5]  # pfc (px)
y = [0.29145679, 0.17809294, 0.11942893, 0.04796319, 0.03598446, 0.02033618, 0.01216156]  # arctan(H/D) (rad)
a1, a0 = np.polyfit(x, y, 1)  # polyfit returns the highest-order coefficient first
Now that we know a1 and a0, we can find D by substituting the linear approximation of θ into our very first equation:

D = H / tan(a1 · pfc + a0)
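Putting the pieces together, here is a minimal sketch of the full pipeline from pfc to distance, using the calibration table above (the function name `distance_cm` is my own, not from the original code):

```python
import math
import numpy as np

H = 3.6  # cm, laser-to-camera separation

# Calibration data: pfc (px) and arctan(H/D) (rad) from the table above
pfc_samples = [229, 139.5, 92, 34, 22, 11, 2.5]
theta_samples = [0.29145679, 0.17809294, 0.11942893, 0.04796319,
                 0.03598446, 0.02033618, 0.01216156]

# Fit theta ≈ a1 * pfc + a0
a1, a0 = np.polyfit(pfc_samples, theta_samples, 1)

def distance_cm(pfc):
    """Estimate distance via D = H / tan(a1 * pfc + a0)."""
    return H / math.tan(a1 * pfc + a0)
```

At pfc = 229 this comes out close to the 12 cm calibration point; as expected, the error grows at the far end of the range.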
So, how well does the linear approximation perform? Here is a comparison:
Finding the Dot / pfc
An issue with using a camera to detect the laser dot, as opposed to an infrared sensor, is that we only sense the visible spectrum. To detect the laser dot, we therefore assume that anything “reddish” is the laser dot. First, we convert the image from RGB to Hue-Saturation-Value (HSV). HSV lets us look at a colour regardless of its intensity (i.e. how dark or bright it is). By looking only at colour, we can filter out “reddish” things (a good post on this can be found here) much more easily than if we had to deal with the various combinations of green and blue that go into reds in RGB. A good resource for figuring out the range of colours you’re looking for is this online HSV colour map.

To filter out the “reddish” pixels, we use OpenCV’s InRangeS. The result is a binary image where 1 (white) pixels were “reddish” and 0 (black) pixels were not.
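The thresholding step can be illustrated without OpenCV; below is a NumPy stand-in for InRangeS, with hypothetical HSV bounds that you would tune against an HSV colour map (OpenCV stores hue in [0, 180)):

```python
import numpy as np

# Hypothetical bounds for "reddish" pixels; tune these for your laser and lighting
LOWER = np.array([160, 100, 100])
UPPER = np.array([180, 255, 255])

def in_range(hsv, lower, upper):
    """Binary mask of pixels whose H, S, and V all fall inside [lower, upper],
    mimicking what OpenCV's InRangeS does."""
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# Tiny synthetic 1x3 HSV "image": a reddish pixel, a greenish one, a dark one
hsv = np.array([[[172, 200, 220], [60, 200, 220], [172, 200, 10]]], dtype=np.uint8)
binary = in_range(hsv, LOWER, UPPER)
```

Note that red wraps around hue 0 in OpenCV’s representation, so a robust filter often combines two hue ranges (near 0 and near 180).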
By just looking around, you can probably spot some “reddish” things in your immediate vicinity. These can make finding the dot harder, which is why the next step employs a mask. For our purposes, the mask blocks out everything but a central bottom column where the red dot can appear.
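Such a mask is easy to build with NumPy; the fractions below are hypothetical and depend on your camera/laser geometry:

```python
import numpy as np

def central_bottom_mask(height, width, col_frac=0.2, row_frac=0.5):
    """Binary mask keeping only a central vertical column in the bottom part
    of the frame, where the laser dot can appear. col_frac and row_frac are
    assumptions to be tuned for a specific rig."""
    mask = np.zeros((height, width), dtype=np.uint8)
    col_half = int(width * col_frac / 2)
    mask[int(height * row_frac):, width // 2 - col_half : width // 2 + col_half] = 255
    return mask

mask = central_bottom_mask(480, 640)
# Apply to the thresholded image: masked = np.where(mask > 0, binary_image, 0)
```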
With the mask in place, let us see what is left:
That white speck is the laser dot, which can be cleaned up with morphological closing if desired (though it may not be necessary). We can then use OpenCV’s FindContours to detect blobs, specifically our laser dot. It finds the boundaries of blobs, and from those boundaries we can easily compute how many y-pixels a blob is from the center, which gives us pfc. With pfc we can find the distance.
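For a single clean dot, a centroid computed directly in NumPy gives the same pfc as the contour approach; this is a sketch, not the original FindContours code:

```python
import numpy as np

def pixels_from_center(binary):
    """Vertical offset (in pixels) of the white blob's centroid from the
    image's center row. Returns None if no dot was detected."""
    ys, xs = np.nonzero(binary)
    if ys.size == 0:
        return None
    center_row = binary.shape[0] / 2.0
    return abs(ys.mean() - center_row)

# Synthetic 10x10 binary image with a 2x2 "dot" below center
img = np.zeros((10, 10), dtype=np.uint8)
img[7:9, 4:6] = 255
pfc = pixels_from_center(img)
```

In practice the contour-based version is more robust when reflections produce several blobs, since you can pick the largest one.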
For the camera I used a Logitech C310, and for the laser I used a $20 Staples laser pen. The laser pen was powered by 2xAAA batteries and activated by pressing a button. I cut the laser pen down and soldered ground to the spring contact and positive to the shell of the pen, which is made of brass. The laser is wedged into the mount so that the button is held down at all times, keeping it always on.
Rather than using batteries, I just grabbed a Seeeduino Mega board and powered the laser off its 3.3V pin. To mount the camera and laser pointer, I printed a plastic part using a Makerbot Thing-O-Matic 3D printer (Mk 6).
The operating range of my setup was approximately 15–300 cm, with the 200–300 cm range being unreliable. The minimum range is set by H and the camera’s field of view.
This setup works OK around 16–100 cm, but has some issues, as demonstrated in the video below. Notice the detection of the laser’s reflection in some cases.
As previously mentioned, the system works in the visible spectrum and we assume anything “reddish” is the laser dot. This results in an obvious disadvantage: anything red in the mask area will likely lead to incorrect results.
The biggest issue (in my opinion) with this method is the decreased resolution/accuracy at farther distances (>60 cm). The function mapping θ/pfc to D is asymptotic (like f(x) = 1/x), as illustrated in the comparison plot far above. This translates into a wildly uneven distribution of pixels to distance: for example, the range 60–300+ cm is represented by only about 50 pixels. Because the relationship is asymptotic, there is no great way to improve on this short of moving to a typical laser range finder, which uses the time of flight of a laser pulse to determine the range. Given that a web camera such as mine reaches at best about 30 fps (~33 ms poll rate) without image processing, it is unlikely to have the temporal resolution required to serve as even a half-decent detector for a time-of-flight method.
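The uneven pixel-to-distance mapping can be checked numerically by inverting the calibration: pfc = (arctan(H/D) − a0) / a1. A rough sketch using the calibration data above (the exact pixel counts depend on the fit):

```python
import math
import numpy as np

H = 3.6  # cm

# Calibration data from the table above
pfc_samples = [229, 139.5, 92, 34, 22, 11, 2.5]
theta_samples = [0.29145679, 0.17809294, 0.11942893, 0.04796319,
                 0.03598446, 0.02033618, 0.01216156]
a1, a0 = np.polyfit(pfc_samples, theta_samples, 1)

def pfc_at(d_cm):
    """Pixels from center the linear model predicts for a distance d_cm."""
    return (math.atan(H / d_cm) - a0) / a1

near_band = pfc_at(15) - pfc_at(60)   # pixels covering 15-60 cm
far_band = pfc_at(60) - pfc_at(300)   # pixels covering 60-300 cm
```

The near band (15–60 cm) gets several times more pixels than the entire 60–300 cm band, which is exactly the resolution problem described above.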
All of the code for my project can be downloaded from https://bitbucket.org/raw/csc578c.