Monday
Jul 02 2012

Otoscan Wins Best New Product 

Otoscan, the 3D ear scanning technology I have been working on, won Best New Product at the world's largest audiology conference, Audiology Now 2012 in Boston, MA.  We received a lot of attention at the show, and some press afterwards:
    Hearing Review
    Hearing Health & Technology Matters
    Georgia Tech Alumni Magazine
    PC Werth

Thursday
Mar 31 2011

TAG Top 10 Innovation Award

ShapeStart was recently selected as one of the Technology Association of Georgia's (TAG) Top 10 innovative technology companies in Georgia.  As a recipient of this award, I was given the opportunity to present at the Georgia Technology Summit, a gathering of executives, entrepreneurs, and academics with over 1,000 attendees this year.

All of the Top 10 companies were given 3 minutes and 3 slides to tell their story.  That meant I had to boil all of ShapeStart down into a three-minute pitch.  I ended up with the following three slides:

Slide 1

- ShapeStart is developing a direct in-the-ear 3D scanner called LaserFit Hearing
- It will replace the silicone impressions needed to produce hearing aids, hearing protection, and custom-fit headphones
- The hearing aid industry takes about 10 million silicone impressions every year
- The DoD spends over a billion dollars a year on hearing loss disability payments, which is why it is funding us to develop a custom hearing protection solution

Slide 2

- We have a solution for all this silicone: LaserFit Hearing, a direct in-the-ear 3D scanner
- The process is similar to an otoscope examination: the probe is inserted into the ear, and LaserFit captures the shape and size of the ear canal and concha bowl
- The data can then be immediately transferred to manufacturers
- The process is faster, taking just a few seconds
- It results in a better-performing in-ear device
- It is more comfortable for the patient and clinician
- All while reducing the cost!


Slide 3
- Our small-diameter 3D scanning technology will revolutionize the way in-ear devices are made
- We then plan to take this technology to other industries such as endoscopy, dental, and even industrial inspection applications


The Georgia Technology Summit turned out great for us, and we very much appreciate the opportunity to present at the event.  We made a lot of great connections and will continue supporting TAG as we move forward.


Here is a picture of me accepting the TAG Top 10 award with Tino Mantella, TAG's president & CEO.


Friday
Feb 11 2011

StartAtlanta - Markit8dude

Over the weekend of Jan 28-30th I participated in StartAtlanta.  It was a great time.  The event started with presentations of about 30 business ideas to a group of about 150 developers, engineers, designers, and entrepreneurs.  The group voted and whittled the ideas down to 12, and we then broke into groups and began building companies.  This was Friday at about 9pm.  I ended up joining a group that was trying to build a bowling score tracking app for smartphones.  I joined because, one, I'm a bowler (2009 league champ), and two, the app was to include OCR (optical character recognition) of the score using a smartphone camera.  OCR is something I'm interested in learning, so I decided to join.

We ended up working 30-40 hours over the weekend to build a minimally functional version of markit8dude.  I built a quick OCR application in Matlab (I'll post more on my OCR development sometime).  Here is a sneak peek of the markit8dude app.
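Until that post, here is a minimal sketch of one common way to do simple OCR in Matlab, template matching via normalized cross-correlation.  Everything here, filenames and approach included, is illustrative rather than the actual markit8dude code:

% Illustrative template-matching OCR sketch -- not the actual markit8dude code.
% Requires the Image Processing Toolbox for normxcorr2.
score = rgb2gray(imread('scoreboard.jpg'));            % hypothetical input image
best_digit = -1; best_peak = -inf;
for d = 0:9
    T = rgb2gray(imread(sprintf('digit_%d.png', d)));  % hypothetical digit templates
    c = normxcorr2(double(T), double(score));          % normalized cross-correlation
    if max(c(:)) > best_peak
        best_peak  = max(c(:));
        best_digit = d;
    end
end
fprintf('Best matching digit: %d (peak %.2f)\n', best_digit, best_peak);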

Tuesday
Jan 25 2011

3D foot scanner using a point-and-shoot camera

December 17th, 2010 - ShapeStart Project Day

ShapeStart project days occur about once a quarter.  It is a day when the developers and engineers at ShapeStart can work on any project they want, with whomever they want.  Check out ShapeStart's project day page for more information: http://www.shapestart.com/project_day.html

I decided to build a 3D foot scanner using the Canon PowerShot A590 and the Matlab calibration toolkit.  (I could have used any point-and-shoot camera, as long as the autofocus can be disabled.  There is also a way to make this work even with autofocus enabled, but it is much less straightforward, and since I had only one day to make this work, I decided to implement a non-autofocus version for now.)

The Idea

I used the point-and-shoot camera as a stereo rig, with the two camera positions separated by a distance D.  I'd grab the position of each camera from a checkerboard, and use a pattern printed on a foot "stand-in" (a peanut butter jar wrapped with a printed pattern) to find correspondence.  In a real foot scanner, we'd have to use a sock with a similar printed pattern.

First I should explain the principle that I used for this scanner.  I actually built upon an idea from a previous project day, when I built a foot scanner that used two fire-i board cameras mounted to a steel plate.
The foot scanner uses the principle of stereo vision.  On the first project day I just used the Matlab calibration toolkit's stereo calibration procedure to calibrate that stereo rig.  Then I used the image rectification option to rectify two images of the "foot" pattern.  Once the images are rectified, the pixels along the translation axis are all aligned: if the lens calibration is perfect, any line drawn parallel to the axis of translation should intersect the same points in the two images.
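Once a pair is rectified, triangulation collapses to almost nothing: depth comes straight from the pixel disparity.  A quick sketch, where the focal length and baseline values are made up for illustration:

% Depth from disparity in a rectified stereo pair (illustrative numbers).
f  = 800;             % focal length in pixels, from the calibration
B  = 0.10;            % baseline between the two cameras, in meters
xl = 726; xr = 575;   % matched pixel columns in the left and right images
d  = xl - xr;         % disparity
Z  = f * B / d;       % depth of the matched point along the optical axis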


Note: If you're good at it you should be able to cross your eyes and see me in 3D; if you don't know what I mean, check this out:
http://www.3dphoto.net/text/viewing/technique.html


I then used Canny edge detection to find the edges in the "foot" pattern.  In the original setup the center bar is double the width of the other bars, so that I could find the proper correspondence between cameras (i.e., figure out which line was which in each picture even though the bars were not coded).  Once I knew which lines were the same in each picture, and the images were rectified, it was a simple matter of triangulation.  Please take a minute and check out my article on Stereo Vision.
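Roughly, that correspondence trick looks like the sketch below.  The filename, scanline, and dark-bars assumption are illustrative, not the original project-day code:

% Sketch: find bar edges along one scanline and locate the double-width bar.
img  = rgb2gray(imread('left.jpg'));     % illustrative filename
E    = edge(img, 'canny');               % Canny edge map (Image Processing Toolbox)
row  = 200;                              % an illustrative scanline crossing the bars
cols = find(E(row, :));                  % x-positions of edges along that scanline
widths = cols(2:2:end) - cols(1:2:end);  % bar widths, assuming edges pair start/end
[~, center] = max(widths);               % the widest bar is the double-width center bar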

With this new, more generalized setup, the distance D was unknown.  However, I was able to find the 3D positions of both cameras relative to a checkerboard using the Matlab calibration toolkit.  Then, using the two transform matrices, I was able to put both cameras in the same "world coordinate system".  Once the cameras were in the same coordinate system, I had to find the corresponding pixels in each image, then triangulate the distance to the XYZ point in space where the correspondence match occurs.  One way to think about this is to imagine a ray drawn from the theoretical camera center through the pixel on the image sensor; (ignoring lens distortion) this ray will also intersect the correspondence point.
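Concretely, the toolkit's extrinsics map a world point X into camera coordinates as Xc = Rc*X + Tc.  Since both cameras were calibrated against the same checkerboard, the checkerboard frame is the shared world coordinate system, and recovering the camera centers (and the unknown distance D) looks roughly like this (Rc_r and Tc_r are named by analogy to the left-camera variables used later):

% Put both cameras in the checkerboard's "world coordinate system".
% Bouguet convention: Xc = Rc*X + Tc, so the camera center (Xc = 0) is:
C_l = -Rc_l' * Tc_l;    % left camera center in world coordinates
C_r = -Rc_r' * Tc_r;    % right camera center in world coordinates
D   = norm(C_l - C_r);  % the previously unknown separation between the cameras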

I say theoretical camera center because, in reality, lens distortion makes this more of a region than the theoretical pin-hole of a camera.  So to find the 3D position of our point of correspondence, we just need to construct two rays, one from each camera, and find where they intersect.  In theory this is simple, but in reality lens distortion makes it quite complicated.  I was on a time constraint (1 day) for project day, so I decided to just use the Matlab camera calibration toolkit's undistort function.  Once the image is undistorted, it is as simple as constructing a ray from the pixel through the camera center.

Below, you can see an example of a distorted image vs. an undistorted image.  Both show the same picture of the checker pattern.  The first is straight out of the camera; you'll notice the checkerboard is actually warped (it deviates from the yellow line, which I drew in Photoshop).  Of course, in reality the checkerboard is not warped, so the warping you are seeing is caused by the lens of the camera.  The undistort function applies the lens calibration parameters to the image to remove this lens distortion.  You can see in the undistorted version of the same image that the checkerboard is indeed straight and stays in contact with the yellow line I added in Photoshop.
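For the curious, the distortion model behind that undistort step (documented on the calibration toolkit page linked further down) applies radial and tangential terms to the normalized image coordinates.  A sketch of the forward model, with made-up coefficient values:

% Bouguet's lens distortion model: distort a normalized point xn = [x; y].
% kc(1), kc(2), kc(5) are radial terms; kc(3), kc(4) are tangential terms.
kc = [-0.25 0.11 0.001 -0.0005 0];  % illustrative calibration coefficients
xn = [0.3; -0.2];                   % an illustrative normalized image point
x = xn(1); y = xn(2);
r2 = x^2 + y^2;
radial = 1 + kc(1)*r2 + kc(2)*r2^2 + kc(5)*r2^3;
dx = [2*kc(3)*x*y + kc(4)*(r2 + 2*x^2);   % tangential distortion
      kc(3)*(r2 + 2*y^2) + 2*kc(4)*x*y];
xd = radial*xn + dx;                % the distorted normalized point
% Undistorting inverts this mapping (iteratively), pixel by pixel.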


Now to construct the rays.  First I had to identify the points of correspondence.  Last project day I built some code that used Canny edge detection and a double-thick center bar to determine points of correspondence from both cameras.  My original thought was that I could grab that code and use it here; however, after about 2 hours of trying to get it to work, I gave up and manually clicked points of correspondence just to keep moving.  Here are the corresponding pixels I used.  l_x represents the left camera x pixel, so the first number in l_x should be the same point as the first number in r_x, etc.:

l_x=[726 729 727 725 769 770 770 769 812 813 814 812];
l_y=[188 149 108 70 189 152 110 71 188 153 111 72];
r_x=[575 574 574 576 620 618 618 618 663 660 661 663];
r_y=[151 118 80 41 154 119 79 41 155 119 81 41];

To construct the rays I created two points per camera: one at the camera center, translated to "world coordinates" (so both rays would be in the same coordinate space), and a second at the pixel on the camera sensor where the correspondence occurs, also translated to world coordinates.  Here is the code for this:

%construct the first ray (left camera), for the ith correspondence pair
%Bouguet convention: Xc = Rc*X + Tc, so world coords X = Rc^-1*(Xc - Tc)
ray1_pt1=Rc_l^-1*([0;0;0]-Tc_l);                 %camera center in world coordinates
ray1_pt2=Rc_l^-1*(KK^-1*[l_x(i);l_y(i);1]-Tc_l); %pixel back-projected into world coordinates

Definitions for KK, Tc, Rc, etc. are on this page:
 http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html

Then, to find the intersection of the two rays (actually the point where they are closest, since they will probably never perfectly intersect), I computed the distance between the rays at every possible pair of points along them and stored the location where it was smallest.  I call this brute force because it's insanely inefficient.  I didn't have time to speed it up; I could probably do something with the cross product, or figure out a way to locate an area of likely intersection and only brute-force over that small area.  Anyway, the brute force method worked.
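For reference, there is a closed-form answer that skips the brute force entirely: write each ray as a point plus a scaled direction and solve the 2x2 system that minimizes the distance between them.  A sketch, where ray2_pt1 and ray2_pt2 are assumed to be built for the right camera the same way ray1_pt1 and ray1_pt2 were for the left:

% Closed-form closest point between two 3D rays -- no brute force needed.
p1 = ray1_pt1; d1 = ray1_pt2 - ray1_pt1;   % left ray: point and direction
p2 = ray2_pt1; d2 = ray2_pt2 - ray2_pt1;   % right ray, built the same way
w = p1 - p2;
a = dot(d1,d1); b = dot(d1,d2); c = dot(d2,d2);
d = dot(d1,w);  e = dot(d2,w);
s = (b*e - c*d)/(a*c - b^2);         % parameter of closest approach on ray 1
t = (a*e - b*d)/(a*c - b^2);         % parameter of closest approach on ray 2
X = ((p1 + s*d1) + (p2 + t*d2))/2;   % midpoint between the two closest points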

From this second view you can see it is 3D scan data, in the shape of the stand-in.

There are, however, inaccuracies, because the cameras are not epipolar-aligned, which is one of the assumptions I made in this setup.  Two solutions are to use a "3D curve" reconstruction technique rather than a point-to-point reconstruction, or to use a pattern where the epipolar constraint is not necessary, such as a checker pattern or stripes with tick marks parallel to the axis of translation.



Here is my Matlab code:
foot_v2.m
stereo_match.m


Friday
Jan 14 2011

iPhone 3D Scanner - Trimensional


Georgia Tech research scientist Grant Schindler has developed a 3D scanner for the iPhone. Dr. Schindler's software is called Trimensional. If you have an iPhone 4 or iPod Touch (4th generation), you can download the software from the App Store and immediately start taking 3D scans. The 3D scan of me above was taken using my iPhone.