A nice new pre-print with my favorite engineer @amy_tabb in which we demonstrate our vision of the "lazy susan" (a revolving stand or tray on a table) image-capture rig for 3D model reconstruction.

A rambling thread ahead.

biorxiv.org/content/10.110…
This project goes back to 2016/17 (my second year of graduate school). I had become enamoured with geometric morphometrics and really wanted to do something with it in strawberry, which led, in 2020, to: academic.oup.com/gigascience/ar…
I brought a ROUGH draft of this idea to @PlantPhenomics in 2018. On the last day I happened to sit at a table with @amy_tabb, and we got to talking about 3D stuff for objects that have few unambiguous homologous landmarks, are reflective, and where it is not easy to tell one view from another.
Effectively, it took us several years to go through all of the computer engineering in WV (arxiv.org/abs/1903.06811, github.com/amy-tabb/calico) and the platform engineering in CA to get everything working properly. We even sent data sets back and forth via snail mail.
But by the end, we had a system built entirely on consumer-grade hardware, with no licensed software, that lets us capture 60+ images per rotation per camera across 360º of a target object in 9 seconds. The longest part of data acquisition is the time between objects.
The physical system cost ~$1,600 including cameras. Image acquisition is 9 s/sample. Calibration takes ~30 s/sample. Segmentation takes ~30 s/sample. Reconstruction takes ~420 s/sample... lol
The time between objects is dictated by the cameras' ability to clear their on-board cache and by how organized the user is with subsequent targets.
To do this, we envisioned and built a "lazy susan"-type rig where the object rotates on a pedestal driven by a stepper motor, controlled by an Arduino microcontroller, at 1 revolution every 9 s.
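For a sense of the numbers: the thread doesn't give the motor specs, so the step count and microstepping below are assumptions for illustration, not the actual build, but 1 rev / 9 s works out to a few thousand microseconds per microstep, which is easy timing for an Arduino.

```python
# Back-of-envelope step timing for 1 revolution every 9 s.
# Assumed hardware (NOT from the pre-print): a 200 full-step/rev stepper
# driven at 1/8 microstepping.
FULL_STEPS_PER_REV = 200
MICROSTEPS = 8
REV_TIME_S = 9.0

steps_per_rev = FULL_STEPS_PER_REV * MICROSTEPS          # 1600 microsteps per revolution
step_interval_us = REV_TIME_S / steps_per_rev * 1e6      # ~5625 microseconds between pulses

print(f"{steps_per_rev} microsteps/rev, one pulse every {step_interval_us:.0f} us")
```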
On one side of the object is a vertical arm with mounted camera(s) (Sony Alpha 6000 in this case), controlled by Raspberry Pi-triggered PocketWizard transceivers, so the camera(s) are never touched and can be fired by a single barcode scanner that enters the sample IDs.
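For the flavor of the Pi side, here is a minimal sketch: the scanner shows up as a keyboard, so each scan is just a line of input, and the Pi answers by pulsing a GPIO pin wired to the transceiver's trigger input. The pin number, pulse width, and wiring are my assumptions for illustration, not the published design.

```python
# Minimal sketch of the Raspberry Pi side: read a sample ID from the barcode
# scanner (which presents as a keyboard) and pulse a GPIO pin wired to the
# PocketWizard's remote-trigger input. Pin, pulse width, and wiring are
# assumptions, not the rig as published.
import time
import RPi.GPIO as GPIO

TRIGGER_PIN = 17          # hypothetical BCM pin wired to the transceiver
PULSE_S = 0.2             # hold the trigger closed for 200 ms

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        sample_id = input("Scan sample barcode: ").strip()
        if not sample_id:
            continue
        GPIO.output(TRIGGER_PIN, GPIO.HIGH)   # fire the cameras
        time.sleep(PULSE_S)
        GPIO.output(TRIGGER_PIN, GPIO.LOW)
        print(f"Triggered capture for sample {sample_id}")
finally:
    GPIO.cleanup()
```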
I never figured out how to control the Arduino the same way, so it is currently left "on" unless the user is mounting a new object.
On the other side of the target, opposite the cameras, is an ArUco-patterned backboard. Below the object on the pedestal, so that it rotates with the target, is a pair of offset cubes with ChArUco patterns printed on them.
These two design elements allow us to correct for geometric aberrations (radial distortion) and to calibrate the relationship between the cameras and the target using CALICO (github.com/amy-tabb/calico). Remember that the object rotates and the cameras are stationary.
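CALICO does the actual multi-camera, multi-rotation calibration; the sketch below only shows the marker-detection step such a pipeline builds on, using OpenCV's aruco module. The dictionary and file name are placeholders, not necessarily what the rig uses.

```python
# Sketch of ArUco marker detection, the building block under pattern-based
# calibration. API as in opencv-contrib-python 4.6 (newer releases moved to
# cv2.aruco.ArucoDetector). Dictionary and file name are placeholders.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

img = cv2.imread("frame_000.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
print(f"detected {0 if ids is None else len(ids)} markers")

# With the camera intrinsics known, each detected pattern yields a
# camera-to-pattern pose; CALICO chains those poses across cameras and
# rotation positions of the pedestal.
```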
After calibration, which we both feel very strongly about, we segment the target in each image and reconstruct the object using shape from inconsistent silhouette. And we are able to do potatoes, strawberries, peppers, grapes, and... pears, sometimes not so well...
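For intuition, here is a toy version of the reconstruction step. The pre-print's method is shape from inconsistent silhouette, which is built to tolerate segmentation errors; the plain voxel carving below is only the simpler idea it generalizes, and the inputs (masks, projection matrices, voxel grid) are assumed to exist already.

```python
# Toy voxel carving: keep a voxel only if it projects inside (nearly) every
# silhouette mask. Lowering min_votes below len(masks) is a crude nod to
# tolerating inconsistent/bad silhouettes; the actual method is more principled.
import numpy as np

def carve(masks, projections, grid_pts, min_votes=None):
    """masks: list of HxW bool arrays; projections: list of 3x4 matrices
    mapping homogeneous world points to pixels; grid_pts: Nx3 voxel centres."""
    H, W = masks[0].shape
    votes = np.zeros(len(grid_pts), dtype=int)
    pts_h = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])   # Nx4 homogeneous
    for mask, P in zip(masks, projections):
        uvw = pts_h @ P.T                                        # Nx3 projected
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        votes += hit
    if min_votes is None:
        min_votes = len(masks)          # classic carving: every view must agree
    return votes >= min_votes           # boolean occupancy per voxel
```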
However, I am lazier than heck and outright refused to take manual measurements, because manual measurements are not "ground truth", they suffer from measurement errors, and biological samples degrade over time, so they could not be measured consistently across multiple sessions.
So we downloaded and 3D printed a series of 11 odd objects that we could measure directly from the digital model (perfectly), mount on our pedestal, reconstruct, and digitally measure, session after session, and reprint if one was lost or damaged.
Again, we can examine the exact dimensions of our models along every axis! And we do pretty well! The comparison was made using iterative closest point (ICP) matching between the reconstructions and the ground-truth digital models.
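The thread doesn't say which ICP implementation was used, so take the snippet below as one way to do the alignment step: sample points from both meshes, run point-to-point ICP, and read off the fit. Open3D, the file names, and the correspondence threshold are placeholders.

```python
# One possible alignment step: register a reconstruction against the
# ground-truth digital model with point-to-point ICP (Open3D used here only
# as an example; file names and the 2.0 threshold are placeholders).
import numpy as np
import open3d as o3d

gt  = o3d.io.read_triangle_mesh("ground_truth.stl").sample_points_uniformly(50_000)
rec = o3d.io.read_triangle_mesh("reconstruction.ply").sample_points_uniformly(50_000)

result = o3d.pipelines.registration.registration_icp(
    rec, gt, 2.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

rec.transform(result.transformation)   # bring the reconstruction onto the ground truth
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
```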
Going in, we knew that the dice would be the worst offenders because we cannot capture the kind of concave/saddle regions they have. With those models excluded (2nd row), however, we do pretty well at measuring several size features.
Surface area is biased, but our models are made of voxels and have rough, jagged surfaces, while the ground-truth objects are perfectly smooth. So this was expected, and I probably wouldn't measure SA often anyhow.
Some improvements I would like to make, but simply haven't had the time, energy, etc.:
1. Start and stop the stepper motor with the same barcode trigger used for the cameras. This probably requires the RPi to power the Arduino on/off via GPIO? Maybe?
2. Automatically write images from the cameras into a computer directory named by the barcode scanner. This might require different cameras or something, idk; I haven't been able to figure that out (a hypothetical sketch of the filing step follows below).
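Purely as a hypothetical sketch of what #2 could look like on the Pi side, assuming the cameras (or some tethering tool) can deposit images into a watch folder at all, which is exactly the unsolved part:

```python
# Hypothetical sketch of improvement #2: after each capture, file any images
# that have landed in a watch folder into a directory named by the scanned
# sample ID. Paths and the image extension are placeholders; getting the
# Sony Alpha 6000s to deliver images to the Pi is the open problem.
import shutil
from pathlib import Path

WATCH_DIR = Path("/home/pi/incoming")     # placeholder download location
DATA_DIR = Path("/home/pi/captures")

def file_images(sample_id: str) -> None:
    dest = DATA_DIR / sample_id
    dest.mkdir(parents=True, exist_ok=True)
    for img in sorted(WATCH_DIR.glob("*.JPG")):
        shutil.move(str(img), str(dest / img.name))

file_images("PLATE-042")                  # sample ID as read from the barcode scanner
```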
