I'm new to Lens, and am excited to be working on my first project!
My goal is to create a Lens project that detects an object and projects a mesh (and other content) onto it in 3D space, much like the Wannaby shoe example.
I followed the ML instructions and trained a model in Colaboratory (luckily my object was in the array of supported objects), and it seems to work - at least, with pictures. I then opened Wannaby's shoe project (since that's closer to what I want to achieve), and while the project itself totally worked, I was hoping to swap in my own ML model and let the existing scripts do the rest. To my chagrin, Colaboratory outputs an .onnx file, while the shoe example uses a .dnn, so it wasn't drag-and-drop compatible.
Can I get the detected object's position and rotation in 3D space with my existing .onnx model? Or does anyone have a recommendation for how to create a .dnn file? (It looks and sounds hard.)
I'm a newbie, please be kind!
Thanks for any help,