Machine Learning - Position and Rotation, Masking etc
I'm new to Lens, and am excited to be working on my first project!
My goal is this: to create a Lens project that detects an object and projects a mesh onto it in 3D space (much like the Wannaby shoe example).
I followed the ML instructions and trained a model in Colaboratory (luckily, my object was in the array of supported objects), and it seems to work, at least with pictures. However, I tried opening Wannaby's shoe project (as that's closer to what I want to achieve), and while it totally worked, I was hoping to inject my ML model and let the scripts do the rest. To my chagrin, Colaboratory spits out a .onnx while the shoe example uses a .dnn, which wasn't drag-and-drop compatible.
Can I ascertain the detected object's position and rotation in 3D space with my existing .onnx? Or does anyone have a recommendation for how to create a .dnn file? (It looks and sounds hard.)
I'm a newbie, please be kind!
Thanks for any help,
Hi Casey Berman,
Once you have an ONNX model ready, you can import it using the MLComponent in Lens Studio, and our internal converter will convert the ONNX model to a .dnn model, provided all of its operations are supported for mobile inference.
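For anyone landing on this thread later, a minimal sketch of wiring an imported model into an MLComponent from script looks roughly like the snippet below. This follows the SnapML scripting API as documented; the `model` input name is an assumption, and this fragment only runs inside the Lens Studio runtime.

```javascript
// @input Asset.MLAsset model
// Sketch only: attach an imported ML model (your .onnx, converted to .dnn
// on import) to an MLComponent and run it every frame once it has loaded.

var mlComponent = script.sceneObject.createComponent("Component.MLComponent");
mlComponent.model = script.model;

mlComponent.onLoadingFinished = function () {
    // Schedule inference each frame after the model finishes loading
    mlComponent.runScheduled(
        true,
        MachineLearning.FrameTiming.Update,
        MachineLearning.FrameTiming.Update
    );
};

// Build with no placeholder overrides; inputs/outputs come from the model
mlComponent.build([]);
```

Note that the inference-mode error you saw is consistent with configuring the component before its model has loaded, which is why the setup above waits for `onLoadingFinished`.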
Thanks for getting back to me. I thought I responded, but it looks like I didn't...
To give a little clarity, here is a link to the Wannaby presentation, in case you're not already familiar with it: https://lensstudio.snapchat.com/templates/ml/ml-templates-library/
My confusion is that there is no .onnx in the Wannaby example, only a .dnn. When I drag that .dnn into the ML Component in Lens Studio, I don't get any errors (it's always possible that Lens Studio is prepared for someone to make a mistake like that, but it seems odd to me). When I drag my .onnx into the ML Component, I get an "ML Component can only set inference mode before model loading" error, and it seems to recur every minute or so. Also, even if I remove the .onnx from the ML Component (or delete it from the project), the project ceases to work; I have to re-download the original project and start again.
Do you have any insight into this? My model works on pictures of stop signs in the editor, so I don't *think* the model itself is the problem, but maybe I'm wrong.
Lastly, there does not seem to be a .onnx file in the Wannaby project's directories, and there is no .dnn in my dummy project's directories. I'm working in Lens Studio 3.0, and I know 3.1 is out now; do you think that could have anything to do with it?
I can send you the model if that would help troubleshoot.
Instead of directly dragging your .onnx model into the existing MLComponent, could you please try importing your model by clicking the + button at the top left of the Objects panel and searching for MLComponent? I tried this in Lens Studio 3.1 but couldn't reproduce the error you reported.
Thanks again for getting back to me. You're correct, that does not result in an error. However, it doesn't solve my issue: I want to attach an object to the detected object in real 3D space.
Anyway, let me try rephrasing my question slightly. If I want to attach a physical object (say, a cube) to my ML Component's detected instance in three-dimensional space, how would I do that?
I was trying to use the working example from Wannaby, changing the model and adjusting from there, but maybe its aims are too specific and too different from mine. If I wanted to put a sign onto or below a detected stop sign, for example, can you point me in a direction to do that?
I saw in the documentation that attaching a 3D object to a model detection instance IS possible, but it seems to be only with the built-in models (people, hands, cats, and dogs). Can I do the same with a custom-trained ML model? Or is this a feature coming soon to the platform? Lastly, am I missing something in the documentation?
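To make the geometry I'm after concrete (this is purely illustrative math, not a Lens Studio API): if the detector gives a 2D bounding box and you know the object's real-world size, you can estimate a 3D anchor position with a pinhole-camera model. Everything below is a hypothetical sketch under those assumptions.

```javascript
// Sketch: estimate a 3D position (camera frame, z forward) from a 2D
// detection box, assuming a pinhole camera with known focal length in
// pixels and a known real-world object width. All names are hypothetical.
function estimate3dPosition(box, imageSize, focalPx, realWidth) {
    var boxW = box.xMax - box.xMin;
    // Depth from similar triangles: realWidth / z = boxW / focalPx
    var z = (realWidth * focalPx) / boxW;
    // Back-project the box center through the camera intrinsics
    var cx = imageSize.w / 2;
    var cy = imageSize.h / 2;
    var u = (box.xMin + box.xMax) / 2;
    var v = (box.yMin + box.yMax) / 2;
    return {
        x: ((u - cx) * z) / focalPx,
        y: ((v - cy) * z) / focalPx,
        z: z
    };
}

// A stop sign ~0.75 m wide, detected as a 150 px box in a 640x480 frame,
// centered in the image: lands on the optical axis, 2.5 m ahead.
var pos = estimate3dPosition(
    { xMin: 245, yMin: 165, xMax: 395, yMax: 315 },
    { w: 640, h: 480 },
    500,
    0.75
);
console.log(pos); // { x: 0, y: 0, z: 2.5 }
```

If something like this is how the built-in trackers place objects, I'd love to know whether the same hook exists for custom models.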