Exporting TensorFlow model using tutorial results in errors
Hello!
I am following this tutorial to export my model: https://lensstudio.snapchat.com/guides/machine-learning/ml-frameworks/export-from-tensorflow/
This is my model:
As far as I know, this model follows the requirements: the input node is a placeholder with rank 4 and shape (N, H, W, C). I am able to use the freeze-model code you provided to get my model into a .pb, but that file is too big and won't import into Lens Studio. I tried to use the optimization code provided, but at first I received a parsing error, which I resolved by moving
graph_def = tf.GraphDef()
from the second line of export_from_frozen_graph to be the first line after the with tf.gfile statement. That error went away (I'm mentioning it in case it helps diagnose the problem, or helps someone else).
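For reference, here is roughly what the fixed beginning of the function looks like (a sketch; the tutorial's actual function does more after this, and the file name is just an example):

import tensorflow as tf  # TF 1.x, as in the tutorial

def export_from_frozen_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        # creating the GraphDef before parsing is the reordering that fixed the error for me
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # ... rest of the tutorial's function (optimization + ONNX export) goes here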
Then I ran into an error:
onnx export input node output not found in graph
I couldn't find any leads on fixing this, so I thought there must just be something wrong with the first node, and that renaming it using the code provided in the tutorial would fix it (also, there is a typo in that code: the closing paren on the third-to-last line is commented out). Alas, I ran into another error:
Attempted to map inputs that were not found in graph_def: [placeholder:0]
I set the names of my input & output nodes by passing name="placeholder" and name="output" when I create the model. However, when I checked the model by downloading an ASCII version of the .pb, the first and last names were not what I set them to (the first was named 'x' and the last 'functional_1/output/Tanh'), and even when I pass those as the input and output names to optimize_graph, I run into the same errors.
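In case it helps, this is roughly how I've been listing the node names to double-check them (a sketch, assuming TF 1.x; the file name is mine):

import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# print every node's name and op so the real input/output names are visible
for node in graph_def.node:
    print(node.name, node.op)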
Does anyone have insights on how I can resolve these errors? I can provide code; I didn't include all of it here because this post was already getting too long haha :)
Thank you,
Char
Hi Char Stiles,
Do you mind sharing your current frozen graph model before optimization? Also, may I know the current model size before optimization? Normally the optimization step only removes inference-irrelevant nodes and may not decrease the model size by much.
Best,
Eric
Hello @EricHu,
Thank you for your comment. I ended up abandoning this and switching to PyTorch, and that worked for me. I can still get you my .pb if you'd like.
I was able to get my PyTorch ONNX into Lens Studio and it's compatible, but I see a black screen when I hook it up. It is too big at the moment; I just want to get the model in first, then I can make it smaller. But now that you mention it, I'm wondering: does the model have to be less than 10 MB for it to even run in Lens Studio? Is that why I'm seeing a black screen? I thought maybe it was because my ONNX is NCHW and Lens Studio needs NHWC, but I didn't see anything about that being required on the PyTorch export page.
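For context, my export call looks roughly like this (a sketch; the Sequential model here is just a stand-in for my real one, and the input size is an example):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())  # stand-in model
model.eval()
dummy_input = torch.randn(1, 3, 256, 256)  # NCHW, PyTorch's native layout
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["placeholder"], output_names=["output"])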
Anyway, this really should be its own question. Thank you for your time.
Hi Char Stiles,
Glad that it worked with ONNX. Nonetheless, it would be great if you could provide your TensorFlow frozen graph so that we can improve importer compatibility on the Lens Studio side. Yes, we have a pretty strict size limit right now, mainly because we are running lenses on phones with varying capabilities and computational power. If the model is too large, it may be very laggy and hurt the user experience. The black screen may be a separate issue that needs more information and investigation.
Thanks,
Eric
Hi,
Here are my frozen graphs: https://drive.google.com/drive/folders/1hYC_KyKtc0oMoUoZXdtm9Fs9tgDQF5WM?usp=sharing
In all of them I added names to the first and last nodes. I named the first node placeholder, which I think means its actual name is placeholder:0; the output I named output, meaning output:0.
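(As a quick sanity check of that :0 convention, a bare TF 1.x placeholder shows it: the tensor name is the node name plus its output index.)

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 256, 256, 3), name="placeholder")
print(x.name)  # prints "placeholder:0"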
I'm not sure if there is a difference between the withcheckpoints and withnames frozen graphs.
I included the ASCII frozen graph, as I couldn't figure out any of the ways to display the graph listed in the tutorial, but that is more of a me problem.
Cheers,
Char