ML Data Sets
Hey Guys,
I'm pretty bad at coding and I only know a little Java, so I wanted to reach out to the community here in the coding/scripting section. I have a dataset that I'll be uploading to GitHub, and I'd like to download and train on it as a new dataset instead of the current COCO one. I'm just using the regular style transfer model, but something clicked during Lens Fest when the woman was talking about her statue ML dataset. If anyone can help me out, that would be great!
Happy Creating,
Ryan
Kaggle is a good place to find datasets. They are of varying quality, but there are quite a few to look through. Other than that, people sometimes put datasets up on GitHub.
You can find pre-trained models in the ONNX Model Zoo, although not all can be run in LS due to size constraints.
Hey Mike,
So I have the datasets; the only problem is finding a way to use them in the Style Transfer template. Essentially I have over 1k images that I'd want to use to build a style transfer lens, much like you can in RunwayML. The only problem is I couldn't see a way to export a RunwayML model, and when looking at Fritz AI I couldn't see a way to upload a dataset instead of a single image. That's what brought me back to the Google Colab.
Gotcha. I have used both the Google Colab and Fritz AI style transfer templates. As you saw, Fritz AI only takes a single style image. The example Google Colab also takes a single style image. This is due to how the model itself is constructed. I don't know enough ML to go into any more detail, but I think the notebook starts with a pretrained model and freezes most of the model in place. It then uses the input style image and the downloaded image dataset to retrain a portion of the model to transfer the style over (although maybe it does train from scratch).
I've never used RunwayML, but I'm guessing you might have been using a CycleGAN style transfer or something.
It is possible to train and run a CycleGAN in LS (example of mine here), but it isn't the most straightforward process. I used this code repo, edited the code to output an ONNX file, and tweaked the parameters until I got a good enough model that could still fit inside a lens.
I am planning on making a tutorial on how to modify the code and set the parameters, but I don't have an ETA on it, and it's too extensive to put into a forum comment.
If you do decide to dive into things and try to figure it out, I do recommend training on just a handful of images to make sure you can load the model into LS. This will be a horrible model that doesn't give good results, but you'll be able to verify that all the model layers are readable by LS and that the model size isn't too big. That way you don't waste however long training your model just to find out you can't use it in LS.
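Before a long training run, a quick file-size check on the throwaway model catches the "too big for a lens" failure cheaply. A small sketch, assuming the exported ONNX file is on disk; the 10 MB budget here is an assumption, not an official Lens Studio number, so check the current docs for the real cap:

```python
import os

# Assumed size budget for a model inside a lens -- NOT an official limit,
# verify against the current Lens Studio documentation.
MAX_BYTES = 10 * 1024 * 1024

def check_model_size(path: str, limit: int = MAX_BYTES) -> bool:
    """Return True if the model file at `path` fits within `limit` bytes."""
    size = os.path.getsize(path)
    print(f"{path}: {size / 1e6:.2f} MB (budget {limit / 1e6:.0f} MB)")
    return size <= limit
```

Passing this check doesn't guarantee LS can read every layer, but failing it tells you immediately that the architecture or parameter count needs shrinking.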
And the GAN models are a lot bigger than the template style transfer models. But they could potentially capture more of an artist's nuances from a dataset of images rather than a single example. I haven't tried a style transfer with them yet, but I'm interested to see how it turns out.