MLInpainter - Individual
Before you buy:
Please note that MLInpainter is currently only available for the following operating systems and DCC packages.
Nuke/X/Studio and After Effects on Windows 10
Nuke/X/Studio on Linux: Ubuntu 22.04.x, CentOS 8, and Rocky Linux 8 or above.
MLInpainter is not available for Nuke Indie at this time.
What is MLInpainter?
MLInpainter is a tool created to help speed up cleanup tasks for VFX artists.
It leverages a machine learning inpainting model to clean up and remove objects from images, driven by alpha input from brush strokes or roto shapes. The model is trained on a responsibly sourced dataset with commercially permissive terms, avoiding legal issues and making it easier for VFX artists and studios to handle the industry reality of ever-growing workloads, shorter deadlines, and shrinking budgets.
All image data is processed locally and kept off the cloud, reducing cloud service subscriptions and avoiding potential security issues with client data.
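At a high level, the workflow looks like the sketch below: an alpha mask from brush strokes or roto selects the region to replace, the model fills it, and the fill is composited back over the original. The `inpaint_model` callable is a hypothetical stand-in, not MLInpainter's actual API.

```python
import numpy as np

def inpaint_with_mask(image, alpha, inpaint_model):
    """Hypothetical sketch of the mask-driven workflow.

    image         : float32 RGB array, shape (H, W, 3)
    alpha         : float32 mask, shape (H, W); > 0 where pixels are removed
    inpaint_model : callable(image, mask) -> filled image; a stand-in for
                    the actual model, whose API is not public here
    """
    mask = (alpha > 0.0).astype(np.float32)   # binarize brush/roto strokes
    filled = inpaint_model(image, mask)       # model fills the masked region
    # Keep original pixels outside the mask, model output inside it.
    return image * (1.0 - mask[..., None]) + filled * mask[..., None]
```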
v0.2.1 Features:
- Inpainting model trained on a commercially appropriate dataset.
- All image data is processed locally.
- 32-bit support.
- Project-colorspace-independent inpainting using OCIO.
- Adjustable inpaint regions and full-frame processing.
- Automatic image padding to support non-standard image dimensions (see the sketch after this list).
- Pre and post-processing settings.
- Experimental sequence inpainting support in Nuke.
- Refine results feature for improved color and detail when inpainting larger areas at higher resolutions.
- Versioning system for a more robust iterative and baked inpainting workflow.
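On the padding point: convolutional inpainting models typically require dimensions divisible by a fixed stride, so non-standard frames are padded before inference and cropped back afterwards. A minimal sketch of that idea, assuming reflect padding and a multiple of 8 (the stride MLInpainter's model actually requires isn't documented here):

```python
import numpy as np

def pad_to_multiple(image, multiple=8):
    """Reflect-pad an (H, W, C) image so H and W divide evenly by `multiple`.

    Returns the padded image and the original size, so the frame can be
    cropped back to its true dimensions after inpainting.
    """
    h, w = image.shape[:2]
    pad_h = (-h) % multiple   # rows needed to reach the next multiple
    pad_w = (-w) % multiple   # columns needed
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="reflect")
    return padded, (h, w)

def crop_back(padded, original_size):
    h, w = original_size
    return padded[:h, :w]
```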
Can I use it in a commercial setting?
Yes, you can use this tool in a commercial setting. The model and the dataset its weights were trained on are both licensed under commercially permissive terms.
What DCC packages and platforms is it available for?
MLInpainter is available for use with Nuke on Windows and Linux, and After Effects on Windows, with plans to add Blender and Fusion integrations a little further down the road.
Who is this version intended for?
This version is intended for individual artists/freelancers in non-air-gapped work environments.
What kind of dataset was it trained on?
MLInpainter's model was trained on a custom dataset composed of responsibly sourced CC0 images, public domain images, and images with commercially permissive terms.
How is MLInpainter different from Generative Fill or Inpainting with Stable Diffusion?
Generative Fill and Stable Diffusion inpainting make use of diffusion models and prompting to synthesize plausible infill for masked image regions from latent space. These diffusion models are also not fully cleared for commercial use due to questions surrounding their training datasets, an especially sticky issue given the current climate around 'Generative AI', dataset provenance, and client-side data security requirements.
MLInpainter instead uses a GAN model built on 'Fast Fourier Convolutions' to create plausible infill for masked image regions, relying on features and colors present in the image itself rather than synthesizing infill from latent space.
During training, MLInpainter's model learns how to reconstruct missing parts of an image; it does not memorize or reproduce the training data itself. There is no prompting involved.
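For intuition, here is a minimal PyTorch sketch of the Fourier-convolution idea: transform the feature map to the frequency domain, apply a pointwise convolution there, and transform back, which gives the layer an image-wide receptive field. This illustrates the general technique only; it is not MLInpainter's actual network.

```python
import torch
import torch.nn as nn

class SpectralConv(nn.Module):
    """Minimal sketch of a Fourier convolution: a 1x1 convolution applied
    in the frequency domain, giving every output location a receptive
    field spanning the whole image. Illustrative only."""

    def __init__(self, channels):
        super().__init__()
        # Real and imaginary parts are stacked on the channel axis,
        # so the pointwise conv operates on 2x the channels.
        self.conv = nn.Conv2d(channels * 2, channels * 2, kernel_size=1)

    def forward(self, x):  # x: (B, C, H, W), real-valued features
        ffted = torch.fft.rfft2(x, norm="ortho")         # complex (B, C, H, W//2+1)
        y = torch.cat([ffted.real, ffted.imag], dim=1)   # (B, 2C, H, W//2+1)
        y = torch.relu(self.conv(y))                     # mix per frequency bin
        real, imag = torch.chunk(y, 2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag),
                                s=x.shape[-2:], norm="ortho")
```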
What are the development plans for the tool going forward?
Regular updates with improvements and new features are planned after launch.
Some of what is planned can be found below.
Coming in the next update:
- Improved temporal consistency for experimental video inpainting.
- Experimental video inpainting support for After Effects.
- Revised, Nuke Indie-friendly integration.
- Blender 3.6 DCC Support.
- Fusion DCC Support.
- Larger-parameter model variant for improved inpaint detail and color at higher resolutions.