A quick tutorial where I show you how to use the Thin Plate Spline Motion Model to generate animations from a single image. We’ll walk through the Google Colab notebook, check out a few results, and I’ll offer some general tips along the way. Take it and run with it! Enjoy.

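If you just want the gist of what the Colab cells are doing, here’s a rough sketch of the pipeline: one still source image plus one driving video go through the pretrained model, which transfers the motion onto the image frame by frame. The load_checkpoints / make_animation names, the checkpoint paths, the mode="relative" setting, and the 30 fps output are my assumptions modeled on the repo’s demo.py, so defer to the notebook for the exact calls. The 256×256 resize matters because the pretrained VoxCeleb checkpoint expects that resolution.

```python
# Minimal sketch of the single-image animation pipeline used in the Colab.
# NOTE: load_checkpoints / make_animation are assumed helper names modeled on
# the repo's demo.py -- check the notebook for the exact imports and arguments.
import imageio
import numpy as np
from skimage.transform import resize

from demo import load_checkpoints, make_animation  # from the Thin-Plate-Spline repo

# Inputs: one still image (the face to animate) and one driving video (the motion).
source_image = resize(imageio.imread("source.png"), (256, 256))[..., :3]
driving_video = [
    resize(frame, (256, 256))[..., :3]
    for frame in imageio.mimread("driving.mp4", memtest=False)
]

# Load the pretrained VoxCeleb model (paths are assumptions; the Colab downloads these).
inpainting, kp_detector, dense_motion, avd_network = load_checkpoints(
    config_path="config/vox-256.yaml",
    checkpoint_path="checkpoints/vox.pth.tar",
    device="cuda",
)

# Transfer the driving video's motion onto the still image, frame by frame.
predictions = make_animation(
    source_image, driving_video,
    inpainting, kp_detector, dense_motion, avd_network,
    device="cuda", mode="relative",
)

# Save the result (30 fps is an assumption; match your driving video's frame rate).
imageio.mimsave("result.mp4", [(255 * f).astype(np.uint8) for f in predictions], fps=30)
```
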
This is useful for img2img prompting, generating training data for multiple poses, making too many awkward videos of yourself pulling faces to use as the driving video, feeding Deforum’s video input to clean up the noise, having tons of fun, and of course getting into science! If you have any input about this, please share! I’m super interested to know all that I can, so comment! Alright. I’m pushing publish.

From the paper, “Thin-Plate Spline Motion Model for Image Animation,” by Jian Zhao and Hui Zhang:

“Image animation brings life to the static object in the source image according to the driving video. Recent works attempt to perform motion transfer on arbitrary objects through unsupervised methods without using a priori knowledge. However, it remains a significant challenge for current unsupervised methods when there is a large pose gap between the objects in the source and driving images. In this paper, a new end-to-end unsupervised motion transfer framework is proposed to overcome such issues.”

Links:

Thin Plate Spline Motion Model GitHub
Really cool arXiv page
PDF

This was my first attempt at a tutorial video. Feedback, criticism, encouragement, likes, and subscribes are very much appreciated. If this goes over well enough I’ll give it another go. I have some fun things in mind. Let me know! It’s nice to have motivation to create things.

Update: I came across Nerdy Rodent’s latest video and was stunned by his pretty face moving around, all stylish like an actor from an old movie, while he explained some new thing. He makes some great videos you should definitely check out. He works hard. He uses this technique and did a tutorial on it as well, here: Nerdy Rodent.

Keep Stable Diffusion for ALL.

Aimfriende
Perpetually Preposterous