Every now and then a fascinating research project comes to my attention and I have to mention it. This is one of those: a research project, hosted by Cornell University Library, on using artificial intelligence to automatically create in-between animation frames for traditional animation.
It’s an overlooked problem: 3D CGI animation and Flash vector-based animation already use interpolation to accomplish in-betweening, such that the animator need only draw a few keyframes per second. However, Flash animation has never been fully accepted as a replacement for hand-drawn animation; today Flash is mainly used for low-budget webtoons and children’s shows. Hand-drawn animation is still largely… well, hand-drawn. Drawings can be scanned to bitmap images and even converted to vectors, but that doesn’t tell the computer how the images have changed; if an eye is open in one frame and closed in another, how does the computer know the image is of an eye in the first place, and which parts of the frames are supposed to correspond? So in-between animation is still done by entry-level animators. It’s a time-consuming process, and recent reports shine light on the poor working conditions of animators in Japan.
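For what it’s worth, the interpolation that vector animation relies on is trivial once the computer knows which points correspond between keyframes; that correspondence is exactly what scanned hand-drawn frames lack. A minimal sketch (hypothetical point data, plain linear interpolation):

```python
# Minimal sketch of keyframe in-betweening by linear interpolation.
# Assumes the computer already knows which points correspond between
# the two keyframes -- the very information hand-drawn frames lack.

def lerp_frame(key_a, key_b, t):
    """Interpolate each (x, y) point between two keyframes at time t in [0, 1]."""
    return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(key_a, key_b)]

# Hypothetical keyframes: an "eyelid" curve, open vs. closed.
open_eye   = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
closed_eye = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]

# The halfway in-between frame -- the middle point drops to half height:
midway = lerp_frame(open_eye, closed_eye, 0.5)
# midway == [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
```

With scanned drawings there is no such point list to feed in, which is why the in-betweening problem is hard enough to need a neural network in the first place.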
So the research above showcases an attempt to use neural networks to filter frames and automatically create in-betweens. The attempt makes a lot of sense, but the video, viewed in fullscreen, shows the result is a little lacking… it looks like rough motion blur that doesn’t really replace the real thing. All the same, I’m curious whether the research can improve. It’s developed by Yuichi Yagi of DWANGO in Japan. Incidentally, DWANGO was in the news months ago when their other work was deemed an “insult” to the art form… I hope they don’t give up completely, but perhaps alternatives to completely replacing the human hand should be considered.
The full paper can be read here (it’s mostly in Japanese, but the abstract gives some idea of what it is about in English).