Hello everyone!
I’m trying to be more open about my experimental processes for some of the side projects I spin up along the way. I’m constantly evaluating new technologies and the implications they could have for the products we create at ActionVFX. So this post is me trying to let people in a little earlier in the exploratory process!
In this test, I’m utilizing an early-stage machine-learning model to generate a high framerate video clip based on our existing assets.
(Watch in max resolution)
It analyzes the frames in the clips provided and generates completely new frames in the exported video clip. In its most basic form, it analyzes Frame A and Frame C and solves for Frame B, though that example barely scratches the surface of how it actually works.
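To make the "Frame A + Frame C → Frame B" idea concrete, here is a deliberately simplistic sketch in Python using NumPy. It generates the in-between frame as a pixel-wise average of its neighbors. This is NOT what the ML model does (real interpolators estimate motion between frames, which avoids the ghosting a plain blend produces on moving objects), just a minimal illustration of the concept:

```python
import numpy as np

def interpolate_midframe(frame_a: np.ndarray, frame_c: np.ndarray) -> np.ndarray:
    """Naive stand-in for ML frame interpolation: synthesize Frame B as the
    pixel-wise average of Frames A and C. Learned models instead predict
    motion (optical flow) and warp pixels along it."""
    blended = (frame_a.astype(np.float32) + frame_c.astype(np.float32)) / 2.0
    return blended.astype(frame_a.dtype)

# Two tiny 2x2 grayscale "frames": dark, then bright.
frame_a = np.zeros((2, 2), dtype=np.uint8)
frame_c = np.full((2, 2), 100, dtype=np.uint8)

frame_b = interpolate_midframe(frame_a, frame_c)  # every pixel lands at 50
```

Repeating this between every pair of source frames doubles the framerate; running it recursively on the results doubles it again, which is how a 24 FPS clip can be pushed toward very high effective framerates.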
You’ll notice that when playback is slowed to 10% speed, the un-interpolated clip begins dropping frames, while the interpolated clip, with its higher framerate, retains much smoother motion.
One implication of this technology is that we could retroactively process the elements in our existing effects library to offer an experimental “High Framerate” download option: if someone needed a clip at, say, 500 FPS, they could simply download it.
It would take quite a bit more investigative work before this would be feasible, but would this even be something you’d be interested in us doing? Because, well…