Meta’s Make-A-Video AI achieves a new, nightmarish state of the art

![A teddy bear painting a portrait](https://techcrunch.com/wp-content/uploads/2022/09/A_teddy_bear_painting_a_portrait.webp)

Meta’s researchers have made a significant leap in the AI art generation field with Make-A-Video, the creatively named new technique for — you guessed it — making a video out of nothing but a text prompt. The results are impressive and varied, and all of them, without exception, slightly creepy.

We’ve seen text-to-video models before — it’s a natural extension of text-to-image models like DALL-E, which output still images from prompts. But while the conceptual jump from a still image to a moving one is small for a human mind, it’s far from trivial to implement in a machine learning model.