Posted on December 03, 2023

MagicAnimate
Temporally Consistent Human Image Animation using Diffusion Model


MagicAnimate Demos

MagicAnimate demos 1–4: Temporally Consistent Human Image Animation using Diffusion Model

What is Magic Animate?

Magic Animate (GitHub) is an exciting new open-source project that produces an animated video from a single image and a motion video.

As everyone's on the lookout for AnimateAnyone, MagicAnimate just dropped and it's seriously awesome!

MagicAnimate is a cutting-edge diffusion-based framework for human image animation. It excels at maintaining temporal consistency, faithfully preserving the reference image, and significantly enhancing animation fidelity. MagicAnimate stands out in its ability to animate reference images with motion sequences from various sources, including cross-ID animation and unseen domains such as oil paintings and movie characters. It also integrates seamlessly with T2I diffusion models like DALL·E 3, bringing text-prompted images to life with dynamic actions.

Who built MagicAnimate?

Magic Animate is built by Show Lab at the National University of Singapore and ByteDance (字节跳动).

Advantages of MagicAnimate

Currently, it offers the highest temporal consistency among available dance-video animation solutions.

Disadvantages of Magic Animate

  • Some distortion in the face and hands (a recurring issue).
  • In its default configuration, the style shifts from anime to realism, particularly noticeable in the faces in the videos. This might require modifying the checkpoint.
  • The default DensePose-driven videos are based on real humans, so applying an anime style can result in changes to body proportions.

AnimateAnyone VS MagicAnimate

WIP.

Since AnimateAnyone has not been released yet, there is no demo of AnimateAnyone that we can try.

Getting Started

Please download the pretrained base models for Stable Diffusion V1.5 and the MSE-finetuned VAE.

Download the MagicAnimate checkpoints.
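If you prefer to script the downloads, here is a minimal sketch using the huggingface_hub client. The repository IDs below are assumptions based on the models named above (the MagicAnimate checkpoint repo in particular), so verify them against the official README before running.

    # Sketch: fetch the pretrained weights with huggingface_hub.
    # The repo IDs are assumptions; check them against the MagicAnimate README.
    from huggingface_hub import snapshot_download

    # Stable Diffusion V1.5 base model
    snapshot_download("runwayml/stable-diffusion-v1-5",
                      local_dir="pretrained_models/stable-diffusion-v1-5")

    # MSE-finetuned VAE
    snapshot_download("stabilityai/sd-vae-ft-mse",
                      local_dir="pretrained_models/sd-vae-ft-mse")

    # MagicAnimate checkpoints (assumed repo ID)
    snapshot_download("zcxu-eric/MagicAnimate",
                      local_dir="pretrained_models/MagicAnimate")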

Installation

Prerequisites: python>=3.8, CUDA>=11.3, and ffmpeg.

Install with conda:

    conda env create -f environment.yml
    conda activate manimate
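Before installing, a quick sanity check of the prerequisites can save time. The snippet below is just a sketch and assumes PyTorch is already available in the active environment.

    # Quick sanity check for the prerequisites listed above.
    # Assumes PyTorch is installed in the active environment.
    import shutil
    import sys

    assert sys.version_info >= (3, 8), "Python >= 3.8 is required"
    assert shutil.which("ffmpeg") is not None, "ffmpeg must be on the PATH"

    import torch
    assert torch.cuda.is_available(), "A CUDA-capable GPU is required (CUDA >= 11.3)"
    print("PyTorch built against CUDA", torch.version.cuda)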

Try the MagicAnimate online demo on Hugging Face

Try the MagicAnimate online demo on Replicate

Please visit Magic Animate on Replicate.

Try MagicAnimate on Colab

You can refer to this tweet: How to Run MagicAnimate on Colab, and the Colab URL.

Magic Animate API

You can use the Replicate API to generate an animated video.
  
    import Replicate from "replicate";
    
    const replicate = new Replicate({
      auth: process.env.REPLICATE_API_TOKEN,
    });
    
    const output = await replicate.run(
      "lucataco/magic-animate:e24ad72cc67dd2a365b5b909aca70371bba62b685019f4e96317e59d4ace6714",
      {
        input: {
          image: "https://example.com/image.png",
          video: "https://example.com/motion.mp4", // URL of the input motion video (placeholder)
          num_inference_steps: 25, // Number of denoising steps
          guidance_scale: 7.5, // Scale for classifier-free guidance
          seed: 349324 // Random seed. Leave blank to randomize the seed
        }
      }
    );
    
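The same model can be called from Python with the replicate client; the sketch below mirrors the Node.js example above, with placeholder input URLs.

    # Python equivalent of the Node.js call above (pip install replicate).
    # Requires REPLICATE_API_TOKEN in the environment; input URLs are placeholders.
    import replicate

    output = replicate.run(
        "lucataco/magic-animate:e24ad72cc67dd2a365b5b909aca70371bba62b685019f4e96317e59d4ace6714",
        input={
            "image": "https://example.com/image.png",   # reference image
            "video": "https://example.com/motion.mp4",  # motion (DensePose) video
            "num_inference_steps": 25,                  # number of denoising steps
            "guidance_scale": 7.5,                      # classifier-free guidance scale
            "seed": 349324,                             # omit to randomize
        },
    )
    print(output)  # URL of the generated animation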

How to generate a motion video or convert a video into a motion video?

OpenPose is a real-time multi-person keypoint detection library for body, face, hand, and foot estimation.

You can convert a video to an OpenPose motion video with this model: video to openpose.

Then you can use the magic-animate-openpose model to run MagicAnimate with OpenPose input.
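Putting the two models together, a pipeline sketch might look like the following. The model identifiers and input field names are assumptions based on the links above (they may also need explicit version hashes), so check each model's page on Replicate before running.

    # Sketch of the two-step pipeline: video -> OpenPose video -> MagicAnimate.
    # Model IDs and input field names are assumptions; verify them (and any
    # required version hashes) on each model's Replicate page.
    import replicate

    # Step 1: convert an ordinary video into an OpenPose motion video (assumed model ID).
    openpose_video = replicate.run(
        "lucataco/vid2openpose",                # hypothetical identifier
        input={"video": "https://example.com/dance.mp4"},
    )

    # Step 2: drive MagicAnimate with the OpenPose video (assumed model ID).
    animation = replicate.run(
        "lucataco/magic-animate-openpose",      # hypothetical identifier
        input={
            "image": "https://example.com/portrait.png",
            "video": openpose_video,
        },
    )
    print(animation)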

More information about Magic Animate