
LivePortrait Local Deployment Tutorial: Powerful, Open-Source, Controllable Portrait AI Video Generation

Published: July 30, 2024

1. Preparation: download the code locally and set up the environment (git must be installed before running the commands below)

git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait

# create env using conda
conda create -n LivePortrait python=3.9
conda activate LivePortrait

# install dependencies with pip
# for Linux and Windows users
pip install -r requirements.txt
# for macOS with Apple Silicon users
pip install -r requirements_macOS.txt

Note: make sure FFmpeg is installed on your system, including both ffmpeg and ffprobe! Don't know how to install it? See this FFmpeg installation tutorial.
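To confirm both binaries are available on your PATH, a quick version check is enough (any recent FFmpeg build should work):

# verify that ffmpeg and ffprobe are installed and reachable
ffmpeg -version
ffprobe -version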

2. Download the pretrained weights

The easiest way to get the pretrained weights is to download them from HuggingFace:

# first, ensure git-lfs is installed, see: https://docs.github.com/en/repositories/working-with-files/managing-large-files/installing-git-large-file-storage
git lfs install
# clone and move the weights
git clone https://huggingface.co/KwaiVGI/LivePortrait temp_pretrained_weights
mv temp_pretrained_weights/* pretrained_weights/
rm -rf temp_pretrained_weights
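If git-lfs gives you trouble, the huggingface_hub CLI is an alternative route. A minimal sketch, assuming the CLI has been installed via pip (this option is not part of the original tutorial):

# install the HuggingFace CLI, then download the weights straight into pretrained_weights/
pip install -U "huggingface_hub[cli]"
huggingface-cli download KwaiVGI/LivePortrait --local-dir pretrained_weights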

 

If you are not overseas and have no access to the external network, you can download all the pretrained weights from Google Drive or Baidu Cloud instead. Unzip them and place them under ./pretrained_weights.
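For example, assuming the downloaded archive is named liveportrait_weights.zip (the actual file name may differ):

# hypothetical archive name; substitute the file you actually downloaded,
# and adjust the target if the archive already contains a top-level pretrained_weights folder
unzip liveportrait_weights.zip -d ./pretrained_weights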

Make sure the directory structure matches, or at least contains, the following:

pretrained_weights
├── insightface
│   └── models
│       └── buffalo_l
│           ├── 2d106det.onnx
│           └── det_10g.onnx
└── liveportrait
    ├── base_models
    │   ├── appearance_feature_extractor.pth
    │   ├── motion_extractor.pth
    │   ├── spade_generator.pth
    │   └── warping_module.pth
    ├── landmark.onnx
    └── retargeting_models
        └── stitching_retargeting_module.pth
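To double-check that every file landed in the right place, you can list the directory and compare it against the layout above (tree works too, if installed):

# list all weight files for a quick comparison with the expected layout
find pretrained_weights -type f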

3. Running inference

# For Linux and Windows
python inference.py

# For macOS with Apple Silicon (Intel is not supported); this may be ~20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python inference.py

If the script runs successfully, you will get an output mp4 file named animations/s6--d0_concat.mp4. This file contains the driving video, the input image or video, and the generated result.
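You can preview the result with ffplay, which ships with most FFmpeg builds (assuming it was installed alongside ffmpeg):

# play the concatenated result video
ffplay animations/s6--d0_concat.mp4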


Alternatively, you can change the inputs by specifying the -s and -d arguments:


# source input is an image
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4

# source input is a video ✨
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d0.mp4

# see all available options
python inference.py -h

Driving video auto-cropping 📢📢📢

To use your own driving video, we recommend: ⬇️

  • Crop it to a 1:1 aspect ratio (e.g. 512×512 or 256×256 pixels), or enable auto-cropping with --flag_crop_driving_video (a manual-cropping sketch follows this list).
  • Focus on the head region, similar to the example videos.
  • Minimize shoulder movement.
  • Make sure the first frame of the driving video shows a frontal face with a neutral expression.
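If you prefer to crop manually, FFmpeg can do the square crop and resize in one pass. A minimal sketch, where input.mp4 is a placeholder for your own footage and the crop may need tuning to keep the head centered:

# center-crop to a square, then scale to 512x512; audio is copied unchanged
ffmpeg -i input.mp4 -vf "crop='min(iw,ih)':'min(iw,ih)',scale=512:512" -c:a copy driving_512.mp4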

Here is an example of auto-cropping with --flag_crop_driving_video:

python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video

If the auto-cropping result looks off, you can adjust the scale and vertical offset via the --scale_crop_driving_video and --vy_ratio_crop_driving_video options, or simply crop by hand.
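For example (the numeric values below are illustrative starting points, not tuned recommendations):

# widen the crop and nudge it vertically; tune both values for your own video
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video --scale_crop_driving_video 2.2 --vy_ratio_crop_driving_video -0.1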

Creating motion templates

You can also use the auto-generated motion template files ending in .pkl to speed up inference and protect privacy, for example:

python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl # portrait animation
python inference.py -s assets/examples/source/s13.mp4 -d assets/examples/driving/d5.pkl # portrait video editing

4. The Gradio web interface

The Gradio web interface offers a better experience and is well suited to beginners; just run the following:


# For Linux and Windows users (and possibly macOS with Intel)
python app.py

# For macOS with Apple Silicon users (Intel is not supported); this may be ~20x slower than an RTX 4090
PYTORCH_ENABLE_MPS_FALLBACK=1 python app.py

You can specify the --server_port, --share, and --server_name arguments to suit your needs!
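For instance (the host and port values here are just examples):

# serve on all interfaces at a custom port and create a public share link
python app.py --server_name 0.0.0.0 --server_port 8890 --share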

🚀 An acceleration option, --flag_do_torch_compile, is also provided. The first inference triggers an optimization step (about one minute), making subsequent inferences 20-30% faster. The gain may vary across CUDA versions.

# enable torch.compile for faster inference
python app.py --flag_do_torch_compile

Note: this method is not supported on Windows and macOS. Alternatively, you can easily try it out on HuggingFace 🤗.

5. Inference speed evaluation

A script is provided to evaluate the inference speed of each module:

# For NVIDIA GPU
python speed.py

Below are the results for inferring one frame on an RTX 4090 GPU using the native PyTorch framework with torch.compile:

Model                                Parameters (M)   Model Size (MB)   Inference (ms)
Appearance Feature Extractor         0.84             3.3               0.82
Motion Extractor                     28.12            108               0.84
Spade Generator                      55.37            212               7.59
Warping Module                       45.53            174               5.21
Stitching and Retargeting Modules    0.23             2.3               0.31

Note: the values for the Stitching and Retargeting Modules represent the combined parameter count and total inference time of three sequential MLP networks.

Of course, if you don't have a capable GPU and can't run it locally, you can try it for free on HuggingFace: [Click here] to use it online.

