Getting Started with the Diffusers Library

The goals of the diffusers library are:

  • to consolidate diffusion models into a single, long-term maintained project
  • to reproduce high-impact machine learning systems, such as DALL-E and Imagen, in a way that is accessible to the public
  • to let developers easily use the API to train new models or run inference with existing ones

The core of diffusers consists of three components:

  • Pipelines: high-level classes that generate samples from popular diffusion models quickly and in a user-friendly way
  • Models: popular architectures for training new diffusion models, e.g. UNet
  • Schedulers: various techniques for generating images from noise at inference time, and for producing noised images during training
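To see how the three pieces fit together, here is a minimal sketch; the toy UNet below is untrained and its sizes are arbitrary, so this illustrates only the structure, not a real checkpoint:

from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel

# A small, untrained UNet, purely for illustration
unet = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
scheduler = DDPMScheduler(num_train_timesteps=1000)

# A pipeline is essentially a model plus a scheduler behind one call
pipeline = DDPMPipeline(unet=unet, scheduler=scheduler)
image = pipeline(num_inference_steps=10).images[0]  # noise-like output, since the UNet is untrained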

The diffusers source code: https://github.com/huggingface/diffusers
Official documentation: https://huggingface.co/docs/diffusers/index

Installing diffusers

pip install --upgrade diffusers
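To confirm the installation (and the version, since the API moves quickly), a quick check; the pipeline dump later in this post was produced with 0.10.2:

import diffusers
print(diffusers.__version__)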

Loading pretrained models

diffusers loads models with from_pretrained(), either from a local path or downloaded automatically from the Hugging Face Hub.
Given an identifier, the local path is checked first; if nothing is found locally, the default remote address is base_url + the given identifier, where base_url = https://huggingface.co/


https://huggingface.co/CompVis hosts, among others:

  • stable-diffusion-v1-1
  • stable-diffusion-v1-2
  • stable-diffusion-v1-3
  • stable-diffusion-v1-4
  • and so on

https://huggingface.co/runwayml hosts:

  • stable-diffusion-v1-5
  • stable-diffusion-inpainting

Searching for upscaler on https://huggingface.co/ leads to stabilityai, which hosts many models, including:

  • stabilityai/stable-diffusion-2-1
  • stabilityai/stable-diffusion-x4-upscaler
  • stabilityai/stable-diffusion-2-inpainting
  • stabilityai/stable-diffusion-2-depth
  • ......

To load a model from the Hub, run code along the following lines.

XXXPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

Models downloaded with from_pretrained() are cached by default under C:\Users\Administrator\.cache\huggingface\diffusers (on Windows), which is controlled by the following code inside from_pretrained:

cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)

==>
DIFFUSERS_CACHE = default_cache_path

==>
hf_cache_home = os.path.expanduser(
    os.getenv("HF_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "huggingface"))
)
default_cache_path = os.path.join(hf_cache_home, "diffusers")

So, if you want to control where downloaded models are cached, there are two options:

  • pass cache_dir to from_pretrained
  • set the HF_HOME or XDG_CACHE_HOME environment variable

An example of the first option:

model_id = "stabilityai/stable-diffusion-x4-upscaler"  # e.g. one of the upscaler checkpoints listed above
pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16, cache_dir="./models/")

This creates a models directory under the current working directory and caches the downloaded model there.
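A sketch of the second option; the cache root below is a hypothetical path. Note that, as the code above shows, default_cache_path is computed when diffusers is imported, so the environment variable must be set before the import:

import os

os.environ["HF_HOME"] = "D:/hf_cache"  # hypothetical path; set before importing diffusers

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
# models are now cached under D:/hf_cache/diffusers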

Running inference with Diffusers

Import a pipeline and load the model with from_pretrained(); it can be a local model, or one downloaded automatically from the Hugging Face Hub.

from diffusers import StableDiffusionPipeline

image_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
# Or load a local model:
# image_pipe = StableDiffusionPipeline.from_pretrained("./models/Stablediffusion/stable-diffusion-v1-4")
image_pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"
pipe_out = image_pipe(prompt)

image = pipe_out.images[0]
# you can save the image with
# image.save(f"astronaut_rides_horse.png")

Let's inspect the contents of image_pipe:

StableDiffusionPipeline {
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.10.2",
  "feature_extractor": [
    "transformers",
    "CLIPFeatureExtractor"
  ],
  "requires_safety_checker": true,
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
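Each entry above is also an attribute on the pipeline object, so the components can be inspected or swapped individually. As one example (the choice of DDIMScheduler here is just an assumption for illustration), replacing the default PNDMScheduler:

from diffusers import DDIMScheduler

print(image_pipe.scheduler)  # PNDMScheduler, as listed above

# Build a DDIM scheduler from the current scheduler's config and swap it in
image_pipe.scheduler = DDIMScheduler.from_config(image_pipe.scheduler.config)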

And the structure of the pipeline output:

StableDiffusionPipelineOutput(
images=[<PIL.Image.Image image mode=RGB size=512x512 at 0x1A14BDD7730>], 
nsfw_content_detected=[False])

So pipe_out contains two parts: the first is the list of generated images; if there is only one image, pipe_out.images[0] retrieves it.
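The second part, nsfw_content_detected, is the per-image verdict of the pipeline's safety checker (flagged images are returned blacked out). A small sketch that pairs the two fields:

for img, nsfw in zip(pipe_out.images, pipe_out.nsfw_content_detected):
    if nsfw:
        print("safety checker triggered; the image was blacked out")
    else:
        img.save("output.png")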

What if we want to generate several images in one call?

  • To generate results for multiple prompts at once, simply pass a list:
from diffusers import StableDiffusionPipeline

image_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

image_pipe.to("cuda")
prompt = ["a photograph of an astronaut riding a horse", "a girl from England riding a horse"]
out_images = image_pipe(prompt).images
for i, out_image in enumerate(out_images):
    out_image.save("astronaut_rides_horse" + str(i) + ".png")

Sample output:

  • To generate multiple images from a single prompt, use the num_images_per_prompt parameter:
from diffusers import StableDiffusionPipeline

image_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

image_pipe.to("cuda")
prompt = ["a photograph of an astronaut riding a horse"]
out_images = image_pipe(prompt, num_images_per_prompt=2).images
for i, out_image in enumerate(out_images):
    out_image.save("astronaut_rides_horse" + str(i) + ".png")

Sample output:

When image_pipe generates images, it defaults to float32 precision. If local GPU memory is insufficient, this can raise an Out of memory error; the fix is to load the model in float16 precision instead.

Note: If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above.

You can do so by loading the weights from the fp16 branch and by telling diffusers to expect the weights to be in float16 precision:

import torch  # needed for torch.float16
image_pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16)
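If memory is still tight after switching to float16, diffusers also provides attention slicing, which computes attention in steps to lower peak memory at a small speed cost:

image_pipe.enable_attention_slicing()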

Every pipeline has its own configuration options. Besides the required prompt argument, StableDiffusionPipeline also accepts parameters such as:

  • num_inference_steps: int = 50
  • guidance_scale: float = 7.5
  • generator: Optional[torch.Generator] = None
  • and so on

Example: if you want identical results on every run, seed the generator identically each time:

generator = torch.Generator("cuda").manual_seed(1024)
prompt = ["a photograph of an astronaut riding a horse"] * 3
out_images = image_pipe(prompt, generator=generator).images
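The other parameters work the same way. For instance, a quick sketch trading speed against quality (the specific values here are arbitrary):

out_images = image_pipe(
    prompt,
    num_inference_steps=30,  # fewer denoising steps: faster, possibly lower quality
    guidance_scale=9.0,      # higher: follows the prompt more closely
    generator=generator,
).images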

On to training

Stay tuned.

posted @ 2023-02-23 11:36  iSherryZhang