Baidu Open-Sources Qianfan-VL: Domain-Enhanced General-Purpose Vision-Language Models - A Detailed Look

Domain capabilities enhanced through continued pre-training | 3B to 70B parameters | Strengthened document understanding and OCR | Chain-of-thought reasoning support

Model Description

Qianfan-VL is a family of general-purpose multimodal large language models optimized for enterprise-grade multimodal applications. The models retain strong general capabilities while being deeply optimized for the high-frequency scenarios of industrial deployment.

Model Variants

| Model | Parameters | Context Length | CoT Support | Best Suited For |
|---|---|---|---|---|
| Qianfan-VL-3B | 3B | 32k | No | Edge deployment, real-time OCR |
| Qianfan-VL-8B | 8B | 32k | Yes | Server-side general-purpose scenarios, fine-tuning |
| Qianfan-VL-70B | 70B | 32k | Yes | Complex reasoning, data synthesis |

Architecture

  • Language model
    • Qianfan-VL-3B: built on Qwen2.5-3B
    • Qianfan-VL-8B/70B: built on the Llama 3.1 architecture
    • Enhanced with a 3T-token multilingual corpus
  • Vision encoder: based on InternViT, with dynamic tiling supporting up to 4K resolution
  • Cross-modal fusion: an MLP adapter provides efficient vision-language bridging (a rough sketch follows this list)
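The cross-modal connector is described above only as an MLP adapter that maps vision-encoder features into the language model's embedding space. As a hypothetical illustration only (the class name, dimensions, and layer count below are placeholders, not Qianfan-VL's actual configuration), such a projector can be sketched as:

import torch
import torch.nn as nn

class MLPProjector(nn.Module):
    """Toy MLP adapter: projects vision-token features into the LLM hidden size.
    All dimensions are illustrative placeholders, not the real Qianfan-VL config."""
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.LayerNorm(vision_dim),
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_tokens):
        # vision_tokens: (batch, num_tokens, vision_dim) -> (batch, num_tokens, llm_dim)
        return self.proj(vision_tokens)

# Example: 256 vision tokens from one image tile mapped into the LLM embedding space
tokens = torch.randn(1, 256, 1024)
print(MLPProjector()(tokens).shape)  # torch.Size([1, 256, 4096])

In InternVL-style models, the projected tokens take the place of the <image> placeholder seen in the prompts later in this post before being fed to the language model.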

Core Capabilities

OCR and Document Understanding

  • Full-scenario OCR: handwriting, formulas, natural scenes, cards/documents
  • Document intelligence: layout analysis, table parsing, chart understanding, document QA
  • High accuracy: industry-leading results on OCR benchmarks

Chain-of-Thought Reasoning (8B & 70B)

  • Complex chart analysis and reasoning
  • Step-by-step mathematical problem solving
  • Visual reasoning and logical inference
  • Statistical computation and trend forecasting (a prompt sketch follows this list)
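For the 8B and 70B models, step-by-step reasoning can be elicited directly through the prompt. The snippet below is only a sketch: it reuses the model, tokenizer, and load_image helper from the Quick Start section later in this post, "./example/chart.png" is a hypothetical local file, and the prompt wording is not prescribed by the model card.

# Hypothetical CoT-style prompt; assumes model/tokenizer/load_image from Quick Start below.
pixel_values = load_image("./example/chart.png").to(torch.bfloat16)
prompt = "<image>Analyze the chart and reason step by step before giving the final answer."
with torch.no_grad():
    response = model.chat(
        tokenizer,
        pixel_values=pixel_values,
        question=prompt,
        generation_config={"max_new_tokens": 1024},
        verbose=False
    )
print(response)  # the output should contain intermediate reasoning followed by the answer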

Benchmark Results

General Vision-Language Benchmarks

| Benchmark | Qianfan-VL-3B | Qianfan-VL-8B | Qianfan-VL-70B | InternVL-3-8B | InternVL-3-78B | Qwen2.5-VL-7B | Qwen2.5-VL-72B |
|---|---|---|---|---|---|---|---|
| A-Bench_VAL | 75.65 | 75.72 | 78.1 | 75.86 | 75.86 | 76.49 | 79.22 |
| CCBench | 66.86 | 70.39 | 80.98 | 77.84 | 70.78 | 57.65 | 73.73 |
| SEEDBench_IMG | 76.55 | 78.02 | 79.13 | 77.0 | 77.52 | 76.98 | 78.34 |
| SEEDBench2_Plus | 67.59 | 70.97 | 73.17 | 69.52 | 68.47 | 70.93 | 73.25 |
| MMVet | 48.17 | 53.21 | 67.34 | 80.28 | 78.9 | 70.64 | 75.69 |
| MMMU_VAL | 46.44 | 47.11 | 58.33 | 56.11 | 60.78 | 51.0 | 65.78 |
| ScienceQA_TEST | 95.19 | 97.62 | 98.76 | 97.97 | 97.17 | 85.47 | 92.51 |
| ScienceQA_VAL | 93.85 | 97.62 | 98.81 | 97.81 | 95.14 | 83.59 | 91.32 |
| MMT-Bench_VAL | 62.23 | 63.22 | 71.06 | 65.17 | 63.67 | 61.4 | 69.49 |
| MTVQA_TEST | 26.5 | 30.14 | 32.18 | 30.3 | 27.62 | 29.08 | 31.48 |
| BLINK | 49.97 | 56.81 | 59.44 | 55.87 | 51.87 | 54.55 | 63.02 |
| MMStar | 57.93 | 64.07 | 69.47 | 68.4 | 66.07 | 61.53 | 66.0 |
| RealWorldQA | 65.75 | 70.59 | 71.63 | 71.11 | 74.25 | 69.28 | 73.86 |
| Q-Bench1_VAL | 73.51 | 75.25 | 77.46 | 75.99 | 77.99 | 78.1 | 79.93 |
| POPE | 85.08 | 86.06 | 88.97 | 90.59 | 88.87 | 85.97 | 83.35 |
| RefCOCO (Avg) | 85.94 | 89.37 | 91.01 | 89.65 | 91.40 | 86.56 | 90.25 |
OCR and Document Understanding

| Benchmark | Qianfan-VL-3B | Qianfan-VL-8B | Qianfan-VL-70B | InternVL-3-8B | InternVL-3-78B | Qwen2.5-VL-3B | Qwen2.5-VL-7B | Qwen2.5-VL-72B |
|---|---|---|---|---|---|---|---|---|
| OCRBench | 831 | 854 | 873 | 881 | 847 | 810 | 883 | 874 |
| AI2D_TEST | 81.38 | 85.07 | 87.23 | 85.07 | 83.55 | 77.07 | 80.472 | 83.84 |
| OCRVQA_TEST | 66.15 | 68.98 | 74.06 | 39.03 | 35.58 | 69.24 | 71.02 | 66.8 |
| TextVQA_VAL | 80.11 | 82.13 | 84.48 | 82.15 | 83.52 | 79.09 | 84.962 | 83.26 |
| DocVQA_VAL | 90.85 | 93.54 | 94.75 | 92.04 | 83.82 | 92.71 | 94.91 | 95.75 |
| ChartQA_TEST | 81.79 | 87.72 | 89.6 | 85.76 | 82.04 | 83.4 | 86.68 | 87.16 |
Mathematical Reasoning

| Benchmark | Qianfan-VL-8B | Qianfan-VL-70B | InternVL-3-8B | InternVL-3-78B | Qwen2.5-VL-7B | Qwen2.5-VL-72B |
|---|---|---|---|---|---|---|
| Mathvista-mini | 69.19 | 78.6 | 69.5 | 70.1 | 67.2 | 73.9 |
| Mathvision | 32.82 | 50.29 | 29.61 | 34.8 | 25.95 | 39.34 |
| Mathverse | 48.4 | 61.04 | 43.68 | 49.26 | 44.21 | 55.18 |
| ChartQA Pro | 50.43 | 52 | 37.32 | 44.43 | 43.73 | 45.3 |
| HallusionBench | 51.72 | 54.52 | 49.2 | 40.2 | 47.9 | 49.9 |
| InHouse Dataset A | 59.87 | 71.78 | 40.64 | 41.47 | 45.58 | 57.2 |
| InHouse Dataset B | 61.33 | 75.6 | 36.25 | 42.65 | 30.62 | 59.68 |

Quick Start

Installation

pip install transformers accelerate torch torchvision pillow einops

Using Transformers

import torch
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
from PIL import Image

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)


def build_transform(input_size):
    # Standard ImageNet normalization applied to each tile
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform


def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    # Pick the tiling grid whose aspect ratio best matches the input image
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio


def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height
    # enumerate candidate tiling grids (i columns x j rows) within the tile budget
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1)
        if i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)
    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
    # resize the image, then crop it into image_size x image_size tiles
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        # append a global thumbnail so the model also sees the whole image
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images


def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(img) for img in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


# Load model
MODEL_PATH = "baidu/Qianfan-VL-8B"  # or Qianfan-VL-3B, Qianfan-VL-70B
model = AutoModel.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto"
).eval()
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

# Load and process image
# If the model sits on GPU, also move pixel_values to that device, e.g. append .cuda()
pixel_values = load_image("./example/scene_ocr.png").to(torch.bfloat16)

# Inference
prompt = "<image>请识别图中所有文字"  # "Recognize all text in the image"
with torch.no_grad():
    response = model.chat(
        tokenizer,
        pixel_values=pixel_values,
        question=prompt,
        generation_config={"max_new_tokens": 512},
        verbose=False
    )
print(response)
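A quick way to see what the tiling helper produces before running full inference (this simply reuses the load_image function defined above with the same example image):

# Each tile is a 3x448x448 tensor; with use_thumbnail=True a global thumbnail tile is appended.
tiles = load_image("./example/scene_ocr.png", max_num=12)
print(tiles.shape)  # e.g. torch.Size([num_tiles, 3, 448, 448]), num_tiles <= max_num + 1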

Using vLLM

You can deploy Qianfan-VL with the official vLLM Docker image for high-performance inference with an OpenAI-compatible API:

Start the vLLM server
docker run -d --name qianfan-vl \
--gpus all \
-v /path/to/Qianfan-VL-8B:/model \
-p 8000:8000 \
--ipc=host \
vllm/vllm-openai:latest \
--model /model \
--served-model-name qianfan-vl \
--trust-remote-code \
--hf-overrides '{"architectures":["InternVLChatModel"],"model_type":"internvl_chat"}'
Call the API
curl 'http://127.0.0.1:8000/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "qianfan-vl",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "image_url",
            "image_url": {
              "url": "https://qianfan-public-demo.bj.bcebos.com/qianfan-vl/2509/images/scene_ocr.png"
            }
          },
          {
            "type": "text",
            "text": "<image>请识别图中所有文字"
          }
        ]
      }
    ]
  }'

Or use Python with the OpenAI SDK:

from openai import OpenAI

client = OpenAI(
    api_key="EMPTY",
    base_url="http://127.0.0.1:8000/v1"
)

response = client.chat.completions.create(
    model="qianfan-vl",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://qianfan-public-demo.bj.bcebos.com/qianfan-vl/2509/images/scene_ocr.png"
                    }
                },
                {
                    "type": "text",
                    "text": "<image>请描述这张图片"  # "Describe this image"
                }
            ]
        }
    ],
    max_tokens=512
)
print(response.choices[0].message.content)
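If you prefer tokens to arrive incrementally, the same endpoint can be called with the OpenAI SDK's standard stream=True flag. This is a sketch that assumes the vLLM deployment above; the vLLM OpenAI-compatible server generally supports streaming requests.

# Streaming sketch: prints chunks as they arrive from the server started above.
stream = client.chat.completions.create(
    model="qianfan-vl",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://qianfan-public-demo.bj.bcebos.com/qianfan-vl/2509/images/scene_ocr.png"}},
                {"type": "text", "text": "<image>请描述这张图片"}  # "Describe this image"
            ]
        }
    ],
    max_tokens=512,
    stream=True
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()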

Training Details

Four-Stage Progressive Training

  1. Cross-modal alignment (100B tokens): establishes vision-language associations
  2. General knowledge injection (3.5T tokens): builds strong foundational capabilities
  3. Domain enhancement (300B tokens): targeted OCR and reasoning capabilities
  4. Post-training (1B tokens): instruction following and preference alignment

Infrastructure

  • Trained on 5,000+ Baidu Kunlun chips
  • Single-task parallel training at a scale of 5,000 chips, setting a new industry record
  • Over 90% scaling efficiency for large-scale distributed training
  • Innovative communication-computation fusion techniques

Model Card

  • Developed by: Baidu AI Cloud Qianfan Team
  • Model type: vision-language Transformer
  • Languages: multilingual
  • License: [see the model card for specific terms]
  • Base architecture: see the technical report

Citation

If you use Qianfan-VL models in your research, please cite:

@misc{qianfan-vl-2025,
  title={Qianfan-VL: Domain-Enhanced General-Purpose Vision-Language Models},
  author={Qianfan Team},
  year={2025},
  publisher={Baidu}
}

Contact

Visit the Baidu Qianfan platform for more information and API access.

Acknowledgements

By combining general-purpose capabilities with domain enhancement, this model family advances multimodal AI and delivers practical value for industrial applications.
