The absurd libjpeg-turbo library: different versions give different metric-learning accuracy
Symptom
At first I simply could not reproduce the metric-learning accuracy from half a year ago; it eventually turned out that an upgrade of the libjpeg-turbo library had shifted the numbers.
The only change between the two environments is the libjpeg-turbo version (2.1.4 ↔ 3.0.0), yet R@1 genuinely differs.
For the Contextual method on the CUB dataset, at Epoch 0:
libjpeg-turbo=3.0.0 → R@1 ≈ 54.95
libjpeg-turbo=2.1.4 → R@1 ≈ 51.92
The gap propagates into the final accuracy as well.
Switching the version back and forth reproduces the difference every time:
conda install libjpeg-turbo==2.1.4
conda install libjpeg-turbo==3.0.0
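A quick way to confirm that the decoder itself changed is to print the libjpeg build Pillow links against and hash the decoded pixels of one image, then compare the hash across the two environments. This is a minimal sketch; it assumes Pillow's `PIL.features` query API is available in your build, and falls back to `None` where it is not.

```python
# Sketch: fingerprint the JPEG decode path so two conda environments can be
# compared byte-for-byte.  Assumes a reasonably recent Pillow.
import hashlib
import io

from PIL import Image, features


def jpeg_decoder_version():
    # e.g. "2.1.4" or "3.0.0" when Pillow is built against libjpeg-turbo;
    # None if the feature query is unavailable in this Pillow build.
    try:
        return features.version("libjpeg_turbo")
    except Exception:
        return None


def decode_fingerprint(fp):
    """MD5 of the raw decoded RGB pixels; it changes when the decoder changes."""
    with Image.open(fp) as im:
        return hashlib.md5(im.convert("RGB").tobytes()).hexdigest()


if __name__ == "__main__":
    # Demo on a synthetic JPEG; in practice, point decode_fingerprint() at a
    # real CUB image in both environments and diff the printed hashes.
    buf = io.BytesIO()
    Image.new("RGB", (64, 64), (120, 30, 200)).save(buf, format="JPEG")
    print(jpeg_decoder_version(), decode_fingerprint(io.BytesIO(buf.getvalue())))
```

If the two environments print different hashes for the same file, the pixels fed to the network differ, and any accuracy comparison between them is apples to oranges.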
The takeaway: when doing deep learning, pin the GPU and the software environment once and keep them fixed; never casually upgrade library versions.
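One cheap safeguard is to write the versions of the image-decoding stack into every results directory, so a future "cannot reproduce" hunt starts at the decoder instead of the model. A sketch, not part of the original train.py; the names passed to `PIL.features.version` are standard Pillow codec/feature names, and anything this Pillow build cannot report comes back as `None`:

```python
# Sketch: record the image-decoding stack next to each training run.
import json
import sys

import PIL
from PIL import features


def decode_stack_report():
    report = {"python": sys.version.split()[0], "pillow": PIL.__version__}
    # "jpg" is Pillow's codec name for libjpeg; "libjpeg_turbo" reports the
    # turbo version when Pillow is built against it.
    for name in ("jpg", "libjpeg_turbo", "zlib", "libtiff"):
        try:
            report[name] = features.version(name)
        except Exception:
            report[name] = None
    return report


if __name__ == "__main__":
    # E.g. dump this JSON into the run's results directory next to the
    # checkpoints, alongside `conda env export`.
    print(json.dumps(decode_stack_report(), indent=2))
```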
The full log:
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$ conda install libjpeg-turbo==3.0.0
Channels:
- defaults
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
- conda-forge
- nvidia
- pytorch
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: -
done
## Package Plan ##
environment location: /home/jaxon/anaconda3/envs/pytorch310
added / updated specs:
- libjpeg-turbo==3.0.0
The following packages will be downloaded:
package | build
---------------------------|-----------------
krb5-1.21.3 | h723845a_4 286 KB
lcms2-2.17 | h717163a_0 242 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libcups-2.3.3 | hb8b1518_5 4.3 MB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libdeflate-1.23 | h86f0d12_0 71 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgcrypt-lib-1.11.1 | hb9d3cd8_0 577 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libgpg-error-1.55 | h3f2d84a_0 305 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libjpeg-turbo-3.0.0 | hd590300_1 604 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libpq-16.9 | h87c4ccc_0 2.5 MB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libsystemd0-256.9 | h2774228_0 401 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libtiff-4.7.0 | hd9ff511_3 418 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libva-2.21.0 | h4ab18f5_2 185 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libwebp-1.5.0 | hae8dbeb_0 90 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
libwebp-base-1.5.0 | h851e524_0 420 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
mysql-common-8.3.0 | h70512c7_5 762 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
mysql-libs-8.3.0 | ha479ceb_5 1.5 MB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
openjpeg-2.5.2 | h0d4d230_1 373 KB
xorg-fixesproto-5.0 | hb9d3cd8_1003 11 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-kbproto-1.0.7 | h5eee18b_1003 28 KB
xorg-libsm-1.2.6 | he73a12e_0 27 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-libxfixes-5.0.3 | h7f98852_1004 18 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
xorg-xextproto-7.3.0 | h5eee18b_1004 29 KB
xorg-xproto-7.0.31 | h5eee18b_1008 74 KB
------------------------------------------------------------
Total: 13.0 MB
The following NEW packages will be INSTALLED:
alsa-lib pkgs/main/linux-64::alsa-lib-1.2.14-h5eee18b_0
attr anaconda/cloud/conda-forge/linux-64::attr-2.5.2-h39aace5_0
cairo pkgs/main/linux-64::cairo-1.16.0-hb05425b_5
dbus pkgs/main/linux-64::dbus-1.13.18-hb2f20db_0
expat pkgs/main/linux-64::expat-2.6.4-h6a678d5_0
glib pkgs/main/linux-64::glib-2.84.2-h6a678d5_0
glib-tools pkgs/main/linux-64::glib-tools-2.84.2-h6a678d5_0
graphite2 pkgs/main/linux-64::graphite2-1.3.14-h295c915_1
gst-plugins-base anaconda/cloud/conda-forge/linux-64::gst-plugins-base-1.24.4-h9ad1361_0
gstreamer anaconda/cloud/conda-forge/linux-64::gstreamer-1.24.4-haf2f30d_0
harfbuzz pkgs/main/linux-64::harfbuzz-10.2.0-hf296adc_0
libcap anaconda/cloud/conda-forge/linux-64::libcap-2.71-h39aace5_0
libclang-cpp15 anaconda/cloud/conda-forge/linux-64::libclang-cpp15-15.0.7-default_h127d8a8_5
libclang13 pkgs/main/linux-64::libclang13-20.1.8-default_hee3e7a4_0
libflac anaconda/cloud/conda-forge/linux-64::libflac-1.4.3-h59595ed_0
libgcrypt-lib anaconda/cloud/conda-forge/linux-64::libgcrypt-lib-1.11.1-hb9d3cd8_0
libgpg-error anaconda/cloud/conda-forge/linux-64::libgpg-error-1.55-h3f2d84a_0
libllvm15 pkgs/main/linux-64::libllvm15-15.0.7-he89c38a_4
libllvm20 pkgs/main/linux-64::libllvm20-20.1.8-h465586c_0
libogg pkgs/main/linux-64::libogg-1.3.5-h27cfd23_1
libopus pkgs/main/linux-64::libopus-1.3.1-h5eee18b_1
libsndfile anaconda/cloud/conda-forge/linux-64::libsndfile-1.2.2-hc60ed4a_1
libsystemd0 anaconda/cloud/conda-forge/linux-64::libsystemd0-256.9-h2774228_0
libvorbis pkgs/main/linux-64::libvorbis-1.3.7-h7b6447c_0
mpg123 anaconda/cloud/conda-forge/linux-64::mpg123-1.32.9-hc50e24c_0
mysql-common anaconda/cloud/conda-forge/linux-64::mysql-common-8.3.0-h70512c7_5
mysql-libs anaconda/cloud/conda-forge/linux-64::mysql-libs-8.3.0-ha479ceb_5
nspr pkgs/main/linux-64::nspr-4.35-h6a678d5_0
nss anaconda/cloud/conda-forge/linux-64::nss-3.105-hd34e28f_0
pixman pkgs/main/linux-64::pixman-0.46.4-h7934f7d_0
ply pkgs/main/linux-64::ply-3.11-py310h06a4308_0
pulseaudio-client anaconda/cloud/conda-forge/linux-64::pulseaudio-client-17.0-hb77b528_0
pyqt5-sip pkgs/main/linux-64::pyqt5-sip-12.13.0-py310h5eee18b_1
qt-main anaconda/cloud/conda-forge/linux-64::qt-main-5.15.8-hc9dc06e_21
xcb-util-keysyms anaconda/cloud/conda-forge/linux-64::xcb-util-keysyms-0.4.0-h8ee46fc_1
xcb-util-wm anaconda/cloud/conda-forge/linux-64::xcb-util-wm-0.4.1-h8ee46fc_1
xorg-fixesproto anaconda/cloud/conda-forge/linux-64::xorg-fixesproto-5.0-hb9d3cd8_1003
xorg-kbproto pkgs/main/linux-64::xorg-kbproto-1.0.7-h5eee18b_1003
xorg-libice pkgs/main/linux-64::xorg-libice-1.1.2-h9b100fa_0
xorg-libsm anaconda/cloud/conda-forge/linux-64::xorg-libsm-1.2.6-he73a12e_0
xorg-libxrender anaconda/cloud/conda-forge/linux-64::xorg-libxrender-0.9.11-hd590300_0
xorg-renderproto anaconda/cloud/conda-forge/linux-64::xorg-renderproto-0.11.1-hb9d3cd8_1003
xorg-xextproto pkgs/main/linux-64::xorg-xextproto-7.3.0-h5eee18b_1004
xorg-xf86vidmodep~ anaconda/cloud/conda-forge/linux-64::xorg-xf86vidmodeproto-2.3.1-hb9d3cd8_1005
xorg-xproto pkgs/main/linux-64::xorg-xproto-7.0.31-h5eee18b_1008
The following packages will be REMOVED:
jpeg-9f-h5ce9db8_0
qtbase-6.7.3-hdaa5aa8_0
qtdeclarative-6.7.3-h7934f7d_1
qtsvg-6.7.3-he4bddd1_1
qttools-6.7.3-h5a8de97_1
qtwebchannel-6.7.3-h7934f7d_1
qtwebsockets-6.7.3-h7934f7d_1
The following packages will be UPDATED:
icu pkgs/main::icu-73.1-h6a678d5_0 --> anaconda/cloud/conda-forge::icu-73.2-h59595ed_0
krb5 1.20.1-h143b758_1 --> 1.21.3-h723845a_4
lcms2 pkgs/main::lcms2-2.16-hb9589c4_0 --> anaconda/cloud/conda-forge::lcms2-2.17-h717163a_0
libdeflate 1.22-hb9d3cd8_0 --> 1.23-h86f0d12_0
libjpeg-turbo 2.1.4-h166bdaf_0 --> 3.0.0-hd590300_1
libtiff pkgs/main::libtiff-4.5.1-hffd6297_1 --> anaconda/cloud/conda-forge::libtiff-4.7.0-hd9ff511_3
libwebp pkgs/main::libwebp-1.3.2-h11a3e52_0 --> anaconda/cloud/conda-forge::libwebp-1.5.0-hae8dbeb_0
libwebp-base pkgs/main::libwebp-base-1.3.2-h5eee18~ --> anaconda/cloud/conda-forge::libwebp-base-1.5.0-h851e524_0
openjpeg 2.5.2-he7f1fd0_0 --> 2.5.2-h0d4d230_1
The following packages will be SUPERSEDED by a higher-priority channel:
libcups pkgs/main::libcups-2.4.2-h252cb56_2 --> anaconda/cloud/conda-forge::libcups-2.3.3-hb8b1518_5
libpq pkgs/main::libpq-17.4-h02b6914_2 --> anaconda/cloud/conda-forge::libpq-16.9-h87c4ccc_0
libxcb anaconda/cloud/conda-forge::libxcb-1.~ --> pkgs/main::libxcb-1.15-h7f8727e_0
libxkbcommon pkgs/main::libxkbcommon-1.9.1-h69220b~ --> anaconda/cloud/conda-forge::libxkbcommon-1.7.0-h662e7e4_0
pillow pkgs/main::pillow-10.2.0-py310h5eee18~ --> anaconda/cloud/conda-forge::pillow-10.2.0-py310h01dd4db_0
xcb-util pkgs/main::xcb-util-0.4.1-h5eee18b_2 --> anaconda/cloud/conda-forge::xcb-util-0.4.0-hd590300_1
xcb-util-image pkgs/main::xcb-util-image-0.4.0-h5eee~ --> anaconda/cloud/conda-forge::xcb-util-image-0.4.0-h8ee46fc_1
xcb-util-renderut~ pkgs/main::xcb-util-renderutil-0.3.10~ --> anaconda/cloud/conda-forge::xcb-util-renderutil-0.3.9-hd590300_1
The following packages will be DOWNGRADED:
libegl 1.7.0-ha4b6fd6_2 --> 1.7.0-ha4b6fd6_0
libgl 1.7.0-ha4b6fd6_2 --> 1.7.0-ha4b6fd6_0
libglvnd 1.7.0-ha4b6fd6_2 --> 1.7.0-ha4b6fd6_0
libglx 1.7.0-ha4b6fd6_2 --> 1.7.0-ha4b6fd6_0
libva 2.22.0-h8a09558_1 --> 2.21.0-h4ab18f5_2
pyqt 6.7.1-py310h8dad735_2 --> 5.15.10-py310h6a678d5_1
sip 6.10.0-py310h6a678d5_0 --> 6.7.12-py310h6a678d5_1
xcb-util-cursor 0.1.5-h5eee18b_0 --> 0.1.4-h5eee18b_0
xkeyboard-config 2.43-hb9d3cd8_0 --> 2.42-h4ab18f5_0
xorg-libx11 1.8.10-h4f16b4b_1 --> 1.8.9-h8ee46fc_0
xorg-libxext 1.3.6-hb9d3cd8_0 --> 1.3.4-h0b41bf4_2
xorg-libxfixes 6.0.1-hb9d3cd8_0 --> 5.0.3-h7f98852_1004
xorg-xorgproto 2024.1-h5eee18b_1 --> 2024.1-h5eee18b_0
Proceed ([y]/n)?
Downloading and Extracting Packages:
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$ python train.py
save_result_path=/data/jaxon/contextual/224x224/results/cub/2025-09-21-10-38-23
Will Save The Best Model Path: /data/jaxon/contextual/224x224/results/cub/2025-09-21-10-38-23/Hybrid_cub_best.pt
Using random seed : 1
data_root=/data/jaxon/contextual/datasets/cub200/CUB_200_2011
Balanced Sampling
eval_transform:
Compose(
<dataset.utils.Identity object at 0x7f7649c2fa00>
<dataset.utils.Identity object at 0x7f7649c2fbe0>
Resize(size=256, interpolation=bilinear, max_size=None, antialias=True)
CenterCrop(size=(224, 224))
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
/data/jaxon/contextual/224x224/train.py:174: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
scaler = torch.cuda.amp.GradScaler()
Bottleneck:
Bottleneck(
(model): Linear(in_features=512, out_features=2048, bias=False)
)
Using Hybrid (Ours)
******************************************************************
Namespace(dataset='cub', sz_embedding=512, train_embedding=2048, gpu_id=0, nb_workers=8, loss='Hybrid', optimizer='adamw', weight_decay=0.0001, gem=1, projector_lr_multi=2.0, embedding_lr_multi=1.0, pos_margin=0.75, neg_margin=0.6, regsim=0.3, seed=1, alpha=32, mrg=0.1, IPC=4, warm=0, bn_freeze=1, testfreq=1, sz_batch=128, nb_epochs=150, bottleneck='linear', lr=0.00014, lr_decay_step=10, lr_decay_gamma=0.3, data_root='datasets/cub200/CUB_200_2011', gamma=0.1, lam=0.05, hierarchical=0, eps=0.04, xform_scale=0.16)
******************************************************************
Training parameters: {'dataset': 'cub', 'sz_embedding': 512, 'train_embedding': 2048, 'gpu_id': 0, 'nb_workers': 8, 'loss': 'Hybrid', 'optimizer': 'adamw', 'weight_decay': 0.0001, 'gem': 1, 'projector_lr_multi': 2.0, 'embedding_lr_multi': 1.0, 'pos_margin': 0.75, 'neg_margin': 0.6, 'regsim': 0.3, 'seed': 1, 'alpha': 32, 'mrg': 0.1, 'IPC': 4, 'warm': 0, 'bn_freeze': 1, 'testfreq': 1, 'sz_batch': 128, 'nb_epochs': 150, 'bottleneck': 'linear', 'lr': 0.00014, 'lr_decay_step': 10, 'lr_decay_gamma': 0.3, 'data_root': 'datasets/cub200/CUB_200_2011', 'gamma': 0.1, 'lam': 0.05, 'hierarchical': 0, 'eps': 0.04, 'xform_scale': 0.16}
Training for 150 epochs.
AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00014
lr: 0.00014
maximize: False
weight_decay: 0.0001
Parameter Group 1
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00014
lr: 0.00014
maximize: False
weight_decay: 0.0001
Parameter Group 2
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00028
lr: 0.00028
maximize: False
weight_decay: 0.0001
)
0it [00:00, ?it/s]/data/jaxon/contextual/224x224/net/resnet.py:69: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.cuda.amp.autocast(enabled=self.fp):
Train Epoch: 0 [45/45 (98%)] Loss: 0.047655: : 45it [00:06, 6.77it/s]
**Evaluating on test data...**
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:04<00:00, 11.57it/s]
R@1 : 54.946
R@2 : 67.522
R@4 : 78.545
R@8 : 86.867
R@16 : 92.522
R@32 : 96.438
mAP : 23.270832002162933
mAP@R : 15.542897582054138
Recalls: [0.549459824442944, 0.675219446320054, 0.7854490209318028, 0.8686698176907495, 0.925219446320054, 0.9643821742066172, 23.270832002162933, 15.542897582054138]
test_log: epoch:0 R@1:0.549459824442944 R@2:0.675219446320054 R@4:0.7854490209318028 R@8:0.8686698176907495 R@16:0.925219446320054 R@32:0.9643821742066172 mAP:23.270832002162933 mAP@R:15.542897582054138
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$ conda install libjpeg-turbo==2.1.4
Channels:
- defaults
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
- conda-forge
- nvidia
- pytorch
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/jaxon/anaconda3/envs/pytorch310
added / updated specs:
- libjpeg-turbo==2.1.4
The following packages will be downloaded:
package | build
---------------------------|-----------------
libdeflate-1.22 | h5eee18b_0 68 KB
qt-main-5.15.2 | hb6262e9_12 53.7 MB
------------------------------------------------------------
Total: 53.8 MB
The following NEW packages will be INSTALLED:
jpeg pkgs/main/linux-64::jpeg-9f-h5ce9db8_0
libclang pkgs/main/linux-64::libclang-14.0.6-default_hc6dbbc7_2
libllvm14 pkgs/main/linux-64::libllvm14-14.0.6-hecde1de_4
The following packages will be UPDATED:
libcups anaconda/cloud/conda-forge::libcups-2~ --> pkgs/main::libcups-2.4.2-h252cb56_2
libpq anaconda/cloud/conda-forge::libpq-16.~ --> pkgs/main::libpq-17.4-h02b6914_2
The following packages will be SUPERSEDED by a higher-priority channel:
gst-plugins-base anaconda/cloud/conda-forge::gst-plugi~ --> pkgs/main::gst-plugins-base-1.14.1-h6a678d5_1
gstreamer anaconda/cloud/conda-forge::gstreamer~ --> pkgs/main::gstreamer-1.14.1-h5eee18b_1
lcms2 anaconda/cloud/conda-forge::lcms2-2.1~ --> pkgs/main::lcms2-2.16-hb9589c4_0
libdeflate anaconda/cloud/conda-forge::libdeflat~ --> pkgs/main::libdeflate-1.22-h5eee18b_0
libtiff anaconda/cloud/conda-forge::libtiff-4~ --> pkgs/main::libtiff-4.5.1-hffd6297_1
libwebp anaconda/cloud/conda-forge::libwebp-1~ --> pkgs/main::libwebp-1.3.2-h11a3e52_0
libwebp-base anaconda/cloud/conda-forge::libwebp-b~ --> pkgs/main::libwebp-base-1.3.2-h5eee18b_1
pillow anaconda/cloud/conda-forge::pillow-10~ --> pkgs/main::pillow-10.2.0-py310h5eee18b_0
qt-main anaconda/cloud/conda-forge::qt-main-5~ --> pkgs/main::qt-main-5.15.2-hb6262e9_12
The following packages will be DOWNGRADED:
krb5 1.21.3-h723845a_4 --> 1.20.1-h143b758_1
libclang13 20.1.8-default_hee3e7a4_0 --> 14.0.6-default_he11475f_2
libjpeg-turbo 3.0.0-hd590300_1 --> 2.1.4-h166bdaf_0
openjpeg 2.5.2-h0d4d230_1 --> 2.5.2-he7f1fd0_0
Proceed ([y]/n)? y
Downloading and Extracting Packages:
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$ python train.py
save_result_path=/data/jaxon/contextual/224x224/results/cub/2025-09-21-10-40-40
Will Save The Best Model Path: /data/jaxon/contextual/224x224/results/cub/2025-09-21-10-40-40/Hybrid_cub_best.pt
Using random seed : 1
data_root=/data/jaxon/contextual/datasets/cub200/CUB_200_2011
Balanced Sampling
eval_transform:
Compose(
<dataset.utils.Identity object at 0x7a8e04993a00>
<dataset.utils.Identity object at 0x7a8e04993be0>
Resize(size=256, interpolation=bilinear, max_size=None, antialias=True)
CenterCrop(size=(224, 224))
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
/data/jaxon/contextual/224x224/train.py:174: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
scaler = torch.cuda.amp.GradScaler()
Bottleneck:
Bottleneck(
(model): Linear(in_features=512, out_features=2048, bias=False)
)
Using Hybrid (Ours)
******************************************************************
Namespace(dataset='cub', sz_embedding=512, train_embedding=2048, gpu_id=0, nb_workers=8, loss='Hybrid', optimizer='adamw', weight_decay=0.0001, gem=1, projector_lr_multi=2.0, embedding_lr_multi=1.0, pos_margin=0.75, neg_margin=0.6, regsim=0.3, seed=1, alpha=32, mrg=0.1, IPC=4, warm=0, bn_freeze=1, testfreq=1, sz_batch=128, nb_epochs=150, bottleneck='linear', lr=0.00014, lr_decay_step=10, lr_decay_gamma=0.3, data_root='datasets/cub200/CUB_200_2011', gamma=0.1, lam=0.05, hierarchical=0, eps=0.04, xform_scale=0.16)
******************************************************************
Training parameters: {'dataset': 'cub', 'sz_embedding': 512, 'train_embedding': 2048, 'gpu_id': 0, 'nb_workers': 8, 'loss': 'Hybrid', 'optimizer': 'adamw', 'weight_decay': 0.0001, 'gem': 1, 'projector_lr_multi': 2.0, 'embedding_lr_multi': 1.0, 'pos_margin': 0.75, 'neg_margin': 0.6, 'regsim': 0.3, 'seed': 1, 'alpha': 32, 'mrg': 0.1, 'IPC': 4, 'warm': 0, 'bn_freeze': 1, 'testfreq': 1, 'sz_batch': 128, 'nb_epochs': 150, 'bottleneck': 'linear', 'lr': 0.00014, 'lr_decay_step': 10, 'lr_decay_gamma': 0.3, 'data_root': 'datasets/cub200/CUB_200_2011', 'gamma': 0.1, 'lam': 0.05, 'hierarchical': 0, 'eps': 0.04, 'xform_scale': 0.16}
Training for 150 epochs.
AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00014
lr: 0.00014
maximize: False
weight_decay: 0.0001
Parameter Group 1
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00014
lr: 0.00014
maximize: False
weight_decay: 0.0001
Parameter Group 2
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00028
lr: 0.00028
maximize: False
weight_decay: 0.0001
)
0it [00:00, ?it/s]/data/jaxon/contextual/224x224/net/resnet.py:69: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.cuda.amp.autocast(enabled=self.fp):
Train Epoch: 0 [45/45 (98%)] Loss: 0.043407: : 45it [00:06, 6.67it/s]
**Evaluating on test data...**
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:05<00:00, 8.24it/s]
R@1 : 51.924
R@2 : 64.686
R@4 : 75.861
R@8 : 85.365
R@16 : 91.442
R@32 : 95.763
mAP : 20.840424299240112
mAP@R : 13.548034429550171
Recalls: [0.5192437542201216, 0.6468602295746118, 0.7586090479405807, 0.8536461850101283, 0.9144159351789332, 0.9576299797434166, 20.840424299240112, 13.548034429550171]
test_log: epoch:0 R@1:0.5192437542201216 R@2:0.6468602295746118 R@4:0.7586090479405807 R@8:0.8536461850101283 R@16:0.9144159351789332 R@32:0.9576299797434166 mAP:20.840424299240112 mAP@R:13.548034429550171
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$ conda install libjpeg-turbo==3.0.0
Channels:
- defaults
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
- conda-forge
- nvidia
- pytorch
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/jaxon/anaconda3/envs/pytorch310
added / updated specs:
- libjpeg-turbo==3.0.0
The following packages will be downloaded:
package | build
---------------------------|-----------------
libclang-20.1.8 |default_h4fcecc7_0 143 KB
libtiff-4.7.0 | hc4654cb_2 419 KB https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
------------------------------------------------------------
Total: 561 KB
The following packages will be REMOVED:
jpeg-9f-h5ce9db8_0
The following packages will be UPDATED:
gst-plugins-base pkgs/main::gst-plugins-base-1.14.1-h6~ --> anaconda/cloud/conda-forge::gst-plugins-base-1.24.4-h9ad1361_0
gstreamer pkgs/main::gstreamer-1.14.1-h5eee18b_1 --> anaconda/cloud/conda-forge::gstreamer-1.24.4-haf2f30d_0
krb5 1.20.1-h143b758_1 --> 1.21.3-h723845a_4
lcms2 pkgs/main::lcms2-2.16-hb9589c4_0 --> anaconda/cloud/conda-forge::lcms2-2.17-h717163a_0
libclang 14.0.6-default_hc6dbbc7_2 --> 20.1.8-default_h4fcecc7_0
libclang13 14.0.6-default_he11475f_2 --> 20.1.8-default_hee3e7a4_0
libjpeg-turbo 2.1.4-h166bdaf_0 --> 3.0.0-hd590300_1
libtiff pkgs/main::libtiff-4.5.1-hffd6297_1 --> anaconda/cloud/conda-forge::libtiff-4.7.0-hc4654cb_2
libwebp pkgs/main::libwebp-1.3.2-h11a3e52_0 --> anaconda/cloud/conda-forge::libwebp-1.5.0-hae8dbeb_0
libwebp-base pkgs/main::libwebp-base-1.3.2-h5eee18~ --> anaconda/cloud/conda-forge::libwebp-base-1.5.0-h851e524_0
openjpeg 2.5.2-he7f1fd0_0 --> 2.5.2-h0d4d230_1
qt-main pkgs/main::qt-main-5.15.2-hb6262e9_12 --> anaconda/cloud/conda-forge::qt-main-5.15.8-hc9dc06e_21
The following packages will be SUPERSEDED by a higher-priority channel:
libcups pkgs/main::libcups-2.4.2-h252cb56_2 --> anaconda/cloud/conda-forge::libcups-2.3.3-hb8b1518_5
libpq pkgs/main::libpq-17.4-h02b6914_2 --> anaconda/cloud/conda-forge::libpq-16.9-h87c4ccc_0
pillow pkgs/main::pillow-10.2.0-py310h5eee18~ --> anaconda/cloud/conda-forge::pillow-10.2.0-py310h01dd4db_0
Proceed ([y]/n)? y
Downloading and Extracting Packages:
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$ python train.py
save_result_path=/data/jaxon/contextual/224x224/results/cub/2025-09-21-10-42-44
Will Save The Best Model Path: /data/jaxon/contextual/224x224/results/cub/2025-09-21-10-42-44/Hybrid_cub_best.pt
Using random seed : 1
data_root=/data/jaxon/contextual/datasets/cub200/CUB_200_2011
Balanced Sampling
eval_transform:
Compose(
<dataset.utils.Identity object at 0x71018f3b7a60>
<dataset.utils.Identity object at 0x71018f3b7c40>
Resize(size=256, interpolation=bilinear, max_size=None, antialias=True)
CenterCrop(size=(224, 224))
ToTensor()
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
)
/data/jaxon/contextual/224x224/train.py:174: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
scaler = torch.cuda.amp.GradScaler()
Bottleneck:
Bottleneck(
(model): Linear(in_features=512, out_features=2048, bias=False)
)
Using Hybrid (Ours)
******************************************************************
Namespace(dataset='cub', sz_embedding=512, train_embedding=2048, gpu_id=0, nb_workers=8, loss='Hybrid', optimizer='adamw', weight_decay=0.0001, gem=1, projector_lr_multi=2.0, embedding_lr_multi=1.0, pos_margin=0.75, neg_margin=0.6, regsim=0.3, seed=1, alpha=32, mrg=0.1, IPC=4, warm=0, bn_freeze=1, testfreq=1, sz_batch=128, nb_epochs=150, bottleneck='linear', lr=0.00014, lr_decay_step=10, lr_decay_gamma=0.3, data_root='datasets/cub200/CUB_200_2011', gamma=0.1, lam=0.05, hierarchical=0, eps=0.04, xform_scale=0.16)
******************************************************************
Training parameters: {'dataset': 'cub', 'sz_embedding': 512, 'train_embedding': 2048, 'gpu_id': 0, 'nb_workers': 8, 'loss': 'Hybrid', 'optimizer': 'adamw', 'weight_decay': 0.0001, 'gem': 1, 'projector_lr_multi': 2.0, 'embedding_lr_multi': 1.0, 'pos_margin': 0.75, 'neg_margin': 0.6, 'regsim': 0.3, 'seed': 1, 'alpha': 32, 'mrg': 0.1, 'IPC': 4, 'warm': 0, 'bn_freeze': 1, 'testfreq': 1, 'sz_batch': 128, 'nb_epochs': 150, 'bottleneck': 'linear', 'lr': 0.00014, 'lr_decay_step': 10, 'lr_decay_gamma': 0.3, 'data_root': 'datasets/cub200/CUB_200_2011', 'gamma': 0.1, 'lam': 0.05, 'hierarchical': 0, 'eps': 0.04, 'xform_scale': 0.16}
Training for 150 epochs.
AdamW (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00014
lr: 0.00014
maximize: False
weight_decay: 0.0001
Parameter Group 1
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00014
lr: 0.00014
maximize: False
weight_decay: 0.0001
Parameter Group 2
amsgrad: False
betas: (0.9, 0.999)
capturable: False
differentiable: False
eps: 1e-08
foreach: None
fused: None
initial_lr: 0.00028
lr: 0.00028
maximize: False
weight_decay: 0.0001
)
0it [00:00, ?it/s]/data/jaxon/contextual/224x224/net/resnet.py:69: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.cuda.amp.autocast(enabled=self.fp):
Train Epoch: 0 [45/45 (98%)] Loss: 0.047655: : 45it [00:06, 6.93it/s]
**Evaluating on test data...**
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47/47 [00:04<00:00, 10.55it/s]
R@1 : 54.946
R@2 : 67.522
R@4 : 78.545
R@8 : 86.867
R@16 : 92.522
R@32 : 96.438
mAP : 23.270832002162933
mAP@R : 15.542897582054138
Recalls: [0.549459824442944, 0.675219446320054, 0.7854490209318028, 0.8686698176907495, 0.925219446320054, 0.9643821742066172, 23.270832002162933, 15.542897582054138]
test_log: epoch:0 R@1:0.549459824442944 R@2:0.675219446320054 R@4:0.7854490209318028 R@8:0.8686698176907495 R@16:0.925219446320054 R@32:0.9643821742066172 mAP:23.270832002162933 mAP@R:15.542897582054138
(pytorch310) jaxon@user-Super-Server:/data/jaxon/contextual/224x224$
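To quantify the drift directly rather than through R@1, one can dump the decoded pixels of a few dataset images under each environment and diff the arrays. A sketch assuming NumPy; the .npy paths are whatever you choose:

```python
# Sketch: measure how far two JPEG decodes are apart, pixel by pixel.
import numpy as np
from PIL import Image


def dump_decoded(jpeg_path, out_npy):
    """Save the decoded RGB array so it can be diffed across environments."""
    np.save(out_npy, np.asarray(Image.open(jpeg_path).convert("RGB")))


def max_abs_diff(npy_a, npy_b):
    """Largest per-channel difference between two dumps; 0 = bit-identical."""
    a = np.load(npy_a).astype(np.int16)
    b = np.load(npy_b).astype(np.int16)
    return int(np.abs(a - b).max())
```

Run `dump_decoded()` on the same image under 2.1.4 and under 3.0.0, copy one dump across, and print `max_abs_diff()`: even small per-pixel differences in the decoded input can compound through training, which is consistent with the Epoch-0 R@1 gap above.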
This post is from cnblogs, author: JaxonYe. Please cite the original link when reposting: https://www.cnblogs.com/yechangxin/articles/19103317
Infringement will be pursued.
