
Deep Learning Notes 02

1. Episodic training & batch training

https://zhuanlan.zhihu.com/p/84290146
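In brief: batch training draws ordinary mini-batches over all classes, while episodic training repeatedly samples small N-way K-shot tasks so that training mimics the few-shot test setting. A minimal sampling sketch (assuming a hypothetical `dataset` given as a list of `(feature, label)` pairs):

```python
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=15):
    """Sample one N-way K-shot episode (task) from a labeled dataset."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    # Pick N classes, then K support + Q query examples per class.
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for new_label, c in enumerate(classes):
        items = random.sample(by_class[c], k_shot + q_queries)
        support += [(x, new_label) for x in items[:k_shot]]
        query += [(x, new_label) for x in items[k_shot:]]
    return support, query  # adapt on support, evaluate on query

def sample_batch(dataset, batch_size=64):
    """Plain batch training just draws a random mini-batch over all classes."""
    return random.sample(dataset, batch_size)
```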

2. Overfitting of few-shot learning (FSL) on facial expression recognition (FER)

Note that, different from popular FSL tasks (such as image classification), which leverage a large number of base classes for training, our FER task involves only limited base classes (i.e., the number of classes in basic expression datasets is small). Hence, for episodic training-based methods, the sampled few-shot tasks are highly overlapped, leading to overfitting to the seen tasks.
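A quick back-of-the-envelope count makes the overlap concrete (assuming 7 basic expression classes, a common FER label set, versus the 64 base classes of a benchmark like miniImageNet):

```python
import math

# Number of distinct 5-way tasks that can be sampled:
print(math.comb(7, 5))   # 21        -- FER with 7 basic expressions
print(math.comb(64, 5))  # 7624512   -- a typical FSL base split
```

With only 21 possible class combinations, sampled episodes inevitably repeat, which is exactly the overfitting-to-seen-tasks problem described above.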

CAM: Class Activation Mapping (visualizing, via the classifier weights, which regions drive a prediction)

Tutorial:
https://zhuanlan.zhihu.com/p/623073818
Packaged library and usage:
https://blog.csdn.net/Sylvia_Lan/article/details/123309476
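For reference, a minimal from-scratch sketch of the original CAM idea for a GAP+FC architecture (ResNet-18 here is an assumed example; the links above cover tutorials and the packaged library): the class activation map is the classifier-weighted sum of the last conv feature maps, CAM_c = Σ_k w_k^c f_k.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def compute_cam(model, x, class_idx=None):
    with torch.no_grad():
        # Feature maps before global average pooling, shape (1, C, h, w)
        feats = torch.nn.Sequential(*list(model.children())[:-2])(x)
        logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    w = model.fc.weight[class_idx]                 # FC weights for the class, (C,)
    cam = torch.einsum('c,chw->hw', w, feats[0])   # weighted sum over channels
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Upsample to input resolution for overlaying on the image
    cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                        mode='bilinear', align_corners=False)[0, 0]
    return cam, class_idx

cam, cls = compute_cam(model, torch.randn(1, 3, 224, 224))  # stand-in input
```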

Definitions and measurement of a neural network's computational cost (FLOPs), parameter count (Params), and inference speed (FPS)

https://blog.csdn.net/qq_43307074/article/details/127688761
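Params and FPS can be measured directly in PyTorch; FLOPs normally come from a profiler library. A minimal sketch (ResNet-18 and the input size are assumed examples; the `thop` call in the comment is one common option):

```python
import time
import torch
from torchvision import models

model = models.resnet18().eval()
x = torch.randn(1, 3, 224, 224)

# Params: just sum the element counts of all parameter tensors.
params = sum(p.numel() for p in model.parameters())
print(f"Params: {params / 1e6:.2f} M")

# FPS: average over repeated forward passes after a warm-up.
# (On GPU, call torch.cuda.synchronize() before reading the clock.)
with torch.no_grad():
    for _ in range(5):
        model(x)
    n, t0 = 50, time.perf_counter()
    for _ in range(n):
        model(x)
print(f"FPS: {n / (time.perf_counter() - t0):.1f}")

# FLOPs, e.g. with the thop package:
#   from thop import profile
#   flops, _ = profile(model, inputs=(x,))
```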

Inspecting an inference model's structure

Reference blog: https://blog.csdn.net/Einstellung/article/details/102545347
Use the torchsummary tool (example below).
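Typical torchsummary usage (a minimal example; the model and input size `(3, 224, 224)` are assumptions):

```python
from torchsummary import summary
from torchvision import models

model = models.resnet18()
# Prints every layer's output shape and parameter count,
# plus totals for trainable params and estimated memory.
summary(model, (3, 224, 224), device="cpu")
```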

Information Bottleneck (IB) theory

Reference blog: https://blog.csdn.net/lt1103725556/article/details/122096242
Mutual information: the amount of information X and Y share, i.e., how much knowing one variable reduces uncertainty about the other.
IB theory divides deep-learning training into two phases: the first phase tries to increase the mutual information between the intermediate feature and the label Y (fitting), while the later phase tries to compress the mutual information between the input X and the intermediate feature, so that the feature retains only the most essential information in X.
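In symbols (the standard formulation, with T the intermediate feature and β trading off prediction against compression):

```latex
% Mutual information: the information X and Y share
I(X;Y) = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} = H(X) - H(X \mid Y)

% Information Bottleneck objective over the encoder p(t|x)
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```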
