Deep Learning Terms Encountered in Papers, Explained (continuously updated)

When reading papers I often run into technical terms whose precise meaning I don't know, and Baidu/Bing searches rarely pin the concepts down well. So I'm recording the clearer explanations I found on Wikipedia and in some foreign forums here, both to review later and hopefully to help others who need them~

 

  • embedding:An embedding is a relatively low-dimensional space into which you can translate high-dimensional vectors. Embeddings make it easier to do machine learning on large inputs like sparse vectors representing words. Ideally, an embedding captures some of the semantics of the input by placing semantically similar inputs close together in the embedding space. An embedding can be learned and reused across models.
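A minimal sketch of the idea: an embedding is just a learned lookup table that maps a sparse, high-dimensional token (here a one-hot word id over a tiny hypothetical vocabulary) to a dense low-dimensional vector. The vocabulary and random values below are made up for illustration; in a real model the table entries are learned.

```python
import numpy as np

# Hypothetical vocabulary of 5 tokens; each is mapped to a 3-dimensional vector.
# In practice these vectors are learned during training; here they are random.
rng = np.random.default_rng(0)
vocab = {"cat": 0, "dog": 1, "car": 2, "truck": 3, "apple": 4}
embedding_table = rng.normal(size=(len(vocab), 3))

def embed(word):
    """Translate a sparse token (one-hot over 5 dims) into its dense 3-dim vector."""
    return embedding_table[vocab[word]]

v = embed("cat")
print(v.shape)  # (3,) — a 5-dim one-hot input reduced to a 3-dim embedding
```

After training, semantically similar words ("cat"/"dog") would end up with nearby vectors, which is exactly the property described above.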

 

  • normalized variable:Normalization or Standardization is a process of transforming a variable into a more analytically useful form, usually using a ratio. Raw statistical data is often susceptible to misinterpretation, and normalization is one method of correcting for this. (i.e., the data after normalization/standardization — not to be confused with regularization)
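The two most common forms can be sketched in a few lines. The raw values below are hypothetical; min-max normalization rescales via the ratio (x − min)/(max − min), and z-score standardization shifts to zero mean and unit variance.

```python
import numpy as np

# Hypothetical raw data on an arbitrary scale.
raw = np.array([55.0, 60.0, 80.0, 95.0, 100.0])

# Min-max normalization: rescale into [0, 1] using a ratio.
min_max = (raw - raw.min()) / (raw.max() - raw.min())

# Z-score standardization: subtract the mean, divide by the standard deviation.
z = (raw - raw.mean()) / raw.std()

print(min_max)                      # all values now lie in [0, 1]
print(z.mean().round(10), z.std().round(10))  # ~0.0 and 1.0
```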

 

  • latent variable:(as opposed to observable variables) are variables that are not directly observed but are rather inferred (through a mathematical model) from other variables that are observed (directly measured). Mathematical models that aim to explain observed variables in terms of latent variables are called latent variable models.
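A classic illustration is a Gaussian mixture: each data point is generated by first drawing a hidden component label z (the latent variable), then drawing the observed value x from that component. The parameters below are made up; at inference time only x is seen, and z must be inferred by a model (e.g., EM for mixture models).

```python
import numpy as np

rng = np.random.default_rng(42)

# Two-component Gaussian mixture with hypothetical parameters.
n = 10_000
z = rng.binomial(1, 0.3, size=n)          # latent: which component generated each point
x = np.where(z == 0,
             rng.normal(-2.0, 1.0, n),    # component 0: mean -2
             rng.normal(3.0, 1.0, n))     # component 1: mean  3

# We observe only x (a bimodal sample); z itself is never directly measured.
print(x.shape)
```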

 

  • NB distribution:In probability theory and statistics, the negative binomial distribution is a discrete probability distribution that models the number of successes in a sequence of independent and identically distributed Bernoulli trials before a specified (non-random) number of failures (denoted r) occurs. For example, we can define rolling a 6 on a die as a failure, and rolling any other number as a success, and ask how many successful rolls will occur before we see the third failure (r = 3). In such a case, the probability distribution of the number of non-6s that appear will be a negative binomial distribution. 
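The die example above can be checked empirically. One caveat: NumPy's `negative_binomial(n, p)` counts *failures* before `n` successes with success probability `p`, so to count non-6 rolls before the third 6 we pass the 6-roll as NumPy's "success" (n=3, p=1/6).

```python
import numpy as np

rng = np.random.default_rng(0)

# Count how many non-6 rolls occur before the third 6 (r = 3).
# NumPy parameterization: failures before n successes, success probability p.
samples = rng.negative_binomial(n=3, p=1/6, size=100_000)

# Theoretical mean of this negative binomial: n * (1 - p) / p = 3 * 5 = 15.
print(samples.mean())  # close to 15
```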

 

  • generative model and discriminative model: a generative model can generate new data instances, while a discriminative model discriminates between different data instances. Given a set of data instances X and a set of labels Y,
    • Generative models capture the joint probability p(X, Y), or just p(X) if there are no labels.
    • Discriminative models capture the conditional probability p(Y | X).
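The joint/conditional distinction can be made concrete on a tiny hypothetical labelled dataset: estimating p(X, Y) from counts is the generative view, and dividing by p(X) recovers the conditional p(Y | X) that a discriminative model targets directly.

```python
from collections import Counter

# Hypothetical labelled data: X is a weather feature, Y is an action label.
data = [("sunny", "play"), ("sunny", "play"), ("rainy", "stay"),
        ("rainy", "play"), ("sunny", "stay"), ("rainy", "stay")]

n = len(data)
joint = {pair: c / n for pair, c in Counter(data).items()}  # generative: p(X, Y)
px = Counter(x for x, _ in data)                            # marginal counts of X

def p_y_given_x(y, x):
    """Discriminative quantity: p(Y | X) = p(X, Y) / p(X)."""
    return joint.get((x, y), 0.0) / (px[x] / n)

print(joint[("sunny", "play")])      # p(X=sunny, Y=play) = 2/6
print(p_y_given_x("play", "sunny"))  # p(Y=play | X=sunny) = 2/3
```

With p(X, Y) in hand you can sample new (X, Y) pairs; with only p(Y | X) you can classify but not generate — which is the practical difference between the two model families.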
posted @ 2021-11-17 20:07  DARK_happy