Bati's eHome of Tech

Around the middle of the 20th century, the Russian theoretician Andrey Tikhonov was working on the solution of ill-posed problems. These are mathematical problems for which no unique solution exists because, in effect, there is not enough information specified in the problem. It is necessary to supply extra information (or assumptions), and the mathematical technique Tikhonov developed for doing so is known as regularisation.

Tikhonov's work only became widely known in the West after the publication of his book in 1977 [29]. Meanwhile, two American statisticians, Arthur Hoerl and Robert Kennard, published a paper in 1970 [11] on ridge regression, a method for solving badly conditioned linear regression problems. Bad conditioning means numerical difficulties in performing the matrix inverse necessary to obtain the variance matrix. It is also a symptom of an ill-posed regression problem in Tikhonov's sense, and Hoerl & Kennard's method was in fact a crude form of regularisation, now known as zero-order regularisation [25].
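
To make the connection concrete, here is a minimal sketch of zero-order regularisation in plain NumPy (my own illustration, not code from either reference); the names X, y and lam are just placeholders for the design matrix, the targets and the single regularisation parameter.

    import numpy as np

    def ridge_weights(X, y, lam):
        # Zero-order regularisation: solve (X'X + lam*I) w = X'y.
        # lam = 0 recovers ordinary least squares; lam > 0 adds a ridge
        # to the diagonal and improves the conditioning of the matrix
        # that has to be inverted.
        A = X.T @ X + lam * np.eye(X.shape[1])
        return np.linalg.solve(A, X.T @ y)

    # Two nearly collinear inputs make X'X almost singular.
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=50)
    X = np.column_stack([x1, x1 + 1e-6 * rng.normal(size=50)])
    y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
    print(ridge_weights(X, y, lam=0.0))  # unstable: near-singular system
    print(ridge_weights(X, y, lam=0.1))  # stabilised by the ridge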

In the 1980s, when neural networks became popular, weight decay was one of a number of techniques 'invented' to help prune unimportant network connections. However, it was soon recognised [8] that weight decay amounts to adding the same penalty term to the sum-squared-error as in ridge regression. Weight decay and ridge regression are equivalent.
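
In a generic notation (mine, not the cited papers'), the common cost function is the sum-squared-error plus a weight penalty scaled by a regularisation parameter:

    C = \sum_{i=1}^{p} \bigl( \hat{y}_i - f(\mathbf{x}_i) \bigr)^2 + \lambda \sum_{j=1}^{m} w_j^2

Larger values of lambda shrink the weights towards zero (the 'decay' in weight decay); lambda = 0 gives back the unregularised least-squares fit.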

While it is admittedly crude, I like ridge regression because it is mathematically and computationally convenient; consequently, other forms of regularisation are rather ignored here. If the reader is interested in higher-order regularisation, I suggest looking at [25] for a general overview and at [16] for a specific example (second-order regularisation in RBF networks).

We next describe ridge regression from the perspective of bias and variance, and how it affects the equations for the optimal weight vector, the variance matrix and the projection matrix. A method for selecting a good value of the regularisation parameter, based on a re-estimation formula, is then presented. Next comes a generalisation of ridge regression which, when radial basis functions are used, can justly be called local ridge regression. It involves multiple regularisation parameters, and we describe a method for their optimisation. Finally, we illustrate with a simple example.
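
As a preview of that generalisation (my own illustration, assuming one regularisation parameter per basis function, with H standing for the design matrix of basis-function outputs; how the parameters are optimised is described later), the optimal weight vector would be computed roughly like this:

    import numpy as np

    def local_ridge_weights(H, y, lams):
        # Local ridge regression: one regularisation parameter per weight.
        # The single lam * I of ordinary ridge regression is replaced by a
        # diagonal matrix holding the per-basis-function parameters lams.
        A = H.T @ H + np.diag(lams)
        return np.linalg.solve(A, H.T @ y)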

posted on 2008-05-21 15:42 by Bati