Cover's theorem

Cover's theorem is a statement in computational learning theory and one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning. The theorem states that, given a set of training data that is not linearly separable, one can with high probability transform it into a linearly separable training set by projecting it into a higher-dimensional space via some non-linear transformation.
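
The quantitative core of the result, from Cover's 1965 paper cited below, is a function-counting argument: for N points in general position in d dimensions, the number of dichotomies (two-class labelings) realizable by a homogeneous linear separator is

    C(N, d) = 2 \sum_{k=0}^{d-1} \binom{N-1}{k}

Since C(N, d) = 2^N whenever N <= d, every labeling becomes separable once the dimension is large enough, and at N = 2d exactly half of all 2^N labelings are separable. Raising the dimension of the embedding space therefore raises the probability that a given dichotomy is linearly separable.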

A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated.

Cover, T. M., "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition", 1965
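
As a concrete illustration (a minimal sketch, not from the original article), the XOR problem is the classic example: four points in the plane that no single line can separate, yet a simple non-linear feature map (here the hand-picked monomial map phi(x1, x2) = (x1, x2, x1*x2), chosen purely for illustration, with hand-chosen separating weights) makes them separable by a plane in three dimensions:

    import numpy as np

    # XOR points in the plane: no straight line separates the two classes.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, +1, +1, -1])  # class labels

    # Illustrative non-linear feature map into R^3: (x1, x2) -> (x1, x2, x1*x2).
    def phi(x):
        return np.array([x[0], x[1], x[0] * x[1]])

    Z = np.array([phi(x) for x in X])

    # In the lifted space a single hyperplane separates the classes.
    # These weights were chosen by hand for this sketch, not learned.
    w = np.array([1.0, 1.0, -2.0])
    b = -0.5

    predictions = np.sign(Z @ w + b)
    print(predictions)                      # [-1.  1.  1. -1.]
    print(np.array_equal(predictions, y))   # True

The third coordinate x1*x2 is what the linear separator exploits: it is non-zero only for the point (1, 1), so a plane in the lifted space can assign that point to the same class as (0, 0), which no line in the original plane can do.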

References

Haykin, Simon (2009). Neural Networks and Learning Machines (3rd ed.). Upper Saddle River, NJ: Pearson Education. pp. 232-236. ISBN 978-0-13-147139-9.

Cover, T. M. (1965). "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition". IEEE Transactions on Electronic Computers EC-14: 326-334.

