From Wikipedia, the free encyclopedia

WordNet is a lexical database for the English language.[1] It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. The purpose is twofold: to produce a combination of dictionary and thesaurus that is more intuitively usable, and to support automatic text analysis and artificial intelligence applications. The database and software tools have been released under a BSD-style license and can be downloaded and used freely. The database can also be browsed online.

WordNet was created and is being maintained at the Cognitive Science Laboratory of Princeton University under the direction of psychology professor George A. Miller. Development began in 1985, and over the years the project has received funding from government agencies interested in machine translation. As of 2009, the WordNet team includes the following members of the Cognitive Science Laboratory: George A. Miller, Christiane Fellbaum, Randee Tengi, Pamela Wakefield, Helen Langone and Benjamin R. Haskell. WordNet has been supported by grants from the National Science Foundation, DARPA, the Disruptive Technology Office (formerly the Advanced Research and Development Activity), and REFLEX. George Miller and Christiane Fellbaum were awarded the 2006 Antonio Zampolli Prize for their work on WordNet.


Database contents

As of 2006, the database contains about 150,000 words organized in over 115,000 synsets for a total of 207,000 word-sense pairs; in compressed form, it is about 12 megabytes in size.[2]

WordNet distinguishes between nouns, verbs, adjectives and adverbs because they follow different grammatical rules. Every synset contains a group of synonymous words or collocations (a collocation is a sequence of words that go together to form a specific meaning, such as "car pool"); different senses of a word are placed in different synsets. The meaning of a synset is further clarified by a short defining gloss (a definition and/or example sentences). A typical example synset with its gloss is:

good, right, ripe -- (most suitable or right for a particular purpose; "a good time to plant tomatoes"; "the right time to act"; "the time is ripe for great sociological changes")

Most synsets are connected to other synsets via a number of semantic relations. These relations vary based on the type of word, and include:

  • Nouns
    • hypernyms: Y is a hypernym of X if every X is a (kind of) Y (canine is a hypernym of dog)
    • hyponyms: Y is a hyponym of X if every Y is a (kind of) X (dog is a hyponym of canine)
    • coordinate terms: Y is a coordinate term of X if X and Y share a hypernym (wolf is a coordinate term of dog, and dog is a coordinate term of wolf)
    • holonym: Y is a holonym of X if X is a part of Y (building is a holonym of window)
    • meronym: Y is a meronym of X if Y is a part of X (window is a meronym of building)
  • Verbs
    • hypernym: the verb Y is a hypernym of the verb X if the activity X is a (kind of) Y (to perceive is a hypernym of to listen)
    • troponym: the verb Y is a troponym of the verb X if the activity Y is doing X in some manner (to lisp is a troponym of to talk)
    • entailment: the verb Y is entailed by X if by doing X you must be doing Y (to sleep is entailed by to snore)
    • coordinate terms: those verbs sharing a common hypernym (to lisp and to yell)
  • Adjectives
    • related nouns
    • similar to
    • participle of verb
  • Adverbs
    • root adjectives

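The relations above are labelled pointers between synsets, and can be modelled directly as a small graph. The following sketch uses made-up miniature synsets with abridged, illustrative glosses (not the real database) to show how a transitive hyponym test falls out of the hypernym pointers:

```python
from dataclasses import dataclass, field

@dataclass
class Synset:
    """A toy synset: a tuple of synonymous lemmas plus a defining gloss."""
    lemmas: tuple
    gloss: str
    hypernyms: list = field(default_factory=list)  # pointers to more general synsets

def is_hyponym(x, y):
    """True if x is a (transitive) hyponym of y, i.e. every x is a (kind of) y."""
    return y in x.hypernyms or any(is_hyponym(h, y) for h in x.hypernyms)

# hand-built fragment of the noun hierarchy, for illustration only
carnivore = Synset(("carnivore",), "a flesh-eating mammal")
canine = Synset(("canine", "canid"), "a mammal of the dog family",
                hypernyms=[carnivore])
dog = Synset(("dog", "domestic dog", "Canis familiaris"),
             "a member of the genus Canis", hypernyms=[canine])
```

Here `is_hyponym(dog, carnivore)` holds even though the link is indirect, which is exactly how applications traverse WordNet's IS-A pointers.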
Semantic relations hold between whole synsets, since all members of a synset share the same meaning and are mutual synonyms. Individual words can additionally be connected to other words through lexical relations, such as antonymy (words that are opposites of each other) and derivational relatedness.

WordNet also provides the polysemy count of a word: the number of synsets that contain the word. If a word participates in several synsets (i.e. has several senses), some senses are typically much more common than others. WordNet quantifies this with a frequency score: several sample texts were semantically tagged, assigning each word occurrence to its corresponding synset, and a count records how often each word appears in each of its senses.
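A minimal sketch of both measures, using a hypothetical two-sense inventory and a hand-tagged toy corpus (the identifiers are illustrative, not real WordNet synset names):

```python
from collections import Counter

# hypothetical miniature sense inventory: word -> its synset identifiers
SENSES = {"bank": ["bank.n.01", "bank.n.02"], "dog": ["dog.n.01"]}

def polysemy_count(word):
    """Number of synsets containing the word."""
    return len(SENSES[word])

# a tiny sense-tagged sample "corpus": (word, synset) occurrences
tagged_corpus = [("bank", "bank.n.01"), ("bank", "bank.n.01"),
                 ("bank", "bank.n.02"), ("dog", "dog.n.01")]

# frequency score: how often each word occurs in each of its senses
freq = Counter(tagged_corpus)
```

With this data, `polysemy_count("bank")` is 2, and `freq` shows the first sense of *bank* occurring twice as often as the second.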

The morphology functions of the software distributed with the database try to deduce the lemma or root form of a word from the user's input; only the root form is stored in the database unless it has irregular inflected forms.
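A rough sketch of how such a lemmatizer can work, using a hypothetical exception list and suffix-detachment rules loosely modelled on the distributed `morphy` function (the real rule set and exception files are much larger):

```python
# hand-picked irregular forms, standing in for WordNet's exception files
NOUN_EXCEPTIONS = {"geese": "goose", "children": "child"}

# (suffix to strip, replacement) detachment rules, tried in order
NOUN_RULES = [("ies", "y"), ("ses", "s"), ("es", ""), ("s", "")]

def lemmatize_noun(word, lexicon):
    """Reduce an inflected noun to a root form that exists in the lexicon."""
    if word in NOUN_EXCEPTIONS:            # irregular forms are looked up directly
        return NOUN_EXCEPTIONS[word]
    if word in lexicon:                    # already a root form
        return word
    for suffix, repl in NOUN_RULES:        # otherwise try stripping suffixes
        if word.endswith(suffix):
            candidate = word[: len(word) - len(suffix)] + repl
            if candidate in lexicon:
                return candidate
    return None
```

For example, `lemmatize_noun("ponies", {"pony"})` strips `-ies` and restores `-y`, while `"geese"` is resolved through the exception table.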

Knowledge structure

Both nouns and verbs are organized into hierarchies defined by hypernym (IS-A) relationships. For instance, the first sense of the word dog has the following hypernym hierarchy; the words at the same level are synonyms of each other: some sense of dog is synonymous with some senses of domestic dog and Canis familiaris, and so on. Each set of synonyms (synset) has a unique index and shares its properties, such as a gloss (dictionary definition).

 dog, domestic dog, Canis familiaris
    => canine, canid
       => carnivore
         => placental, placental mammal, eutherian, eutherian mammal
           => mammal
             => vertebrate, craniate
               => chordate
                 => animal, animate being, beast, brute, creature, fauna
                   => ...
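Because each synset stores a pointer to its hypernym, a chain like the one above can be recovered by simply following pointers until none remain. A sketch over a hand-built fragment of the hierarchy (one sense per word, keyed by head lemma; not the real database):

```python
# toy hypernym pointers mirroring the chain shown above
HYPERNYM = {
    "dog": "canine", "canine": "carnivore", "carnivore": "placental",
    "placental": "mammal", "mammal": "vertebrate", "vertebrate": "chordate",
    "chordate": "animal",
}

def hypernym_chain(word):
    """Follow IS-A pointers from a word up to the top of its hierarchy."""
    chain = [word]
    while chain[-1] in HYPERNYM:
        chain.append(HYPERNYM[chain[-1]])
    return chain

# print the chain with increasing indentation, like the display above
for depth, word in enumerate(hypernym_chain("dog")):
    print("  " * depth + "=> " + word)
```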

At the top level, these hierarchies are organized into base types: 25 primitive groups for nouns and 15 for verbs. These groups correspond to lexicographer files used at the maintenance level. An abstract root node subsuming all of the primitive groups has, for some time, been assumed by various applications that use WordNet.

In the case of adjectives, the organization is different. Two opposite 'head' senses work as binary poles, while 'satellite' synonyms connect to each of the heads via synonymy relations. Thus, the hierarchies, and the concept involved with lexicographic files, do not apply here the same way they do for nouns and verbs.

The network of nouns is far deeper than that of the other parts of speech. Verbs have a far bushier structure, and adjectives are organized into many distinct clusters. Adverbs are defined in terms of the adjectives they are derived from, and thus inherit their structure from that of the adjectives.

Psychological justification

The goal of WordNet was to develop a system that would be consistent with the knowledge acquired over the years about how human beings process language. Anomic aphasia, for example, selectively impairs individuals' ability to name objects; this makes the decision to partition the parts of speech into distinct hierarchies a principled rather than an arbitrary one.

In the case of hyponymy, psychological experiments revealed that individuals can access properties of nouns more quickly the fewer levels of hyponymy separate the noun from the level at which the property is defining. That is, individuals can quickly verify that canaries can sing because a canary is a songbird (only one level of hyponymy), but require slightly more time to verify that canaries can fly (two levels of hyponymy) and even more time to verify that canaries have skin (multiple levels of hyponymy). This suggests that humans, too, store semantic information in a way much like WordNet, retaining only the most specific information needed to differentiate a particular concept from similar concepts.[3]

WordNet as an ontology

The hypernym/hyponym relationships among the noun synsets can be interpreted as specialization relations between conceptual categories. In other words, WordNet can be interpreted and used as a lexical ontology in the computer science sense. However, such an ontology should normally be corrected before use, since it contains hundreds of basic semantic inconsistencies, such as (i) the existence of common specializations for exclusive categories and (ii) redundancies in the specialization hierarchy. Furthermore, transforming WordNet into a lexical ontology usable for knowledge representation should normally also involve (i) distinguishing the specialization relations into subtypeOf and instanceOf relations, and (ii) associating intuitive unique identifiers with each category.

Although such corrections and transformations have been performed and documented as part of the integration of WordNet 1.7 into the cooperatively updatable knowledge base of WebKB-2, most projects claiming to re-use WordNet for knowledge-based applications (typically, knowledge-oriented information retrieval) simply re-use it directly. WordNet has also been converted to a formal specification by means of a hybrid bottom-up/top-down methodology that automatically extracts association relations from WordNet and interprets them in terms of a set of conceptual relations formally defined in the DOLCE foundational ontology.[4]
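A minimal sketch of the first correction, splitting specialization links into subtypeOf and instanceOf over hypothetical data: an individual inherits membership in all supertypes of its class, but never itself becomes a category.

```python
# hypothetical specialization links, split into two distinct relations
SUBTYPE_OF = {"dog": "canine", "canine": "carnivore"}
INSTANCE_OF = {"Lassie": "dog"}  # proper-noun synsets become instances, not subtypes

def ancestors(category):
    """Transitive closure of subtypeOf above a category."""
    out = []
    while category in SUBTYPE_OF:
        category = SUBTYPE_OF[category]
        out.append(category)
    return out

def categories_of(individual):
    """An individual belongs to its direct class and all of its supertypes."""
    direct = INSTANCE_OF[individual]
    return [direct] + ancestors(direct)
```

Under this split, Lassie is a member of dog, canine and carnivore, but a query for subtypes of canine returns only dog, which avoids treating individuals as conceptual categories.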

Problems and Limitations

Unlike other dictionaries, WordNet does not include information about etymology, pronunciation and the forms of irregular verbs and contains only limited information about usage.

The actual lexicographical and semantic information is maintained in lexicographer files, which are then processed by a tool called grind to produce the distributed database. Both grind and the lexicographer files are freely available in a separate distribution, but modifying and maintaining the database requires expertise.

Though WordNet contains a sufficiently wide range of common words, it does not cover special-domain vocabulary. Since it is primarily designed to act as an underlying database for different applications, those applications cannot be used in specific domains not covered by WordNet.

In most works that claim to have integrated WordNet into other ontologies, the content of WordNet has not simply been corrected when semantic problems have been encountered; instead, WordNet has been used as an inspiration source but heavily re-interpreted and updated whenever suitable. This was the case when, for example, the top-level ontology of WordNet was re-structured[5] according to the OntoClean based approach or when WordNet was used as a primary source for constructing the lower classes of the SENSUS ontology.

WordNet is the most commonly used computational lexicon of English for word sense disambiguation (WSD), a task aimed at assigning the most appropriate senses (i.e. synsets) to words in context. However, it has been argued that WordNet encodes sense distinctions that are too fine-grained even for humans, which prevents WSD systems from achieving high performance. The granularity issue has been tackled by proposing clustering methods that automatically group together similar senses of the same word.[6][7][8]


Applications

WordNet has been used for a number of different purposes in information systems, including word sense disambiguation, information retrieval, automatic text classification, automatic text summarization, and even automatic crossword puzzle generation.

A project at Brown University started by Jeff Stibel, James A. Anderson, Steve Reiss and others, called the Applied Cognition Lab, created a disambiguator using WordNet in 1998.[9] The project later morphed into a company called Simpli, which is now owned by ValueClick. George Miller joined the company as a member of its advisory board. Simpli built an Internet search engine that utilized a knowledge base principally based on WordNet to disambiguate and expand keywords and synsets to help retrieve information online. WordNet was expanded to add further dimensions, such as intentionality (used for x), people (Albert Einstein) and colloquial terminology more relevant to Internet search (e.g., blogging, ecommerce). Neural network algorithms searched the expanded WordNet for related terms to disambiguate search keywords (Java, in the sense of coffee) and to expand the search synset (coffee, drink, joe) to improve search engine results.[10] Before the company was acquired, it performed searches across search engines such as Google, Yahoo!, and others.
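A toy illustration of the idea (not Simpli's actual algorithm): pick the sense of a keyword whose hand-written signature overlaps the surrounding context most, then return that sense's expansion terms. The sense inventory below is entirely made up for the example.

```python
# hypothetical sense inventory: keyword -> list of (signature words, expansion terms)
SENSES = {
    "java": [
        ({"coffee", "drink", "beverage", "cup"}, ["coffee", "drink", "joe"]),
        ({"programming", "language", "software", "code"}, ["jvm", "bytecode"]),
    ]
}

def expand(keyword, context_words):
    """Disambiguate a keyword by signature/context overlap, then expand it."""
    best_signature, best_expansion = max(
        SENSES[keyword],
        key=lambda sense: len(sense[0] & set(context_words)),
    )
    return best_expansion
```

For a query like "hot java coffee", the coffee signature wins the overlap count, so the query is expanded with coffee-related synonyms rather than programming terms.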

Another prominent example of the use of WordNet is to determine the similarity between words. Various algorithms have been proposed, and these include considering the distance between the conceptual categories of words, as well as considering the hierarchical structure of the WordNet ontology. A number of these WordNet-based word similarity algorithms are implemented in a Perl package called WordNet::Similarity, and in a Python package called NLTK.
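One of the simplest such measures scores two words by the length of the shortest path between them through a shared hypernym. A sketch on a toy hierarchy (the data and the scoring function are illustrative, not the exact formulas implemented in WordNet::Similarity or NLTK):

```python
# toy hypernym tree: each node points to its parent
PARENT = {"dog": "canine", "wolf": "canine", "canine": "carnivore",
          "cat": "feline", "feline": "carnivore"}

def path_to_root(node):
    """The node followed by all of its hypernyms, bottom-up."""
    path = [node]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def path_similarity(a, b):
    """1 / (1 + shortest path length through the lowest common hypernym)."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = next(n for n in pa if n in pb)  # lowest common hypernym
    dist = pa.index(common) + pb.index(common)
    return 1.0 / (1.0 + dist)
```

Identical words score 1.0; dog and wolf (siblings under canine) score 1/3; dog and cat, which meet only at carnivore, score lower still.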


Interfaces

Princeton maintains a list of related projects that includes links to some of the widely used application programming interfaces available for accessing WordNet using various programming languages and environments.

Extensions and linked data

WordNet is connected to several databases of the Semantic Web. WordNet is also commonly re-used via mappings between WordNet categories (i.e. synsets) and the categories of other ontologies; most often, only the top-level categories of WordNet are mapped.

  • The SUMO ontology[11] has produced a mapping between all of the WordNet synsets (nouns, verbs, adjectives and adverbs) and SUMO classes. The most recent version of the mappings also provides links to the more specific terms in the Mid-Level Ontology (MILO), which extends SUMO.
  • DBpedia[13], a database of structured information, is also linked to WordNet.
  • The EuroWordNet project[14] has produced wordnets for several European languages and linked them together; these are not freely available, however. The Global WordNet project attempts to coordinate the production and linking of "wordnets" for all languages. Oxford University Press, the publisher of the Oxford English Dictionary, has voiced plans to produce its own online competitor to WordNet[citation needed].
  • The eXtended WordNet[15] is a project at the University of Texas at Dallas which aims to improve WordNet by semantically parsing the glosses, thus making the information contained in these definitions available for automatic knowledge processing systems. It is also freely available under a license similar to WordNet's.
  • WOLF (WordNet Libre du Français), a French version of WordNet[16].
  • The MultiWordNet project[17], a multilingual WordNet aimed at producing an Italian WordNet strongly aligned with the Princeton WordNet.
  • ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds to thousands of images.[18] Currently it averages over five hundred images per node.
  • BioWordNet, a biomedical extension of WordNet, was abandoned due to concerns about stability across versions.[19]
  • The BalkaNet project[20] has produced wordnets for six European languages (Bulgarian, Czech, Greek, Romanian, Turkish and Serbian). For this project, a freely available XML-based WordNet editor, VisDic, was developed. VisDic is no longer in active development but is still used for the creation of various wordnets. Its successor, DEBVisDic, is a client-server application currently used for editing several wordnets (Dutch in the Cornetto project, Polish, Hungarian, several African languages, and Chinese).
  • WordNet has also been automatically linked to Wikipedia categories, as a result of the WikiTax2WordNet project.[21]

Related projects

  • FrameNet is a project similar to WordNet. It consists of a lexicon which is based on annotating over 100,000 sentences with their semantic properties. The unit in focus is the lexical frame, a type of state or event together with the properties associated with it.
  • An independent project titled wordNet (with an initial lower-case w) is an ongoing effort to link words and phrases via a custom Web crawler.
  • Lexical Markup Framework (LMF) is a work in progress within ISO/TC37 to define a common standardized framework for the construction of lexicons, including WordNet.


  1. ^ G. A. Miller, R. Beckwith, C. D. Fellbaum, D. Gross, K. Miller. 1990. WordNet: An online lexical database. Int. J. Lexicograph. 3, 4, pp. 235-244.
  2. ^ WNSTATS(7WN) manual page
  3. ^ Collins A., Quillian M. R. 1972. Experiments on Semantic Memory and Language Comprehension. In Cognition in Learning and Memory. Wiley, New York.
  4. ^ A. Gangemi, R. Navigli, P. Velardi. The OntoWordNet Project: Extension and Axiomatization of Conceptual Relations in WordNet, In Proc. of International Conference on Ontologies, Databases and Applications of SEmantics (ODBASE 2003), Catania, Sicily (Italy), 2003, pp. 820-838.
  5. ^ A. Oltramari, A. Gangemi, N. Guarino, and C. Masolo. 2002. Restructuring WordNet's Top-Level: The OntoClean approach. In Proc. of OntoLex'2 Workshop, Ontologies and Lexical Knowledge Bases (LREC 2002). Las Palmas, Spain, pp. 17-26.
  6. ^ E. Agirre, O. Lopez. 2003. Clustering WordNet Word Senses. In Proc. of the Conference on Recent Advances on Natural Language (RANLP’03), Borovetz, Bulgaria, pp. 121-130.
  7. ^ R. Navigli. Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance, In Proc. of the 44th Annual Meeting of the Association for Computational Linguistics joint with the 21st International Conference on Computational Linguistics (COLING-ACL 2006), Sydney, Australia, July 17-21st, 2006, pp. 105-112.
  8. ^ R. Snow, S. Prakash, D. Jurafsky, A. Y. Ng. 2007. Learning to Merge Word Senses, In Proc. of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Prague, Czech Republic, pp. 1005-1014.
  9. ^ O. Malik. How google is that?. Forbes, 10.04.1999
  10. ^ P. J. Hane. Beyond Keyword Searching—Oingo and Introduce Meaning-Based Searching. InfoToday, Posted On December 20, 1999.
  11. ^ A. Pease, I. Niles, J. Li. 2002. The suggested upper merged ontology: A large ontology for the Semantic Web and its applications. In Proc. of the AAAI-2002 Workshop on Ontologies and the Semantic Web, Edmonton, Canada.
  12. ^ S. Reed and D. Lenat. 2002. Mapping Ontologies into Cyc. In Proc. of AAAI 2002 Conference Workshop on Ontologies For The Semantic Web, Edmonton, Canada, 2002
  13. ^ C. Bizer, J. Lehmann, G. Kobilarov, S. Auer, C. Becker, R. Cyganiak, S. Hellmann, DBpedia - A crystallization point for the Web of Data. Web Semantics, 7(3), 2009, pp. 154-165
  14. ^ P. Vossen, Ed. 1998. EuroWordNet: A Multilingual Database with Lexical Semantic Networks. Kluwer, Dordrecht, The Netherlands.
  15. ^ S. M. Harabagiu, G. A. Miller, D. I. Moldovan. 1999. WordNet 2 - A Morphologically and Semantically Enhanced Resource. In Proc. of the ACL SIGLEX Workshop: Standardizing Lexical Resources, pp. 1-8.
  16. ^ S. Benoît, F. Darja. 2008. Building a free French wordnet from multilingual resources. In Proc. of Ontolex 2008, Marrakech, Maroc.
  17. ^ E. Pianta, L. Bentivogli, C. Girardi. 2002. MultiWordNet: Developing an aligned multilingual database. In Proc. of the 1st International Conference on Global WordNet, Mysore, India, pp. 21-25.
  18. ^ J. Deng, W. Dong, R. Socher, L. Li, K. Li, L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In Proc. of 2009 IEEE Conference on Computer Vision and Pattern Recognition
  19. ^ M. Poprat, E. Beisswanger, U. Hahn. 2008. Building a BIOWORDNET by Using WORDNET’s Data Formats and WORDNET’s Software Infrastructure - A Failure Story. In Proc. of the Software Engineering, Testing, and Quality Assurance for Natural Language Processing Workshop, pp. 31-39.
  20. ^ D. Tufis, D. Cristea, S. Stamou. 2004. Balkanet: Aims, methods, results and perspectives. A general overview. Romanian J. Sci. Tech. Inform. (Special Issue on Balkanet), 7(1-2), pp. 9-43.
  21. ^ S. Ponzetto, R. Navigli. Large-Scale Taxonomy Mapping for Restructuring and Integrating Wikipedia, In Proc. of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), Pasadena, California, July 14-17th, 2009, pp. 2083-2088.


Up to date as of January 15, 2010

Definition from Wiktionary, a free dictionary




Etymology: word + net

Pronunciation: word + net, with stress on the first syllable





  1. (linguistics, artificial intelligence) WordNet is a wordnet (a semantically structured lexical database) for the English language, developed at Princeton University.
    • 1996, Keith J. Holyoak, Paul Thagard, Mental Leaps: Analogy in Creative Thought, page 259
      Copycat uses a network of concepts, called a Slipnet, to find correspondences between nonidentical objects, just as ARCS uses WordNet-style semantic information to find similar concepts.
    • 1997, Josephine A. Edwards, Geoffrey Kingscott, Language Industries Atlas, page 334
      Recently the Group started a project of creating a thesaurus of the WordNet type for Estonian.
    • 2001, Philippe Martin and Peter Eklund, "Large-scale cooperatively built KBs", Harry S. Delugach, Gerd Stumme, Conceptual Structures: Broadening the Base : 9th International Conference on, page 232
      By (partly) mirroring one another, general servers would probably share a similar general WordNet-like or CYC-like ontology […].
    • 2002, Nicoletta Calzolari et al., "Towards a Standard for a Multilingual Lexical Entry: The EAGLES/ISLE Initiative", in Alexander Gelbukh, ed., Computational Linguistics and Intelligent Text Processing: Third, page 270
      One very interesting possibility seems to be to complement WordNet-style lexicons with the SIMPLE design, thereby trying to get at a more comprehensive and coherent architecture for the development of semantic lexical resources.
    • 2003, A Practical Guide to Lexicography, edited by Piet Van Sterkenburg, page 198
      Chai (2000) uses WordNet for information extraction, ie the identification and extraction of domain specific target information from a document [....]
    • 2005, Paul Buitelaar, Philipp Cimiano, Bernardo Magnini, Ontology Learning from Text: Methods, Evaluation And Applications, page 64
      For conference, which has 3 senses in WordNet, we get the following candidate taxonomic relations: [....]

