Subject indexing

From Wikipedia, the free encyclopedia

Subject indexing is the act of describing a document with index terms to indicate what the document is about or to summarize its content. Indexes are constructed separately on three distinct levels: terms in a document such as a book; objects in a collection such as a library; and documents (such as books and articles) within a field of knowledge.

Subject indexing is used in information retrieval, especially to create bibliographic databases for retrieving documents on a particular subject. Examples of academic indexing services are Zentralblatt MATH, Chemical Abstracts and PubMed. Index terms are mostly assigned by experts, but author keywords are also common.

The process of indexing begins with an analysis of the subject of the document. The indexer must then identify terms which appropriately identify the subject, either by extracting words directly from the document or by assigning words from a controlled vocabulary [1]. The terms in the index are then presented in a systematic order.

Indexers must decide how many terms to include and how specific the terms should be. Together these decisions determine the depth of indexing.

Subject analysis

The first step in indexing is to decide on the subject matter of the document. In manual indexing, the indexer considers the subject matter in terms of answers to a set of questions such as "Does the document deal with a specific product, condition or phenomenon?" [2]. Because the analysis is influenced by the knowledge and experience of the indexer, two indexers may analyse the content differently and so come up with different index terms, which affects the success of retrieval.

Automatic indexing follows set processes of analysing the frequencies of word patterns and comparing the results to other documents in order to assign documents to subject categories. It requires no understanding of the material being indexed and therefore leads to more uniform indexing, but at the expense of interpreting the true meaning: a computer program does not understand the meaning of statements and may therefore fail to assign some relevant terms or assign terms incorrectly. Human indexers focus their attention on certain parts of the document, such as the title, abstract, summary and conclusions, as analysing the full text in depth is costly and time consuming [3]. An automated system removes this time limit and allows the entire document to be analysed, but can also be directed to particular parts of the document.
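The frequency-based analysis described above can be sketched in a few lines of Python. This is a minimal illustration, not a production indexer: the category profiles and the sample document are hypothetical, and a real system would derive profiles from already-classified documents rather than hand-picked word lists.

```python
from collections import Counter

# Hypothetical category profiles (hand-picked terms for illustration only).
CATEGORY_PROFILES = {
    "diabetes": {"insulin", "glucose", "diabetes", "blood"},
    "indexing": {"index", "term", "vocabulary", "retrieval"},
}

def analyse_subject(text):
    """Assign a document to the category whose profile terms it mentions most."""
    words = Counter(text.lower().split())
    scores = {
        category: sum(words[w] for w in profile)
        for category, profile in CATEGORY_PROFILES.items()
    }
    return max(scores, key=scores.get)

doc = "Insulin regulates blood glucose in patients with diabetes"
print(analyse_subject(doc))  # diabetes
```

Note that the program matches surface word patterns only; a document discussing diabetes without using any profile word would be misclassified, which is exactly the limitation of automatic analysis noted above.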

Term selection

The second stage of indexing involves translating the subject analysis into a set of index terms. This can involve extracting terms from the document or assigning them from a controlled vocabulary. With the ability to conduct a full-text search widely available, many people have come to rely on their own expertise in conducting information searches, and full-text search has become very popular. Subject indexing and its experts, professional indexers, catalogers, and librarians, remain crucial to information organization and retrieval. These experts understand controlled vocabularies and are able to find information that cannot be located by full-text search. The cost of expert analysis to create subject indexing is not easily compared to the cost of the hardware, software and labor needed to produce a comparable set of full-text, fully searchable materials. With new web applications that allow every user to annotate documents, social tagging has gained popularity, especially on the Web.[4]

One application of indexing, the book index, remains relatively unchanged despite the information revolution.

Extraction indexing

Extraction indexing involves taking words directly from the document. It uses natural language and lends itself well to automated techniques, where word frequencies are calculated and those with a frequency over a pre-determined threshold are used as index terms. A stop-list containing common words such as "the" and "and" would be consulted, and such stop words would be excluded as index terms.

Automated extraction indexing may lead to loss of meaning by indexing single words as opposed to phrases. Although it is possible to extract commonly occurring phrases, it becomes more difficult if key concepts are inconsistently worded in phrases. A further problem is that even with a stop-list to remove common words, some frequent words may not be useful for discriminating between documents. For example, the term glucose is likely to occur frequently in any document related to diabetes, so using it as an index term would likely return most or all of the documents in the database. Post-coordinated indexing, where terms are combined at the time of searching, would reduce this effect, but the onus of linking appropriate terms would fall on the searcher rather than the information professional. Conversely, terms that occur infrequently may be highly significant: a new drug may be mentioned only rarely, but the novelty of the subject makes any reference significant. One automated method for including rarer terms and excluding common words is a relative frequency approach, where the frequency of a word in a document is compared to its frequency in the database as a whole. A term that occurs more often in a document than would be expected from the rest of the database can then be used as an index term, while terms that occur equally frequently throughout are excluded.

Another problem with automated extraction is that it does not recognise when a concept is discussed but is not identified in the text by an indexable keyword [5].
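The relative frequency approach above can be sketched as follows. The stop-list, corpus counts and threshold are hypothetical choices for illustration; real systems tune these against their own database.

```python
from collections import Counter

# A toy stop-list; real stop-lists contain hundreds of words.
STOP_WORDS = {"the", "and", "of", "a", "in", "to", "is"}

def extract_terms(doc_words, corpus_freq, corpus_size, threshold=2.0):
    """Select index terms that occur at least `threshold` times more often
    in this document than in the database as a whole (relative frequency)."""
    counts = Counter(w for w in doc_words if w not in STOP_WORDS)
    total = sum(counts.values())
    terms = []
    for word, n in counts.items():
        doc_rate = n / total
        corpus_rate = corpus_freq.get(word, 1) / corpus_size
        if doc_rate / corpus_rate >= threshold:
            terms.append(word)
    return sorted(terms)

# Hypothetical database of 10,000 words where "glucose" is ubiquitous.
corpus_freq = {"glucose": 3000, "insulin": 20, "novel-drug": 1}
doc = "glucose insulin insulin novel-drug".split()
print(extract_terms(doc, corpus_freq, corpus_size=10000))
# ['insulin', 'novel-drug']
```

As in the glucose example above, the frequent-everywhere term is rejected while the rare but distinctive terms survive.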

Assignment indexing

An alternative is assignment indexing, where index terms are taken from a controlled vocabulary. This has the advantage of controlling for synonyms: the preferred term is indexed, and synonyms or related terms direct the user to the preferred term. This means the user can find articles regardless of the specific term used by the author, and saves the user from having to know and check all possible synonyms [6]. It also removes any confusion caused by homographs by the inclusion of a qualifying term. A third advantage is that it allows the linking of related terms, whether they are linked by hierarchy or association; e.g. an index entry for an oral medication may list other oral medications as related terms on the same level of the hierarchy, but would also link to broader terms such as treatment. Assignment indexing is used in manual indexing to improve inter-indexer consistency, as different indexers have a controlled set of terms to choose from. Controlled vocabularies do not completely remove inconsistencies, as two indexers may still interpret the subject differently [2].
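A controlled vocabulary of this kind can be modelled as two mappings: synonyms to a preferred term, and preferred terms to their broader terms. The vocabulary entries below are hypothetical examples, not drawn from any real thesaurus.

```python
# Hypothetical controlled-vocabulary entries for illustration.
SYNONYMS = {
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
}
BROADER = {
    "myocardial infarction": ["cardiovascular diseases"],
}

def assign_terms(extracted):
    """Map an author's phrase to the preferred term plus its broader terms."""
    preferred = SYNONYMS.get(extracted.lower(), extracted.lower())
    return [preferred] + BROADER.get(preferred, [])

print(assign_terms("Heart attack"))
# ['myocardial infarction', 'cardiovascular diseases']
```

Whichever synonym the author used, the document is indexed under the same preferred term, and the broader-term link lets a search on the wider subject retrieve it as well.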

Index presentation

The final phase of indexing is to present the entries in a systematic order. This may involve linking entries. In a pre-coordinated index, the indexer determines the order in which terms are linked in an entry by considering how a user may formulate their search. In a post-coordinated index, the entries are presented singly and the user links the entries through searches, most commonly carried out by computer software. Post-coordination results in a loss of precision in comparison to pre-coordination [7].
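Post-coordination amounts to combining single-term entries at search time, typically as a set intersection over each term's list of document identifiers. The index contents below are hypothetical sample data.

```python
# A toy post-coordinated index: each term maps to the set of document ids
# it was assigned to (hypothetical data).
index = {
    "diabetes": {1, 2, 5},
    "children": {2, 3, 5},
    "treatment": {1, 2, 4},
}

def search(*terms):
    """AND-combine single-term entries at search time (post-coordination)."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

print(sorted(search("diabetes", "children")))  # [2, 5]
```

The loss of precision noted above shows here: the intersection cannot tell whether document 2 is about diabetes *in* children or merely mentions both topics separately, a distinction a pre-coordinated entry would preserve.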

Depth of indexing

Indexers must make decisions about what entries should be included and how many entries an index should incorporate. The depth of indexing describes the thoroughness of the indexing process with reference to exhaustivity and specificity [8].

Exhaustivity

An exhaustive index is one which lists all possible index terms. Greater exhaustivity gives higher recall, or a greater likelihood that all the relevant articles will be retrieved; however, this comes at the expense of precision, meaning that the user may retrieve a larger number of irrelevant documents, or documents which deal with the subject only in little depth. In a manual system a greater level of exhaustivity brings a greater cost, as more man hours are required; the additional time taken in an automated system is much less significant. At the other end of the scale, in a selective index only the most important aspects are covered [9]. Recall is reduced in a selective index: if an indexer does not include enough terms, a highly relevant article may be overlooked. Indexers should therefore strive for a balance and consider what the document may be used for. They may also have to consider the implications of time and expense.
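The recall/precision trade-off above can be made concrete with a small worked example; the document sets are hypothetical.

```python
def recall_precision(retrieved, relevant):
    """Recall = fraction of relevant documents retrieved;
    precision = fraction of retrieved documents that are relevant."""
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

relevant = {1, 2, 3, 4}                 # documents actually on the subject
selective = {1, 2}                      # result under selective indexing
exhaustive = {1, 2, 3, 4, 5, 6, 7, 8}   # result under exhaustive indexing

print(recall_precision(selective, relevant))   # (0.5, 1.0)
print(recall_precision(exhaustive, relevant))  # (1.0, 0.5)
```

The exhaustive index retrieves every relevant document (recall 1.0) but half of what it returns is irrelevant (precision 0.5); the selective index retrieves only relevant documents but misses half of them.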

Specificity

Specificity describes how closely the index terms match the topics they represent [10]. An index is said to be specific if the indexer uses descriptors parallel to the concepts of the document and reflects those concepts precisely [11]. Specificity tends to increase with exhaustivity, as the more terms you include, the narrower those terms will be.

References

  1. ^ F. W. Lancaster (2003): "Indexing and abstracting in theory and practice". Third edition. London, Facet. ISBN 1-85604-482-3. Page 6.
  2. ^ a b G. G. Chowdhury (2004): "Introduction to modern information retrieval". Third edition. London, Facet. ISBN 1-85604-480-7. Page 71.
  3. ^ F. W. Lancaster (2003): "Indexing and abstracting in theory and practice". Third edition. London, Facet. ISBN 1-85604-482-3. Page 24.
  4. ^ Voss, Jakob (2007): "Tagging, Folksonomy & Co - Renaissance of Manual Indexing?". Proceedings of the International Symposium of Information Science, pages 234-254. http://arxiv.org/abs/cs/0701072.
  5. ^ J. Lamb (2008): Human or computer produced indexes? [online]. Sheffield, Society of Indexers. Accessed 15 January 2009.
  6. ^ C. Tenopir (1999): "Human or automated, indexing is important". Library Journal, 124(18), pages 34-38.
  7. ^ D. Bodoff and A. Kambil (1998): "Partial coordination. I. The best of pre-coordination and post-coordination". Journal of the American Society for Information Science, 49(14), pages 1254-1269.
  8. ^ D. B. Cleveland and A. D. Cleveland (2001): "Introduction to indexing and abstracting". Third edition. Englewood, Libraries Unlimited, Inc. ISBN 1-56308-641-7. Page 105.
  9. ^ B. H. Weinberg (1990): "Exhaustivity of indexes: Books, journals, and electronic full texts; Summary of a workshop presented at the 1999 ASI Annual Conference". Key Words, 7(5), pages 1+.
  10. ^ J. D. Anderson (1997): Guidelines for indexes and related information retrieval devices [online]. Bethesda, Maryland, NISO Press. Accessed 10 December 2008.
  11. ^ D. B. Cleveland and A. D. Cleveland (2001): "Introduction to indexing and abstracting". Third edition. Englewood, Libraries Unlimited, Inc. ISBN 1-56308-641-7. Page 106.
