Meta-analysis


Encyclopedia

From Wikipedia, the free encyclopedia

In statistics, a meta-analysis combines the results of several studies that address a set of related research hypotheses. This is normally done by identifying a common measure of effect size, which is modelled using a form of meta-regression. The resulting overall averages, when controlling for study characteristics, can be considered meta-effect sizes, which are more powerful estimates of the true effect size than those derived from any single study under a given set of assumptions and conditions.


History

The first meta-analysis was performed by Karl Pearson in 1904, in an attempt to overcome the problem of reduced statistical power in studies with small sample sizes; analyzing the results from a group of studies can allow more accurate data analysis.[1][2] However, the first meta-analysis of all conceptually identical experiments concerning a particular research issue, and conducted by independent researchers, has been identified as the 1940 book-length publication Extra-sensory perception after sixty years, authored by Duke University psychologists J. G. Pratt, J. B. Rhine, and associates.[3] This encompassed a review of 145 reports on ESP experiments published from 1882 to 1939, and included an estimate of the influence of unpublished papers on the overall effect (the file-drawer problem). Although meta-analysis is widely used in epidemiology and evidence-based medicine today, a meta-analysis of a medical treatment was not published until 1955. In the 1970s, more sophisticated analytical techniques were introduced in educational research, starting with the work of Gene V. Glass, Frank L. Schmidt and John E. Hunter. The online Oxford English Dictionary lists the first usage of the term in the statistical sense as 1976 by Glass.[4] The statistical theory surrounding meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Harris Cooper, Ingram Olkin, John E. Hunter, Jacob Cohen, Thomas C. Chalmers, and Frank L. Schmidt.

Advantages of meta-analysis

Advantages of meta-analysis (e.g. over classical literature reviews or simple overall means of effect sizes) include:

  • Derivation and statistical testing of overall factors / effect size parameters in related studies
  • Generalization to the population of studies
  • Ability to control for between-study variation
  • Inclusion of moderators to explain variation
  • Higher statistical power to detect an effect than any single study

Steps in a meta-analysis

1. Search of literature

2. Selection of studies (‘incorporation criteria’)

  • Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical trial
  • Selection of specific studies on a well-specified subject, e.g. the treatment of breast cancer.
  • Decide whether unpublished studies are included to avoid publication bias (file drawer problem: see below)

3. Decide which dependent variables or summary measures are allowed. For instance:

  • Differences (discrete data)
  • Means (continuous data)
  • Hedges' g is a popular summary measure for continuous data that is standardized in order to eliminate scale differences between studies, and it incorporates an index of variation between groups (a minimal computation is sketched below):
\delta=\frac{\mu_t-\mu_c}{\sigma}, in which \mu_t is the treatment mean, \mu_c is the control mean, and \sigma^2 the pooled variance.
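As a rough illustration, the sketch below computes this standardized mean difference from two-group summary statistics and applies Hedges' small-sample correction. The function name and all numbers are hypothetical.

import numpy as np

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference with Hedges' small-sample correction.

    mean_t/sd_t/n_t: treatment-group mean, standard deviation, sample size.
    mean_c/sd_c/n_c: control-group equivalents.
    """
    # Pooled standard deviation (the sigma in the formula above)
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    d = (mean_t - mean_c) / np.sqrt(pooled_var)   # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)             # small-sample correction factor
    return j * d                                   # Hedges' g

# Example: a hypothetical trial with 25 participants per arm
print(hedges_g(mean_t=5.2, sd_t=1.1, n_t=25, mean_c=4.6, sd_c=1.3, n_c=25))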

4. Model selection (see the next section)

For reporting guidelines, see the QUOROM statement.[5][6]

Meta-regression models

Generally, three types of models can be distinguished in the literature on meta-analysis: simple regression, fixed effects meta-regression and random effects meta-regression.


Simple regression

The model can be specified as

y_j=\beta_0+\beta_1 x_{1j}+\beta_2 x_{2j}+\cdots+\varepsilon_j

where y_j is the effect size in study j and \beta_0 (the intercept) is the estimated overall effect size. The x_{ij} (i=1\ldots k) are covariates coding different study characteristics, and \varepsilon_j specifies the between-study variation. Note that this model does not allow specification of within-study variation.
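As a rough illustration of this simple-regression model, the sketch below fits observed effect sizes on a single study-level covariate by ordinary least squares, so every study is weighted equally and only between-study variation enters the residual. The effect sizes, covariate values and variable names are hypothetical.

import numpy as np

# Hypothetical per-study data: observed effect sizes y_j and one
# study-level covariate x_1j (e.g. mean participant age).
y = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
x1 = np.array([21.0, 35.0, 19.0, 44.0, 28.0])

# Design matrix with an intercept column (beta_0) and the covariate (beta_1)
X = np.column_stack([np.ones_like(x1), x1])

# Ordinary least squares: no within-study variances are used
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept (overall effect):", beta[0], "slope:", beta[1])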

Fixed-effects meta-regression

Fixed-effects meta-regression assumes that the observed effect size in each study is normally distributed around the true effect size \theta, i.e. y_j \sim \mathcal{N}(\theta,\sigma_j^2), where \sigma_j^2 is the within-study variance of the effect size. A fixed-effects meta-regression model thus allows for within-study variability but no between-study variability, because all studies have the same expected (fixed) effect size \theta, i.e. \varepsilon=0.

y_j=\beta_0+\beta_1 x_{1j}+\beta_2 x_{2j}+\ldots+\eta_j

where \sigma^2_{\eta_j} is the within-study variance of the effect size in study j. Fixed-effects meta-regression ignores between-study variation; as a result, parameter estimates are biased if between-study variation cannot be ignored. Furthermore, generalizations to the population of studies are not possible.
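A minimal sketch of fixed-effects meta-regression, assuming the within-study variances v_j are known: each study is weighted by 1/v_j in a weighted least-squares fit. All study values here are hypothetical.

import numpy as np

# Observed effect sizes y_j, one covariate x_1j, and within-study
# variances v_j (all hypothetical numbers).
y = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
x1 = np.array([21.0, 35.0, 19.0, 44.0, 28.0])
v = np.array([0.020, 0.015, 0.040, 0.010, 0.030])

X = np.column_stack([np.ones_like(x1), x1])
W = np.diag(1.0 / v)  # weight each study by the inverse of its within-study variance

# Weighted least squares via the normal equations: (X'WX) beta = X'Wy
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("fixed-effects meta-regression coefficients:", beta)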

Random effect meta-regression

Random-effects meta-regression rests on the assumption that the true effect size \theta_j in \mathcal{N}(\theta_j,\sigma_j^2) is itself a random variable following a (hyper-)distribution \mathcal{N}(\theta,\sigma_\theta^2).

y_j=\beta_0+\beta_1 x_{1j}+\beta_2 x_{2j}+\cdots+\eta_j+\varepsilon_j

where again \sigma^2_{\varepsilon_j} is the within-study variance of the effect size in study j and \eta_j is the between-study random effect. The between-study variance \sigma^2_\eta is estimated using common estimation procedures for random-effects models, such as restricted maximum likelihood (REML) estimators.
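A minimal sketch of the random-effects case. For brevity it uses the DerSimonian-Laird moment estimator of the between-study variance rather than the REML estimators mentioned above, and it fits an intercept-only model (no covariates). All study values are hypothetical.

import numpy as np

# Hypothetical effect sizes and within-study variances
y = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
v = np.array([0.020, 0.015, 0.040, 0.010, 0.030])

# DerSimonian-Laird moment estimator of the between-study variance
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fixed) ** 2)           # heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)      # between-study variance estimate

# Random-effects pooling: weights include both variance components
w_star = 1.0 / (v + tau2)
y_random = np.sum(w_star * y) / np.sum(w_star)
print("tau^2 =", tau2, "random-effects estimate =", y_random)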

Applications in modern science

Modern meta-analysis does more than just combine the effect sizes of a set of studies. It can test if the studies' outcomes show more variation than the variation that is expected because of sampling different research participants. If that is the case, study characteristics such as measurement instrument used, population sampled, or aspects of the studies' design are coded. These characteristics are then used as predictor variables to analyze the excess variation in the effect sizes. Some methodological weaknesses in studies can be corrected statistically. For example, it is possible to correct effect sizes or correlations for the downward bias due to measurement error or restriction on score ranges.
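As one example of such a statistical correction, the sketch below applies the classical correction for attenuation, dividing an observed correlation by the square root of the product of the two measures' reliabilities. The function name and all numbers are hypothetical.

import math

def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement error (attenuation),
    given the reliabilities of the two measures."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Hypothetical: observed r = .30, reliabilities .80 and .70
print(disattenuate(0.30, 0.80, 0.70))   # roughly 0.40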

Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasizes the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed Meta-analytic thinking. The results of a meta-analysis are often shown in a forest plot.

Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed the 'inverse variance method'. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study's effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel-Haenszel method[7] and the Peto method.
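A minimal sketch of the inverse variance method in its fixed-effect form: each study's estimate is weighted by the reciprocal of its variance, and the weighted mean and its standard error are reported. The effect estimates and variances are hypothetical.

import numpy as np

# Hypothetical study effect estimates and their variances
effects = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
variances = np.array([0.020, 0.015, 0.040, 0.010, 0.030])

weights = 1.0 / variances                             # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)  # weighted mean effect
se = np.sqrt(1.0 / np.sum(weights))                   # standard error of the pooled effect
print(f"pooled effect = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f}]")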

A recent approach to studying the influence that weighting schemes can have on results has been proposed through the construct of gravity, which is a special case of combinatorial meta-analysis.

Signed differential mapping is a statistical technique for meta-analyzing studies on differences in brain activity or structure which used neuroimaging techniques such as fMRI, VBM or PET.

Weaknesses

A weakness of the method is that sources of bias are not controlled by the method. A good meta-analysis of badly designed studies will still result in bad statistics. Robert Slavin has argued that only methodologically sound studies should be included in a meta-analysis, a practice he calls 'best evidence meta-analysis'. Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size.

Another weakness of the method is the heavy reliance on published studies, which may inflate the estimated effect, as it is very hard to publish studies that show no significant results. This publication bias or "file-drawer effect" (where non-significant studies end up in the desk drawer instead of in the public domain) should be seriously considered when interpreting the outcomes of a meta-analysis. Because of the risk of publication bias, many meta-analyses now include a "fail-safe N" statistic that calculates the number of studies with null results that would need to be added to the meta-analysis in order for an effect to no longer be reliable.
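One common variant of this statistic is Rosenthal's fail-safe N, sketched below under the assumption that each included study contributes a z statistic and that a one-tailed alpha of .05 is used; the z values are hypothetical.

import numpy as np

def failsafe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe N: roughly how many unpublished null studies
    would be needed to bring the combined result down to non-significance.

    z_values: one z statistic per included study (hypothetical here).
    z_alpha: critical z for the chosen one-tailed alpha (1.645 for .05).
    """
    z = np.asarray(z_values, dtype=float)
    k = len(z)
    return (z.sum() ** 2) / (z_alpha ** 2) - k

print(failsafe_n([2.1, 1.8, 2.5, 1.2, 2.9]))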

Other weaknesses are:

  • Simpson's paradox: two smaller studies may point in one direction while the combined study points in the opposite direction.
  • The coding of an effect is subjective.
  • The decision to include or reject a particular study is subjective.
  • There are two different ways to measure effect: correlation or standardized mean difference.
  • The interpretation of effect size is purely arbitrary.
  • It has not been determined whether the statistically most accurate method for combining results is the fixed-effects model or the random-effects model.
  • For medicine, the underlying risk in each studied group is of significant importance, and there is no universally agreed-upon way to weight that risk.

The Rind et al. controversy illustrates an application of meta-analysis in which many components of the analysis were subsequently criticized.

File drawer problem

The file drawer problem describes the often observed fact that only results with significant parameters are published in academic journals. As a result, the distribution of effect sizes is biased, skewed or completely cut off. This can be visualized with a funnel plot, a scatter plot of sample size and effect size (see the sketch below). There are several procedures available to correct for the file drawer problem once it has been identified, such as simulating the cut-off part of the distribution of study effects.
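A minimal sketch of a funnel plot as described above, plotting effect size against sample size with matplotlib. The study values are hypothetical; in practice funnel plots often use the standard error or precision on the vertical axis instead of the raw sample size.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-study effect sizes and sample sizes
effects = np.array([0.62, 0.55, 0.48, 0.40, 0.35, 0.30, 0.28, 0.25])
sample_sizes = np.array([20, 25, 35, 50, 80, 120, 200, 400])

plt.scatter(effects, sample_sizes)
plt.axvline(np.average(effects, weights=sample_sizes), linestyle="--",
            label="weighted mean effect")
plt.xlabel("effect size")
plt.ylabel("sample size")
plt.title("Funnel plot")
plt.legend()
plt.show()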

References

  1. ^ O'Rourke, Keith (2007-12-01). "An historical perspective on meta-analysis: dealing quantitatively with varying study results". J R Soc Med 100 (12): 579-582. doi:10.1258/jrsm.100.12.579. http://jrsm.rsmjournals.com. Retrieved 2009-09-10.  
  2. ^ Egger, M; G D Smith (1997-11-22). "Meta-Analysis. Potentials and promise". BMJ (Clinical Research Ed.) 315 (7119): 1371-1374. ISSN 0959-8138. http://www.bmj.com/archive/7119/7119ed.htm. Retrieved 2009-09-10.  
  3. ^ Bösch, H. (2004). Reanalyzing a meta-analysis on extra-sensory perception dating from 1940, the first comprehensive meta-analysis in the history of science. In S. Schmidt (Ed.), Proceedings of the 47th Annual Convention of the Parapsychological Association, University of Vienna, (pp. 1-13)
  4. ^ meta-analysis. Oxford English Dictionary. Oxford University Press. Draft Entry June 2008. Accessed 28 March 2009. "1976 G. V. Glass in Educ. Res. Nov. 3/2 My major interest currently is in what we have come to call..the meta-analysis of research. The term is a bit grand, but it is precise and apt... Meta-analysis refers to the analysis of analyses."
  5. ^ http://www.consort-statement.org/resources/related-guidelines-and-initiatives/
  6. ^ http://www.consort-statement.org/index.aspx?o=1346
  7. ^ Mantel, N.; Haenszel, W. (1959). "Statistical aspects of the analysis of data from retrospective studies of disease". Journal of the National Cancer Institute 22: 719–748. PMID 13655060.
  • Cooper, H. & Hedges, L.V. (1994). The Handbook of Research Synthesis. New York: Russell Sage.
  • Cornell, J. E. & Mulrow, C. D. (1999). Meta-analysis. In: H. J. Adèr & G. J. Mellenbergh (Eds). Research Methodology in the social, behavioral and life sciences (pp. 285–323). London: Sage.
  • Normand, S.-L. T. (1999). Tutorial in Biostatistics. Meta-Analysis: Formulating, Evaluating, Combining, and Reporting. Statistics in Medicine, 18, 321–359.
  • Sutton, A.J., Jones, D.R., Abrams, K.R., Sheldon, T.A., & Song, F. (2000). Methods for Meta-analysis in Medical Research. London: John Wiley. ISBN 0-471-49066-0
  • Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.1 [updated September 2008]. The Cochrane Collaboration, 2008. Available from www.cochrane-handbook.org

Further reading

  • Owen, A.B. (2009). Karl Pearson’s meta-analysis revisited. Annals of Statistics, 37 (6B), 3867–3892.

Study guide


From Wikiversity

Subject classification: this is a statistics resource.
Completion status: this resource is ~25% complete.


Meta-analysis is a systematic technique for reviewing, analysing, and summarising quantitative research studies on specific topics or questions. The purpose of this page is to gather information and resources about how to conduct a meta-analysis. The target audience therefore includes, for example, post-graduate students conducting a meta-analysis or beginning researchers interested in conducting one. This page could also be useful for students taking research methods coursework that includes a section on understanding the use and application of meta-analysis.

Introduction

  1. A meta-analysis can be thought of as a "study of studies".
  2. Use of the technique has flourished, particularly in the social, health, and medical sciences, since it was developed in the 1970s in response to controversy over traditional, subjective literature review methods (specifically, at the time, those used to review psychotherapy outcome studies).

Lecture slides

  1. Practical meta-analysis (Lecture slides; Wilson, 1999)
  2. How to do meta-analysis (Lecture slides; Basu, 2005)

How to do a meta-analysis

  1. Meta-analysis involves analysing the summary data from many studies. It can be performed by hand, using a spreadsheet and formulae, using scripts, syntax or macros with generic statistics software packages, or by using dedicated meta-analysis software packages.
  2. Before starting, identify a clear question(s), e.g., "What are the outcomes of psychotherapy?"
    1. Questions can also involve the effect of independent variables, e.g., "Are the outcomes of psychotherapy similar for males and females?"
    2. Read other related meta-analyses to get a feel for the kinds of questions asked.
    3. Make sure that any independent variables (IVs) and dependent variables (DVs) are very clearly defined.
    4. Because of the importance of establishing a well-defined question and variables, developing a peer-reviewed proposal for a meta-analytic study is strongly recommended.
  3. Establish clear criteria for selection of studies, e.g., does it need to be published in a peer-reviewed journal, or will you also accept theses and non-peer reviewed papers (e.g., conference papers)?
  4. Conduct an exhaustive and systematic literature search, recording your steps along the way (important for the Method section, which must allow replication).
  5. Create a "coding sheet": the list of fields (variables) you want to extract from each study and how each variable is to be coded. Get this peer-reviewed, otherwise you will limit the potential and quality of your analyses.
  6. Enter the data, one study per row, but note that there may be multiple outcomes and/or groups of interest for each study, in which case each of these receives its own row in the database, with a column to code which type of outcome was measured (see the sketch after this list).
  7. Analyse the data using spreadsheet formulae, or by writing syntax commands for a generic statistics package, or by using a dedicated meta-analysis software package (with in-built meta-analysis tools).
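A minimal sketch of such a coding sheet as a pandas data frame, with one row per study outcome. The study names, fields, and values are hypothetical placeholders.

import pandas as pd

# One row per study outcome; columns mirror the coding-sheet fields.
coding_sheet = pd.DataFrame([
    {"study": "Smith 2001", "outcome": "anxiety",    "group": "adults", "n": 40, "effect_size": 0.35},
    {"study": "Smith 2001", "outcome": "depression", "group": "adults", "n": 40, "effect_size": 0.22},
    {"study": "Lee 2004",   "outcome": "anxiety",    "group": "youth",  "n": 65, "effect_size": 0.48},
])

# Simple moderator breakdown: mean effect size per outcome type
print(coding_sheet.groupby("outcome")["effect_size"].mean())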

Effect sizes

  1. Central to understanding meta-analysis is an understanding of effect sizes.
  2. The chief value of effect sizes in the context of meta-analysis is that they provide a way to standardise effects across studies using different measures, allowing for common analysis.
  3. There are many possible effect sizes, but essentially two types are commonly reported in meta-analysis (converting between them is sketched after this list):
    1. Correlational: e.g., r (product-moment correlation)
    2. Mean differences: e.g., Cohen's d, Hedges' g, etc.
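A minimal sketch of converting between the two families, using the standard formulas d = 2r / sqrt(1 - r^2) and r = d / sqrt(d^2 + 4); the latter assumes roughly equal group sizes, and the example values are hypothetical.

import math

def r_to_d(r):
    """Convert a product-moment correlation to a standardized mean difference."""
    return 2 * r / math.sqrt(1 - r ** 2)

def d_to_r(d):
    """Convert a standardized mean difference back to a correlation
    (assumes roughly equal group sizes)."""
    return d / math.sqrt(d ** 2 + 4)

print(r_to_d(0.30))   # about 0.63
print(d_to_r(0.63))   # about 0.30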

Limitations

  1. An important limitation of meta-analysis is that its results can only be as good as the validity of the original data.
  2. Meta-analysis can only analyse the role of independent variables in explaining variance in dependent variables if sufficient data is provided in the original studies.
  3. "Apples and oranges" effect - i.e., there is a risk/tendency in meta-analysis to average/mash together disparate effects.
  4. Can lack in qualitative insight (e.g., as may be more likely to be contributed by an expert conducting a traditional literature review).

Example meta-analytic studies

  1. Hattie, J., Biggs, J., Purdie, N. (1996). Effects of learning skills interventions on student learning: A meta-analysis. Review of Educational Research, 66, 99-136.
  2. Hattie, J., Marsh, H. W., Neill, J. T., & Richards, G. E. (1997). Adventure education and Outward Bound: Out-of-class experiences that make a lasting difference. Review of Educational Research, 67, 43-87.
  3. Purdie, N., Hattie, J., Carroll, A. (2002). A review of the research on interventions for attention deficit hyperactivity disorder: What works best? Review of Educational Research, 77, 61-99.

Software

Comparison table

Some dedicated meta-analysis software includes:

  • CMA (http://www.meta-analysis.com): proprietary; cost ~1000; trial/demo available; version 2.
  • RevMan (http://www.cc-ims.net/RevMan): license unknown; free for non-commercial use; trial/demo available; version 5. For organising reviews; for MA, see [1].
  • Metawin (http://www.metawinsoft.com): proprietary; cost 150.
  • MIX (http://www.mix-for-meta-analysis.info): license unknown; cost 0.

Alternative software

Non-dedicated, generic statistics packages that can be used for conducting meta-analysis include:

Other comparisons/lists

  1. w:Meta-analysis#Software
  2. http://www.um.es/facpsi/metaanalysis/software.php
  3. http://www.lehanathabane.com/personal/metalinks.htm
  4. http://www.med.umich.edu/csp/Course%20materials/Fall%202005/Rogers_Meta%20Analysis%20software%20packages.pdf

References

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
