Ocean reanalysis is an objective method of reconstructing historical changes in the state of the ocean by combining historical ocean observations with a general ocean circulation model, driven by historical estimates of surface winds, heat, and freshwater fluxes, through a data assimilation algorithm. Historical observations are sparse and by themselves insufficient for understanding the history of the ocean and its circulation. By applying data assimilation techniques in combination with computational models of the global ocean, researchers can interpolate the historical observations to all points in the ocean using the knowledge of physics embedded in those models. This process has an analog in the construction of atmospheric reanalyses and is closely related to ocean state estimation.
A number of efforts have been initiated in recent years to apply data assimilation to estimate the physical state of the ocean (temperature, salinity, currents, sea level). Here, however, we focus on the more limited number of reanalysis efforts spanning multiple decades, nine of which are described in [1]. Among these nine there are three alternative state estimation approaches. The first approach is used by the 'no-model' analyses, in which temperature or salinity observations update a first guess provided by climatological monthly estimates. The second approach is that of the sequential data assimilation analyses, which march forward in time from a previous analysis using a numerical simulation of the evolving temperature and other variables produced by an ocean general circulation model. The simulation provides the first guess of the state of the ocean (temperature, salinity, etc.) at the next analysis time, and corrections are made to this first guess based on observations of variables such as temperature, salinity, or sea level. The third approach is 4DVar, which in the implementation described here uses the initial conditions and surface forcing as control variables, modifying them to be consistent with both the observations and a numerical representation of the equations of motion through iterative solution of a giant optimization problem.
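The sequential forecast–analysis cycle described above can be sketched as follows. This is a minimal illustration, not any operational system: the toy relaxation model, the climatological value, the gain, and the observation values are all invented for the example.

```python
# Minimal sketch of a sequential assimilation cycle (illustrative only):
# a toy "model" provides the first guess at each analysis time, and the
# analysis nudges that guess toward an observation before the next forecast.

def forecast(state, relaxation=0.9, climatology=15.0):
    """Toy model step: relax temperature toward a climatological value."""
    return climatology + relaxation * (state - climatology)

def analysis(first_guess, observation, gain=0.5):
    """Correct the first guess using the observation-minus-guess innovation."""
    innovation = observation - first_guess
    return first_guess + gain * innovation

state = 18.0                       # initial temperature estimate (deg C)
observations = [17.2, 16.8, 16.5]  # hypothetical in situ observations

for obs in observations:
    state = forecast(state)        # the model supplies the first guess
    state = analysis(state, obs)   # the observation corrects it
```

Each pass through the loop is one analysis cycle: the corrected state, not the raw model state, becomes the starting point for the next forecast.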
ISHII and LEVITUS begin with a first guess of the climatological monthly upper-ocean temperature based on climatologies produced by the NOAA National Oceanographic Data Center. The innovations are mapped onto the analysis levels. ISHII uses an alternative 3DVar approach to do an objective mapping with a smaller decorrelation scale in midlatitudes (300 km) that elongates in the zonal direction by a factor of 3 at equatorial latitudes. LEVITUS begins similarly to ISHII, but uses the technique of Cressman and Barnes with a homogeneous scale of 555 km to objectively map the temperature innovation onto a uniform grid.
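The Cressman-style weighting behind such an objective mapping can be sketched in one dimension. This single-pass weighted average is a simplification (operational Cressman schemes typically iterate with decreasing radii, over 2-D distances); the observation positions and innovation values are invented, with only the 555 km influence scale taken from the text.

```python
def cressman_weight(distance_km, radius_km=555.0):
    """Classic Cressman weight: positive inside the influence radius, zero outside."""
    if distance_km >= radius_km:
        return 0.0
    r2, d2 = radius_km ** 2, distance_km ** 2
    return (r2 - d2) / (r2 + d2)

def map_innovation(grid_point_km, obs_positions_km, obs_innovations):
    """Weighted average of nearby observation innovations at one grid point."""
    weights = [cressman_weight(abs(grid_point_km - p)) for p in obs_positions_km]
    total = sum(weights)
    if total == 0.0:
        return 0.0  # no observations fall within the influence radius
    return sum(w * v for w, v in zip(weights, obs_innovations)) / total

# Hypothetical 1-D example: two temperature innovations (obs minus first guess),
# 100 km and 400 km from the grid point being analyzed.
innovation = map_innovation(0.0, [100.0, 400.0], [0.8, 0.2])
```

The nearer observation receives the larger weight, so the mapped innovation lies closer to 0.8 than to 0.2.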
The sequential approaches can be further divided into those using Optimal Interpolation (and its more sophisticated cousin, the Kalman Filter) and those using 3DVar. Among the nine reanalyses mentioned above, INGV and SODA use versions of Optimal Interpolation, while CERFACS, GODAS, and GFDL all use 3DVar. To date we are unaware of any attempt to use the Kalman Filter for multidecadal ocean reanalyses.
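The Optimal Interpolation update underlying these sequential schemes can be illustrated with a toy two-point, one-observation example; the background values, error covariances, and observation below are invented for illustration. With a single observation the gain K = B Hᵀ (H B Hᵀ + R)⁻¹ reduces to a scalar division.

```python
# Toy Optimal Interpolation update: x_a = x_b + K (y - H x_b)
x_b = [10.0, 12.0]                 # background (first guess) at two grid points
B = [[1.0, 0.6],
     [0.6, 1.0]]                   # background error covariance (assumed)
# The observation operator H picks out the first grid point only.
obs, r = 11.0, 0.25                # observation and its error variance (assumed)

innovation = obs - x_b[0]          # obs minus first guess at the observed point
denom = B[0][0] + r                # H B H^T + R reduces to a scalar here
K = [B[0][0] / denom, B[1][0] / denom]         # gain vector
x_a = [x_b[i] + K[i] * innovation for i in range(2)]
```

Because B carries a nonzero covariance between the two points, the single observation also corrects the unobserved second grid point; this spatial spreading of innovations is the essential role of the background error covariance.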
One innovative attempt, by GECCO, has been made to apply 4DVar to the decadal ocean estimation problem. This approach faces daunting computational challenges, but provides some interesting benefits, including the satisfaction of some conservation laws and the availability of the ocean model adjoint.
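The structure of such a 4DVar problem can be sketched with a scalar toy: minimize a cost measuring misfit to a background initial condition plus misfit between the model trajectory and observations. Everything here (the toy model, observations, error variances, and the finite-difference gradient descent) is an illustrative assumption; a real system controls surface forcing as well and evaluates the gradient with the model adjoint.

```python
def model(x0, nsteps):
    """Toy forward model standing in for the ocean GCM."""
    traj, x = [], x0
    for _ in range(nsteps):
        x = 0.95 * x + 1.0         # stand-in for one model time step
        traj.append(x)
    return traj

x_b = 5.0                          # background initial condition (control variable)
obs = [5.9, 6.5, 7.1]              # hypothetical observations at steps 1..3
b_var, r_var = 1.0, 0.1            # background / observation error variances (assumed)

def cost(x0):
    """4DVar cost: misfit to the background plus misfit to the observed trajectory."""
    traj = model(x0, len(obs))
    j_b = (x0 - x_b) ** 2 / b_var
    j_o = sum((m - y) ** 2 for m, y in zip(traj, obs)) / r_var
    return j_b + j_o

# Iterative minimization over the initial condition; a real 4DVar system
# computes this gradient efficiently with the adjoint model.
x0 = x_b
for _ in range(200):
    eps = 1e-6
    grad = (cost(x0 + eps) - cost(x0 - eps)) / (2 * eps)
    x0 -= 0.01 * grad
```

The optimized trajectory remains an exact solution of the model equations, which is why the resulting state estimate satisfies the model's conservation properties.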
