In computer science, lattice problems are a class of optimization problems on lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems. For applications in such cryptosystems, lattices over vector spaces (often ℝ^{n}) or free modules (often ℤ^{n}) are generally considered.
For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is L^{2}. However, other norms (such as L^{p}) are also considered and show up in a variety of results^{[1]}. Let λ(L) denote the length of the shortest non-zero vector in the lattice L, that is, λ(L) = min{N(v) : v ∈ L, v ≠ 0}.
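In very small dimension, λ(L) can be found by directly enumerating integer combinations of the basis vectors. The following is a minimal illustrative sketch (not an efficient algorithm); the coefficient bound is an assumption that must be large enough to contain a shortest vector.

```python
import itertools
import math

def shortest_vector_bruteforce(basis, bound=5):
    """Brute-force search for the shortest non-zero lattice vector.

    Enumerates integer combinations sum(c_i * b_i) with |c_i| <= bound
    and returns the shortest one under the Euclidean (L^2) norm. Only
    feasible in tiny dimension; 'bound' is assumed large enough to
    contain a shortest vector, which is not guaranteed in general.
    """
    n = len(basis)
    best, best_len = None, math.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # lambda(L) is a minimum over NON-zero vectors
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(n)]
        length = math.sqrt(sum(x * x for x in v))
        if length < best_len:
            best, best_len = v, length
    return best, best_len

# Example: the lattice spanned by (1, 0) and (0.5, 2); here lambda(L) = 1.
v, lam = shortest_vector_bruteforce([[1.0, 0.0], [0.5, 2.0]])
```

The exponential blow-up of this enumeration with the dimension n is exactly what the hardness results below make precise.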
In the shortest vector problem (SVP), a basis of a vector space V and a norm N (often L^{2}) are given for a lattice L, and one must find the shortest non-zero vector in L, as measured by N. In other words, the algorithm should output a non-zero vector v such that N(v) = λ(L).
In the γ-approximation version SVP_{γ}, one must find a non-zero lattice vector of length at most γ·λ(L).
The exact version of the problem is NP-hard.^{[2]} Several approach techniques exist: Lenstra–Lenstra–Lovász (LLL) basis reduction produces a "relatively short vector" in polynomial time, but does not solve the problem exactly. Kannan's HKZ basis reduction algorithm solves the problem exactly in time 2^{O(n log n)}, where n is the dimension. Lastly, Schnorr presented a technique that interpolates between LLL and HKZ, called block reduction. Block reduction applies HKZ reduction to blocks of the basis, and if the block size is chosen to be as large as the dimension, the resulting algorithm becomes Kannan's full HKZ basis reduction.
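The LLL step mentioned above can be sketched compactly. The following is a simplified teaching version (floating-point arithmetic, repeated Gram–Schmidt recomputation), not a production implementation; with δ = 3/4 its first output vector is provably within a factor 2^{(n−1)/2} of λ(L).

```python
def lll_reduce(basis, delta=0.75):
    """Sketch of Lenstra-Lenstra-Lovasz (LLL) basis reduction.

    Alternates size-reduction (subtracting integer multiples of earlier
    basis vectors) with swaps triggered by the Lovasz condition.
    """
    b = [list(map(float, v)) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Orthogonalize b, returning b* and projection coefficients mu.
        bstar, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gram_schmidt()
        # Size-reduce b[k] against b[k-1], ..., b[0].
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt()
        # Lovasz condition: advance if b*[k] is long enough, else swap.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b

# A small 3-dimensional example; the reduced basis contains much shorter vectors.
reduced = lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

Note that LLL only guarantees an exponential approximation factor, which is why it "does not solve the problem" in the exact sense.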
The problem GapSVP_{β} consists of distinguishing between the instances of SVP in which the answer is at most 1 or larger than β, where β can be a fixed function of n, the number of basis vectors. Given a basis for the lattice, the algorithm must decide whether λ(L) ≤ 1 or λ(L) > β. Like other promise problems, the algorithm is allowed to err on all other cases.
Yet another version of the problem is GapSVP_{ζ,γ} for some functions ζ and γ. The input to the algorithm is a basis B and a number d. It is assured that all the vectors in the Gram–Schmidt orthogonalization of B are of length at least 1, that λ(L) ≤ ζ(n), and that 1 ≤ d ≤ ζ(n)/γ(n), where n is the dimension. The algorithm must accept if λ(L) ≤ d, and reject if λ(L) ≥ γ(n)·d. For large ζ (ζ(n) > 2^{n / 2}), the problem is equivalent to GapSVP_{γ}, because^{[3]} a preprocessing done using the LLL algorithm makes the second condition (and hence ζ) redundant.
In the closest vector problem (CVP), a basis of a vector space V and a metric M (often L^{2}) are given for a lattice L, as well as a vector v in V but not necessarily in L. It is desired to find the vector in L closest to v (as measured by M). In the γ-approximation version CVP_{γ}, one must find a lattice vector at distance at most γ times the distance of v from the lattice. It is a more general case of the shortest vector problem.
It is easy^{[4]} to show that given an oracle for CVP_{γ}, one can solve SVP_{γ} by making some queries to the oracle. The naive method of finding the shortest vector by calling the CVP_{γ} oracle to find the closest vector to 0 does not work, because 0 is itself a lattice vector and the algorithm could potentially output 0.
The reduction from SVP_{γ} to CVP_{γ} is as follows: Suppose that the input to the SVP_{γ} problem is the basis B = [b_1, b_2, …, b_n] for lattice L. Consider the basis B^{i} = [b_1, …, 2b_i, …, b_n] and let x_i be the vector returned by CVP_{γ}(B^{i}, b_i). The claim is that the shortest vector in the set {x_i − b_i} is a shortest vector in the given lattice.
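The doubling reduction above can be sketched directly. Here the CVP oracle is a toy brute-force stand-in (a hypothetical placeholder, only workable in tiny dimension); the key point is that b_i is not in the lattice of B^{i} (its coefficient on b_i would have to be odd), so x_i − b_i is guaranteed to be non-zero.

```python
import itertools
import math

def cvp_bruteforce(basis, target, bound=6):
    """Toy stand-in for a CVP oracle: brute-force the closest lattice
    vector to 'target'. Only feasible in tiny dimension."""
    n = len(basis)
    best, best_d = None, math.inf
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        v = [sum(c * b[j] for c, b in zip(coeffs, basis)) for j in range(n)]
        d = math.dist(v, target)
        if d < best_d:
            best, best_d = v, d
    return best

def svp_via_cvp(basis, cvp_oracle=cvp_bruteforce):
    """Reduce SVP to CVP: for each i, double b_i to get B^i, query the
    oracle for the closest vector x_i to b_i, and return the shortest
    of the non-zero lattice vectors x_i - b_i."""
    n = len(basis)
    candidates = []
    for i in range(n):
        doubled = [list(b) for b in basis]
        doubled[i] = [2 * x for x in basis[i]]  # B^i: replace b_i by 2*b_i
        x_i = cvp_oracle(doubled, basis[i])
        # b_i is not in L(B^i), so x_i - b_i is a non-zero vector of L.
        candidates.append([a - b for a, b in zip(x_i, basis[i])])
    return min(candidates, key=lambda v: sum(x * x for x in v))

# Same example lattice as before: the reduction recovers a vector of length 1.
v = svp_via_cvp([[1.0, 0.0], [0.5, 2.0]])
```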
Goldreich et al.^{[5]} showed that any hardness of SVP implies the same hardness for CVP. Using PCP tools, Arora et al.^{[6]} showed that CVP is hard to approximate within a factor of 2^{log^{1 − ε} n} unless NP is in quasi-polynomial time. Dinur et al.^{[7]} strengthened this by giving an NP-hardness result with ε = (log log n)^{−c} for c < 1 / 2.
The algorithms for CVP have been rediscovered and used in several fields, including the integer ambiguity resolution of carrier-phase GNSS (GPS).
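A simple approximate-CVP heuristic in this spirit is Babai's rounding: solve for the real coefficients of the target in the given basis and round them to integers. The sketch below hard-codes the 2×2 case via Cramer's rule to stay dependency-free; its quality depends on how orthogonal the basis is (hence one would LLL-reduce first), and practical GNSS resolvers use considerably more elaborate searches.

```python
def babai_rounding(basis, target):
    """Babai's rounding for approximate CVP in dimension 2.

    Solves x * b1 + y * b2 = target over the reals (Cramer's rule),
    then rounds x and y to the nearest integers.
    """
    (a, b), (c, d) = basis          # rows are the basis vectors b1, b2
    det = a * d - b * c             # assumed non-zero (independent basis)
    x = (target[0] * d - target[1] * c) / det
    y = (a * target[1] - b * target[0]) / det
    xr, yr = round(x), round(y)     # nearest-integer coefficients
    return [xr * a + yr * c, xr * b + yr * d]

# The lattice vector (0.5, 2) is closest to the target (0.9, 1.8).
v = babai_rounding([[1.0, 0.0], [0.5, 2.0]], [0.9, 1.8])
```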
This problem is similar to the GapSVP problem. For GapCVP_{β}, the input consists of a lattice basis and a vector v, and the algorithm must answer whether the distance of v from the lattice is at most 1 or larger than β.
The problem is trivially contained in NP for any approximation factor.
Schnorr^{[10]}, in 1987, showed that deterministic polynomial-time algorithms can solve the problem for β = 2^{O(n (log log n)^{2} / log n)}. Ajtai et al.^{[11]} showed that probabilistic algorithms can achieve a slightly better approximation factor of β = 2^{O(n log log n / log n)}.
In 1993, Banaszczyk^{[12]} showed that GapCVP_{n} is in NP ∩ coNP. In 2000, Goldreich and Goldwasser^{[13]} showed that β = √(n / log n) puts the problem in both NP and coAM. In 2005, Aharonov and Regev^{[14]} showed that for some constant c, the problem with β = c√(n / log n) is in NP ∩ coNP.
For lower bounds, Dinur et al.^{[15]} showed in 1998 that the problem is NP-hard for β = n^{o(1 / log log n)}.
In the shortest independent vectors problem (SIVP), given a lattice L of dimension n, the algorithm must output n linearly independent lattice vectors v_1, …, v_n so that max_i ‖v_i‖ ≤ max_i ‖b_i‖, where the right-hand side is minimized over all bases b_1, …, b_n of the lattice.
In the γ-approximate version SIVP_{γ}, given a lattice L with dimension n, one must find n linearly independent vectors of length max_i ‖v_i‖ ≤ γ·λ_n(L), where λ_n(L) denotes the n-th successive minimum of L.
In bounded distance decoding, the problem is similar to CVP. Given a vector such that its distance from the lattice is at most λ(L) / 2, the algorithm must output the closest lattice vector to it.
In the covering radius problem, given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice.
Many problems become easier if the input basis consists of short vectors. An algorithm that solves the shortest basis problem (SBP) must, given a lattice basis B, output an equivalent basis B′ such that the length of the longest vector in B′ is as short as possible.
The approximation version SBP_{γ} consists of finding a basis whose longest vector is at most γ times longer than the longest vector in the shortest basis.
Average-case hardness of problems forms a basis for proofs of security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst-case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, the worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers.
The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász (LLL) algorithm^{[16]} was an early efficient algorithm for this problem, which can output an almost reduced lattice basis in polynomial time. This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis. The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice. However, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai.^{[17]}
In his seminal papers,^{[17]}^{[18]} Ajtai showed that the SVP problem was NP-hard and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. Building on these results, Ajtai and Dwork^{[19]} created a public-key cryptosystem whose security could be proven using only the worst-case hardness of a certain version of SVP, thus making it the first^{[20]} result to have used worst-case hardness to create secure systems.