
LLL Play Files


A lattice is the set (a discrete subgroup) of all integer combinations of some vectors \(\vec{v_1}, \ldots, \vec{v_k} \in \mathbb{R}^n\): \(L = \{ a_1 \cdot \vec{v_1} + \cdots + a_k \cdot \vec{v_k} \mid a_i \in \mathbb{Z} \} \subset \mathbb{R}^n\). When the vectors are linearly independent, we call the set \(\{\vec{v_1}, \ldots, \vec{v_k}\}\) a basis of \(L\).

A given lattice, \(L\), may have an infinite number of distinct bases. However, the volume of their fundamental domain is an invariant: the determinant of the lattice (which can be found by taking the absolute value of the determinant of any basis matrix for the lattice).
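For instance, multiplying a basis matrix by any unimodular matrix (integer entries, determinant \(\pm 1\)) yields another basis of the same lattice with the same determinant. A plain-Python sketch (the matrices here are made up for illustration, not part of the worksheet):

```python
def det2(M):
    """Determinant of a 2x2 integer matrix given as row lists."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def matmul2(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B1 = [[2, 1], [1, 3]]   # a "nice" basis; det = 5
U  = [[3, 2], [4, 3]]   # unimodular: det = 9 - 8 = 1
B2 = matmul2(U, B1)     # a "bad" basis of the same lattice

assert det2(U) == 1
assert abs(det2(B1)) == abs(det2(B2)) == 5   # lattice determinant unchanged
```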

M = identity_matrix(ZZ, 4)   # a starting basis (the best basis, BTW)
VS = MatrixSpace(ZZ, 4, 1)   # column vectors; this is effectively the lattice generated by M
v = VS.random_element()      # some element which is in our lattice
print(M)
print(v)
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
[-2]
[-6]
[ 4]
[-6]
U = random_matrix(ZZ, 4, 4, algorithm="unimodular")
print(U)   # the rows of U are also a basis of L
[  -7  -26 -158  172]
[  -4  -15  -91   99]
[  -1   -7  -38   39]
[   1    9   44  -40]
weights = U.inverse()*v
print(weights)
[-1378]
[-2516]
[  904]
[  394]
U*weights == v
True

Notice that with a "nice" basis for \(L\), the weights that produce \(v \in L\) are close in size to \(v\) itself. With a bad basis, our weights were very large for such a small vector.

A good question to ask about a lattice: What is the shortest vector?

Given only a generic basis, this is an NP-hard problem.

LLL (Lenstra–Lenstra–Lovász) is a polynomial-time algorithm which takes a basis for a lattice and returns a "reduced" basis. The reduced basis gives an approximate solution to the shortest vector problem (SVP), and then some.
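Sage's `M.LLL()` (used below) calls an optimized library, but the textbook algorithm fits in a page. A minimal exact-arithmetic sketch in plain Python, using the classic \(\delta = 3/4\) (the helper names `lll` and `gram_schmidt` are our own, and this is far slower than a real implementation):

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    """Return the Gram-Schmidt vectors B* and mu coefficients of the rows of B."""
    n = len(B)
    Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        bi = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            bi = [bi[t] - mu[i][j] * Bs[j][t] for t in range(len(bi))]
        Bs.append(bi)
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    """Textbook LLL reduction of the rows of B (lists of ints)."""
    B = [list(row) for row in B]
    n, k = len(B), 1
    Bs, mu = gram_schmidt(B)
    while k < n:
        for j in range(k - 1, -1, -1):       # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                           # Lovász condition holds: move on
        else:
            B[k - 1], B[k] = B[k], B[k - 1]  # swap and step back
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

print(lll([[1, 1], [0, 2]]))   # -> [[1, 1], [-1, 1]]
```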

# Let's make some small matrices that might have difficult-to-find small vectors.
M = matrix(ZZ, 3, 3)
M[0,0] = fibonacci(20)
M[0,1] = fibonacci(42)
#M[0,2] = fibonacci(53)
M[1,0] = fibonacci(21)
M[1,1] = fibonacci(40)
#M[1,2] = fibonacci(54)
M[2,0] = fibonacci(22)
M[2,1] = fibonacci(41)
M[2,2] = 1  #fibonacci(52)
print(M)
[     6765 267914296         0]
[    10946 102334155         0]
[    17711 165580141         1]
M.LLL()
[    -1  -6765   6765]
[     0  -4181 -10946]
[ 21892      0      1]
C = 100
MM = identity_matrix(3).augment(C*M)  # to see the amount of work I attach an identity and scale up the more important part
RES = MM.LLL()
RES.submatrix(0,3,3,3)/C  # to display I remove my multiplicative factor
RES.submatrix(0,0,3,3)    # and show the U which did the trick
[    -1  -6765   6765]
[    -1 -10946  -4181]
[ 21892      0      1]

[     0 -10946   6765]
[     0   6765  -4181]
[    -1      1      1]
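A quick plain-Python sanity check (values copied from the output above) that the recovered transformation U really maps the original rows of M to the reduced rows, i.e. U * M == R:

```python
# Original basis M, transformation U, and reduced basis R from the output above.
M = [[6765, 267914296, 0],
     [10946, 102334155, 0],
     [17711, 165580141, 1]]
U = [[0, -10946, 6765],
     [0, 6765, -4181],
     [-1, 1, 1]]
R = [[-1, -6765, 6765],
     [-1, -10946, -4181],
     [21892, 0, 1]]

# 3x3 integer matrix product U * M, row by row
prod = [[sum(U[i][k] * M[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == R
```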

Successive Minima of a lattice (nice intro reference):

\lambda_i(L) := \textrm{inf}\{r \mid \textrm{dim}(\textrm{span}(L \cap \bar{B}(0,r))) \geq i\}

As a picture:

Note that the successive minima are an invariant of the lattice. They represent the best theoretical basis lengths (although a basis achieving them might not exist as the dimension grows).
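In tiny dimension the successive minima can be brute-forced straight from the definition by enumerating integer combinations in a box. A plain-Python sketch (the skewed basis here is made up; in general, computing \(\lambda_i\) is as hard as SVP):

```python
from itertools import product
from math import sqrt

B = [[1, 0], [4, 1]]   # a skewed basis; this lattice is actually all of Z^2

# Enumerate nonzero lattice vectors a*b1 + b*b2 in a box and sort by length.
vecs = []
for a, b in product(range(-10, 11), repeat=2):
    if (a, b) == (0, 0):
        continue
    vecs.append((a * B[0][0] + b * B[1][0], a * B[0][1] + b * B[1][1]))
vecs.sort(key=lambda v: v[0]**2 + v[1]**2)

v1 = vecs[0]                       # shortest nonzero vector -> lambda_1
lam1 = sqrt(v1[0]**2 + v1[1]**2)
# lambda_2: shortest vector linearly independent of v1 (nonzero cross product)
v2 = next(v for v in vecs if v1[0]*v[1] - v1[1]*v[0] != 0)
lam2 = sqrt(v2[0]**2 + v2[1]**2)
print(lam1, lam2)   # -> 1.0 1.0 : the skewed basis hides two unit minima
```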

A reduced basis, \(\vec{b_1}, \ldots, \vec{b_d}\), will have lengths which approximate the successive minima. Depending on the algorithm there is an approximation factor, \(\alpha \approx \sqrt{4/3}\) for LLL, such that:

\parallel \vec{b_1} \parallel \leq \alpha^{(d-1)/2} \, | \textrm{det}\, L |^{1/d}

and

\frac{\parallel \vec{b_i^{*}} \parallel}{\alpha^{d-i}} \leq \lambda_i \leq \alpha^{i} \parallel \vec{b_i^{*}} \parallel

Note that \(\parallel \vec{b_i^{*}} \parallel\) is the length of the \(i^{\textrm{th}}\) Gram-Schmidt vector. So:

\parallel \vec{b_i^{*}} \parallel \leq \parallel \vec{b_i} \parallel \leq \alpha^i \parallel \vec{b_i^{*}} \parallel
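The first bound can be sanity-checked on the Fibonacci example above in plain Python (\(\alpha = \sqrt{4/3}\); R is the reduced basis printed earlier, and the lattice determinant is computed directly from it):

```python
from math import sqrt

R = [[-1, -6765, 6765],        # M.LLL() output from above
     [0, -4181, -10946],
     [21892, 0, 1]]

def det3(M):
    """Determinant of a 3x3 integer matrix (cofactor expansion)."""
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

d = 3
alpha = sqrt(4/3)
b1_len = sqrt(sum(x*x for x in R[0]))
bound = alpha**((d - 1)/2) * abs(det3(R))**(1/d)
assert b1_len <= bound   # roughly 9567 <= 15109
```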

A reduced basis has the property that the Gram-Schmidt lengths "cannot drop too fast", which is captured by:

\frac{\parallel \vec{b_i^{*}} \parallel}{\alpha} \leq \parallel \vec{b_{i+1}^{*}} \parallel
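Checking this property on the reduced Fibonacci basis from above, in exact arithmetic with plain Python (comparing squared norms, so \(\alpha^2 = 4/3\) and no square roots are needed):

```python
from fractions import Fraction

R = [[-1, -6765, 6765], [0, -4181, -10946], [21892, 0, 1]]

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

# Gram-Schmidt squared norms ||b*_i||^2 of the rows of R
Bs, norms2 = [], []
for row in R:
    b = [Fraction(x) for x in row]
    for bs in Bs:
        mu = dot(row, bs) / dot(bs, bs)
        b = [x - mu * y for x, y in zip(b, bs)]
    Bs.append(b)
    norms2.append(dot(b, b))

alpha2 = Fraction(4, 3)   # alpha^2
for i in range(len(norms2) - 1):
    assert norms2[i] / alpha2 <= norms2[i + 1]   # lengths cannot drop too fast
```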

As an LLL researcher, my particular flavor is to attack applications by changing the underlying lattice so that there is a big gap between consecutive \(\lambda_i\). That way an approximation will be enough.
