**About**

\(\newcommand{\eps}{\varepsilon}

\newcommand{\R}{\ensuremath{\mathbb{R}}}

\renewcommand{\vec}[1]{\ensuremath{\boldsymbol{#1}}}

\newcommand{\lat}{\mathcal{L}}

\newcommand{\ensuremath}[1]{#1}

\newcommand{\NP}{\mathsf{NP}}

\newcommand{\CVP}{\mathsf{CVP}}

\newcommand{\CVPP}{\mathsf{CVPP}}

\newcommand{\SVP}{\mathsf{SVP}}

\)

**Papers**

*Just Take the Average! An Embarrassingly Simple \(2^n\)-Time Algorithm for SVP (and CVP)*. (arXiv)

Divesh Aggarwal, NSD.

Submitted.

We show a \(2^{n+o(n)}\)-time (and space) algorithm for the Shortest Vector Problem on lattices (SVP) that works by repeatedly running an embarrassingly simple “pair and average” sieving-like procedure on a list of lattice vectors. This matches the running time (and space) of the current fastest known algorithm, due to Aggarwal, Dadush, Regev, and Stephens-Davidowitz (ADRS, in STOC, 2015), with a far simpler algorithm. Our algorithm is in fact a modification of the ADRS algorithm, with a certain careful rejection sampling step removed. The correctness of our algorithm follows from a more general “meta-theorem,” showing that such rejection sampling steps are unnecessary for a certain class of algorithms and use cases. In particular, this also applies to the related \(2^{n + o(n)}\)-time algorithm for the Closest Vector Problem (CVP), due to Aggarwal, Dadush, and Stephens-Davidowitz (ADS, in FOCS, 2015), yielding a similar embarrassingly simple algorithm for \(\gamma\)-approximate CVP for any \(\gamma = 1+2^{-o(n/\log n)}\). (We can also remove the rejection sampling procedure from the \(2^{n+o(n)}\)-time ADS algorithm for exact CVP, but the resulting algorithm is still quite complicated.)
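To illustrate the basic step (a toy sketch of my own, not the actual algorithm from the paper, which works with discrete Gaussian samples and careful parameter choices): if two lattice vectors lie in the same coset mod \(2\lat\), their average is again a lattice vector. In coefficient coordinates with respect to a fixed basis, that means pairing integer vectors whose coordinates agree mod 2:

```python
import random
from collections import defaultdict

def pair_and_average(coeff_vectors):
    """One 'pair and average' pass, sketched over integer coefficient
    vectors (coordinates with respect to some fixed lattice basis).

    If u = v (mod 2) coordinate-wise, then (u + v) / 2 has integer
    coordinates, so the midpoint is again a lattice vector."""
    buckets = defaultdict(list)
    for v in coeff_vectors:
        # group vectors by their parity class mod 2
        buckets[tuple(c % 2 for c in v)].append(v)
    out = []
    for group in buckets.values():
        random.shuffle(group)
        # pair vectors within a parity class and keep the midpoints;
        # an unpaired leftover vector is simply dropped
        for u, v in zip(group[::2], group[1::2]):
            out.append(tuple((a + b) // 2 for a, b in zip(u, v)))
    return out

# Toy run: midpoints of paired vectors are again integer (lattice) points.
vecs = [(2, 4), (0, 2), (1, 3), (3, 1)]
shorter = pair_and_average(vecs)
```

The division by two is exact precisely because of the parity bucketing, which is why the averaging step never leaves the lattice.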

*On the Quantitative Hardness of CVP*. (arXiv)

Huck Bennett, Alexander Golovnev, NSD.

FOCS, 2017.

For odd integers \(p \geq 1\) (and \(p = \infty\)), we show that the Closest Vector Problem in the \(\ell_p\) norm (\(\CVP_p\)) over rank \(n\) lattices cannot be solved in \(2^{(1-\eps) n}\) time for any constant \(\eps > 0\) unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to “almost all” values of \(p \geq 1\), not including the even integers.

This comes tantalizingly close to settling the quantitative time complexity of the important special case of \(\CVP_2\) (i.e., \(\CVP\) in the Euclidean norm), for which a \(2^{n +o(n)}\)-time algorithm is known. In particular, our result applies for any \( p = p(n) \neq 2\) that approaches \(2\) as \(n \to \infty\). We also show a similar SETH-hardness result for \(\SVP_\infty\); hardness of approximating \(\CVP_p\) to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for \(\CVP_p\) and \(\CVPP_p\) for any finite \(p \geq 1 \) under different assumptions.

*New (and Old) Proof Systems for Lattice Problems*.

Navid Alamati, Chris Peikert, NSD.

Submitted.

*A Reverse Minkowski Theorem*. (arXiv, TCS+ talk)

Oded Regev, NSD.

STOC, 2017.

Invited to the special issue of the SIAM Journal on Computing.

We prove a conjecture due to Dadush, showing that if \(\lat \subset \R^n\) is a lattice such that \(\det(\lat') \ge 1\) for all sublattices \(\lat' \subseteq \lat\), then \( \displaystyle \sum_{\vec y \in \lat} e^{-t^2 \|\vec y\|^2} \le 3/2 \; ,\) where \(t := 10(\log n + 2)\). From this we also derive bounds on the number of short lattice vectors and on the covering radius.
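As a quick numerical sanity check (my own illustration, not part of the paper): for the integer lattice, every sublattice has determinant at least \(1\), and the Gaussian sum factors across coordinates, so the theorem's bound can be evaluated directly.

```python
import math

def reverse_minkowski_sum(n):
    """Evaluate the sum over the integer lattice in dimension n of
    exp(-t^2 * ||y||^2), with t = 10(log n + 2) as in the theorem.

    For the integer lattice the sum factors as theta(t)^n, where
    theta(t) = sum over integers k of exp(-t^2 * k^2); truncating the
    tail at |k| <= 10 is far more than enough at this value of t."""
    t = 10 * (math.log(n) + 2)
    theta = sum(math.exp(-t**2 * k**2) for k in range(-10, 11))
    return theta ** n

# The theorem's bound of 3/2 holds very comfortably in this easy case:
# at this t, all nonzero terms are astronomically small.
value = reverse_minkowski_sum(100)
```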

*Pseudorandomness of Ring-LWE for Any Ring and Modulus*. (ePrint)

Chris Peikert, Oded Regev, NSD.

STOC, 2017.

We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to the *decision* version of (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with *any number field* and *any modulus*.

*Implementing BP-Obfuscation Using Graph-Induced Encoding*. (ePrint)

Shai Halevi, Tzipora Halevi, Victor Shoup, NSD.

Submitted.

*On the Lattice Distortion Problem*. (arXiv)

Huck Bennett, Daniel Dadush, NSD.

ESA, 2016.

We introduce and study the *Lattice Distortion Problem* (LDP). LDP asks how “similar” two lattices are, i.e., what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one.

As our first contribution, we show that the distortion between any two lattices is approximated up to an \(n^{O(\log n)}\) factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a \(2^{O(n (\log \log n)^2/\log n)}\) factor of optimal in polynomial time and within an \(n^{O(\log n)}\) factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.

*Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater Than One*. (arXiv)

NSD.

APPROX, 2016.

*Message Transmission with Reverse Firewalls—Secure Communication on Corrupted Machines*. (ePrint, CRYPTO talk)

Yevgeniy Dodis, Ilya Mironov, NSD.

CRYPTO, 2016.

*Discrete Gaussian Sampling Reduces to CVP and SVP*. (arXiv)

NSD.

SODA, 2016.

The discrete Gaussian \(D_{\lat - \vec{t}, s}\) is the distribution that assigns to each vector \(\vec{x}\) in a shifted lattice \(\lat - \vec{t}\) probability proportional to \(e^{-\pi \|\vec{x}\|^2/s^2}\). It has long been an important tool in the study of lattices. More recently, algorithms for discrete Gaussian sampling (DGS) have found many applications in computer science. In particular, polynomial-time algorithms for DGS with very high parameters \(s\) have found many uses in cryptography and in reductions between lattice problems. And, in the past year, Aggarwal, Dadush, Regev, and Stephens-Davidowitz showed \(2^{n+o(n)}\)-time algorithms for DGS with a much wider range of parameters and used them to obtain the current fastest known algorithms for the two most important lattice problems, the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP).

Motivated by its increasing importance, we investigate the complexity of DGS itself and its relationship to CVP and SVP. Our first result is a polynomial-time dimension-preserving reduction from DGS to CVP. There is a simple reduction from CVP to DGS, so this shows that DGS is equivalent to CVP. Our second result, which we find to be more surprising, is a polynomial-time dimension-preserving reduction from *centered* DGS (the important special case when \(\vec{t} = \vec0\)) to SVP. In the other direction, there is a simple reduction from \(\gamma\)-approximate SVP to centered DGS for any \(\gamma = \Omega(\sqrt{n/\log n})\), and we present some (relatively weak) evidence to suggest that this might be the best achievable approximation factor. We also show that our CVP result extends to a much wider class of distributions and even to other norms.
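In one dimension the distribution is easy to sample directly (a toy sketch of my own for the lattice of integers; the parameter and truncation window are arbitrary choices here, and the algorithms discussed above work in high dimension, where such enumeration is infeasible):

```python
import math
import random

def sample_shifted_dgs_1d(t, s, window=50):
    """Sample from the shifted discrete Gaussian over the integers:
    each point x = k - t (k an integer) gets probability proportional
    to exp(-pi * x^2 / s^2).

    Enumerates |k| <= window and samples by normalized weight; the
    truncated tail is negligible whenever window is much larger than s."""
    points = [k - t for k in range(-window, window + 1)]
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in points]
    return random.choices(points, weights=weights)[0]

# Draw from D_{Z - 0.5, 2}: every sample lies on the shifted lattice,
# and the empirical mean is near 0 since the shifted points are
# symmetric about the origin.
samples = [sample_shifted_dgs_1d(t=0.5, s=2.0) for _ in range(10000)]
```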

*Dimension-Preserving Reductions Between Lattice Problems*. (PDF)

NSD.

Brief survey.

Computational problems on lattices have found a remarkable number of applications in computer science. In particular, over the past twenty years, many strong cryptographic primitives have been constructed with their security based on the (worst-case) hardness of various lattice problems.

Due to their importance, there has been much work towards understanding the relationship between these problems. For the parameters that typically interest us, the fastest known algorithms for lattice problems run in time that is exponential in the dimension of the lattice. Therefore, we are typically interested in reductions that *preserve* this dimension. (We actually relax this notion slightly and consider a reduction to be “dimension-preserving” if it increases the dimension by at most an additive constant.) We summarize many of the known results in this area.

*Solving the Closest Vector Problem in \(2^n\) Time—The Discrete Gaussian Strikes Again!* (arXiv)

Divesh Aggarwal, Daniel Dadush, NSD.

FOCS, 2015.

We give a \(2^{n+o(n)}\)-time and space randomized algorithm for solving the *exact* Closest Vector Problem (CVP) on \(n\)-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic \(\widetilde{O}(4^{n})\)-time and \(\widetilde{O}(2^{n})\)-space algorithm of Micciancio and Voulgaris [MV13].

We achieve our main result in three steps. First, we show how to modify the sampling algorithm from ADRS15 to solve the problem of discrete Gaussian sampling over *lattice shifts*, \(\lat - \vec{t}\), with very low parameters. While the actual algorithm is a natural generalization of ADRS15, the analysis uses substantial new ideas. This yields a \(2^{n+o(n)}\)-time algorithm for approximate CVP with the very good approximation factor \(\gamma = 1+2^{-o(n/\log n)}\). Second, we show that the approximate closest vectors to a target vector \(\vec{t}\) can be grouped into “lower-dimensional clusters,” and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that “behaves well with these clusters.” Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP.

The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.

*An Inequality for Gaussians on Lattices*. (arXiv)

Oded Regev, NSD.

SIAM J. Discrete Mathematics (SIDMA), 31(2) 2017.

We show that for any lattice \(\lat \subseteq \R^n\) and vectors \(\vec{x}, \vec{y} \in \R^n\),

\[

\rho(\lat + \vec{x})^2 \rho(\lat + \vec{y})^2 \leq \rho(\lat)^2 \rho(\lat + \vec{x} + \vec{y}) \rho(\lat + \vec{x} - \vec{y})

\; ,

\] where \(\rho\) is the Gaussian mass function \(\rho(A) := \sum_{\vec{w} \in A} \exp(-\pi \|\vec{w}\|^2)\). We show a number of applications, including bounds on the moments of the discrete Gaussian distribution, various monotonicity properties of the heat kernel on flat tori, and a positive correlation inequality for Gaussian measures on lattices.
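The inequality is easy to check numerically on the lattice of integers (my own sanity check, not from the paper; the truncation window makes the neglected tail negligible):

```python
import math

def rho(shift, window=30):
    """Gaussian mass of the integer lattice shifted by `shift`: the sum
    over integers k of exp(-pi * (k + shift)^2), truncated to |k| <= window
    (the tail beyond is negligible at this scale)."""
    return sum(math.exp(-math.pi * (k + shift) ** 2)
               for k in range(-window, window + 1))

def check_inequality(x, y):
    """Evaluate both sides of
    rho(L+x)^2 rho(L+y)^2 <= rho(L)^2 rho(L+x+y) rho(L+x-y)
    for L the integer lattice."""
    lhs = rho(x) ** 2 * rho(y) ** 2
    rhs = rho(0) ** 2 * rho(x + y) * rho(x - y)
    return lhs, rhs

# The left-hand side is indeed at most the right-hand side.
lhs, rhs = check_inequality(0.3, 0.2)
```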

*Solving the Shortest Vector Problem in \(2^n\) Time via Discrete Gaussian Sampling*. (arXiv, Simons talk)

Divesh Aggarwal, Daniel Dadush, Oded Regev, NSD.

STOC, 2015.

We give a randomized \(2^{n+o(n)}\)-time and space algorithm for solving the Shortest Vector Problem (SVP) on \(n\)-dimensional Euclidean lattices. This improves on the previous fastest algorithm: the deterministic \(\widetilde{O}(4^n)\)-time and \(\widetilde{O}(2^n)\)-space algorithm of Micciancio and Voulgaris [MV13].

In fact, we give a conceptually simple algorithm that solves the (in our opinion, even more interesting) problem of discrete Gaussian sampling (DGS). More specifically, we show how to sample \(2^{n/2}\) vectors from the discrete Gaussian distribution at *any parameter* in \(2^{n+o(n)}\) time and space. (Prior work only solved DGS for very large parameters.) Our SVP result then follows from a natural reduction from SVP to DGS. We also show that our DGS algorithm implies a \(2^{n + o(n)}\)-time algorithm that approximates the Closest Vector Problem to within a factor of \(1.97\).

In addition, we give a more refined algorithm for DGS above the so-called *smoothing parameter* of the lattice, which can generate \(2^{n/2}\) discrete Gaussian samples in just \(2^{n/2+o(n)}\) time and space. Among other things, this implies a \(2^{n/2+o(n)}\)-time and space algorithm for \(1.93\)-approximate decision SVP.

*Cryptographic Reverse Firewalls*. (ePrint, slides)

Ilya Mironov, NSD.

Eurocrypt, 2015.

*How to Eat Your Entropy and Have It Too–Optimal Recovery Strategies for Compromised RNGs*. (ePrint, CRYPTO talk)

Yevgeniy Dodis, Adi Shamir, NSD, Daniel Wichs.

CRYPTO, 2014.

Invited to the special issue of Algorithmica.

*On the Closest Vector Problem with a Distance Guarantee*. (arXiv)

Daniel Dadush, Oded Regev, NSD.

CCC, 2014.

(Previous title: *On Bounded Distance Decoding and the Closest Vector Problem with Preprocessing*.)

*The Cyclic Sieving Phenomenon on the Alternating Sign Matrices*. (pdf)

NSD, Alex Cloninger.

Manuscript, 2007.

*On Link Patterns and Alternating Sign Matrices*. (pdf)

Fraser Chiu Kim Hong, Alex Cloninger, NSD.

Manuscript, 2007.

**Selected Talks Available Online**

- “A Reverse Minkowski Theorem.” (youtube)

Invited by TCS+, March 2017.

- “Message Transmission with Reverse Firewalls—Secure Communication on Corrupted Machines.” (youtube)

CRYPTO, 2016.

- “Cryptographic Reverse Firewalls.” (slides)

NYU Cryptography Reading Group, February 2016.

- “Solving SVP in 2^n Time Using Discrete Gaussian Sampling.” (youtube)

Invited by Simons Institute Cryptography Program, July 2015.

- “How to Eat Your Entropy and Have It Too–Optimal Recovery Strategies for Compromised RNGs.” (youtube)

CRYPTO, 2014.

- “The Halting Problem, Incompleteness, and the Limits of Mathematics.” (youtube)

cSplash (a lecture series for high school students), April 2014.

- “The FM-Index.” (youtube)

Invited by Seven Bridges Genomics, January 2014.

- “What Makes Poker Awesome (and Deep)?” (youtube)

Invited by NYU Game Center, March 2013.