Talk summary

Grimm’s conjecture

This post is based on a Linfoot talk by Shanta Laishram.

Grimm’s conjecture. Let n+1, n+2,\ldots, n+k be consecutive composite numbers. Then there exist distinct primes P_i such that P_i | (n+i) for 1 \le i \le k.
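
For example, with n = 89 and k = 7, the consecutive composites 90, 91, 92, 93, 94, 95, 96 can be assigned the distinct primes 5, 7, 23, 31, 47, 19, 2 respectively.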

Amazingly, this simple conjecture was not published until 1969. It has been verified for n < 1.9 \times 10^{10}. The calculations were economised through an argument beginning with Hall’s marriage theorem.

Grimm’s function. Let g(n) be the largest integer k such that there exist distinct primes P_i with P_i | (n+i) for 1 \le i \le k.

Henceforth, let p_i denote the ith smallest prime. Then Grimm’s conjecture is equivalent to the statement that g(p_i) \ge p_{i+1}-p_i for all i: taking n = p_i, the numbers p_i+1, \ldots, p_{i+1}-1 are consecutive composites and p_{i+1} contributes itself as one further distinct prime, while conversely any run of consecutive composites sits inside such a run between consecutive primes. It is this relationship to the prime gap that makes Grimm’s conjecture not only interesting but significant. Indeed, assuming Grimm’s conjecture, upper bounds on Grimm’s function are upper bounds for the prime gap.
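
To make this concrete, here is a small Python sketch (my own toy illustration, not the method used in the large-scale verification mentioned above): it computes g(n) by searching for distinct prime representatives via bipartite matching, which is where Hall’s marriage theorem enters, and then checks g(p_i) \ge p_{i+1} - p_i for the primes below 100.

```python
"""Toy illustration: compute Grimm's function g(n) via bipartite matching.

For each k we ask whether n+1, ..., n+k admit distinct prime divisors
(a system of distinct representatives); by Hall's marriage theorem this is
exactly a matching on the "number -> its prime divisors" bipartite graph.
"""


def prime_divisors(m):
    """Return the distinct prime divisors of m by trial division."""
    ps, d = [], 2
    while d * d <= m:
        if m % d == 0:
            ps.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        ps.append(m)
    return ps


def has_distinct_representatives(numbers):
    """Can each number be assigned its own prime divisor?  Standard
    augmenting-path algorithm for maximum bipartite matching."""
    adj = {m: prime_divisors(m) for m in numbers}
    match = {}  # prime -> number currently using it

    def augment(m, seen):
        for p in adj[m]:
            if p in seen:
                continue
            seen.add(p)
            if p not in match or augment(match[p], seen):
                match[p] = m
                return True
        return False

    return all(augment(m, set()) for m in numbers)


def g(n):
    """Grimm's function: largest k such that n+1, ..., n+k have
    distinct prime representatives."""
    k = 0
    while has_distinct_representatives(range(n + 1, n + k + 2)):
        k += 1
    return k


if __name__ == "__main__":
    primes = [p for p in range(2, 110) if prime_divisors(p) == [p]]
    for p, q in zip(primes, primes[1:]):
        if p < 100:
            assert g(p) >= q - p, (p, q)
    print("g(p_i) >= p_{i+1} - p_i verified for all primes p_i < 100")
```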

Henceforth, we consider the prime gap p_{i+1}-p_i for large i. The best unconditional bound (for large i) is p_{i+1}-p_i \ll p_i^{0.525}, due to Baker, Harman and Pintz. The Riemann hypothesis implies the bound \ll \sqrt{p_i} \log p_i; this is due to Harald Cramér, who also conjectured the much stronger bound \ll (\log p_i)^2.

Incredibly, the current conditional result based on Grimm’s conjecture beats the RH one! This follows immediately from recent work of Laishram and Ram Murty based on smooth numbers and the Selberg sieve.

Theorem (Laishram and Ram Murty, 2012). There exists n_0 such that if n \ge n_0 then

g(n) < n^{0.46}.

Thus Grimm’s conjecture also implies Legendre’s conjecture asymptotically. I’d be interested to know if the n_0 can be made explicit in the above Theorem.
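
Concretely (this is just the standard deduction, sketched with nothing made explicit): let p_i \le m^2 < p_{i+1} be consecutive primes. Assuming Grimm’s conjecture and the Theorem,

p_{i+1} - p_i \le g(p_i) < p_i^{0.46} \le m^{0.92} < 2m + 1

once m is large enough that p_i \ge n_0, so p_{i+1} < m^2 + 2m + 1 = (m+1)^2, and the interval (m^2, (m+1)^2) contains the prime p_{i+1}.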

Introductory post

Introduction to diophantine equations in many variables

Suppose we want to study the integer zeros of a polynomial in s-1 variables. Equivalently, we can homogenise and consider a form in s variables. Let’s start with the simplest case, where F is an additive form with nonzero integer coefficients, and consider the equation

F(\mathbf{x}) = a_1 x_1^k + \ldots + a_s x_s^k = 0. \qquad \qquad (1)

If s is small, then the problem can be attacked arithmetically. We shall be concerned with the situation where there are many variables. How many solutions do we expect in a box of side length B? We begin with a naive heuristic. There are \asymp B^s possibilities for \mathbf{x} \in [-B,B]^s, and for these \mathbf{x} the values F(\mathbf{x}) lie in an interval of length \asymp B^k. If the values taken by F were ‘random’, then each would vanish with probability roughly B^{-k}, so (1) would have roughly B^{s-k} integer solutions \mathbf{x} \in [-B,B]^s, for large B.
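
As a crude numerical sanity check of this heuristic (with k = 2, s = 5 and coefficients 1, 1, 1, -1, -1 chosen by me purely for illustration), the following Python snippet counts the solutions of x_1^2 + x_2^2 + x_3^2 - x_4^2 - x_5^2 = 0 in [-B,B]^5 by matching the values of the two halves of the equation, and compares the count with B^{s-k} = B^3.

```python
"""Naive numerical check of the B^(s-k) heuristic for an additive equation.

Arbitrary illustrative choice: k = 2, s = 5, coefficients (1, 1, 1, -1, -1),
i.e. x1^2 + x2^2 + x3^2 - x4^2 - x5^2 = 0.  Solutions in [-B, B]^5 are
counted by a meet-in-the-middle trick: tally the values of x1^2 + x2^2 + x3^2
and of x4^2 + x5^2, then match equal values.
"""

from collections import Counter
from itertools import product


def count_solutions(B):
    rng = range(-B, B + 1)
    left = Counter(x1**2 + x2**2 + x3**2 for x1, x2, x3 in product(rng, repeat=3))
    right = Counter(x4**2 + x5**2 for x4, x5 in product(rng, repeat=2))
    # Total solutions: sum over common values v of (#left ways) * (#right ways).
    return sum(left[v] * right[v] for v in right)


if __name__ == "__main__":
    for B in (4, 8, 16, 32):
        N = count_solutions(B)
        print(f"B = {B:3d}   solutions = {N:10d}   N / B^3 = {N / B**3:.2f}")
```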

A lot of the time this is actually true in a precise sense. What can go wrong? If k is even and the coefficients a_1, \ldots, a_s all have the same sign, then (1) admits only the trivial solution, since it does not even have nontrivial real solutions. If

F(x,y,z) = x^5 + 3y^5 + 9z^5 \qquad \qquad (2)

then F cannot have nontrivial zeros, by infinite descent: reducing modulo 3, then 9, then 27 shows that any zero has x, y, z all divisible by 3, which rules out the possibility of a smallest nontrivial zero. See the introduction of this for more examples.
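
For what it’s worth, the descent step can be checked mechanically; this little Python snippet (just a quick brute-force check of mine) verifies that every solution of x^5 + 3y^5 + 9z^5 \equiv 0 modulo 27 already has x, y, z all divisible by 3, which is exactly what drives the descent.

```python
"""Brute-force check of the 3-adic obstruction for x^5 + 3y^5 + 9z^5 = 0.

We verify that any solution modulo 27 has x, y, z all divisible by 3; scaling
a hypothetical smallest nontrivial integer zero down by 3 then gives the
contradiction used in the infinite descent.
"""

from itertools import product

for x, y, z in product(range(27), repeat=3):
    if (x**5 + 3 * y**5 + 9 * z**5) % 27 == 0:
        assert x % 3 == 0 and y % 3 == 0 and z % 3 == 0, (x, y, z)

print("every solution mod 27 has 3 | x, 3 | y and 3 | z")
```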

The examples above illustrate failures of real solubility and 3-adic solubility, respectively. Given real and p-adic solubility for all primes p (collectively referred to as local solubility), we might expect (1) to admit a nontrivial solution. This philosophy is known as a Hasse principle, or local-global principle. It’s not always true, but if s is large then it holds under mild conditions.

In fact much more can be said. In a future post, I’ll explain how the circle method can provide an asymptotic formula for the number of integer solutions \mathbf{x} \in [-B,B]^s to (1). This asymptotic formula is a constant times B^{s-k}, where the constant is a product of the local densities of solutions to (1). This is often called a quantitative Hasse principle.
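
Schematically (and glossing over all convergence and normalisation issues, which the promised future post would address), the expected shape is

\#\{\mathbf{x} \in \mathbb{Z}^s \cap [-B,B]^s : F(\mathbf{x}) = 0\} \sim \sigma_\infty \prod_p \sigma_p \, B^{s-k},

where \sigma_p = \lim_{m \to \infty} p^{-m(s-1)} \#\{\mathbf{x} \bmod p^m : F(\mathbf{x}) \equiv 0 \bmod p^m\} is the p-adic density of solutions and \sigma_\infty is the analogous real density.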
