Prime numbers, beautiful and useful

Dear readers,

As my second interview had to be delayed due to the holidays, I am going to tell you instead about a topic that has always fascinated me with its simplicity and beauty, and at the same time its usefulness in real life. That topic is prime numbers. I will begin by describing some relevant theoretical results, then explain how they form the basis of modern online security, and finally discuss briefly how quantum computers – if they ever come into existence – would force us to redesign a large number of our security systems.

First, let me remind you what prime numbers are. They are the numbers greater than 1 that are divisible only by 1 and themselves. Their sequence begins with 2, 3, 5, 7, 11, 13 and never ends, as the Ancient Greeks already knew. Indeed, if we supposed there were only finitely many of them, we could multiply them all together and add 1; the result would be larger than any of them, yet divisible by none of them (because of the added 1), leading to a contradiction. This proof technique is called reductio ad absurdum. In fact, the great Dirichlet showed that there are infinitely many primes in any arithmetic progression (a sequence where you start with a number, called the first term, and keep adding another number, called the difference – for example, if the first term is 2 and the difference 3, you get 2, 5, 8, 11, etc), as long as the first term and the difference are positive integers with no common factors. In 2004, Ben Green and Terence Tao showed that there are in fact arbitrarily long arithmetic progressions consisting entirely of primes (a different and somewhat more surprising result: Dirichlet guarantees infinitely many primes scattered within a fixed progression, while Green and Tao find progressions of any desired length made up of nothing but primes).
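To make Euclid’s argument concrete, here is a tiny sketch in Python (my own illustration, using the first six primes) showing that their product plus 1 leaves remainder 1 when divided by each of them:

# Euclid's construction: the product of any finite list of primes, plus 1,
# leaves remainder 1 when divided by every prime on the list.
primes = [2, 3, 5, 7, 11, 13]
product = 1
for p in primes:
    product *= p
n = product + 1  # 30031
for p in primes:
    print(p, n % p)  # each remainder is 1, so no prime on the list divides n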

So what makes the primes important? They are the building blocks from which all integers are made. As you may remember from grade school, every integer can be factored into primes – and not only that, but this factorization is unique except for the order of the factors. The second property is not always true in other number systems. And now we get to the crux of modern cryptographic protocols. While checking that a given number is prime is easy (fast on a modern computer), and multiplying numbers is easy as well, factoring an integer that is the product of just two large primes can be hard (even with a supercomputer).
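To give a feel for why factoring is the hard direction, here is a naive trial-division factorizer in Python (a sketch of my own, not anything used in practice): it tries every candidate divisor up to the square root of the number, which becomes hopeless once the number has hundreds of digits.

# Naive factorization by trial division: fine for small numbers,
# hopeless for the 100-plus-digit numbers used in cryptography.
def factor(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(factor(2012))   # [2, 2, 503]
print(factor(30031))  # [59, 509]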

How can we check whether a number is prime? There are some properties that every prime number satisfies and that few composite numbers do. One such property comes from Fermat’s little theorem (not to be confused with his (in)famous last theorem), which states that if p is prime, then a^(p-1) - 1 is divisible by p for any a between 1 and p-1. On the other hand, it turns out that if p is not prime, then fewer than half of the possible bases a between 1 and p-1 satisfy this property (a composite p that does satisfy it for some base a is called a pseudoprime to the base a), unless you happen to be very unlucky – the rare exceptions are known as Carmichael numbers. So, by performing a number of tests with random bases, one can determine with a great degree of confidence whether a given number p is prime. This approach is an example of a probabilistic algorithm: one which gives the right answer with high probability, and whose confidence can be made as high as you like by repeating the test. Interestingly, the first fast deterministic (not chance-based) algorithm for testing primality was found only in 2002, by two undergraduate students at the Indian Institute of Technology Kanpur, Neeraj Kayal and Nitin Saxena, together with their advisor, Manindra Agrawal.
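Here is what such a test looks like in Python – a simplified sketch of my own (real libraries typically use the closely related but stronger Miller–Rabin test):

import random

# Fermat primality test: returns False if n is certainly composite,
# True if n passed several random-base tests and is therefore probably prime.
def is_probably_prime(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        # pow(a, n - 1, n) computes a**(n - 1) modulo n efficiently
        if pow(a, n - 1, n) != 1:
            return False
    return True

print(is_probably_prime(101))   # True: 101 is prime
print(is_probably_prime(1001))  # False: 1001 = 7 * 11 * 13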

If you’ve ever watched a security certificate being generated on your computer (it generally takes a few seconds, depending on the certificate’s “grade”), here is essentially what happens: the computer produces two large random numbers, checks that they are both prime (there are enough primes to go around – if you want a 128-bit prime, you need to check about 100 candidates on average, by the prime number theorem) and multiplies them together. The product becomes your “public key”, which you can share freely with the world. It allows other people to send you messages by encrypting them with your public key; however, since recovering the original two primes from the public key is hard (only your computer knows them, because it generated them in the first place), you are the only one who can actually decrypt those messages!
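Putting the pieces together, here is a toy version of that key-generation step in Python, reusing the is_probably_prime function from the sketch above (again my own illustration – the numbers are far too small to be secure, and real RSA keys also include encryption and decryption exponents, which I am leaving out):

import random

# Toy key generation: pick two random primes and multiply them together.
# Real systems use primes that are hundreds of digits long.
def random_prime(bits):
    while True:
        candidate = random.getrandbits(bits) | 1  # force the candidate to be odd
        if is_probably_prime(candidate):          # the Fermat test from above
            return candidate

p = random_prime(32)
q = random_prime(32)
public_modulus = p * q       # safe to publish
print(p, q, public_modulus)  # p and q themselves must stay secret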

This, in a nutshell, is the RSA system, named after MIT researchers Rivest, Shamir and Adleman. They get the credit because they published it first, but it had already been discovered by Clifford Cocks, who could not publish it because he worked for British intelligence. Public-key cryptography was a revolutionary advance in technology – you no longer need to worry about your encrypted messages being intercepted, because decrypting them without the secret primes is all but impossible – but it relies on the assumption that factoring large numbers is hard. We believe this is true for ordinary computers, though no one has been able to prove it. This is where yet another surprise came in 1994: Shor’s quantum factoring algorithm!

Without going into details about quantum computers, let me just say that they are a very special mode of computation that relies on the quantum states of particles. While a regular bit has only two states, 0 and 1, a quantum bit (qubit for short) can exist in a superposition of these states (for an idea of what superposition is, think of two different sound frequencies you can hear at once), and only when it is measured does it collapse to a well-defined state (the so-called collapse of the wave function). Now, it turns out that some things that are difficult to do on a regular computer are easier on a quantum one. And factoring large numbers just happens to be one of those things, as shown by Peter Shor in 1994. His is another probabilistic algorithm, and it only requires a quantum computer with enough qubits to write down the integer to be factored twice – about 500 qubits for numbers of the size we discussed above.
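The quantum computer’s job in Shor’s algorithm is to find the period of the function a^x mod N; the rest is ordinary arithmetic. Here is a purely classical sketch of my own (it finds the period by brute force, so it only works for tiny numbers) showing how that period reveals the factors:

from math import gcd

# Classical post-processing in Shor's algorithm: given the period r of a^x mod N
# (found by brute force here, by the quantum computer in reality),
# gcd(a^(r/2) - 1, N) and gcd(a^(r/2) + 1, N) often reveal factors of N.
# Assumes a is coprime to N.
def factor_from_period(N, a):
    r = 1
    while pow(a, r, N) != 1:  # smallest r > 0 with a^r = 1 (mod N)
        r += 1
    if r % 2 != 0:
        return None           # bad luck: try a different base a
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

print(factor_from_period(15, 7))  # (3, 5)
print(factor_from_period(21, 2))  # (7, 3)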

Where does this leave us and our security systems? Well, on the one hand, quantum computers are still in their infancy as far as the technology goes (this is one field where theory has far outpaced practice), and the largest one in existence today cannot factor numbers greater than 100. On the other hand, the technology may one day evolve to the point where large quantum computers can be built, and then our current security systems could be easily broken. If that happens, it may become necessary to also use quantum computers for building new security systems. Meanwhile, alternative systems are being developed, such as elliptic curve cryptography (using the same elliptic curves that Wiles used to prove Fermat’s last theorem – plane curves defined by an equation of the form y^2 = x^3 + ax + b). Perhaps not surprisingly, they also rely on prime numbers, but in a rather different way: they work with the points of the curve whose coordinates are taken as remainders after division by a prime number.
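To illustrate what “points modulo a prime” means, here is one last Python sketch of my own (a toy curve and a tiny prime, purely for illustration) that lists the points of an elliptic curve when its coordinates are taken modulo a small prime:

# Points of the elliptic curve y^2 = x^3 + a*x + b with coordinates modulo p.
# Cryptographic systems use primes hundreds of bits long; here p is tiny,
# so we can simply try every possible pair (x, y).
a, b, p = 2, 3, 7

points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]

print(points)
print(len(points), "points (plus one extra 'point at infinity')")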

Godfrey Harold Hardy, the author of the famous and delightful book “A Mathematician’s Apology”, who spent most of his life working on prime numbers (including the notorious and still unresolved Riemann hypothesis), prided himself on the lack of practical applications of his work. He notably said, “I have never done anything ‘useful’. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world”. Ironically, some of his work on prime numbers has turned out to be quite useful, and in my opinion, it is a credit to mathematicians, both pure and applied, that such a beautiful topic has also found such important applications in our everyday life.

The Fifty States of Nate

Dear readers,

Today’s post deals with psephology, the study of elections, and more specifically, the application of mathematics and statistics to it. This field has gained prominence over the past few years in the United States, largely thanks to the spectacularly successful (and frequently attacked) predictions made by Nate Silver on his blog. While I am not a psephology expert by any means, I understand enough about Nate Silver’s models, and mathematical models in general, to hopefully provide you with an interesting perspective.

The first point I want to make is that, although Nate Silver did predict the outcome of the 2012 United States elections correctly in all 50 states, this in itself is neither his most significant achievement nor an extremely impressive one. His most significant achievement is to highlight the principled application of mathematics and statistics to an area that has up to now been subject to large amounts of human bias. In this, he is far from being alone; however, thanks to his blog’s association with the New York Times, he is the most visible. I give him credit for drawing attention to the power of mathematics and statistics.

Nate Silver is also far from providing the most accurate predictions of the outcome of the last elections. In a detailed analysis, the Center for Applied Rationality actually shows that the predictions made by Drew Linzer and Sam Wang were somewhat more accurate once not only the predicted winner, but also the predicted fraction of the vote going to each party, is taken into account. As for whether predicting 50 outcomes correctly is in fact impressive, consider that there was considerable uncertainty in only 9 of the 50 states. This means that even a completely random prediction, made by flipping a coin in each of those 9 states, had a (1/2)^9 = 1/512, or roughly 0.2%, chance of being perfect. Of course, this probability goes up tremendously if good data from election polls is available.

Furthermore, while mathematical models are great for bringing together data from different sources (in Nate’s case, poll data, as well as some understanding of how voter preferences change over time), they are also vulnerable to the biases of that input data. In particular, if the data sources themselves are noisy or inaccurate, even a good mathematical model may go astray. This phenomenon, known as “garbage in, garbage out”, was partly responsible for Nate Silver’s rather poor prediction of the outcome of the 2010 UK elections. Indeed, poll data in the UK is not disaggregated by region the way it is in the US, so there was necessarily more uncertainty about regional variation, which appears to have thrown Nate off.

About a month before the 2012 US election, Nate Silver himself pointed out, “I’m sure that I have a lot riding on the outcome […]. I’m also sure I’ll get too much credit if the prediction is right and too much blame if it is wrong.” This brings me to my next point. Although mathematical models are frequently used to predict reality, this is neither their only nor even their main use. It may seem like a good idea to evaluate models by how well they predict reality, but their real value lies in helping us understand which factors (or variables) are important and which are less so. Their main role is to provide insights into complex phenomena, whether those are climate change, the etiology of diseases, or election outcomes.

Unfortunately, by keeping his models private, Nate Silver misses the opportunity to let them provide these insights. This is the main concern I have about his work. Perhaps it is the academic idealist in me that wants to be able to reproduce the results of any modeling effort, if only in principle, and Nate may well have valid commercial or ethical reasons for not disclosing his models. Still, for the sake of transparency it would be good to allow his fellow psephologists to look “under the hood”, just as the aforementioned Drew Linzer, Sam Wang, and many others do. I strongly believe that transparency and open sharing are critical for advancing the field, as well as for preventing the reliance on “tweaks” that may temporarily improve a model’s performance but are detrimental in the long run.

In any case, I just added Nate Silver’s recent book to my reading list. It seems that the book tackles a wide variety of applications of mathematical modeling (much like I’m planning to do in this blog). I’ll be sure to read it over the New Year break and report on the experience, and I welcome your comments if you’ve already had a chance to read it, or simply want to comment on another issue discussed here.

Is a Museum of Mathematics necessary?

Dear readers,

Today, 12/12/12, is an important date for mathematics in the USA. Not just because it is a date with three identical components (something we won’t see again for another 88-odd years), and not even because it is one of the 12 dates each year that are abbreviated the same way in the USA and in the rest of the world (where, as you likely know, the day is written before the month). It is also significant because it marks the opening ceremony of the Museum of Mathematics in New York City.

If you read the New York Times, you may recognize the title of this post as a play on an article that appeared there this past summer, entitled Is Algebra Necessary? This misguided and naïve article was among the motivating factors for this blog, as it made me realize how undervalued and misunderstood mathematics education is even among the educated. Several good rebuttals to this article failed to get a larger conversation started, and it is this larger conversation that I want to contribute to with this blog.

Mathematics education is a topic I plan to address from many different angles next year, but for today, I will simply discuss whether a Museum of Mathematics is a step in the right direction. Glen Whitney, the museum’s director, told the New York Times that its mission is to shape cultural attitudes and dispel the bad rap that most people give math – which happens to be one of my blog’s objectives as well, so I could not agree more with his motivation. The big question is whether a museum can achieve this goal.

My main concern is that an appreciation for mathematics is unlikely to come from visiting exhibits in a museum – rather, I believe it can only grow out of positive exposure to mathematics in everyday life. If a child dislikes the subject in school, they are likely to expect a math museum to be boring as well. Just as with art, music or a foreign language, a taste for mathematics is an acquired one. It usually requires a certain level of maturity and discipline, which children are unlikely to develop without outside encouragement.

For this reason, parental influence, as well as that of peers, is critical. An environment that views math in a positive light will have its effect with or without the museum, while a hostile environment may not prepare a child for being receptive towards the museum’s exhibits. Without innovative changes in the mathematics curriculum, a more favorable attitude towards the subject, and a deeper understanding of the role of mathematics in our culture, the impact of the Museum of Mathematics may remain limited.

Despite my skepticism I remain open-minded. On my upcoming trip to New York City I will make sure to stop by the museum and see how deeply its exhibits engage me. I will report on my experience in an upcoming blog post, so stay tuned. Perhaps abstract concepts really can be made fascinating – and concrete – in a museum setting. I am just not sure that the museum can make enough of a difference.

There is, however, at least one thing that I’m pretty sure about. Dr. Whitney spent a decade working at an investment firm, and, in his own words, decided to engage in an activity with “more direct socially redeeming value”. While the social impact of his current undertaking may be limited, it will at least be significantly more positive than that of his previous job. And that, in itself, is definitely a good thing!

Of baseball and earthquakes: an interview with Richard Hoshino

Dear readers,

I’m delighted to kick off my interview series with a very special person, Richard Hoshino. I first met Richard at the Math training camp organized by the Canadian Mathematical Society in the winter of 2001, where he conducted some really engaging sessions on a topic I wasn’t very good at: inequalities. I left the camp with much more confidence in my math problem-solving skills than I had arrived with.

In 2005 I attended Richard’s keynote presentation at the annual Canadian Undergraduate Mathematics Conference (CUMC) in Kingston. It inspired all of us to strive for a healthy balance between research, teaching, and volunteer work in our careers, rather than adopting the “publish or perish” mentality so prevalent in academic circles, and it left a lasting impression on me.

When McGill University was chosen to host the following year’s conference and I headed the organizing committee, inviting Richard back as a keynote speaker was an easy decision. His presentation (the slides alone don’t do justice to Richard’s oratory skills), which touched on his work at the Canada Border Services Agency, showed us a great example of combining mathematical research with public service.

Even after Richard left Canada to live in Japan with his spouse, we stayed in touch and he continued to inspire me, especially when he put his mathematical skills to use to help earthquake relief efforts. He is currently writing a book to inspire young mathematicians and working on a postdoctoral fellowship in Tokyo that just got featured in Japan’s leading English-language newspaper!

You can download our conversation (I still need to figure out how to make it possible to stream it directly). Hope you enjoy this interview!

To find out more about Richard’s work you can visit his homepage.
You can also check out his advisor’s book that he mentioned in the interview.
This is Richard’s research article with an application to music.
Also check out Quest University where Richard is about to start teaching.
Finally, his most recent article on scheduling a game show is on pages 14-15.