Exciting prime breakthroughs!

Dear readers,

These last few weeks have seen two very exciting developments in number theory, my “first love” in mathematics. In particular, two important conjectures about prime numbers have now been proven.

The first one states that there are infinitely many pairs of prime numbers that differ by at most 70 million. One would ultimately like to establish that there are infinitely many pairs that differ by exactly 2 (the famous twin prime conjecture), but showing that gaps between consecutive primes remain bounded infinitely often is a huge step in the right direction. An interesting twist in the story is that the result was proven by a relatively unknown mathematician, Yitang Zhang, using a refinement of well-known techniques which, surprisingly, most experts in the field believed would not be sufficient. This is reminiscent of the AKS primality testing algorithm I discussed in an earlier post.
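
To get a hands-on feel for the statement, here is a minimal Python sketch (purely illustrative, and of course unrelated to Zhang’s proof, which concerns all primes rather than just those below some limit; the helper names are my own). It lists pairs of consecutive primes below a limit that differ by at most a given gap:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return a list of all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def close_prime_pairs(limit, max_gap):
    """Pairs of consecutive primes below `limit` that differ by at most `max_gap`."""
    ps = primes_up_to(limit)
    return [(p, q) for p, q in zip(ps, ps[1:]) if q - p <= max_gap]

# Twin primes below 100: (3, 5), (5, 7), (11, 13), (17, 19), ...
print(close_prime_pairs(100, 2))
```

Zhang’s theorem says that a list like this one, with the gap set to 70 million, never stops growing as the limit increases.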

The second one states that every odd number greater than 5 can be expressed as the sum of 3 primes, and was proven by my friend Harald Helfgott (I actually met Harald not through our work on mathematics, but through our common interest in the constructed language Esperanto; that, however, is a story for another time). This is the so-called “odd Goldbach conjecture”. The harder statement, known as the “even Goldbach conjecture”, states that every even number greater than 2 can be expressed as the sum of 2 primes. It’s unlikely we’ll see a proof of that one anytime soon, but then again, this area has given us plenty of surprises recently.
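
In the same spirit, one can check the even Goldbach conjecture empirically for small numbers – an empirical check over a finite range proves nothing about all even numbers, of course, but it is a fun exercise. A minimal sketch, reusing the same sieve helper as above:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes (same helper as in the previous sketch)."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def goldbach_pair(n, primes, prime_set):
    """Return some pair of primes summing to the even number n, or None."""
    for p in primes:
        if p > n // 2:
            break
        if n - p in prime_set:
            return (p, n - p)
    return None

LIMIT = 10_000
ps = primes_up_to(LIMIT)
ps_set = set(ps)
# Every even number from 4 up to LIMIT should admit at least one such pair.
assert all(goldbach_pair(n, ps, ps_set) for n in range(4, LIMIT + 1, 2))
print(goldbach_pair(100, ps, ps_set))  # (3, 97)
```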

However, in today’s post I’d like to highlight the recent work of another friend, Yufei Zhao, whom I met during my math competition days. Yufei has investigated a topic broadly related to these results: making use of random-like patterns in the prime numbers (and not just them). I highly encourage you to check out his blog, and especially this post. It discusses the context of the recent results, Yufei’s own work, and a possible way forward for making progress on these tantalizing, long-standing problems. For my part, let me just add that one of the attractions of number theory is that it is extremely easy to make statements that are probably true, yet immeasurably difficult to prove (Fermat’s famous Last Theorem being just one example). What makes it worthwhile to explore such questions, however, is that even if one doesn’t find El Dorado (prove the difficult result), there are plenty of pretty gems to be found along the way.

Mathematics and computers – in memory of Kenneth Appel

Dear readers,

It’s been almost a month now since we lost one of the revolutionaries of 20th century mathematics, Dr. Kenneth Ira Appel. The obituaries in the New York Times as well as The Economist do a good job of describing his life and his most famous contribution to mathematics, the computer-assisted proof of the Four Color Theorem, a result stating that any map can be colored with 4 colors so that no two neighboring countries share the same color. In this post, I will discuss his broader contribution to the way a lot of mathematics is done today, namely, in collaboration with computers.

A fact that many people tend to forget is that computers have their origins in logic and mathematics. The foundational work of the logician Alonzo Church and the mathematicians Alan Turing and John von Neumann led to the development of both the theory of computation (now a branch of computer science) and the modern computer. While technological advances have miniaturized computers to the point where our cell phones today are faster and more powerful than the early machines that occupied entire rooms, it was the work of these mathematicians that made computers possible in the first place.

Soon after their first appearance, computers were put to use for solving various complex problems, mostly related to physics and engineering, such as simulating flight trajectories, predicting the dynamics of fluids, and finding optimal allocations of resources. The size and scope of problems solvable by computers grew together with improvements in the underlying technology (often summarized by Moore’s Law, the observation that the computing power available at a given cost doubles roughly every 18 months to two years). However, mathematicians on the whole took a surprisingly long time to accept and make use of the new technology in their work, sticking to the traditional approach of proving theorems “by hand”.

This all changed after Kenneth Appel and his co-author, Wolfgang Haken, published their 1976 proof of the Four Color Theorem, which had baffled mathematicians for almost 125 years (by no means a record, however, as Fermat’s Last Theorem remained unproven for over 350 years). The proof combined a significant theoretical advance, which reduced the problem to 1,936 potentially problematic configurations, with a large-scale computer verification that 4 colors indeed suffice for every one of these configurations. While initially met with significant skepticism, the proof eventually became accepted by the mathematical community, and was further simplified over the years, to the point that very little doubt about the correctness of the result remains.
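
Just to convey the flavor of delegating case checking to a machine, here is a toy Python sketch – my own illustration, emphatically not the actual Appel–Haken procedure, which involved checking the “reducibility” of each configuration – that 4-colors a small map by brute-force backtracking:

```python
def four_color(adjacency):
    """Backtracking search for a 4-coloring of a graph {node: set_of_neighbors}.

    Returns {node: color} on success; None if no 4-coloring exists
    (which, by the Four Color Theorem, cannot happen for a planar map).
    """
    nodes = list(adjacency)
    coloring = {}

    def assign(i):
        if i == len(nodes):
            return True
        node = nodes[i]
        for color in range(4):
            # A color is allowed if no already-colored neighbor uses it.
            if all(coloring.get(nb) != color for nb in adjacency[node]):
                coloring[node] = color
                if assign(i + 1):
                    return True
                del coloring[node]
        return False

    return coloring if assign(0) else None

# A tiny "map": four countries, each bordering the other three.
borders = {"A": {"B", "C", "D"}, "B": {"A", "C", "D"},
           "C": {"A", "B", "D"}, "D": {"A", "B", "C"}}
print(four_color(borders))  # e.g. {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

Here all four colors are genuinely needed, and the search finds a valid coloring instantly; the real proof, in effect, ran a vastly more sophisticated check over all 1,936 configurations.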

The early success of the Appel–Haken proof provided the impetus for the development of entire new fields of mathematics, such as automated theorem proving and interactive theorem proving. These deal with finding and verifying proofs of theorems using a formal system that describes the transformations (logical steps) allowed in constructing a proof. So far, a few other important problems have been solved with the assistance of a computer, including Kepler’s conjecture on the optimal packing of spheres and a problem related to the behavior of the Lorenz system of differential equations.
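
To give a taste of what interactive theorem proving looks like, here is a tiny example in Lean, one of several modern proof assistants (not the system used for the results above; the later formal reproof of the Four Color Theorem, for instance, was carried out in Coq):

```lean
-- A machine-checked proof that addition of natural numbers is commutative.
-- `Nat.add_comm` is a library lemma; the checker verifies that it exactly
-- matches the statement we claim, so no human needs to re-read the details.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Proofs can also be assembled interactively, step by step, using tactics:
theorem succ_is_positive (n : Nat) : 0 < n + 1 := by
  exact Nat.succ_pos n
```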

Many similar ideas are also used in computer algebra systems such as Maple, which have found a wide range of applications. But most importantly, mathematicians now routinely use computers to explore the validity of a statement before trying to prove it. This essentially allows many branches of mathematics to proceed by the scientific method, where a hypothesis is formulated, tested experimentally (on a computer), and refined if necessary. The final stage of writing down a proof is still usually done by the mathematician, but the preliminary stage of building intuition and narrowing down the options is now much more pleasant, since computers are much better than humans at many routine tasks such as manipulating algebraic expressions. This revolution, ushered in by Kenneth Appel’s work, will continue to permeate 21st century mathematics, and I predict that it will encompass more and more subfields.
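
As a toy illustration of this experimental style (my own example, unrelated to Appel’s work): Euler observed that n^2 + n + 41 is prime for many consecutive values of n, and a few lines of code immediately locate where the pattern first breaks, suggesting how the hypothesis should be refined:

```python
def is_prime(n):
    """Trial division – slow, but fine for small n."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Hypothesis (too optimistic): n^2 + n + 41 is prime for every n >= 0.
failures = [n for n in range(100) if not is_prime(n * n + n + 41)]
print(failures[:2])  # [40, 41] – the first failure is 40^2 + 40 + 41 = 41^2
# The experiment refutes the naive hypothesis and suggests a refined one:
# "prime for all 0 <= n <= 39" – which a quick loop (or a proof) confirms.
```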

What to do if you hit the mathematics wall?

Dear readers,

I recently read a very interesting article called “What it feels like to be bad at math”. I could relate to a lot of what the author was describing, and found it very insightful. In today’s post I’m going to talk about my own experience of finding my boundary in mathematics, compare it to something that happens a lot when running a marathon, and describe three solutions that worked for me and could hopefully work for you.

I experienced emotions very similar to the ones the article describes on several occasions in my mathematical career. Perhaps the most memorable was a course on probability theory that I tried taking in graduate school. After four or five lectures I was already pretty lost. There were some objective factors involved – the course was based on measure theory, which I had neither fully understood nor particularly liked in my undergraduate years; other students in the class were better prepared than I was in this particular area; and I had never built a good intuition for probability, because I never formally studied it and instead tried to pick it up as I needed it. But these objective factors quickly gave rise to self-doubt, a feeling of inadequacy, and a fear of failure. Thankfully, the drop deadline had not yet passed, and I left the course.

But the emotional response I describe here is not at all unique to mathematics. In fact, when I was running my first marathon a few months later, something very similar happened. While I had felt some moderate discomfort in various parts of my body throughout the race, at mile 23 I suddenly started experiencing strong discomfort in my entire body at once. This is what long-distance runners call “hitting the wall”. I knew that this might happen, but I was hoping it was the kind of thing that only happens to others. My body was ready to give up, and in my mind I felt the same influx of self-doubt and fear of failure as I did in the probability course. Thankfully, after taking a break, eating some energy gel and drinking some water, I talked myself into starting to run again, slowly but steadily, and actually finished the race.

So, what was different about the marathon compared to the mathematics course? First, I felt well prepared for the marathon (I had been training for it for months), but not for the course (I had never formally studied probability). Second, I was accountable to a lot of people in the marathon (all those who had donated money to the cause I was supporting), while none of my friends were taking the course, so the consequences of dropping out were not as serious. Finally, I think the biggest difference was desire – I really wanted to finish the marathon, while taking the course, and doing well in it, was a nice-to-have rather than a must-have for me. Yet the emotions were very similar in both cases. This suggests that you can create conditions for yourself that maximize your chances of getting past the wall: be well prepared, be accountable to others, and have a strong desire to reach the goal.

Fine, you might say, but what if I’m already in that situation and it’s too late to change my preparedness, accountability or desire? What can I do to get past the wall (either mathematical or athletic)? Well, here are three things that worked for me.

First, take a break! This will help you regain some presence of mind, so you can make a better decision. In running, this could mean slowing down your pace, switching to walking, or even stopping for a bit (which is what I did after I hit the wall at mile 18 during my second marathon). In mathematics, this could mean going for a walk, working on a different project or course, or leaving math behind for a few days. Some of my most satisfying discoveries resulted from coming back to a problem or subject after leaving it behind for a brief period of time.

Second, go back to basics! Focus on the simple skills you have already mastered, and let them carry you through the challenging part. In running, this could mean focusing on your breathing, or paying attention to the rhythm of your feet hitting the ground (this is especially helpful for dealing with minor discomfort). In mathematics, it could mean going back to a concept or a related subject you understood well before, or reviewing some earlier definitions and theorems. Interestingly, reading or seeing something for the second time makes it feel more familiar, and hence easier.

Third, break it down! To paraphrase what one of my mentors, Hillary Rettig, says in her book: “the wall isn’t a monolith; it’s a giant spaghetti snarl with many ‘strands,’ each representing a particular obstacle or trigger.” What strands are in your way? In running, they could be the worn-out soles of your shoes, the pain in your back, and the fear of disappointing someone waiting for you at the finish line. In mathematics, they could be your dislike of messy algebraic manipulations, your relative lack of preparation, and your fear of failure. Whatever they are, dealing with each one individually is easier than dealing with all of them at once. Sometimes, just understanding the different pieces of the wall will be enough to get you past it.

While these simple tips may not always resolve the issue, they will definitely help you put things in perspective. If you then decide to leave the race (or drop the project), that may well be the right decision; there is no shame in failing at a goal that challenged you to find your limits, and you will have gained some valuable knowledge in the process. And if you decide to continue, good luck – you may be surprised at how far you can actually go. We all have our limits (both physical and mental), and it’s important to accept them, but most of the time they are much farther than we think!

Do Great Scientists Really Not Need Math?

Dear readers,

My apologies for not putting up a post last week – I hope that this will be a very occasional exception to the rule. Today, I have a really contentious topic to discuss: E. O. Wilson’s article, “Great Scientists Don’t Need Math”. I’ll start by giving my own summary of the article, and then share some thoughts on the merits of its argument.

E. O. Wilson is a well-known evolutionary biologist who has had an illustrious career (in addition to his scientific achievements, he holds two Pulitzer Prizes for general non-fiction). The article is based on his book, “Letters to a Young Scientist”. He argues that aspiring scientists should not be discouraged from pursuing science if they feel they lack mathematical ability, because a deep understanding of, and intuition for, their field can compensate for such limitations. If needed, a scientist can always collaborate with a mathematician by explaining their intuition and asking for help in making it rigorous. Meanwhile, additional mathematical skills can always be acquired later on as necessary (Wilson gives his personal example of sitting in an undergraduate calculus class as a 32-year-old Harvard professor). He concludes by saying: “For every scientist, there exists a discipline for which his or her level of mathematical competence is enough to achieve excellence.”

Before I critique this argument, I’d like to note a few other people who have done so before me: Edward Frenkel on Slate, David Bailey and Jonathan Borwein at the Huffington Post, Brian McGill on Dynamic Ecology, and Jon Wilkins on his own blog (the latter critique, focusing on Wilson’s flawed interpretation of collaboration with mathematicians, is definitely worth reading if you only have time to read one more).

My first problem with Wilson’s argument is his assumption that mathematics is little more than “number-crunching”, while science is all about “concepts”. These days, the majority of data analysis is indeed done by computer algorithms, but the ideas behind these algorithms are often as insightful as the “concepts” in science. Furthermore, mathematics not only provides a systematic way of thinking about scientific concepts, but also leads to insights that may not be obtainable directly from one’s conceptual understanding of the field. For instance, quantum mechanics, one of the great breakthroughs of 20th century physics, is notorious for its reliance on mathematics and its impenetrability to intuition, as attested by Richard Feynman’s famous remark: “I think I can safely say that nobody understands quantum mechanics.”

The second problem I see in Wilson’s argument is his assumption that mathematics can be learned much later in one’s scientific career, while scientific concepts need to be learned as early as possible. In my experience, it is often much harder for biologists to pick up mathematical concepts late in their career than it is for mathematicians to pick up biology. I’ve met a number of biology graduate students and postdocs who have asked me for help with mathematical techniques, whether for simulation, data analysis, or modeling. At the same time, several of my colleagues from graduate school have gone on to learn the skills required to perform biological experiments after getting their degrees in applied mathematics, and are now successfully working in biology. In this, I agree with the late Gian-Carlo Rota, who wrote: “When an undergraduate asks me whether he or she should major in mathematics rather than in another field that I will simply call X, my answer is the following: If you major in mathematics, you can switch to X anytime you want to, but not the other way around.”

The final part of Wilson’s argument that I take issue with is his confusion around the term “advanced mathematics”, which he uses to describe algebra and calculus. These subjects are already a necessary part of the undergraduate science curriculum, and rightly so. Physics needs calculus for electromagnetic waves and linear algebra for quantum mechanics; chemistry relies on calculus to describe the rates of chemical reactions and the foundations of thermodynamics; and biology needs differential equations to model population dynamics – all topics studied at the undergraduate level today. The social sciences are no exception: disciplines such as psychology and sociology require knowledge of probability theory and basic statistical methods, while economics makes extensive use of game theory. With accumulating evidence that a large fraction of published scientific findings may be erroneous, it is more important than ever for aspiring scientists to have a solid grasp of at least the basic ideas of the “advanced mathematics” Wilson discusses, if only to avoid publishing meaningless work.

There is, however, one point on which I agree with Wilson – mathematics can indeed be a “bugbear” for many aspiring scientists. The solution, however, is not the one Wilson advocates – putting off one’s mathematical education until later and focusing on developing one’s scientific intuition. Instead, it is to dedicate the necessary effort as early as possible to learning mathematics (especially the mathematics needed in one’s field of choice) so that it can pay off later. Fortunately, as Wilson himself says, learning mathematics is similar to learning a foreign language (though, I would add, much easier because of its logical structure) – a consistent effort leads to steady improvement. The real problem we should be addressing is not reducing the need for mathematics in science, but reducing the fear of mathematics among aspiring scientists (and others), a topic that I plan to revisit later this month.