
> The crowd went wild; I've never made a group of experts so angry as...

Also not a number theorist...but I'd bet those so-called experts had invested far, far too many of their man-years in that unproven conjecture. All of which effort and edifice would collapse into the dumpster if some snot-nosed little upstart like you, using crude computation, achieved overnight fame by finding a counter-example.

(If I could give my many-decades-ago younger self some advice for math grad school, one bit of that would be: For any non-trivial "Prove X" assignment, start by spending at least 1/4 of my time budget looking for counter-examples. For academic assignments, that's 99% likely to fail. But the insight you'll get into the problem by trying will be more than worth it. And the other 1% of the time you'll look like a genius. And as soon as you attempt real math research, those odds shift enormously in favor of the counterexample-first approach.)



> All of which effort and edifice would collapse into the dumpster if some snot-nosed little upstart like you, using crude computation, achieved overnight fame by finding a counter-example.

Not at all. In fact, if I had found a counterexample, it would cause a flurry of new research to quantify exactly how wrong the BSD conjecture is. Such a finding would actually be a boon to their career! That's why my response is curiosity, and not to sneer at them for protecting their extremely secure tenured careers.

Edit 1: And if you think you've found a counterexample to a long-standing conjecture with a computation, you'd better be damned sure that your computation is correct before opening your mouth in public. And that takes a ton of work in the case of the BSD conjecture, because you've almost certainly identified a bug in the extremely complex code underlying that computation. If I ever thought I was holding onto such a counterexample, I'd approach a human calculator like Ralph Greenberg as my first step (after internal checks: re-running the code on another computer to rule out cosmic bit flips, and perhaps running more naive, unoptimized implementations).

Edit 2: This attitude pervades my software development career, and I've brought it to my foray into superconducting circuit design: a bug report brings joy to my life, and I aim to shower the reporter with praise (which may involve chocolate). There is nothing more satisfying than being proven wrong, because it helps us collectively move toward greater truths.


> Edit 2: This attitude pervades my software development career, and I've brought it to my foray into superconducting circuit design: a bug report brings joy to my life, and I aim to shower the reporter with praise (which may involve chocolate). There is nothing more satisfying than being proven wrong, because it helps us collectively move toward greater truths.

Not to mention that the usual wisdom of "don't kill the messenger" applies equally to bug reporters! Someone finding a bug in your code doesn't mean they willed it into existence; the bug would still be there even if you didn't know about it.


> my foray into superconducting circuit design

Curious, what do you work on? (I also research superconductivity.)


I've got a brief bio in my profile. The major thrust of my role is to develop architectures of digital/analog circuits for quantum computers, and I've lately been getting into device design. That's miles away from superconductivity research (at most, I could say I'm in applied superconductivity), so allow me to fawn a little... I'd love to be doing that kind of work!


Recently I found myself happy to find a bug in code I had written, for those reasons.


To put a little color on the BSD conjecture: it states that the rank (0, 1, 2, 3, etc.) of the group of rational points on an elliptic curve equals the order of vanishing of the curve's L-function at s = 1, and that the leading Taylor coefficient there is given by a product of arithmetic invariants, in particular the size of the Tate-Shafarevich group.

No one knows how to compute the size of that group in general (in fact no one has proved that it's finite!). Computing the rank of a curve via non-analytic means is more akin to a bespoke proof than a straightforward computation (see Noam Elkies' work).

So saying you're going to disprove BSD with blind computation is rather naive unless you're sitting on several career-defining proofs and not sharing them.
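For concreteness, the basic ingredient on the analytic side, the coefficients a_p of the L-function, comes from counting points on the curve mod p. Here is a toy Python sketch for the curve y^2 = x^3 - x; this naive O(p^2) count is purely illustrative, nothing like the fast algorithms (Schoof's, or Dokchitser's for L-values) that real computations use:

```python
# Count points on the elliptic curve y^2 = x^3 + a*x + b over F_p,
# including the point at infinity, and derive the L-function
# Dirichlet coefficient a_p = p + 1 - #E(F_p).

def count_points(a, b, p):
    """Naive point count over F_p (fine only for tiny primes p)."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        for y in range(p):
            if (y * y) % p == rhs:
                count += 1
    return count

def a_p(a, b, p):
    return p + 1 - count_points(a, b, p)

# Example: the curve y^2 = x^3 - x (a = -1, b = 0).
# This curve has CM, so a_p = 0 whenever p = 3 (mod 4).
coeffs = {p: a_p(-1, 0, p) for p in (3, 5, 7, 11, 13)}
```

These a_p values are what get assembled into the L-series whose behavior at s = 1 the conjecture is about; the hard part is the rigorous analytic continuation, not the coefficients.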


If the BSD rank conjecture were false, then the simplest counterexample might be an elliptic curve with algebraic rank 4 and analytic rank 2. This could be established for a specific curve by rigorously numerically computing the second derivative of the L-series at s = 1 to some number of digits and getting something nonzero (which is possible because elliptic curves are modular - see the work of Dokchitser). This is a straightforward computation to set up, and there are large tables of rank 4 curves. This is also exactly the problem I suggested to the OP in grad school. :-)

In number theory doing these sorts of “obvious computational investigations” is well worth doing and led to many of the papers I have written. I remember doing one in grad school and being shocked when we found a really interesting example in minutes, which led to a paper.


Then again it might just be a misunderstanding. And the so-called experts, in aggregate, turn out to be right more often than not.

Say I show up to a physics conference and proclaim that I hope my computational effort will disprove some major physical law. Well, you better have a good delivery, or the joke might not land well!

Sometimes people take things a little too literally after attending hours of very dry talks at serious seminars. I wouldn't read too much into it.


> All of which effort and edifice would collapse into the dumpster

Except it wouldn't, because the work towards the BSD conjecture would still be right and applicable to other problems. If someone proved the Riemann hypothesis false, all of our math (and there is a lot of it) surrounding the problem wouldn't immediately be made worthless. The same is true for any mathematical conjecture.

I don't doubt the rest of your comment might have played a role, however.


> Also not a number theorist...but I'd bet those so-called experts had invested far, far too many of their man-years in that unproven conjecture. All of which effort and edifice would collapse into the dumpster if some snot-nosed little upstart like you, using crude computation, achieved overnight fame by finding a counter-example.

Are you maybe confusing math academia for psychology or social sciences? There is no replication crisis in math, no house of cards of self-proclaimed experts riding on bullshit. Mathematicians are _actually experts_ at a deep and extremely rigorous technical field -- many of them are even experts at computational approaches to problems! -- and when outsiders and upstarts resolve old conjectures, mathematicians generally react by celebrating them and showering them with fame, job offers and gushing articles in Quanta.


Maths may not have a replication crisis like some other areas, but when I go to maths events, it seems widely agreed that there are far too many papers with incorrect theorems; it's just that no one cares about those papers, so it doesn't matter.

It turns out to be very, very common (as discussed in the linked article) that when someone really carefully reads old papers, the proofs turn out to be wrong. They are often fixable, but the point of the paper was to prove the result, not just state it. What tends to save these papers is that enough extra results have been built on top of them, and (usually), if there had been an issue, it would have showed up as an inconsistency in one of the later results.

The trunk is (probably) solid, but there are a lot of rotten leaves, and even the odd branch.


> no house of cards

As I understand TFA, from a formalist’s perspective, this is not necessarily the case. People were building on swathes of mathematics that seem proven and make intuitive sense, but needed formal buttressing.

> _actually experts_ at a deep and rigorous technical field

Seeing as the person you’re addressing was a mathematics graduate student, I’m sure they know this.


Yep. Here's an easy-looking one, that lasted just under 2 centuries (quoting Wikipedia) -

> In number theory, Euler's conjecture is a disproved conjecture related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. It states that for all integers n and k greater than 1, if the sum of n many kth powers of positive integers is itself a kth power, then n is greater than or equal to k...

> ...

> Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for k = 5.[3] This was published in a paper comprising just two sentences.[3]

> [3] - Lander, L. J.; Parkin, T. R. (1966). "Counterexample to Euler's conjecture on sums of like powers". Bull. Amer. Math. Soc. ...
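The Lander-Parkin counterexample is 27^5 + 84^5 + 110^5 + 133^5 = 144^5: four fifth powers summing to a fifth power, where Euler's conjecture demands at least five. The 1966 discovery took a supercomputer search; verifying it today is a one-liner:

```python
# Lander & Parkin's 1966 counterexample to Euler's conjecture:
# four fifth powers (n = 4) summing to a fifth power (k = 5),
# contradicting the claimed requirement n >= k.
terms = [27, 84, 110, 133]
total = sum(t ** 5 for t in terms)

assert total == 144 ** 5  # both sides equal 61917364224
```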


What exactly are you saying this is an example of?

It's certainly not something that people believed and built stuff on the basis of; it was never regarded as anything more than a conjecture and I would be a little surprised if even one paper was published that took the conjecture as a hypothesis, even explicitly (i.e., "We show that if Euler's conjecture is true then ...").

It's also not, so far as I know, a case where anyone reacted with defensiveness, horror, insecurity, etc., when a counterexample was found. They published a paper in a reputable journal. They don't seem to have had much trouble getting it published, if they discovered the counterexample in 1966 and the paper was published in a 1966 issue of said journal.

So if you're suggesting that this is a case where "people were building on swathes of mathematics that seem proven and make intuitive sense, but needed formal buttressing", I'd like to see some evidence. Same if you're suggesting that this is a case where "so-called experts had invested far, far too many of their man-years in that unproven conjecture" and there'd be a hostile reaction to a counterexample.

On the other hand, if you're not suggesting either of those things, I'm not sure what the connection to the rest of the discussion is.


> What exactly are you saying this is an example of?

A prominent conjecture in number theory, taken quite seriously for centuries, but which was quickly and rather easily disproven once computers became powerful enough.

No, it is not an exact analogy for Fermat, nor BSD, nor Riemann, nor ...

My initial point of interest was u/boothby's comment - why the heck would a room full of experts (presumably noteworthy math professors) become so angry at some grad student's comment? Then u/baruz's comment, about things which "seem proven and make intuitive sense, but needed formal buttressing". On occasion, "seemed" and "intuition" prove to be wrong, and Euler's conjecture was a pretty good example of that.


They didn't become angry; they became excited.

And a famous conjecture is, by definition, something whose truth all the experts know is UNKNOWN (even in cases where most experts believe it's true).


> Seeing as the person you’re addressing was a mathematics graduate student, I’m sure they know this.

The OP (u/boothby) was not the person I was addressing (u/bell-cot).


Does this not imply that /u/bell-cot had been a graduate student in mathematics?

> If I could give my many-decades-ago younger self some advice for math grad school


It's HN bread-and-butter to insist that all experts are wrong, and it must be because BIG <random field> is just protecting the sweet sweet lower middle-class living of being a tenured professor or something.


The worst part of middle-class life in the US is its precariousness. A guaranteed lower middle-class living for life is a pearl indeed.



