[HN Gopher] All Mersenne prime exponents below 100m tested at le...
___________________________________________________________________

All Mersenne prime exponents below 100m tested at least once

Author : potiuper
Score : 57 points
Date : 2020-12-04 19:38 UTC (3 hours ago)

(HTM) web link (www.mersenne.org)
(TXT) w3m dump (www.mersenne.org)

| schoen wrote:
| When I was at EFF, I helped run the Cooperative Computing Awards
| project (https://www.eff.org/awards/coop/), which was funded
| decades ago by an individual donor and provides cash rewards for
| finding prime numbers of certain sizes.
|
| Because of EFF's advocacy for cryptography, people always thought
| that there was a more direct connection between the Mersenne
| prime search and cryptographic techniques, but the connection is
| at best distant and tenuous. The record primes that GIMPS finds
| and that can receive EFF's awards are
|
| * much (much much) bigger than the primes that we use for
| cryptography
|
| * too big to do almost any kind of practical computation with
|
| * not secret, so couldn't be used as part of a secret key
|
| The distribution or explicit values of Mersenne primes (and,
| correspondingly, perfect numbers) also don't really help in our
| assessment of the difficulty of the RSA problem or the security
| of public-key cryptography, because they aren't closely related
| to any of the fastest factorization algorithms that apply to
| arbitrary numbers.
|
| (Mersenne primes are much easier to deterministically test for
| primality than numbers in general, which is why the record primes
| that GIMPS finds are Mersenne primes. The runtime complexity of
| primality-testing algorithms for special-form numbers is
| sometimes much lower than that of algorithms for arbitrary
| numbers, and this is a great example.)
|
| I think the main contribution of projects like GIMPS to
| cryptography and computer security lies in getting people more
| interested in number theory and mathematics in general. Hopefully
| that will lead to faster improvements in humanity's knowledge of
| mathematics in ways that lead to more secure public-key
| algorithms or to better public assessments of the algorithms that
| we have now. (Although a lot of the important current research on
| public-key cryptography is not about factorization but rather
| about elliptic-curve systems and post-quantum systems.)
|
| I always felt I had a hard time explaining to people that the
| awards existed in order to publicize _the idea of cooperative
| distributed computing projects_, which were quite novel when
| EFF's awards were first created, and which were a demonstration
| of the power of the Internet to help people who've never even met
| each other work together. (The awards, contrary to the hopes and
| dreams of math enthusiasts and cranks around the world, didn't
| anticipate progress through new _research_ or _ideas_ so much as
| through more _volunteer computer time_.)
|
| Probably today we have ample other demonstrations of that power,
| and that concept is no longer in much doubt, and the GIMPS
| project probably no longer even registers for most people as a
| particularly impressive example. But it is impressive in
| continuing to set multiple world records on one of the simplest,
| purest, most basic mathematical problems.
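The special-form test schoen alludes to is the Lucas-Lehmer test:
for an odd prime p, 2^p - 1 is prime exactly when the sequence
s_0 = 4, s_(k+1) = s_k^2 - 2 ends with s_(p-2) = 0 (mod 2^p - 1).
A minimal Python sketch (GIMPS itself uses heavily optimized
FFT-based multiplication rather than generic bignums like this):

    def lucas_lehmer(p):
        """True iff 2**p - 1 is prime, for an odd prime exponent p."""
        m = (1 << p) - 1        # the Mersenne number under test
        s = 4                   # s_0 = 4; s_{k+1} = s_k**2 - 2 (mod m)
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # Recovers the small Mersenne prime exponents:
    print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
    # -> [3, 5, 7, 13, 17, 19, 31]

Each step is a single squaring mod 2^p - 1, so the whole test costs
about p multiplications of p-bit numbers, which is why a
deterministic verdict on a 100-million-bit candidate is feasible at
all.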
| Blahah wrote:
| > not secret
|
| Clearly, no smaller primes are secret either, nor are they so
| numerous that iterating over them is computationally difficult.
| And mathematically, anyone can discover primes. Why does it
| matter whether primes are secret?
| IvyMike wrote:
| > ...nor are they so numerous that iterating over them is
| computationally difficult
|
| That's not correct; for a given bound x, the expected number
| of primes below x is about x/ln(x).
|
| For example, if you need to choose a prime of 100 digits,
| there are something like 10^96 of them; you can find a random
| one using probabilistic methods, do your cryptography, and
| know that your attacker would have to try 10^96 primes to
| break it. Since they can't really do this, this is not the
| weak point of your cryptosystem.
|
| If you chose a Mersenne prime, there are only 51 of them.
| Your attacker would have to try fewer than 10^2 of them.
| centimeter wrote:
| Some cryptosystems like RSA are based on secret primes. The
| density of primes near n is about 1/log(n), so there are
| plenty of primes to choose from.
|
| Some cryptosystems like ECDSA are based on secret elements
| selected from a field, and the primes there are public.
| Blahah wrote:
| Sorry, just to be clear, they are obscure, not secret.
| Anyone can compute primes. No prime is secret.
| grifball wrote:
| I guess I don't fully understand this, but RSA works so
| there MUST be "secret" primes (as in, they can be
| randomly generated like a symmetric key can).
| ifdefdebug wrote:
| Like in: anyone can combine characters. No password is
| secret?
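For concreteness, the probabilistic method IvyMike describes above
is typically rejection sampling with a Miller-Rabin test. A rough
Python sketch, assuming all we want is some 100-digit probable
prime (production implementations add trial-division sieving and
stricter parameter checks):

    import random

    def is_probable_prime(n, rounds=40):
        """Miller-Rabin: False for composites; True for primes, with
        error probability at most 4**-rounds for a composite n."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d, r = d // 2, r + 1      # n - 1 = d * 2**r with d odd
        for _ in range(rounds):
            x = pow(random.randrange(2, n - 1), d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False          # found a witness: n is composite
        return True

    def random_prime(digits=100):
        """Sample random odd candidates of the given size until one passes."""
        while True:
            n = random.randrange(10**(digits - 1), 10**digits) | 1
            if is_probable_prime(n):
                return n

By IvyMike's x/ln(x) estimate, roughly one in ln(10^100), or about
230, numbers of that size is prime, so the loop only needs a few
hundred candidates before it finds one.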
| blintz wrote:
| > too big to do almost any kind of practical computation with
|
| There are definitely possible (though perhaps not immediate)
| applications for primes of this size. For example, in 2009,
| researchers were awarded $100k for the discovery of the
| Mersenne prime 2^43112609-1.
|
| This number, while too large for RSA or ECC, could very well be
| useful for (R)LWE schemes (used to make post-quantum secure
| public-key cryptography and partially/fully homomorphic
| encryption schemes). In particular, it is free to modularly
| reduce by this prime (take the bottom 43112609 bits), so
| schemes that rely on doing large matrix multiplication on
| modular elements (or even elements in a polynomial ring) would
| benefit greatly.
|
| Since 43112609 bits is only 5.3MB, this is not at all a
| ridiculous size - multiplying numbers of 2^26 bits is feasible
| in well under 1 second on modern hardware, and so making
| accumulation and modular reduction free is actually valuable.
| mlyle wrote:
| > In particular, it is free to modularly reduce by this prime
| > (take the bottom 43112609 bits)
|
| Am I missing something? 2^3 - 1 is a Mersenne prime, but
| taking a number mod 7 is not taking the 3 low bits (there's a
| relatively low-cost trick to take numbers modulo b-1, the
| same "divide by 9" tricks we can do in base 10... but it's
| not as easy as looking at the last digit).
| [deleted]
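mlyle's correction is right: reduction mod 2^p - 1 is a fold, not a
truncation. Since 2^p = 1 (mod 2^p - 1), the bits above position p
can be added back into the low p bits, repeating until the value
fits; that is still far cheaper than a general division, which is
the property blintz is pointing at. A small illustrative sketch:

    def mod_mersenne(x, p):
        """Reduce nonnegative x mod 2**p - 1 by folding high bits in."""
        m = (1 << p) - 1
        while x > m:
            x = (x >> p) + (x & m)   # 2**p = 1 (mod m): high part adds in
        return 0 if x == m else x    # m itself is congruent to 0

    # mlyle's example modulus: 2**3 - 1 = 7
    assert mod_mersenne(20, 3) == 20 % 7    # 0b10100 -> 0b10 + 0b100 = 6
    assert all(mod_mersenne(x, 3) == x % 7 for x in range(1000))

This is the binary analogue of the casting-out-nines trick mlyle
mentions: digit sums work mod 9 because 10 = 1 (mod 9).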
| lkbm wrote:
| Looking at the recent history:
|
|   2020-11-27  All exponents below 99 million tested at least once.
|   2020-11-17  All exponents below 98 million tested at least once.
|   2020-11-17  All exponents below 97 million tested at least once.
|   2020-11-09  All exponents below 96 million tested at least once.
|   2020-11-03  All exponents below 95 million tested at least once.
|   2020-10-24  All exponents below 94 million tested at least once.
|   2020-10-09  All exponents below 93 million tested at least once.
|   2020-10-04  All exponents below 92 million tested at least once.
|   2020-06-20  All exponents below 91 million tested at least once.
|   2020-04-09  All exponents below 90 million tested at least once.
|   2020-03-01  All exponents below 89 million tested at least once.
|
| What happened in June/July that caused it to stop, and in October
| to make it suddenly restart? Did they just switch from sequential
| testing to a grouping that checked all but a small portion of
| each million, leaving those undone until November?
|
|   September 17, 2020  All tests below 53 million verified.
|
| Makes me think part of it is them spending extra time verifying
| old tests. Just curious 1. if that's all it is and 2. why /
| whether this was triggered by something. Was there a bug found
| that raised doubt in previous tests?
|
| _EDIT: Formatting_
| barkingcat wrote:
| I'm only tangentially related to the project, but I do have a
| spare machine running the software.
|
| Yes, there was a big push to re-verify old abandoned tests.
| First, understand that the project requires multiple test
| results for each exponent - I believe it needs a triplicate
| confirmation (ie 3 different computers have to run the same
| test and all have to match). At some point, they developed a
| new, different algorithm that can do the check in one go (but
| it's more expensive computationally). However, most of the
| pre-existing work was done using the type of calculation that
| needed the triplicate check.
|
| There were a lot of exponents where some computer claimed one
| of the "verification tests" (ie it was already rejected as a
| prime, but we needed 2 more computers to verify), but then
| stopped midway (that's pretty common, since these primality
| tests take about 3-5 days on a modern fast Threadripper - so it
| could take up to a month on slower machines from years ago).
|
| It's very unlikely that these exponents signified primes (since
| they were already verified to NOT be prime by 1 computer, and
| it's just the rechecks that are pending). These abandoned
| exponents formed a huge amount of "leftovers" that people
| didn't want as work units because they are most likely not
| prime, and if you were hunting for primes, these are not the
| work you want to do.
|
| So at some point, the project said they wanted a big push into
| picking up abandoned re-verifications and giving them a much
| higher priority on computers that could complete the test in a
| few days [either by finishing off what was left of the
| triplicate check, or by using the new algorithm that just
| needed one pass] - thus getting them out of the queue once and
| for all. If you have a fast computer (ie a modern-day i9,
| Threadripper, Xeon, anything with AVX512, etc.) chances are
| you were assigned this type of rechecking task unless you
| rejected the work units.
|
| It's not just the gap you noticed in June; this whole project
| of looking for and verifying old exponents has been ongoing for
| the bulk of 2020.
|
| Another idea is that the project is always trying out new
| numerical techniques and algorithms and always looking for new
| instruction sets to use, etc. - a change or tweak in algorithm
| could cause this kind of "pause" as well. The client computers
| don't make the shift all at once (there's likely some
| verification, where a small subset of clients run both
| algorithms in parallel to see if the results are the same, etc.)
| and then machines are moved onto the new algorithm as
| verifications are completed.
|
| EDIT: As I wrote this post, I started remembering more about
| the different algorithms the project used. All of the text
| above can be checked against the prime95 forums, where they
| discuss these algorithm changes, the push to re-verify, etc.
| SethTro wrote:
| Ben Delo joined the project and has been running several
| hundred AWS cores. [1]
|
| Slightly related: a new method to validate the result was
| found, which means that they no longer need to run a 2nd
| double-check run to catch floating-point rounding errors. [2]
| Interestingly, this credits an HN thread as the source of
| inspiration. [3]
|
| [1] https://www.mersenneforum.org/showthread.php?t=24819&highlig...
| [2] https://mersenneforum.org/showthread.php?p=546560
| [3] https://news.ycombinator.com/item?id=20587753
| [deleted]
| stillbourne wrote:
| I think it got hugged to death.
| imglorp wrote:
| https://webcache.googleusercontent.com/search?q=cache:https:...
|
| https://web.archive.org/web/*/https://www.mersenne.org/repor...
___________________________________________________________________
(page generated 2020-12-04 23:00 UTC)