
AI decodes the murmuration phenomenon, an extension of the classical prime bias, and vets it with the Birch test. See how number theorists use large language models to find deep L-function patterns.
AI Uncovers the Murmuration Phenomenon: A New Frontier in Number Theory
Published by Brav
TL;DR
- I learned how AI is now spotting a new “memory-like” bias in L-functions.
- The murmuration phenomenon extends Chebyshev’s prime bias to families of L-functions.
- A new Birch test evaluates AI-generated conjectures on automaticity, interpretability, and non-triviality.
- The first successful I and N passes came in 2023, marking a landmark for AI-guided math.
- Practical steps: gather data, run a fine-tuned language model, verify with the Birch test, formalize in Lean, and share results at ICERM or Bristol.
Why this matters
I used to spend months poring over tables of primes, looking for a pattern that might hint at a deeper theory. The frustration was twofold. First, even after the AI spotted a pattern, I still had to wrestle with the mathematics to decide whether it was a real signal or a random fluctuation. Second, the AI’s “black box” offered little insight into why the pattern emerged. In advanced research, that was a bottleneck.
The murmuration phenomenon changes the game. It promises a systematic way to capture biases hidden in the data and, crucially, comes with a test, called the Birch test, that forces the AI to produce results humans can interpret and verify. The community now has a shared language for talking about AI-driven conjectures, and workshops at ICERM and in Bristol are already dedicated to it.
Core concepts
| Term | What it means | Why it matters |
|---|---|---|
| Chebyshev bias | A tendency for primes ≡ 3 (mod 4) to outnumber primes ≡ 1 (mod 4) for most x | The first observed bias that hinted at deeper arithmetic structure |
| Murmuration phenomenon | A generalization of prime-bias phenomena to families of L-functions | It predicts a “murmuration” effect in which the sign of an L-function’s root number influences prime distributions |
| Birch test | A 3-part test (A-test, I-test, N-test) for AI-generated conjectures | It filters out spurious patterns and rewards interpretability and novelty |
| Dirichlet characters | Arithmetic functions that encode residue classes | They provide a natural playground for the Memoration phenomenon |
| Modular forms | Complex analytic functions with deep number-theoretic significance | Their Fourier coefficients reveal the murmuration effect |
The murmuration conjecture says that, for families of L-functions in the Langlands program, averages of their coefficients should oscillate in a way that mirrors the bias seen in prime distributions. The conjecture emerged from AI-guided exploration: data-driven experiments that pushed the limits of machine learning in pure math.
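The Chebyshev bias that started all of this is easy to reproduce numerically. The sketch below is my own illustration (plain trial division, so keep the limit modest; the function names are made up): it counts odd primes up to a bound in each residue class mod 4.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for the small ranges used here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def chebyshev_counts(limit: int) -> tuple[int, int]:
    """Count odd primes p <= limit with p ≡ 3 (mod 4) vs p ≡ 1 (mod 4)."""
    three = one = 0
    for p in range(3, limit + 1, 2):
        if is_prime(p):
            if p % 4 == 3:
                three += 1
            else:  # odd primes are ≡ 1 or 3 (mod 4)
                one += 1
    return three, one

t, o = chebyshev_counts(10_000)
print(f"p ≡ 3 (mod 4): {t}   p ≡ 1 (mod 4): {o}")
```

Up to 10⁴ the 3 (mod 4) class is ahead, though Littlewood showed the lead changes infinitely often, which is why the bias is a statement about “most x” rather than all x.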
How to apply it
- Collect a massive dataset – Pull the coefficients of the L-functions you’re interested in (e.g., Dirichlet characters, modular form Fourier coefficients).
- Fine-tune a language model – Use a large-language model (LLM) pre-trained on mathematical literature and further trained on your dataset.
- Run the Birch test –
- A-test: Was the conjecture generated automatically, with minimal human steering?
- I-test: Is the conjecture interpretable?
- N-test: Does it provide new, non-trivial insight? The first AI system to pass the I- and N-tests did so in 2023, a landmark for AI-guided math (AMS Notices — Birch Test Strictness, 2025).
- Formalize in Lean – Convert the AI’s reasoning into a machine-checked Lean proof. The Lean community has millions of lines of formalized mathematics, but billions are needed for fully automated formalization (Oliver et al. — Murmurations of Dirichlet Characters, 2023).
- Peer review – Submit your results to a workshop at ICERM or a conference in Bristol. Peer review is the final sanity check; AI can’t skip it.
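Step 1 can start small. The sketch below is my own illustration, not the pipeline from the papers: it computes the family average of quadratic characters χ_d(p) = (d/p), the raw statistic behind murmuration plots for Dirichlet characters. The helper names `jacobi` and `family_average` are hypothetical, and real experiments restrict d to fundamental discriminants and normalize by conductor.

```python
def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0; equals the Legendre symbol when n is prime."""
    if n <= 0 or n % 2 == 0:
        raise ValueError("n must be a positive odd integer")
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:      # strip factors of 2 via the second supplement
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a            # quadratic reciprocity for the Jacobi symbol
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # gcd(a, n) > 1 gives 0

def family_average(p: int, discs: list[int]) -> float:
    """Average of chi_d(p) = (d/p) over a family of discriminants d coprime to p."""
    vals = [jacobi(d, p) for d in discs if d % p != 0]
    return sum(vals) / len(vals)

avg = family_average(7, list(range(1, 71)))
print(avg)  # full residue blocks mod 7 cancel exactly
```

In actual murmuration experiments the average is taken over a family of comparable conductor and plotted against the ratio of p to the conductor, which is where the characteristic oscillation appears.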
A 2025 arXiv preprint on Maass forms extends the murmurations to a new family of L-functions ArXiv — Murmurations of Maass Forms (2025).
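The Lean step need not begin with a full murmuration statement. A toy Lean 4 lemma (my own illustration; the theorem name is made up, and `omega` is Lean’s built-in linear-arithmetic tactic) shows the flavor of a machine-checked mod-4 fact:

```lean
-- Toy lemma: any natural number ≡ 3 (mod 4) is odd.
theorem mod_four_three_is_odd (n : Nat) (h : n % 4 = 3) : n % 2 = 1 := by
  omega
```

Scaling from statements like this to full analytic number theory is exactly the gap the formalization step has to close.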
Metrics that matter
| Metric | Target | Rationale |
|---|---|---|
| AI success rate | 2 % for advanced problems | Reflects the current frontier; higher rates signal progress |
| Lean proof length | < 200 lines | Shows efficiency and clarity |
| Funding per project | $50 k+ | Ensures sustainability for long-term research |
Pitfalls & edge cases
- No unconditional proof of Chebyshev’s bias – The bias is established only under the generalized Riemann hypothesis together with a linear-independence hypothesis on the zeros (Rubinstein & Sarnak — Chebyshev’s Bias, 1994).
- AI fails the A-test – Many models stumble here because they lack a human-readable narrative.
- Generalization uncertainty – The murmuration phenomenon might not hold for every family of L-functions, especially beyond the low-degree cases studied so far.
- Funding gaps – Outside targeted programs such as DARPA’s expMath, public funding for AI-guided pure mathematics remains scarce.
- Ethical concerns – As AI gains autonomy, questions about attribution and responsibility surface.
Quick FAQ
What is the murmuration phenomenon? It’s a pattern-spotting result that extends Chebyshev’s prime bias to families of L-functions in the Langlands program, showing that the sign of an L-function’s root number influences the distribution of its coefficients at primes.
How does the Birch test work? The Birch test evaluates AI conjectures on three axes: A (automaticity), I (interpretability), and N (non-triviality). Passing I and N is a milestone.
Why is the LLM needed? LLMs can sift through billions of data points, spot subtle oscillations, and formulate conjectures that would be too laborious for a human.
Can this be automated end-to-end? Full automation is still a goal; current pipelines require human oversight for the Birch test and Lean formalization.
What are the next steps for researchers? Attend ICERM workshops, submit AI-generated conjectures for review, and collaborate with Lean developers to streamline formalization.
Conclusion
I’m standing on a new horizon. AI has not only found a fresh bias in the world of L-functions; it has also given us a test, the Birch test, that lets us ask the same question we would ask a human: “Is this a meaningful pattern?” The road ahead is clear: gather more data, refine your LLM, and run the Birch test. If you’re a computational number theorist or an AI researcher, this is your frontier. If you’re a funding body, now is the time to back the next wave of AI-guided discovery.



