New chips, batteries, and solar devices depend on finding inorganic crystals that stay intact under real conditions. A large team at Google DeepMind and collaborators published GNoME (Graph Networks for Materials Exploration) to scale AI materials discovery using graph neural networks and large-scale density functional theory (DFT). DFT is a standard physics-based way to estimate atomic energies; here it scores candidate crystal layouts before lab work. The work adds a long list of candidate stable inorganic crystals for other groups to test using their own lab criteria.
What the study added to the stable-crystal list
Google’s public summary of the project says that about 20,000 crystals experimentally identified in the ICSD (Inorganic Crystal Structure Database) are computationally stable. That summary is outreach material, not the journal paper.
The Nature paper presents GNoME as a way to narrow candidate lists so labs can screen options before spending months on trial-and-error synthesis. By mid-2023, merged computational databases held about 48,000 stable crystals. GNoME found more than 2.2 million structures stable relative to those earlier datasets. After updating the convex hull (the energy criterion that marks which crystal arrangements count as stable compared with alternatives built from the same elements, given the databases used), 381,000 of these were new stable entries, bringing the total to nearly 421,000: an almost order-of-magnitude increase in known stable crystals.
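The convex-hull idea can be sketched in a few lines. The toy example below is not GNoME's code and the phase data are invented: it builds the lower hull of formation energies for a binary A-B system and measures how far a candidate sits above it. A non-positive distance means the candidate would join the hull as a new stable entry.

```python
# Toy "energy above hull" check for a binary A-B system (illustrative
# only; the phases below are invented). Each phase is (x, E_f): the
# fraction of element B and the formation energy per atom in eV. A phase
# is stable if it lies on the lower convex hull of these points.

def lower_hull(points):
    """Lower convex hull of (x, energy) points, left to right."""
    hull = []
    for p in sorted(points):
        # Drop previous points that would make the hull bend upward.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def energy_above_hull(candidate, known_phases):
    """Vertical distance (eV/atom) from a candidate to the current hull."""
    x, e = candidate
    hull = lower_hull(known_phases)
    for (x1, e1), (x2, e2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            e_hull = e1 + (e2 - e1) * (x - x1) / (x2 - x1)
            return e - e_hull
    raise ValueError("candidate composition outside the hull range")

# Elemental endpoints (E_f = 0) plus one known stable compound at x = 0.5.
phases = [(0.0, 0.0), (0.5, -0.40), (1.0, 0.0)]
print(energy_above_hull((0.25, -0.15), phases))  # positive: metastable
print(energy_above_hull((0.25, -0.25), phases))  # negative: would join the hull
```

Note that the answer depends on which phases are already in the database, which is why adding hundreds of thousands of new low-energy structures forces the hull itself to be recomputed.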
The same blog post compared the overall yield to nearly 800 years’ worth of knowledge. That line is a headline metaphor meant to convey scale to a general audience; it does not appear as a timed claim in the Nature study itself.
How the model builds candidates and checks them with physics
Crystal prediction is combinatorial. GNoME uses graph neural networks: each crystal is represented as a graph of atoms and their local connections, and the network predicts total energy to filter huge pools of guesses.
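As a rough sketch of that idea (plain Python, nothing like GNoME's actual architecture or learned weights), a message-passing model averages each atom's neighbors into its own state for a few rounds, then pools everything into one scalar energy:

```python
# Minimal message-passing sketch (illustrative only; GNoME's real networks
# are far deeper and trained on DFT energies). Each atom is a node with a
# feature vector; edges connect nearby atoms; a few rounds of neighbor
# averaging followed by a sum-pool give a scalar "energy" readout.

def predict_energy(features, edges, weights, rounds=2):
    """features: {atom_id: [float, ...]}, edges: list of (i, j) pairs,
    weights: per-dimension linear readout weights."""
    h = {i: list(v) for i, v in features.items()}
    neighbors = {i: [] for i in h}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for _ in range(rounds):
        new_h = {}
        for i, hi in h.items():
            msgs = [h[j] for j in neighbors[i]] or [hi]
            agg = [sum(col) / len(msgs) for col in zip(*msgs)]
            # Update: mix each node's own state with its neighbor average.
            new_h[i] = [0.5 * a + 0.5 * b for a, b in zip(hi, agg)]
        h = new_h
    # Sum-pool node states, then a linear readout to one scalar "energy".
    pooled = [sum(col) for col in zip(*h.values())]
    return sum(w * p for w, p in zip(weights, pooled))

# Toy "crystal": 3 atoms in a triangle, 2-dimensional features.
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
energy = predict_energy(feats, [(0, 1), (1, 2), (0, 2)], weights=[-0.3, -0.1])
```

The point of the sketch is the shape of the computation: per-atom states, neighbor exchange, and a pooled scalar that can be ranked cheaply across millions of candidate structures.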
Two paths appear in the paper. One structural path perturbs known crystal layouts, including symmetry-aware partial substitutions. The other compositional path starts from chemical formulas without a known structure and uses random structure search plus DFT to generate candidates.
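In spirit, the substitution idea in the structural path looks like the toy generator below. The similarity table and the full single-element swaps are invented for illustration; the paper's pipeline performs symmetry-aware partial substitutions and scores the results with the model.

```python
# Toy composition generator via element substitution (illustrative only;
# SIMILAR is a made-up lookup, not a real chemical-similarity table).

SIMILAR = {
    "Li": ["Na", "K"],   # hypothetical: swap one alkali metal for another
    "O":  ["S", "Se"],   # hypothetical: swap one chalcogen for another
}

def substitute(formula):
    """Yield new compositions by swapping one element for a similar one."""
    for elem, count in formula.items():
        for repl in SIMILAR.get(elem, []):
            new = dict(formula)
            del new[elem]
            new[repl] = new.get(repl, 0) + count
            yield new

candidates = list(substitute({"Li": 2, "O": 1}))
# Swapping Li or O once each yields Na2O-, K2O-, Li2S-, and Li2Se-like formulas.
```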
Promising guesses are not final answers. The team relaxes short-listed layouts with DFT in active-learning rounds, so new high-quality energy data feed the next training cycle. Before scaling, structural hit rates were under about 6% and compositional hit rates under about 3%. After the reported rounds, the structural hit rate rose above 80% and the composition-only pipeline reached 33%, with a final energy error near 11 milli-electronvolts per atom on relaxed cells. In addition, 736 of the model’s stable outputs matched structures independently realized in ICSD experiments.
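Stripped of all the real machinery, the loop structure looks something like this. The surrogate model, the "DFT" step, and the retraining below are stand-in stubs, not the paper's components; only the control flow mirrors the described workflow.

```python
# Schematic active-learning loop: cheap model screen -> expensive DFT
# check on a shortlist -> fold the results back into the training set.

def surrogate_energy(x, bias):
    """Stub model: predicted energy with some systematic error."""
    return x + bias

def dft_relax(x):
    """Stub for an expensive DFT relaxation returning a trusted energy."""
    return 0.9 * x

def active_learning(candidates, rounds=3, top_k=5):
    training_data = []   # (structure, DFT energy) pairs
    bias = 1.0           # stand-in for model error; shrinks as "training" improves
    pool = list(candidates)
    for _ in range(rounds):
        # 1. Cheap screen: rank the whole pool with the surrogate model.
        pool.sort(key=lambda x: surrogate_energy(x, bias))
        shortlist, pool = pool[:top_k], pool[top_k:]
        # 2. Expensive physics check on the shortlist only.
        training_data += [(x, dft_relax(x)) for x in shortlist]
        # 3. "Retrain": pretend the new data halves the model's error.
        bias *= 0.5
    return training_data

data = active_learning([x / 10 for x in range(-50, 50)])
```

The economics are the point: DFT is run only on the shortlist each round, and each round's trusted energies make the next round's cheap screen more reliable.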
The team also double-checked subsets with r²SCAN, a newer density-functional choice. Under r²SCAN, 84% of discovered binary and ternary materials retained negative phase-separation energies (the phase-separation energy is the energy gap toward breaking into competing simpler phases), and 86.8% of tested quaternaries remained stable on the r²SCAN convex hull. After application filters, the paper reports about 52,000 layered compounds and 528 promising lithium-ion conductor candidates.
Large-scale AI screening also shows up in battery-material search
Industrial groups describe separate workflows that pair AI with supercomputing to scan huge pools of candidates. Microsoft reports an AI plus HPC screen of more than 32 million battery-material candidates in one documented campaign. That effort is independent of GNoME but shows the same broad direction: narrow lists before running the slow lab steps.
Why a stable energy score is only part of the picture
Computational stability means a structure sits favorably against competing phases in the chosen DFT energy model. In “Scaling deep learning for materials discovery,” the authors flag open scientific limits such as polymorph competition, dynamic stability tied to lattice vibrations, entropy effects, and synthesizability. The same discussion stresses the gaps that remain before real applications and the need for careful expert follow-through on predictions.
Beyond that discussion, real supply chains add cost, manufacturability, safety rules, and device integration pressures that a stability score alone does not cover. Future databases can also grow enough to bump some “stable” entries off the hull when new competing phases appear.
Readers should treat model outputs as leads for experiment, not promises.
Autonomous hardware is a different layer from database screening. Over 17 days of operation, the A-Lab robotic platform realized 36 compounds from a set of 57 targets identified using Materials Project data together with a Google DeepMind stability check, a figure reported in the peer-reviewed companion paper. Robotic lab tools are spreading for repetitive solid-state work, but each formula still faces scale-up tests.
Wider questions about how much scientists must trust opaque models remain open; that theme sits outside the GNoME paper but belongs to broader AI-and-science debates.
Sources and related information
Nature – Scaling deep learning for materials discovery – 2023
The peer-reviewed report backs the quantitative claims on discovery counts, convex-hull growth, DFT workflow, ICSD matches, hit-rate improvements, application filters, and r²SCAN cross-checks, via “Scaling deep learning for materials discovery”.
Nature – An autonomous laboratory for the accelerated synthesis of inorganic materials – 2023
The companion paper documents the A-Lab autonomous run summarized as 36 compounds realized from 57 targets over 17 days using Materials Project data together with Google DeepMind stability screening, via “An autonomous laboratory for the accelerated synthesis of inorganic materials”.
Google DeepMind – Millions of new materials discovered with deep learning – 2023
The blog post summarizes GNoME for a general audience, states that about 20,000 ICSD crystals are computationally stable in their framing, and uses the nearly 800 years’ worth of knowledge metaphor for scale, through the DeepMind write-up on millions of new materials.
Microsoft – Unlocking a new era for scientific discovery with AI: How Microsoft’s AI screened over 32 million candidates to find a better battery – 2024
The company describes a separate AI and HPC workload that screened more than 32 million battery-material candidates, via Microsoft’s Azure Quantum blog article on that screening campaign.

