A new framework called DOME (Data, Optimization, Model, Evaluation) is bringing standardized transparency to machine learning research by requiring scientists to report 20 essential methodological details, much like a nutrition label for food products. Developed by the ELIXIR Machine Learning Focus Group between October 2019 and September 2025, DOME addresses a critical problem plaguing AI research: many published papers lack sufficient detail about how their models were built, trained, and tested, making it nearly impossible for other scientists to reproduce or build upon the work.

## Why Is AI Research Reporting So Broken Right Now?

Mentions of "machine learning" in papers archived in Europe PMC skyrocketed between 2000 and 2024, reflecting the field's explosive growth. This rapid expansion, however, has created a serious problem: there is no standardized way to report AI and machine learning methodologies across publishers and journals.

As a result, critical information often goes missing from published papers. Researchers frequently fail to disclose essential details about their work, including how datasets were curated, whether data leaked between training and testing sets, whether the model's software is available for others to use, and how performance was fairly evaluated. Without this information, peer reviewers struggle to assess the quality of the research, and other scientists cannot reproduce the findings or confidently build upon them. It is as if someone published a recipe for a complex dish but forgot to list the ingredients, the cooking temperatures, or how long to bake it.

## What Exactly Does DOME Require Scientists to Report?

The DOME Recommendations function as a standardized checklist, breaking down machine learning methodologies into four transparent pillars.
Each pillar contains specific reporting requirements that researchers must address:

- Data: How datasets were curated, selected, and split between training and testing phases to prevent data leakage
- Optimization: How models were tuned and what hyperparameters were adjusted during development
- Model: Which specific architectures and algorithms were used in the research
- Evaluation: How performance was assessed and whether evaluation methods were fair and appropriate

Together, these 20 reporting items create what the ELIXIR team calls a "nutrition label" for machine learning research. Just as consumers can compare two cereal boxes by checking fiber content or added sugars, researchers and peer reviewers can now compare two machine learning papers by examining their DOME-related information to assess transparency, reproducibility, and overall methodological robustness.

## How Does the DOME Registry Actually Work in Practice?

If the DOME Recommendations are the ingredients list, the DOME Registry is the recipe book and food critic combined. The registry is an open platform where researchers can submit and curate their AI and machine learning methodologies according to how transparently they report the essential details identified by DOME. After curation, each methodology receives an overall compliance score that allows researchers, peer reviewers, and journal editors to quickly assess whether a submission adheres to best-practice recommendations. The registry has already been integrated into the publishing workflows of the GigaScience and GigaByte journals, and it is now open for adoption by additional publishers across the life sciences.
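The compliance score just described can be pictured as the fraction of checklist items a submission actually answers. The sketch below is purely illustrative and is not the registry's real data model or scoring algorithm: the pillar item names, `DOME_PILLARS`, and `compliance_score` are all hypothetical stand-ins for the actual 20 DOME items.

```python
# Hypothetical sketch: a DOME-style annotation as checklist answers grouped
# by the four pillars, scored as the fraction of items reported.
# Item names and the scoring rule are illustrative assumptions, not the
# DOME Registry's actual data model or algorithm.

DOME_PILLARS = {
    "data": ["provenance", "splits", "redundancy_between_splits"],
    "optimization": ["algorithm", "hyperparameters", "overfitting_prevention"],
    "model": ["interpretability", "output_type", "availability"],
    "evaluation": ["method", "metrics", "comparison_to_baselines"],
}

def compliance_score(annotation: dict) -> float:
    """Fraction of checklist items with a non-empty answer."""
    total = sum(len(items) for items in DOME_PILLARS.values())
    answered = sum(
        1
        for pillar, items in DOME_PILLARS.items()
        for item in items
        if annotation.get(pillar, {}).get(item)
    )
    return answered / total

example = {
    "data": {"provenance": "UniProt release 2024_01", "splits": "80/20"},
    "model": {"availability": "https://example.org/code"},
}
print(compliance_score(example))  # 3 of 12 illustrative items answered -> 0.25
```

A single number like this is what lets editors and reviewers compare submissions at a glance, in the same way a shopper compares sugar content across cereal boxes.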
The system works differently for different stakeholders in the research ecosystem:

- Authors: Can annotate and submit their AI and machine learning methodologies through the DOME Registry during manuscript submission, generating a unique identifier that acts as a compliance report and quality seal for their work
- Researchers: Can browse the registry as a curated cookbook to discover well-documented models, compare recipes based on compliance scores, and select robust methods for their own experiments
- Journal Editors: Can integrate the registry into their submission workflows to verify that methodological reporting meets their standards, streamlining peer review and reducing the burden on reviewers
- Registry Staff: Maintain the infrastructure, validate community annotations, and ensure that reporting criteria remain current with the rapidly evolving AI landscape

## Steps to Implement DOME in Your Research Workflow

For researchers looking to adopt DOME standards in their own work, the framework provides a clear pathway to more transparent and reproducible science:

1. Document Your Data Practices: Clearly describe how you collected, curated, and split your datasets, explicitly noting any steps taken to prevent data leakage between training and testing phases
2. Detail Your Optimization Process: Record all hyperparameters you tested and the tuning methods you used, and explain why you selected your final model configuration
3. Specify Your Model Architecture: Name the exact algorithms and architectures you employed, including any modifications you made to standard approaches
4. Explain Your Evaluation Methods: Describe how you assessed performance, what metrics you used, and whether your evaluation approach was fair and appropriate for your research question
5. Register in DOME: Submit your methodology to the DOME Registry during manuscript preparation to receive a compliance score and quality seal for your publication

## What's Next for DOME and AI Research Standards?
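The leakage-prevention advice in the workflow above often comes down to splitting by groups of related records (for example, sequences from the same similarity cluster) rather than by individual rows. Below is a minimal, generic sketch of such a group-aware split; it is not DOME or registry code, and the function name, cluster labels, and 20% test fraction are assumptions for illustration.

```python
# Generic sketch of leakage-aware data splitting: related records share a
# group label and must land entirely in train or entirely in test.
# Not DOME code; names, labels, and ratios are illustrative assumptions.
import random

def group_split(records, groups, test_fraction=0.2, seed=0):
    """Split records so no group appears in both train and test."""
    unique_groups = sorted(set(groups))
    rng = random.Random(seed)
    rng.shuffle(unique_groups)
    n_test = max(1, int(len(unique_groups) * test_fraction))
    test_groups = set(unique_groups[:n_test])
    train, test = [], []
    for record, group in zip(records, groups):
        (test if group in test_groups else train).append(record)
    return train, test

records = ["seqA", "seqB", "seqC", "seqD", "seqE", "seqF"]
groups  = ["c1",   "c1",   "c2",   "c2",   "c3",   "c3"]
train, test = group_split(records, groups)
# Every cluster ends up entirely in train or entirely in test.
```

Splitting by group rather than by row is what stops near-duplicate records from appearing on both sides of the split, which is exactly the leakage DOME asks authors to rule out and document.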
The ELIXIR team is preparing to release a comprehensive DOME Registry roadmap guided by the service's external Scientific Advisory Board. Future development includes several ambitious goals designed to make the system more powerful and more widely adopted:

- Standardization through ontologies and reduced free text, making DOME's data machine-readable and FAIR (findable, accessible, interoperable, and reusable)
- Adoption of emerging standards such as Croissant and FAIR4ML to ensure interoperability for cross-resource asset exchange
- Enhanced discoverability through bidirectional connections between research outputs archived in Europe PMC and methodologies stored in the DOME Registry
- Scaling through pilot projects with the AI4EOSC team based at the Consejo Superior de Investigaciones Científicas in Spain

The broader significance of DOME extends beyond improving individual papers. By establishing clear, standardized expectations for how machine learning research should be reported, DOME addresses a fundamental challenge in modern AI research: the reproducibility crisis. When methodologies are poorly documented, other scientists can scarcely verify findings, identify errors, or build confidently on previous work, and the entire scientific process suffers. DOME aims to turn AI and machine learning from a field where crucial details are often hidden or unclear into one where transparency and reproducibility are the default expectation, much as nutrition labels transformed food purchasing by making ingredient information universally available and comparable.