Learning from the EU’s GPAI Code of Practice
Finalized in July 2025 by the European Commission, the General Purpose AI (GPAI) Code of Practice helps model providers align their operations and risk functions with the EU Artificial Intelligence Act. It sets out voluntary yet strategic guidelines for transparency, safety, and accountability, especially for advanced AI models, supporting the emergence of global AI governance standards.
Moody’s models are "specialist models" or "domain-specific models" and not general purpose in nature, meaning the AI Act applies to them but the GPAI Code of Practice does not. This paper explores the best practices emerging from the GPAI Code of Practice, which include:
- Rigorous pre-market risk assessments
- Transparency mindset from inception
- Continuous monitoring
- Robust internal governance
- Detailed record-keeping, maintaining ongoing accountability and rapid response to evolving risks
The Code identifies two risk levels:
- All GPAI models, for which transparency and copyright conduct are a focus: Providers must maintain comprehensive, standardized documentation on AI models — including technical details, data sources, and intended uses — and make that information available to regulators and users. This helps establish robust processes for compliance and continual updates. Operationally, providers should implement copyright policies to uphold lawful data sourcing and usage, including synthetic media.
- GPAI models posing systemic risk, for which, in addition to the above, robustness, reliability, and safeguards against misuse are critical: The Code sets standards for these far-reaching models around three focal points: logging, human oversight, and extensive data governance. Maintaining a comprehensive safety and security framework that identifies, assesses, mitigates, and transparently reports systemic risks throughout the model’s life cycle is key.
Seven must-know points:
- Definition: Signing and implementing the Code signals legal commitment and helps reduce administrative and legal uncertainty when complying with Articles 53 and 55 of the EU AI Act regarding the provision of detailed technical documentation on model architecture, integration requirements, design, training methods, and the provenance of training data.
- Scope: The EU AI Act defines GPAI models as those trained on large amounts of data using self-supervision at scale and displaying significant generality, with training compute above 10^23 floating point operations (FLOPs) serving as an indicative criterion.1
- Training disclosure: Maintaining summaries of training data and providing clear instructions for downstream users and regulators is established as a standard. Consistency and traceability across model documentation are crucial.
- Transparency: The model documentation form offers a consistent template for standardizing information on training, testing, and validation, including the number and nature of data points and a focus on:
- Data curation methodologies
- Measures to detect data source suitability
- Measures to detect identifiable biases
- Life cycle: Compliance with the Code of Practice begins at the model training stage. Significant downstream modifications trigger separate obligations. Regulatory sandboxes set up in each member state are effective compliance facilitators, giving providers a structured means to demonstrate alignment with safety and transparency requirements.
- Global reach: Non-EU providers offering AI in the EU must comply. The EU AI Office actively updates and enforces provisions. The Code is a key marker of model validation and governance regulation maturing worldwide, bringing an operational framework with which new models must comply by August 2, 2026, and existing models by August 2, 2027. High-risk sectors (including but not limited to healthcare and law enforcement) require early readiness and conformity assessments.
- CalCompute: Since September 2025, California’s Senate Bill (SB) 53 also offers alternative guidance through CalCompute, a proposed public cloud computing cluster intended to promote the development and deployment of safe, ethical, and sustainable AI. It is important to understand the nuances between the EU’s and California’s efforts in risk classification. Both apply a risk-based approach, but they differ in their definitions of high-risk models:
a. A GPAI model will always be classified as posing systemic risk when the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25.
b. An SB 53 foundation model trained using a quantity of computing power greater than 10^26 integer or floating-point operations is a frontier model, meaning it is both high performance and high risk.
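The two compute triggers above can be expressed as a simple comparison. The sketch below is illustrative only (function and constant names are our own, not from either regulation) and assumes cumulative training compute is already known:

```python
# Illustrative sketch of the two high-risk compute triggers described above.
# Names are hypothetical; the thresholds are those cited in the text.

EU_SYSTEMIC_RISK_FLOPS = 1e25   # EU presumption of systemic risk (10^25 FLOPs)
CA_FRONTIER_MODEL_OPS = 1e26    # California frontier-model threshold (10^26 ops)

def classify_training_compute(total_ops: float) -> dict:
    """Map cumulative training compute to each regime's high-risk label."""
    return {
        "eu_systemic_risk": total_ops > EU_SYSTEMIC_RISK_FLOPS,
        "ca_frontier_model": total_ops > CA_FRONTIER_MODEL_OPS,
    }

# A model trained with 5 * 10^25 operations crosses the EU threshold only:
print(classify_training_compute(5e25))
# {'eu_systemic_risk': True, 'ca_frontier_model': False}
```

The order-of-magnitude gap between the thresholds means a model can pose systemic risk in the EU while falling below California's frontier-model definition.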
Other AI standards, namely in Japan and China, mention risk and specialty categories, but neither country has yet adopted a two-tier "general-purpose, high-risk" regime comparable to the EU's or California's frontier-model regulations, although China is developing measures in this direction.
Strategic takeaways
By understanding these seven focal points, organizations can:
- Prepare for oversight and maintain compliance while fostering innovation
- Engage responsibly with providers on general-purpose AI models
- Derive AI governance good practices
- Minimize legal and operational risks in the EU on model validation and documentation
The GPAI Code promotes a “comply or explain” model, allowing flexibility in demonstrating responsible AI governance. This evolving framework encourages standards, certifications, and collaboration with national AI Safety Institutes and domain-specific guidance.
Per the AI Act, to protect our industries, AI solution providers must leverage the GPAI Code of Practice beyond general-purpose models to provide effective transparency, explainability, reporting, and oversight to their customers, all the way to the final user.
1 Floating point operations (FLOPs) measure the total computation used to train an AI system; FLOPS (per second) is a measure of computer performance. The high-risk training-compute threshold varies between EU (10^25 FLOPs) and US (10^26 FLOPs) regulations.
Learn more
Leverage AI for risk and compliance
For more information on how Moody’s can support your risk and compliance processes, including automated screening that leverages AI, please get in touch – we would love to hear from you.