Article 12 and the Logging Mandate: What the EU AI Act Actually Requires – FireTail Blog
When GDPR arrived, the organisations that had mistaken documentation for capability were the ones that struggled the most. They had policies about data retention but no technical controls enforcing those policies. They had breach notification procedures but no systems capable of detecting a breach in time to use them.
The EU AI Act is heading for a similar reckoning. And Article 12 is where most organisations will feel it first.
Article 12
High-risk AI systems shall technically allow for the automatic recording of events over the lifetime of the system.
Technically means the logging capability must be built into or applied to the system itself. A manual process for exporting logs, or a human who periodically reviews AI outputs and writes notes, does not satisfy this requirement.
Automatic means logs are generated without operator intervention at the moment events occur. Scheduled exports do not count. Human-triggered captures do not count.
Lifetime means from the moment a high-risk AI system is deployed until it is decommissioned. Not from the point at which you decided to start logging or your compliance program went live.
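To make "automatic" concrete: the capture has to happen inside the call path itself, at the moment the event occurs, not in a scheduled export afterwards. The sketch below shows one way to do that in Python with a decorator that emits a JSON audit record on every invocation; the function names, the `system_id` value, and the log file name are illustrative assumptions, not a prescribed implementation.

```python
import functools
import json
import logging
from datetime import datetime, timezone

# Dedicated audit logger: one JSON record per event, appended as it happens.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def audited(system_id):
    """Record every invocation of an AI system automatically, with no
    operator intervention -- the event is logged inside the call itself."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            audit_log.info(json.dumps({
                "system_id": system_id,
                "event": fn.__name__,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }))
            return result
        return wrapper
    return decorator

@audited("recruitment-screening-v2")  # hypothetical high-risk system
def score_candidate(profile):
    return {"score": 0.5}  # placeholder for the real model call
```

Because the record is written by the wrapper itself, there is no export job to schedule and no human step to forget: if the system ran, the log entry exists.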
Article 26(6) requires automatically generated logs to be retained for a minimum of six months. For biometric identification systems, additional specific data must be captured including precise usage periods, reference databases consulted, and the identities of individuals responsible for verifying results.
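The six-month minimum is simple to state but easy to violate silently when general IT retention policies purge logs early. A trivial check like the one below, run against the oldest retained record per system, makes the gap visible; the function name and the 183-day approximation of "six months" are assumptions for illustration.

```python
from datetime import date, timedelta

def retention_satisfied(oldest_log_date, today, minimum_days=183):
    """Return True if retained logs reach back at least the minimum window.

    Article 26(6) requires a minimum of six months, approximated here
    as 183 days; a stricter reading may be appropriate per system.
    """
    return (today - oldest_log_date) >= timedelta(days=minimum_days)
```

Run per system, not globally: one system with a 30-day rolling window is enough to fail an audit even if every other system retains logs for years.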
Who This Applies To
The first question many organisations ask is whether Article 12 applies to them. The answer, for most enterprises using AI in operational contexts, is yes.
Under Annex III of the Act, high-risk categories include AI systems that affect hiring, creditworthiness, access to essential services, healthcare, resource allocation, or fundamental rights. This covers recruitment screening tools, credit and insurance models, employee performance management systems, customer service AI with access to account data, and healthcare triage or administration tools.
The regulation draws a clear line between providers, who build and place AI systems on the market, and deployers, who use those systems within their own operations. Most European enterprises are deployers. Deployers must ensure that logs are kept in formats suitable for analysis and must retain them in a way that supports regulatory review and investigation.
If you are a deployer using a third-party AI system, the obligation to ensure logging is in place does not disappear. You need to verify that the systems you use can generate the required logs, and that those logs are accessible to you when needed.
The Six Gaps Most Organisations Have Right Now
Based on what we see across enterprise environments, these are the most common Article 12 failures:
Fragmented log sources. AI usage is spread across multiple systems, some cloud-hosted, some embedded in SaaS tools, some running in developer environments. Each generates logs in different formats, stored in different places, with no unified view and no reliable way to produce a complete picture when required.
Incomplete coverage. Logging may exist for officially sanctioned AI systems but not for the shadow AI running alongside them. An organisation that logs its approved AI but cannot account for the screening browser extension used by three team members has a compliance gap.
Log integrity. Article 12 says nothing about how to protect records from tampering. A log file can be modified, overwritten, or deleted without trace unless it is secured through mechanisms independent of the system that generated it. If logs need to hold up in regulatory or judicial proceedings, chain of custody matters.
Insufficient retention. Many organisations apply general IT log retention policies that fall short of six months, or apply the six-month requirement inconsistently across systems.
No connection to monitoring. Logs that sit in a storage system and are never reviewed until something goes wrong are not a monitoring system. Article 26 requires deployers to actively monitor AI systems for performance and anomalies.
Discovery gaps. You cannot log what you cannot see. The organisations most exposed under Article 12 are those that do not have a complete picture of their high-risk AI deployments in the first place.
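The integrity gap above has a well-known mitigation: chain each record to the hash of its predecessor, so that any edit, deletion, or reordering breaks verification from that point forward. The sketch below is a minimal hash chain in Python; the function names and record shape are illustrative assumptions, not FireTail's storage mechanism.

```python
import hashlib
import json

def append_record(chain, event):
    """Append an event, binding it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    chain.append({"event": event, "prev_hash": prev_hash, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True
```

In practice the chain head (or periodic checkpoints) should be anchored in a system independent of the one generating the logs, which is what gives the record evidentiary weight in regulatory or judicial proceedings.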
The 15-Minute Discovery Standard
There is a practical metric that every CISO and GRC leader should apply to their organisation’s AI readiness. How long does it take to produce a complete, verified inventory of all AI systems currently in use across your environment?
If the answer is days or weeks, you are working from a compliance model that cannot keep pace with how AI is actually being adopted inside your organisation. If the answer is never, or only through a manual survey process, you have a fundamental gap.
FireTail deploys automated discovery across cloud infrastructure, browser-based activity, and application-layer integrations. Within 15 minutes, you have a living inventory. That inventory drives everything else: automatic log capture from every discovered system, centralised retention with tamper-evident storage, real-time alerting on anomalous activity, and the audit-ready reporting that demonstrates compliance to regulators.
FireTail captures the specific data Article 12 requires for high-risk systems: interaction timestamps, input data classifications, output records, and human review events. Logs are centralised, retained, and exportable for regulatory review.
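As a rough illustration of those four data categories as a single record shape, consider the dataclass below. The field names and types are assumptions for the sketch, not FireTail's actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    system_id: str             # which high-risk system produced the event
    timestamp: str             # interaction timestamp (UTC, ISO 8601)
    input_classification: str  # e.g. "personal_data", "special_category"
    output_record: dict        # the output as returned to the deployer
    human_review: bool         # whether a human review event occurred

def make_record(system_id, input_classification, output_record,
                human_review=False):
    """Build one exportable audit record for a single AI interaction."""
    return asdict(AIAuditRecord(
        system_id=system_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_classification=input_classification,
        output_record=output_record,
        human_review=human_review,
    ))
```

Keeping every system's events in one shape like this is what makes logs "suitable for analysis" under Article 26: a regulator's query can run across all systems rather than against five incompatible formats.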
The Enforcement Timeline
The EU AI Act entered into force on August 1, 2024. The full obligations for high-risk AI systems become applicable on August 2, 2026. Prohibited practices have been enforceable since February 2025.
National Competent Authorities across EU member states will move into active enforcement mode after that August 2026 date.
The organisations that will be best positioned have automated, continuous logging in place now, generating the six months of retained audit trail the regulation requires before enforcement begins. If you start your logging program the day the Act is enforced, you are already behind.
Article 12 reflects what the regulation is actually trying to achieve: the ability to understand, retrospectively and in real time, what high-risk AI systems are doing and what impact they are having. Manual documentation is no longer enough.
*** This is a Security Bloggers Network syndicated blog from FireTail – AI and API Security Blog authored by FireTail – AI and API Security Blog. Read the original post at: https://www.firetail.ai/blog/article-12-and-the-logging-mandate-what-the-eu-ai-act-actually-requires
