AI Transparency & Usage Policy
Version 1.0
Status: Effective January 2026
Introduction: Innovation with Integrity
At our consultancy, we believe that the responsible use of Artificial Intelligence (AI) is essential for modernising the energy sector. By integrating Generative AI into our workflows, governed by Ennuvo's professionally developed, dedicated AI Rulebook, we aim to deliver more efficient, comprehensive, and innovative solutions to our clients.
This policy outlines our promise to you: we use AI to enhance our expertise, not to replace it.
Our Core Philosophy: Human-in-the-Loop
AI is utilised as a "force multiplier"—a tool that assists our consultants in processing data, drafting documentation, and exploring scenarios. It does not make decisions.
Every piece of content, analysis, or advice generated with the assistance of AI is subject to rigorous review by a Suitably Qualified and Experienced Person (SQEP). Under this human-in-the-loop principle, our consultants remain fully accountable for the accuracy, quality, and integrity of all deliverables.
Data Security & Information Protection
Protecting proprietary data and national security interests is imperative. We adhere to a "Zero Trust" approach regarding public AI models:
No Sensitive Data Input: We assume that any data entered into a public AI platform could end up in the public domain, so we never input nuclear-sensitive, commercially sensitive, or personally sensitive information into such platforms.
Approved Enterprise Tools: Our staff use only vetted, enterprise-grade AI tools that meet our strict information security standards. The use of unauthorised or "shadow" AI tools is strictly forbidden, to prevent data leakage.
How We Use AI (and How We Don't)
To ensure transparency, we are clear about where AI adds value and where it has no place.
Authorised Uses:
Ideation & Scenarios: Brainstorming potential failure modes for HAZOP studies or generating "what-if" scenarios to challenge our thinking.
Efficiency & Drafting: Creating initial drafts of meeting minutes, routine reports, and non-technical summaries to speed up administrative processes.
Data Synthesis: Summarising trends from open-source operational logs or regulatory guidance documents.
Quality Assurance: Acting as a completeness check, helping to confirm that our analysis has considered all available data.
Prohibited Uses:
Final Safety Decisions: AI is never used to make final judgments on safety cases or operational limits.
Critical Calculations: All safety-significant calculations are performed and verified using established, deterministic methods, never generated by AI.
Verification & Quality Assurance
We are acutely aware of the risk of AI "hallucinations" (plausible but incorrect facts). To mitigate this, we employ a "Verify, Then Trust" protocol:
Regulatory Alignment: All AI-assisted drafts are cross-referenced against official documentation, such as ONR Safety Assessment Principles or IAEA standards.
Source Verification: We require our consultants to manually verify citations and references to ensure they are legitimate and accurate.
Our Commitment to Transparency
Trust is the foundation of our consultancy. Where AI has been used substantively in the creation of a deliverable for your project, we commit to disclosing this as part of our methodology. We invite open dialogue with our clients about how these technologies are applied to their specific challenges.
By adhering to these standards, we ensure that you receive the benefits of cutting-edge innovation without compromising the safety culture that defines our industry.