AI and Power Engineering — Part 2 | Hallucinations, Validation, QA and Data Security
Hallucinations
AI systems are powerful and popular, but let’s be clear about what they actually are. They’re predictive systems that generate answers based on the input and context you give them. AI still hallucinates. It’s much better than it was, but it will confidently tell you wrong things and send you down rabbit holes. Best case, that wastes your afternoon. Worst case, it’s dangerous. Most users have had a “how many r’s are in strawberry” moment.
Safety
Power systems analysis is safety critical. Get a protection setting wrong and you risk equipment damage, extended outages, or, worse, a hazard to personnel. Get a fault level calculation wrong and you could under-rate switchgear. These calculations have real-world consequences, and that's why the work needs to happen in formal, validated, industry-accepted software: PowerFactory, ETAP, and PSCAD. These tools are defensible. They've been benchmarked against international standards for decades. They give you an audit trail and a QA process. Another engineer can open the model, check the data, and review and reproduce your results.
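One practical form of that benchmarking is a hand calculation you can reproduce yourself. As a minimal sketch, the initial symmetrical short-circuit current in the IEC 60909 method takes the form I_k'' = c · Un / (√3 · |Zk|); the impedance and voltage values below are illustrative, not from any real network:

```python
import math

def iec60909_ik(un_kv: float, zk_ohm: complex, c: float = 1.1) -> float:
    """Initial symmetrical short-circuit current I_k'' in kA, using the
    IEC 60909 form I_k'' = c * Un / (sqrt(3) * |Zk|).

    un_kv  -- nominal system voltage at the fault point (kV)
    zk_ohm -- equivalent positive-sequence impedance at the fault (ohm)
    c      -- voltage factor (1.1 is the usual maximum-fault value;
              check the standard's table for your voltage level)
    """
    return c * un_kv / (math.sqrt(3) * abs(zk_ohm))

# Illustrative example: 11 kV bus, Zk = 0.5 + j2.0 ohm
ik = iec60909_ik(11.0, complex(0.5, 2.0))  # ~3.39 kA
```

A cross-check like this won't replace a validated tool, but it's exactly the kind of independent number that lets a reviewer catch a result that's off by an order of magnitude.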
Use and Roll Out
At a corporate level, I don’t let anyone below senior engineer use AI for work. The reason is simple: you need enough experience to recognise when a simulation is not correct. A senior engineer knows when a three-phase fault level looks suspiciously low for the network topology, or when a transformer loading sits at 40% when it should be closer to 90%, or when a load flow converges to a voltage profile that simply doesn’t make sense. They stop, investigate, and find out why. AI is no different. You have to know what good looks like before you can use it safely. A graduate taking an AI answer at face value on an X/R ratio or a sequence impedance is a recipe for trouble.
Context Window and Data
With AI, context is everything. Large context windows help. Explaining what you’re trying to do helps. Uploading output tables, single line diagrams, and result exports helps. But remember, the AI only sees what you give it. Ask it to explain a specific clause in IEC 60909 or ENA G99 without supplying the text, and it will start hallucinating. It was trained on publicly available information. It can’t see past paywalls, DRM-protected standards documents, or read your mind about what you’ve been thinking for the last half hour.
Guardrails
Finally, guardrails and data sovereignty. Most projects carry commercial sensitivity. NDAs exist. Client network models, SLDs, protection settings, and generator parameters are not things you want casually uploaded to a public AI service. Uploading unsanitised data might feel harmless, but it can land you, or your company, in serious legal trouble.
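One cheap guardrail is to sanitise text before it ever leaves your environment. The sketch below is illustrative only; the regex patterns and tags are made-up examples, and a real policy needs a reviewed allow-list and human sign-off, not ad-hoc regexes:

```python
import re

# Illustrative redaction rules -- examples only, not a complete policy.
REDACTIONS = [
    (re.compile(r"\b[A-Z]{2,4}-\d{3,}\b"), "[ASSET-ID]"),       # asset tags like SUB-1042
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP-ADDR]"),  # IP addresses
    (re.compile(r"(?i)\bclient[:=]\s*\S+"), "client: [REDACTED]"),
]

def sanitise(text: str) -> str:
    """Apply each redaction rule in turn before text goes to any
    external service."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

clean = sanitise("Fault study for client: AcmeGrid at SUB-1042, RTU 10.1.20.5")
```

Even a crude filter like this turns "casually uploaded" into a deliberate decision, which is most of what an NDA asks of you.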
Summary
AI is a tool. Use it with the same judgement you’d apply to any other engineering tool. Know its limits, verify its outputs, and never switch off your own brain.