AI Tooling
AI as a power systems analysis resource.
This page explains the practical use cases of AI in power systems analysis: how engineers and operators can combine structured technical information, model interrogation tools, and reasoning systems to support engineering workflows around reduced network models and wider power system studies. The weakest applications are those where AI is asked to invent answers without context, make unsupported judgements, or replace validated engineering calculations. Used badly, AI creates risk. Used properly, it becomes a powerful engineering interface. There are five main modes of using AI in power systems, discussed below:
Jump to: Question & Response · Inbuilt AI · Direct Interface · Agentic · Knowledge grounding
1. Question & Response
```mermaid
flowchart LR
    Engineer((Engineer))
    AI[AI model]
    Context[("Docs · Standards · Results · Models")]
    Engineer -- prompt --> AI
    AI -- response --> Engineer
    Context -. uploads .-> AI
```
This is the most basic interface, familiar to most users through a simple chat window: you ask the AI a question, and it responds. This can be a single question-and-response exchange, or a more detailed back-and-forth analysis over many hours in which reference documents, results and models are uploaded and analysed to help with interpretation.
The use cases here are many and varied, but there are a few key things to remember:
- Do not expect the first answer to be the right one — AIs are fallible and will not always fully understand your first question. You will often need a series of question-response cycles before the AI converges on a correct understanding.
- Prompt crafting — AI cannot guess what you are thinking. You need to articulate the problem you are asking it to help with; a longer, more detailed paragraph helps significantly. Key phrases in a prompt can steer the response, such as "think deeply", "check from first principles", "indicate confidence levels" and "validate and provide references".
- Context is everything — AIs are trained on publicly available data and often have limited specialist knowledge. In a power systems context this means they will often understand classical power systems analysis theory and control system theory well, but they will not be up to date with the latest state-of-the-art thinking. They will also not have detailed knowledge of specific standards, codes or CIGRE / IEEE technical reports and brochures.
- Context window — every AI has a finite context window: a limit on how much text (conversation, uploaded documents, results) it can hold in working memory at once. A sustained conversation is usually, though not always, beneficial, but once the window fills, earlier detail begins to be lost.
- All AI models are different and have their own strengths and weaknesses; get to know them and their foibles. Some models are good at power systems, others less so. Each provider also offers several reasoning models of differing capability. As a point of principle, power systems analysis usually needs the most powerful model the provider offers.
- Recognise when the AI is getting lost and going down a rabbit hole. Stop and re-evaluate — some problems an AI cannot solve. Try a different reasoning model, or a different AI altogether.
2. Inbuilt AI
```mermaid
flowchart LR
    Engineer((Engineer))
    subgraph Pkg [Engineering software package]
        Tools["Tool features: menus · scripts · results"]
        Inbuilt[/AI assistant/]
    end
    Engineer <--> Pkg
```
This is perhaps the easiest use case to understand, but one of the hardest to assess properly. Many software packages now claim to have “AI” built in. That can mean several different things. In some cases it may be a genuinely useful assistant built around the software environment. In other cases it may simply be a wrapper around a chatbot, a documentation search tool, or a conventional optimisation algorithm that has been rebranded as AI. The terminology is loose and the marketing can get ahead of the engineering reality.
For power systems software, the key question is what the AI can actually see and do. Can it access the active model? Can it inspect parameters? Can it understand study cases? Can it check results? Can it explain which assumptions it has used? Can it be audited? If the answer is no, then it may still be useful, but it is probably closer to a helpdesk assistant than an engineering tool.
The risk with inbuilt AI is opacity. If the system gives an answer but does not show its basis, inputs, assumptions or references, then it should be treated cautiously. This is especially true where the output could affect protection settings, dynamic models, compliance studies, safety margins or investment decisions.
3. Direct Interface
```mermaid
flowchart LR
    AI[AI model]
    Bridge[/MCP bridge/]
    PF[("PowerFactory")]
    PSCAD[("PSCAD")]
    ETAP[("ETAP")]
    AI <--> Bridge
    Bridge <--> PF
    Bridge <--> PSCAD
    Bridge <--> ETAP
```
A more powerful approach is a direct interface between the AI and the engineering software. Anthropic’s Model Context Protocol, or MCP, is one example of this type of approach. In simple terms, MCP allows an AI system to connect to external tools and applications in a controlled way.
For power systems work, this could mean connecting an AI interface to software such as DIgSILENT PowerFactory, PSCAD, ETAP or similar tools. The AI does not just talk about the model; it can potentially interrogate it. It can call scripts, inspect objects, list study cases, read parameters, export results and help automate repetitive workflows.
This avoids one of the major weaknesses of the basic chat interface, which is the constant need to manually copy and paste information between the engineering software and the AI. Instead, the AI can be given a structured route into the model and can retrieve the information it needs.
There are limits. Direct interfaces can be fragile, time-consuming to set up, and heavily dependent on the software API. In some cases, it may still be quicker for an experienced engineer to drive the software directly. The value is not in replacing every manual action. The value is in reducing friction, improving repeatability, and allowing the AI to work with real model data rather than guessed context.
4. Agentic
```mermaid
flowchart LR
    Goal["Engineer's goal"] --> Agent((AI agent))
    Agent --> Plan["Plans & orchestrates"]
    Plan -- via MCP --> Tools
    subgraph Tools [Engineering software]
        direction TB
        PF[("PowerFactory")]
        PSCAD[("PSCAD")]
        ETAP[("ETAP")]
    end
    Tools --> Synth["AI synthesises results"]
    Synth --> Review{"Engineer review"}
    Review --> Out["Structured summary"]
    Review -. iterate .-> Agent
```
The next level beyond a direct interface is an agentic workflow. This is where the AI is not simply answering questions or calling one tool at a time, but is able to break a larger task into steps and carry them out through a set of tools, scripts or sub-agents.
This is where the technology becomes genuinely interesting for power systems engineering. An AI agent can be asked to inspect a model, identify relevant plant, check controller parameters, export results, compare plots, find inconsistencies, generate test cases, and produce a structured summary of what it found.
For example, an agentic workflow could be used to:
- interrogate a PowerFactory project and list all synchronous machines, grid-forming inverters, transformers and controllers;
- check whether key model parameters are missing, inconsistent or outside expected ranges;
- create a batch of RMS or EMT study cases;
- run simulations and export result files;
- compare measured responses against expected test criteria;
- review a PSCAD model against a block diagram or datasheet;
- generate a technical note explaining what was tested, what passed, what failed and what needs further review.
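The workflow above can be sketched as a simple loop: plan, execute tools, synthesise, then hand back to the engineer. The following is a hypothetical, deliberately simplified Python sketch; the tool functions and their fake outputs stand in for real study automation reached via MCP.

```python
# Illustrative sketch only: a minimal agent loop that executes steps through a
# tool registry and synthesises a structured summary for engineer review.
# The tools and their outputs are invented stand-ins for real software calls.

def inspect_model() -> dict:
    """Pretend model interrogation: plant found, plus missing data flagged."""
    return {"machines": ["Gen_01", "Gen_02"], "missing_params": ["Gen_02.H"]}

def run_studies(cases: list[str]) -> dict:
    """Pretend batch study run: every case 'passes' in this sketch."""
    return {case: "pass" for case in cases}

TOOLS = {"inspect_model": inspect_model, "run_studies": run_studies}

def run_agent(goal: str) -> dict:
    """Fixed two-step plan here; a real agent plans and iterates dynamically."""
    model = TOOLS["inspect_model"]()
    results = TOOLS["run_studies"](["Base Case", "N-1"])
    return {
        "goal": goal,
        "issues": model["missing_params"],  # flagged for engineer review
        "results": results,
    }

print(run_agent("Validate the generator models"))
```

The essential point is the final step: the agent's output is a structured summary that a human engineer reviews, not an answer acted on automatically.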
A breakthrough moment occurred a few weeks ago when I was able to get the AI to scan a PDF of a machine AVR model, interface directly to PSCAD, build a model of the AVR, design a test plan to prove the AVR worked, and then test the newly built model against the test plan and refine it further.
5. Knowledge grounding
```mermaid
flowchart LR
    Docs[("Standards · codes · reports · datasheets")]
    Vec[(Vector store)]
    Engineer((Engineer))
    AI[AI model]
    MCP[/MCP bridge/]
    PF[("PowerFactory")]
    PSCAD[("PSCAD")]
    ETAP[("ETAP")]
    Docs -- embed --> Vec
    Engineer -- prompt --> AI
    AI <-- retrieve --> Vec
    AI <--> MCP
    MCP <--> PF
    MCP <--> PSCAD
    MCP <--> ETAP
    AI -- grounded answer --> Engineer
```
The four interaction modes above describe how an AI is invoked. A separate question — equally important — is what knowledge the AI can reach when it answers.
Out of the box, an AI model knows what was in its training data. That covers classical power systems theory and most public textbooks well, but it does not include the specific standards, codes and technical brochures that real engineering work relies on: IEC 60909, ENA G99, the Grid Code, CIGRE technical brochures, internal project reports, datasheets for specific plant. Asking a vanilla model about a clause in any of those will at best produce a confident generic answer, and at worst a hallucination.
The mainstream solution to this is retrieval-augmented generation, or RAG, built on a vector database. The workflow is:
- Take the documents you want the AI to be grounded in — standards, reports, datasheets, past studies.
- Split them into chunks and convert each chunk into an embedding (a numerical vector that captures meaning).
- Store the embeddings in a vector database, indexed for fast similarity search.
- When you ask the AI a question, the system embeds your prompt the same way, finds the most semantically similar chunks in the database, and feeds those chunks into the AI’s context as background before it answers.
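The retrieval half of that workflow can be sketched in a few lines. This is a toy illustration only: the bag-of-words "embedding" and cosine similarity below stand in for a real embedding model and vector database, and the document snippets are invented examples, not quotations from the standards.

```python
# Illustrative sketch only: RAG retrieval with a crude word-count "embedding"
# and cosine similarity, standing in for a learned embedding model and a
# vector database. The chunks are invented paraphrases, not real clauses.

import math
import re
from collections import Counter

CHUNKS = [
    "IEC 60909 defines methods for calculating short-circuit currents.",
    "G99 sets out connection requirements for generation in GB networks.",
    "CIGRE technical brochures cover state-of-the-art modelling practice.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector. Real systems use learned embeddings."""
    return Counter(re.findall(r"[a-z0-9\-]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The "vector store": every chunk indexed against its embedding.
INDEX = [(chunk, embed(chunk)) for chunk in CHUNKS]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Embed the query, rank chunks by similarity, return the top k as context."""
    q = embed(query)
    ranked = sorted(INDEX, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

print(retrieve("How do I calculate short-circuit currents?"))
```

The retrieved chunks are then prepended to the AI's context before it answers, which is what allows it to cite a specific source rather than improvise.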
The result is an AI that can cite specific clauses, reference specific standards, and stay grounded in your technical world rather than the public internet’s average view of it. It also makes hallucinations easier to spot — if the AI cites a passage, you can check the source. If it can’t cite anything, you know it’s filling in.
For a power-systems context, a useful starting library would be IEC 60909 family, ENA G99 / G98 / G74, the Grid Code, the FRCR, key CIGRE technical brochures, and any in-house reports or modelling notes you trust. The vector database itself can be a small open-source tool running locally — it does not need to be in the cloud, which matters for material under NDA or commercial sensitivity.