AI & Interfaces
AI as a power systems analysis resource.
This page explains the practical use cases of AI in power systems analysis: the different use cases for operators, and the interface between structured technical information, model interrogation tools, and reasoning systems used to support engineering workflows around reduced network models and wider power system studies. The weakest applications are those where AI is asked to invent answers without context, make unsupported judgements, or replace validated engineering calculations. Used badly, AI creates risk. Used properly, it becomes a powerful engineering interface. There are four main models of using AI in power systems, discussed below:
Jump to: Question & Response · Inbuilt AI · Direct Interface · Agentic
1. Question & Response
This is the most basic interface, and the one most users will be familiar with through a simple chat window. You ask the AI a question, and it responds. The process can be a simple question-and-response exchange, or a more detailed back-and-forth analysis over many hours, where reference documents, results and models are uploaded and analysed to help with interpretation.
The use cases here are many and varied. There are a few key things to always remember.
- Do not expect the first answer to be the right one — AIs are fallible, and often do not fully understand your first question. You will usually need a series of question-and-response cycles before the AI converges on a correct understanding.
- Prompt crafting — AI cannot guess what you are thinking. You need to articulate the problem you are asking it to help with; a longer paragraph with detail helps significantly. Key phrases in prompts can trigger certain behaviours, such as "think deeply", "check from first principles", "indicate confidence levels", and "validate and provide references".
- Context is everything — all AIs are trained on publicly available data and often have limited specialist knowledge. In a power systems context this means they will often understand classical power systems analysis theory and control system theory well, but they will not be up to date with the latest state-of-the-art thinking. They will also lack detailed knowledge of specific standards, codes, or CIGRE / IEEE technical reports and brochures.
- Context window — all AIs have a finite context window, which limits how much conversation and uploaded material they can hold at once. A sustained conversation is usually (not always) beneficial, but once the window is exceeded, earlier context starts to be lost.
- All AI models are different and have their own strengths and weaknesses. Get to know them and their foibles. Some models are good at power systems, others less so. Within your chosen AI there are also different reasoning models. As a point of principle, power systems analysis usually needs the most powerful model the AI can provide.
- Recognise when the AI is getting lost and going down a rabbit hole. Stop and re-evaluate — some problems an AI cannot solve. Try a different reasoning model, or a different AI altogether.
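The prompt-crafting advice above can be made concrete as a small template. The sketch below is illustrative only: the field names, keyword phrases and example content are assumptions, not a standard prompt format.

```python
# Sketch: assembling a structured prompt for a chat-style AI, combining
# the task, supporting context and reference material into one message.
# All field names and the example study content are hypothetical.

def build_prompt(task: str, context: str, references: list[str]) -> str:
    """Combine task, context and references into one structured prompt."""
    ref_lines = "\n".join(f"- {r}" for r in references)
    return (
        f"Task: {task}\n\n"
        f"Context:\n{context}\n\n"
        f"Reference documents:\n{ref_lines}\n\n"
        "Instructions: think deeply, check from first principles, "
        "indicate confidence levels, and cite the references used."
    )

prompt = build_prompt(
    task="Explain why the RMS simulation shows poorly damped 1 Hz oscillations.",
    context="Reduced 39-bus network model; grid-forming inverter at bus 7.",
    references=["AVR datasheet rev B", "RMS study results, case 12"],
)
print(prompt)
```

The point is not the code itself but the habit: stating the task, the context and the expected behaviour explicitly, rather than relying on the AI to infer them.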
2. Inbuilt AI
This is perhaps the easiest use case to understand, but one of the hardest to assess properly. Many software packages now claim to have “AI” built in. That can mean several different things. In some cases it may be a genuinely useful assistant built around the software environment. In other cases it may simply be a wrapper around a chatbot, a documentation search tool, or a conventional optimisation algorithm that has been rebranded as AI. The terminology is loose and the marketing can get ahead of the engineering reality.
For power systems software, the key question is what the AI can actually see and do. Can it access the active model? Can it inspect parameters? Can it understand study cases? Can it check results? Can it explain which assumptions it has used? Can it be audited? If the answer is no, then it may still be useful, but it is probably closer to a helpdesk assistant than an engineering tool.
The risk with inbuilt AI is opacity. If the system gives an answer but does not show its basis, inputs, assumptions or references, then it should be treated cautiously. This is especially true where the output could affect protection settings, dynamic models, compliance studies, safety margins or investment decisions.
3. Direct Interface
A more powerful approach is a direct interface between the AI and the engineering software. Anthropic’s Model Context Protocol, or MCP, is one example of this type of approach. In simple terms, MCP allows an AI system to connect to external tools and applications in a controlled way.
For power systems work, this could mean connecting an AI interface to software such as DIgSILENT PowerFactory, PSCAD, ETAP or similar tools. The AI does not just talk about the model; it can potentially interrogate it. It can call scripts, inspect objects, list study cases, read parameters, export results and help automate repetitive workflows.
This avoids one of the major weaknesses of the basic chat interface, which is the constant need to manually copy and paste information between the engineering software and the AI. Instead, the AI can be given a structured route into the model and can retrieve the information it needs.
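The shape of such a direct interface can be sketched in a few lines: the AI issues structured tool calls against named functions instead of receiving pasted text. This is a minimal stand-in for the idea behind MCP, not real MCP or PowerFactory code; the tool names, object classes and model data below are all hypothetical.

```python
# Sketch of a direct tool interface: the AI calls named tools with
# structured arguments, and the tools read from the live model.
# MODEL is a stand-in for an open study case in real software.

MODEL = {
    "SM_01": {"class": "ElmSym", "rated_mva": 500, "xd": 1.8},
    "TR_01": {"class": "ElmTr2", "rated_mva": 600, "uk_percent": 14.0},
}

def list_objects(object_class: str) -> list[str]:
    """Tool: list model objects of a given class."""
    return [name for name, obj in MODEL.items() if obj["class"] == object_class]

def read_parameter(name: str, parameter: str):
    """Tool: read one parameter from one object."""
    return MODEL[name][parameter]

TOOLS = {"list_objects": list_objects, "read_parameter": read_parameter}

# A tool call as the AI would issue it, expressed as structured data:
call = {"tool": "read_parameter", "args": {"name": "SM_01", "parameter": "xd"}}
result = TOOLS[call["tool"]](**call["args"])
print(result)  # 1.8
```

In a real deployment the tool layer would wrap the software's API (for example PowerFactory's Python interface), and the protocol would handle discovery, permissions and auditing; the value is that the AI works from actual model data rather than guessed context.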
There are limits. Direct interfaces can be fragile, time-consuming to set up, and heavily dependent on the software API. In some cases, it may still be quicker for an experienced engineer to drive the software directly. The value is not in replacing every manual action. The value is in reducing friction, improving repeatability, and allowing the AI to work with real model data rather than guessed context.
4. Agentic
The next level beyond a direct interface is an agentic workflow. This is where the AI is not simply answering questions or calling one tool at a time, but is able to break a larger task into steps and carry them out through a set of tools, scripts or sub-agents.
This is where the technology becomes genuinely interesting for power systems engineering. An AI agent can be asked to inspect a model, identify relevant plant, check controller parameters, export results, compare plots, find inconsistencies, generate test cases, and produce a structured summary of what it found.
For example, an agentic workflow could be used to:
- interrogate a PowerFactory project and list all synchronous machines, grid-forming inverters, transformers and controllers;
- check whether key model parameters are missing, inconsistent or outside expected ranges;
- create a batch of RMS or EMT study cases;
- run simulations and export result files;
- compare measured responses against expected test criteria;
- review a PSCAD model against a block diagram or datasheet;
- generate a technical note explaining what was tested, what passed, what failed and what needs further review.
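The core shape of such an agentic workflow, stripped of the real software, is a plan executed step by step through tools, with findings collected into a structured summary. The sketch below mocks one of the steps listed above (parameter range checking); the machine data, parameter ranges and tool names are all assumptions for illustration.

```python
# Sketch of an agentic loop: enumerate plant, check each item with a
# tool, then produce a structured summary. All data is hypothetical.

EXPECTED_RANGES = {"xd": (0.8, 2.5), "h_s": (1.0, 10.0)}  # assumed plausible ranges

MACHINES = {
    "SM_01": {"xd": 1.8, "h_s": 4.2},
    "SM_02": {"xd": 3.1, "h_s": 3.5},  # xd deliberately outside range
}

def check_parameters(machine: str) -> list[str]:
    """Tool: flag parameters outside their expected ranges."""
    issues = []
    for param, value in MACHINES[machine].items():
        lo, hi = EXPECTED_RANGES[param]
        if not (lo <= value <= hi):
            issues.append(f"{machine}.{param}={value} outside [{lo}, {hi}]")
    return issues

def run_agent() -> str:
    """Plan -> act -> summarise: the basic shape of an agentic workflow."""
    findings = []
    for machine in MACHINES:                        # step 1: enumerate plant
        findings.extend(check_parameters(machine))  # step 2: check each item
    if findings:                                    # step 3: structured summary
        return "REVIEW NEEDED:\n" + "\n".join(findings)
    return "All parameters within expected ranges."

print(run_agent())
```

A real agent would choose and sequence the tools itself rather than follow a fixed plan, and each tool would call into the simulation software, but the plan-act-summarise structure is the same.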
A recent breakthrough moment came a few weeks ago, when I was able to get the AI to scan a PDF of a machine AVR model, interface directly to PSCAD, build a model of the AVR, design a test plan to prove the AVR worked, and then test the newly built model against that plan and refine it further.