MCP servers and the engineering software stack
What MCP actually is
Anthropic’s Model Context Protocol (MCP) was first released in 2024 and drew little notice outside AI circles, partly because AI was still a novelty and had a habit of breaking or hallucinating badly. Now that the more recent models are stronger, MCP comes into its own: it allows the AI model to call external tools and read external data. In power-systems terms, this means we can connect and plug into PowerFactory, PSCAD, ETAP and so on.
The AI does not see file paths or APIs directly; it sees a set of named tools, each with a schema describing what it expects and what it returns. The AI calls them, the MCP server runs the work, the result comes back. If that sounds dry, it is, but once the AI can reach engineering software through that channel rather than through copy-paste, magic can start to happen.
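To make that concrete, here is a minimal sketch of what one such tool definition might look like from the AI's side: a name, a description, and a JSON schema for the inputs. The tool name, fields and wording here are invented for illustration, not taken from any real server.

```python
# Hypothetical MCP tool definition: what the AI "sees" instead of an API.
# The name, parameters and descriptions are illustrative only.
READ_PARAMETER_TOOL = {
    "name": "read_parameter",
    "description": (
        "Read a single named parameter from an object in the active model. "
        "Returns the value, or an error if the object or parameter "
        "does not exist."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "object_name": {
                "type": "string",
                "description": "Full name of the model object, e.g. a generator.",
            },
            "parameter": {
                "type": "string",
                "description": "Parameter identifier as used by the host tool.",
            },
        },
        "required": ["object_name", "parameter"],
    },
}
```

The AI never sees how the server resolves `object_name` against the host tool's database; it only reasons over this schema and the structured result that comes back.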
Why this matters for power-systems work
Most AI work in engineering today is bottlenecked on the bridge between the AI and the engineering tool. You ask the AI a question, it asks for the model context, you copy-paste a network printout or a parameter table, the AI reasons about it, suggests an action, you copy-paste back into PowerFactory or PSCAD. The AI never actually sees the model. It sees your description of the model.
That gap is where most of the friction sits, and most of the errors. The AI does not know what it does not know. If you forget to copy a key controller parameter, it happily reasons on the partial picture and gives a confident wrong answer. And once it heads down a rabbit hole, things get frustrating very quickly. MCP closes that gap. The AI reads the model itself, by calling tools that interrogate it. The bridge becomes structured rather than narrative.
The shape of an MCP server for engineering software
Without going into specifics, an MCP server connecting an AI to a power-systems tool typically exposes capabilities in a few categories:
- Model interrogation - listing objects, reading parameters, walking the network topology, finding controllers attached to specific plant.
- Study-case management - listing study cases, switching between them, knowing which is active.
- Simulation control - setting up and triggering analyses (load flow, RMS, EMT, modal), pulling results back as structured data rather than screen-scraped tables.
- Scripting hooks - running specific scripts the engineer has pre-vetted, with parameters supplied by the AI.
- Read/write boundaries - clear separation between what the AI can read freely and what requires explicit confirmation before changing anything.
That last point, the read/write boundary, is non-trivial. An MCP server that lets an AI silently change a generator's inertia or a d-q parameter value across a model is a problem. An MCP server that lets it read parameters, check them against a datasheet, propose a change, and require an explicit human action to apply it, is a tool.
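One way to enforce that boundary is to make writes two-stage: the AI-facing tool only stages a proposed change and returns a token, and a separate, human-triggered step applies it. The function names and model structure below are hypothetical; the pattern is the point.

```python
# Confirmation-gated writes: the AI can propose, only a human can apply.
# MODEL and the tool names are invented for illustration.
import uuid

MODEL = {"Gen_01": {"H": 3.5}}
PENDING: dict[str, tuple[str, str, float]] = {}

def propose_change(object_name: str, parameter: str, new_value: float) -> dict:
    """Called by the AI. Stages the change; nothing in the model moves yet."""
    token = uuid.uuid4().hex[:8]
    PENDING[token] = (object_name, parameter, new_value)
    old = MODEL[object_name][parameter]
    return {"token": token, "summary": f"{object_name}.{parameter}: {old} -> {new_value}"}

def apply_change(token: str) -> dict:
    """Called only from the human-facing side after explicit confirmation."""
    if token not in PENDING:
        return {"ok": False, "error": "no such pending change"}
    obj, param, value = PENDING.pop(token)
    MODEL[obj][param] = value
    return {"ok": True}
```

Because the token is consumed on use, a confirmed change cannot be replayed, and anything the human never approves simply never touches the model.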
Where the hard parts actually sit
The protocol itself is not the hard part. The hard part is the design choices that make a server actually useful. Developing an MCP server into something that performs genuinely useful functions, and does what you expect when you expect it, is a slow, iterative process.
- Tool granularity. Too few tools and the AI has to compose half a dozen calls to do anything; too many and it gets lost choosing between them. There is a sweet spot, and it is not the same for every tool family.
- Schema design. The AI is reasoning over the JSON schemas you give it. Vague schemas produce vague tool use. Schemas that tell the AI exactly what each parameter means, what the failure modes look like, and how to interpret the response, produce dramatically better behaviour.
- Idempotency and side effects. Some tools should be safe to retry. Some should not. Communicating that distinction to the AI clearly, and protecting against it not understanding, is harder than it sounds.
- Error feedback. When a tool fails, the error message the AI receives is part of the conversation. A vendor's raw stack trace is useless. A short, structured error that tells the AI what went wrong and what it might try next is valuable.
- State and session length. A long engineering session can build up a great deal of context inside the AI. The MCP server has to play well with that - surfacing what is stable, refreshing what changes, not flooding the context with low-value chatter.
- Confidentiality boundaries. A great deal of power-systems work sits under NDA. An MCP server has to be careful about what leaves the local machine, what is logged, and what is exposed to model providers. This is an architectural concern, not a feature.
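The error-feedback point deserves a concrete shape. One approach is a small adapter that turns whatever the backend raises into a short, structured payload: what failed, why in one line, and a suggested next step. The suggestion text and tool names here are invented examples.

```python
# Turn a raw backend exception into a short, structured error the AI can
# act on -- never a raw stack trace. Wording is illustrative.
def structured_error(exc: Exception, tool_name: str) -> dict:
    return {
        "ok": False,
        "tool": tool_name,
        # First line only, truncated: enough to diagnose, not a dump.
        "error": str(exc).splitlines()[0][:200],
        "suggestion": "Check the object name with list_objects before retrying.",
    }
```

A fixed suggestion is obviously a simplification; in practice you would map known failure classes to known next steps, which is exactly the slow iterative work described above.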
None of these are unique to MCP - they are the same problems any tool-using AI integration faces. MCP just makes them explicit.
Further thoughts
The interesting question is not whether to use MCP. The interesting question is what an engineering practice looks like when the AI has structured access to the model rather than narrative access. Some early observations:
- The cost of running a thorough check goes down, so thorough checks get done.
- The AI starts asking different questions, because it can verify before it answers.
- The engineer’s role shifts toward reviewing the AI’s reasoning and approving actions, rather than driving every keystroke.
- Audit trails become easier - every tool call is logged with its inputs and outputs.
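That audit trail can be as simple as a wrapper that records every tool call with its inputs and outputs. The JSON-lines format and in-memory log below are assumptions for the sketch; a real server would append to a file or database.

```python
# Hypothetical audit wrapper: every tool call is logged with its inputs
# and outputs. AUDIT_LOG stands in for an append-only log file.
import functools
import json
import time

AUDIT_LOG: list[str] = []

def audited(tool):
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        result = tool(*args, **kwargs)
        AUDIT_LOG.append(json.dumps({
            "ts": time.time(),
            "tool": tool.__name__,
            "args": args,
            "kwargs": kwargs,
            "result": result,
        }, default=str))
        return result
    return wrapper

@audited
def read_parameter(object_name: str, parameter: str) -> dict:
    return {"ok": True, "value": 3.5}  # stand-in for a real model read
```

Because the log captures the call as data rather than prose, reviewing what the AI actually did becomes a query, not an archaeology exercise.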
A few things to flag if you are thinking about going down this route:
- It is not faster than an experienced engineer at any one keystroke-level task. It is faster at sequences of tasks, where the alternative is manual setup and repetition.
- It does not replace engineering judgement. An MCP server lets the AI act, but the engineer still has to decide whether the action makes sense.
- It needs guardrails - lots of guardrails. Production models, settings files, real protection systems - none of these belong on the other end of an unsupervised tool call.
- It has a learning curve, and the AI has one too. Expect to iterate on the schema, the tool granularity and the prompt structure for several weeks before it really sings.