SSO Analysis - Eigenvalue vs Impedance Scanning
The starting point - My Question
My question to the AI is shown below. Note that I have deliberately asked a long and detailed question. This is important: it gives the AI context and demonstrates an existing level of knowledge of the problem, which discourages a simplistic answer.
“In modern SSO studies, there has been a steady move away from classical eigenvalue analysis. Why is this? There have obviously been cases of unstable modes occurring, which the eigenvalue analysis has not identified, but why? In theory, if all the state space equations are known on the system and can be solved for various linearization points then unstable modes should be identified. What is being missed? The high penetration of IBR is clearly a major contributory factor, as is low system strength and inertia, but why? What do impedance scanning methods identify that eigenvalue analysis does not? What are the problems and blind spots of impedance scanning methods?”
Based on the answer, I then asked a follow-up question: “So as a follow-on question. The problem appears to be purely that OEM models are black boxed. There is a move in many TSOs to insist on open models, but these are often generic WECC or IEC models, which have necessarily limited accuracy. How suitable would these really be? Would the loss of fidelity matter, or be significant? If truly detailed models were available and open, would impedance scanning still be useful?”
This is a long post. I have summarised the responses at the top level and then given the detailed responses below. I decided to use Claude Opus 4.7 and ChatGPT 5.4 for this test.
The short summary
Claude: The framing that black-box OEM models are the sole problem is too narrow - even with fully open models, eigenvalue analysis would still struggle with mode-switched dynamics (limiters, FRT, GFM transitions), the combinatorial explosion of operating points in IBR-rich low-SCR networks, and numerical conditioning at scale. Generic WECC/IEC models address the access problem but not the fidelity problem: they were designed for positive-sequence RMS transient stability work and degrade precisely in the SSO frequency band of interest, making them adequate for screening and for proving a problem exists, but weak for the inclusion-style “no problem here” conclusions that most TSO acceptance studies actually require. Even with full white-box detail, impedance methods would remain valuable - for analytical derivation rather than scanning, for natural aggregation of large IBR populations, for frequency-resolved diagnostic insight, for direct validation against PMU and injection measurements, and for cross-checking eigenvalue results. The realistic end state isn’t one method winning, but using each for what it does best: eigenvalue analysis for participation and modal diagnosis, impedance methods for aggregation and frequency-domain interaction, and EMT as the final arbiter for limiter-driven and large-signal behaviour that no linearised method captures.
ChatGPT: The shift away from classical eigenvalue analysis in SSO studies is not because the method is wrong, but because the models being linearised are often incomplete or too simplified - particularly for IBRs where key control dynamics are hidden or poorly represented in generic WECC/IEC models. These generic models are adequate for bulk system studies but can miss the fast control and impedance characteristics that drive SSO, meaning loss of fidelity can be significant. Even with fully open, high-fidelity models, impedance scanning would still be valuable because it provides a clear view of plant–grid interaction across frequency, which eigenvalue analysis does not directly expose. In practice, the most robust approach is to combine methods: eigenvalue analysis for root-cause insight (when models are trustworthy), impedance scanning for interface risk, and EMT simulation for validation under realistic conditions.
The Human (My) View
Both AIs gave a solid, clear response. This is partly because I have set my preferences to avoid waffle, but they clearly understood and contextualised the problem correctly, gave the answer I was expecting about the black-box problem of IBRs, and both went beyond it, pushing back on the benefits of impedance scanning as a tool.
The interesting point in the response is that ChatGPT skirted around some of the points and gave a more tutorial-style answer. Its points were not directly wrong, but didn't quite capture what I was looking for. It was interesting that it picked up positive-sequence representation, although it did not quite close the thought on that or challenge the premise that IEEE 421.5 and WECC models are perhaps too simple. It also didn't quite capture the nuance of my pushback, or the premise of why MIMO-type scanning might still be useful, and interestingly it delves more into PLL stability issues.
Claude gave an answer that was closer to my brief, but much shorter and terser (as per my settings). It correctly identified the black-box issue again, and although it talks about PLL issues, these are slightly skirted over. It specifically addressed the issue of low system strength and inertia, and pushed back harder on the follow-up question, where I was (gently) trying to lead it astray.
Both answers are solid, technically sound and show good reasoning. Individually each is good; blend them together and you build a very full and clear understanding of the issue. SSO problems and analysis techniques are a real issue on most systems, and so are currently a hot topic - although it is important to note that SSO theory has been around since the 1970s.
Read on if you want the full version of the AI responses.
ChatGPT Response (V5.4)
Yes - in theory eigenvalue analysis should find the mode. In practice, the problem is not the mathematics; it is the model boundary.
Classical eigenvalue analysis works when you have:
The correct differential equations, correct controls, correct limits, correct operating point, correct network representation, correct frequency range, and a valid linearisation.
Modern IBR-driven SSO often violates several of those assumptions at once.
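To make the classical recipe concrete: linearise to get the A matrix, take its eigenvalues for modal frequency and damping, and (where the model is transparent) combine left and right eigenvectors into participation factors. A minimal numpy sketch on a toy two-state system - all numbers are illustrative, not a real plant model:

```python
import numpy as np

# Toy two-state linearised system dx/dt = A x (illustrative numbers only).
A = np.array([[-0.5,   120.0],
              [-130.0,  -0.2]])

eigvals, right = np.linalg.eig(A)   # modes and right eigenvectors
left = np.linalg.inv(right)         # rows are left eigenvectors

for i, lam in enumerate(eigvals):
    f_hz = abs(lam.imag) / (2 * np.pi)   # modal frequency
    zeta = -lam.real / abs(lam)          # damping ratio
    # Participation of state k in mode i: |right[k, i] * left[i, k]|
    p = np.abs(right[:, i] * left[i, :])
    p = p / p.sum()
    print(f"mode {i}: {f_hz:5.1f} Hz, zeta = {zeta:.4f}, participation = {p}")
```

This toy case contains a lightly damped mode near 20 Hz, i.e. squarely in the SSO band; the participation vector is exactly the diagnostic information that is lost when the underlying model is a black box.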
The big reason for the move away from classical eigenvalue analysis is that many OEM IBR models are black-box EMT models, not transparent state-space models. You cannot reliably form the full A-matrix if you do not know the internal PLL, current controller, DC-link, filters, protection logic, limiters, digital delays, sampling, PWM approximations, plant controller details and vendor-specific control modes. NREL explicitly presents impedance-based analysis as useful because it can use highly accurate black-box EMT models of IBRs.
The second reason is that the “mode” is often not a classical synchronous-machine torsional mode. With high IBR penetration, the unstable behaviour may be a control-network interaction: PLL versus weak grid, current controller versus LCL/filter/grid impedance, plant voltage controller versus network resonance, STATCOM/BESS/PV/wind controller interactions, or interactions between nearby IBRs. The IEEE PES IBR SSO Task Force survey notes that real-world IBR SSO events are strongly linked to grid characteristics such as series compensation and low system strength.
The third reason is that positive-sequence RMS models often miss the physics. Many IBR SSO mechanisms are frequency-coupled, sequence-coupled, phase-domain, or fast-control phenomena. A positive-sequence RMS eigenvalue model may only see a softened version of the control, or not see it at all. PLL-induced oscillations, for example, can involve mirror-frequency coupling around the fundamental frequency, which a simple positive-sequence model does not naturally capture.
So the blunt answer is: eigenvalue analysis did not “fail”; the model being linearised was incomplete, over-simplified, or linearised at the wrong boundary.
What is usually being missed?
First, the vendor controls. The public RMS model might include a REEC/REPC-style approximation, while the real EMT model contains proprietary control paths, protection flags, current limits, PLL behaviour, digital filtering and mode switching. The unstable mode may live inside those details.
Second, the operating point sensitivity. A plant can be stable at one MW/MVAr/voltage/SCR condition and unstable at another. With IBRs, damping can move rapidly with active power, reactive power, voltage control mode, PLL bandwidth, current limit status and neighbouring plant dispatch.
Third, the network seen by the inverter is not just SCR. SCR is a 50 Hz Thevenin measure. SSO is concerned with impedance over a frequency range. NREL makes this point directly: SCR can give an over-optimistic estimate of grid strength, and nearby IBRs can substantially change the grid strength seen by an IBR at non-fundamental frequencies.
Fourth, frequency coupling. A perturbation at, say, 20 Hz can produce coupled behaviour at 30 Hz, 70 Hz, 80 Hz, etc., depending on the frame and control structure. A scalar impedance or simple positive-sequence scan can miss this unless the method captures dq, sequence, or MIMO coupling properly.
Fifth, limiters and discontinuities. Eigenvalue analysis assumes small-signal behaviour around a smooth operating point. Real IBR behaviour may change when current limits, PLL limits, voltage ride-through logic, DC-link limits, Q-priority/P-priority switching, protection supervision, or plant controller deadbands engage.
Impedance scanning helps because it asks a different question. Instead of trying to build the full internal state-space model, it perturbs the device or network in EMT and measures the frequency-dependent response. In simple terms, it asks:
“How does this plant look electrically to the grid at each frequency?”
and
“How does the grid look electrically to the plant at each frequency?”
If the source and load impedances have the wrong magnitude/phase relationship, you can identify a possible unstable interaction. This is especially useful for black-box EMT models. It can reveal negative resistance regions, poor phase margin, resonant intersections, PLL interaction, current-loop interaction, filter-network interaction, and adverse interaction between neighbouring IBRs.
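In SISO terms, the screening idea can be sketched in a few lines: form the ratio of grid impedance to source impedance at the boundary and flag frequencies where its magnitude exceeds unity with the phase approaching 180° (a Middlebrook/Nyquist-style margin check). The impedances below are made up for illustration, not measured or vendor data:

```python
import numpy as np

f = np.linspace(5, 100, 2000)       # scan band, Hz
s = 2j * np.pi * f

# Illustrative impedances only:
Z_g = 0.1 + s * 0.03                             # grid: series R-L
Z_s = 50.0 / (1 + s / (2 * np.pi * 30)) - 0.5    # toy converter output
                                                 # impedance with a
                                                 # negative-resistance tail

ratio = Z_g / Z_s                   # "minor-loop gain" at the boundary
mag = np.abs(ratio)
phase = np.degrees(np.angle(ratio))

# Screen: gain above unity with little phase margin is a risk flag
risky = (mag >= 1.0) & (np.abs(phase) > 150.0)
print("risky frequencies (Hz):",
      np.round(f[risky][:5], 1) if risky.any() else "none")
```

A real study would of course use measured or EMT-scanned impedances and a proper Nyquist plot rather than a threshold screen, but the question being asked is the same.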
But impedance scanning is not magic. Its blind spots are real.
It depends heavily on where you scan. The result at the inverter terminals, collector bus, grid connection point, or remote Thevenin boundary can look different.
It depends on the perturbation method. Small perturbations may not trigger limiters; large perturbations may no longer be small-signal. Chirp, single-tone, dq injection, sequence injection and phase-domain injection can give different insight.
It can mislead if treated as SISO. Many IBR interactions are MIMO: d-axis/q-axis, positive/negative sequence, frequency-coupled terms. A simple ratio of two scalar impedances can hide important coupling.
It does not automatically tell you participation. Eigenvalues can tell you which states are participating in a mode if the model is transparent. Impedance scans can show the interaction risk, but not always which internal controller state is the root cause.
It is usually local. It is very good for “plant versus grid at this boundary”, but weaker for full-system modal observability unless combined with systematic scanning and EMT studies.
It is still linearised behaviour. If the real event involves control mode switching, protection action, limiters, ride-through transitions, or saturation, impedance scanning may understate or mischaracterise the risk.
So the best modern practice is not “eigenvalues are obsolete”. It is more like this:
Use eigenvalue analysis where the state-space model is trustworthy and transparent.
Use EMT simulation where fast controls, switching, sequence coupling and protection behaviour matter.
Use impedance scanning where black-box IBR models and weak-grid control interactions dominate.
The reason the industry has moved toward impedance methods is practical: modern SSO risk is often sitting inside proprietary converter controls interacting with frequency-dependent network impedance. Classical eigenvalue analysis only sees that if the true internal equations are available and correctly linearised. Often they are not.
ChatGPT Follow-Up Response
Yes, the black-box issue is a major driver, but it is not the only issue.
Generic WECC / IEC models are useful for bulk-system behaviour: voltage recovery, active/reactive power response, ride-through envelopes, plant-level controls, frequency response, and approximate transient stability. They are much less reliable for SSO / control interaction studies, because the instability may depend on details that generic models deliberately remove: PLL bandwidth, current-loop tuning, digital delay, filters, inner protection logic, current limiting, DC-link controls, converter-level impedance shaping and vendor-specific mode switching. NERC’s EMT guidance makes the same practical distinction: RMS-type models are not intended to represent sub-cycle phenomena and fast converter controls, while EMT models are needed where those behaviours matter.
So yes, the loss of fidelity can be significant. A generic model can be “good enough” for a 2–10 second voltage recovery study and still be actively misleading for a 10–40 Hz converter-control interaction. It may reproduce terminal P/Q response while having the wrong internal impedance characteristic. That is the critical point: matching time-domain envelopes does not guarantee matching small-signal impedance.
Open generic models are still valuable, but mainly as a planning-screening tool. They let TSOs run consistent studies, avoid total dependence on encrypted vendor blocks, and build system-wide cases. But I would not trust a generic WECC/IEC model as the final word on IBR-driven SSO unless it has been specifically validated against measured or OEM EMT impedance characteristics over the relevant frequency range. WECC’s own recent modelling guidance frames generic IBR models as non-proprietary and accessible, but that accessibility is not the same thing as full vendor fidelity.
If truly detailed open models were available, eigenvalue analysis would become much more useful again. You could linearise the actual converter, plant controller and network model, inspect participation factors, identify which states are involved, and carry out sensitivity studies properly. In that world, eigenvalue analysis would not be obsolete at all.
But impedance scanning would still be useful.
The reason is that impedance scanning is not merely a workaround for black boxes. It is a different diagnostic view of the system. It tells you what the plant or grid “looks like” electrically across frequency at a defined boundary. That remains useful even if you have the full equations, because it gives a compact interface characteristic for plant-grid interaction, especially when comparing different grid conditions, neighbouring IBRs, SCR/XR changes, series compensation, filter banks, cable networks, or weak-grid operating points. NREL describes impedance methods as widely used for assessing dynamic interactions between IBRs and the surrounding grid.
The cleanest answer is this:
Eigenvalue analysis is best for root cause when the model is transparent and valid.
Impedance scanning is best for interface risk: “will this plant interact badly with this grid at this boundary and frequency range?”
EMT time-domain simulation is best for confirmation under realistic disturbances, limits and protection behaviour.
With open detailed models, you would ideally use all three. Eigenvalues to understand the mode, impedance scanning to understand the plant-grid coupling and margins, and EMT to prove the behaviour under realistic events.
The real danger is replacing one false certainty with another. Generic open models can give a false sense of transparency. Impedance scans can give a false sense of completeness. The serious approach is validation: compare model response against site tests, factory tests, EMT benchmark cases, impedance scans, and actual event records where available.
Claude (Opus 4.7)
A useful first push-back on the framing: it’s less “move away from eigenvalue analysis” and more “eigenvalue analysis has become insufficient on its own.” Most serious modern studies use both methods plus EMT validation. But the question of why the classical approach now misses things is the right question to dig into.
Why eigenvalue analysis misses unstable IBR modes
The dominant reason is mundane but decisive: you cannot construct the A matrix you don’t have the equations for. OEM IBR models are delivered as encrypted PSCAD “black boxes” or compiled DLLs. The control law, PLL structure, current limiter logic, FRT switching thresholds - none of it is exposed. So the “if all state space equations are known” premise is doing enormous work. In a real GB or US transmission study with 20+ different IBR vendors, that premise is essentially never satisfied.
Even where white-box models exist, several deeper issues bite:
- Mode-switched dynamics. Current limiters, FRT logic, anti-windup, freeze logic, GFM mode transitions (the V_crit issue you’re working on is exactly this) - these aren’t smooth nonlinearities you can ignore with small-signal linearisation. The system structure literally changes. A single A matrix doesn’t represent the system; you need a family of them and a switching logic. Many real SSO incidents (ERCOT Panhandle SSCI, several UK BESS oscillation events) involved limiter activity, where the linearised “small-signal stable” model was correct but irrelevant.
- Operating-point density. For a synchronous-machine-dominated system, a handful of dispatch cases captured the envelope. For IBR-rich, low-SCR networks, the small-signal A matrix is highly sensitive to: SCR at the POI, PLL bandwidth, number of inverters online, dispatch split between GFL and GFM, reactive support state. The combinatorial explosion of linearisation points needed is what defeats the method in practice, not the maths.
- Numerical conditioning and scale. Modern A matrices for a regional network span timescales from kHz (inner current loops) to sub-Hz (outer voltage/frequency loops). Eigenvalue extraction on badly-conditioned matrices of order 10⁴–10⁵ is where you start questioning every result.
- The “infinite bus” hangover. A surprising amount of legacy small-signal tooling still implicitly assumes a stiff voltage at the device terminal. With low SCR, the network impedance and the inverter dynamics are co-determinants of stability - neither is “the small thing perturbing the big thing.”
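For what it's worth, one standard mitigation for the scale problem: you rarely need the full spectrum, only the eigenvalues near the band of interest, and sparse shift-invert Arnoldi can target those directly. A scipy sketch on a stand-in sparse matrix (a planted 20 Hz mode inside a large stable random system, not a real network model):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Plant a lightly damped 20 Hz pair inside a large stable sparse system
# (illustrative stand-in for a regional A matrix).
w20 = 2 * np.pi * 20
pair = sp.csc_matrix(np.array([[-0.1,  w20],
                               [-w20, -0.1]]))
rest = sp.random(2000, 2000, density=1e-3, random_state=0) - 5.0 * sp.eye(2000)
A = sp.block_diag([pair, rest]).tocsc().astype(complex)

# Shift-invert Arnoldi: ask only for the eigenvalues nearest
# sigma = j*2*pi*20, i.e. the SSO band, instead of the whole spectrum.
vals = spla.eigs(A, k=6, sigma=1j * w20, return_eigenvectors=False)
print(np.sort_complex(vals))
```

This sidesteps the cost of a dense eigendecomposition, though it does nothing for the conditioning and mode-switching problems described above.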
Why low strength / low inertia specifically
Because PLL-based grid-following inverters use the terminal voltage to estimate angle, and the terminal voltage is itself a function of inverter current through the network impedance. At high SCR this loop is weakly coupled and stable; at low SCR it’s a dominant feedback path that classical methods treat as “the network” - i.e. a static element - when it’s actually a dynamic actor. Inertia matters because it broadens the frequency band over which you have to worry: you no longer have a clean separation between EMT phenomena (>100 Hz, ignore in stability studies) and electromechanical (<5 Hz, the only thing that matters).
What impedance scanning catches
Two things, fundamentally:
- Black-box compatibility. You treat each IBR as a frequency-dependent impedance/admittance, measured by perturbation. You don’t need the equations. This is the single most important practical advantage.
- Frequency-resolved network–device interaction. The Generalised Nyquist Criterion applied to the source impedance and grid impedance pair tells you about resonant interactions across a broad spectrum, including harmonic and interharmonic ranges that conventional positive-sequence small-signal models throw away.
Naturally captured: dq cross-coupling (the impedance is a 2×2 matrix), digital sampling delays, filter resonances, and PLL-induced negative real-part regions in the impedance.
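The MIMO point can be sketched directly: the Generalised Nyquist Criterion examines the characteristic loci, i.e. the eigenvalues of the 2×2 return-ratio matrix Zg(jω)·Ys(jω) swept over frequency, rather than a scalar impedance ratio. Both matrices below are illustrative placeholders, not derived from any real converter:

```python
import numpy as np

w0 = 2 * np.pi * 50                  # fundamental, rad/s
f = np.linspace(1, 200, 1000)        # scan band, Hz

loci = []
for s in 2j * np.pi * f:
    # Illustrative 2x2 dq grid impedance (series R-L with dq coupling):
    Rg, Lg = 0.05, 0.02
    Z_g = np.array([[Rg + s * Lg, -w0 * Lg],
                    [w0 * Lg,      Rg + s * Lg]])
    # Illustrative 2x2 dq converter admittance with cross-coupling:
    y = 0.8 / (1 + s / 100)
    Y_s = np.array([[y,   -0.1],
                    [0.2,  y]])
    # GNC works on the characteristic loci of the return ratio Z_g @ Y_s
    loci.append(np.linalg.eigvals(Z_g @ Y_s))

loci = np.array(loci)                # shape (n_freq, 2)
print(f"closest approach to -1: {np.abs(loci + 1).min():.3f}")
```

The off-diagonal terms are exactly what a scalar (SISO) scan discards; in a full assessment the loci would be plotted and encirclements of −1 counted, subject to the RHP-pole caveat discussed below.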
Blind spots of impedance scanning
Worth being honest about - the method is not a panacea:
- Still small-signal, still linearised. Same operating-point dependence; doesn’t predict limiter-driven large-signal events any better than eigenvalue analysis.
- MIMO problem with multiple IBRs. Single-port scanning is straightforward; multi-port impedance with N inverters becomes a 2N×2N matrix problem and the Nyquist criterion gets ugly. Most practical studies cheat by scanning one device against an aggregated rest-of-system.
- No mode identification. It tells you the system is unstable; it doesn’t tell you which control loop in which device is the culprit. There’s no participation-factor equivalent. This is a major weakness for diagnosis and mitigation.
- Reference frame ambiguity. dq impedance, sequence impedance, and phasor impedance can give different stability conclusions for the same system. Practitioners often don’t realise they’re comparing apples and oranges across studies.
- RHP-pole counting. GNC requires you to know how many RHP poles each subsystem has. For black-box vendor models, this is genuinely unknowable, and people quietly assume zero. Sometimes that assumption is wrong.
- Computational cost is not trivial. A proper scan over frequency, operating point, and topology is many EMT runs. Often more expensive than people pretend.
- PLL is the perennial troublemaker. PLL dynamics produce a strongly operating-point-dependent negative-resistance region in the dq impedance; small changes in dispatch shift it noticeably.
The honest summary: the move toward impedance methods isn’t because they’re theoretically superior - they have their own (different) blind spots - but because they’re the only family of methods that copes with the data-availability and structural-nonlinearity reality of the modern fleet. The serious studies use both, plus targeted EMT, and treat any method giving a clean answer on its own with suspicion.
Opus Follow-Up Response
A useful pushback to start with: the framing “the problem appears to be purely that OEM models are black-boxed” is too strong. Even with full open white-box models you’d still face: mode-switched dynamics (limiters, FRT logic, GFM mode transitions), the combinatorial explosion of operating points, MIMO interaction at scale, and numerical conditioning of A matrices on the order of 10⁵+. Black-boxing is the dominant practical obstacle, not the only fundamental one. Worth keeping that in view as we go.
Are generic WECC/IEC models adequate?
For SSO/SSCI work specifically, the honest answer is: marginal-to-inadequate, and the reasons are structural, not just parametric.
The WECC 2nd-generation generics (REGC_A/B/C, REEC_A/B/C, REPC_A) and IEC 61400-27 family were designed for positive-sequence RMS transient stability simulation - i.e. the electromechanical timeframe, sub-5 Hz. The inner current loop in REGC_A is essentially algebraic. PLL representation is a simplified first-order or absent. Modulation and sampling delays aren’t there. Vendor-specific FRT logic, anti-windup, and current-limiter switching are approximated by generic blocks that may not match the OEM’s actual implementation in either threshold or topology.
This matters because the SSO band of interest (the Hornsea-type events around 9 Hz, Panhandle SSCI in the 20–30 Hz range, weak-grid PLL interactions across a similar span) sits squarely in the frequency range where generic model fidelity falls off. The generic model will often correctly predict that there’s a sensitivity to SCR or dispatch - it will miss the exact frequency, growth rate, or damping crossover, sometimes by enough to matter operationally.
For other study types the picture is different:
- Transient stability / FRT envelope screening: generics are adequate and arguably appropriate (their level of detail matches the question being asked).
- Protection / fault contribution: generics are inadequate - they don’t capture the fault current waveform shape or the active sequence-control behaviour during unbalanced faults.
- Harmonic / interharmonic resonance: generics don’t help at all; you need impedance characterisation.
- GFM transient behaviour (your patch): the WECC REGFM_A1 work is still maturing, and the dual-mode behaviour you’re working on is exactly the sort of thing a generic struggles to represent because the structure changes between modes, not just the parameters.
So a TSO mandate for “WECC generic” gets you a screening tool, not an analytical instrument. There’s a real risk of false confidence - passing a study with a generic model that the actual hardware would fail.
Does the fidelity loss matter?
It matters asymmetrically. For exclusion (proving a problem exists) generics are often sufficient - if a generic model goes unstable, the real plant probably will too. For inclusion (proving a problem doesn’t exist) they are weak - generic stability is no guarantee of OEM stability. Most TSO acceptance studies are framed as the latter, which is the wrong way round to be relying on simplified models.
There’s also a subtler issue: generics encourage a false sense of comparability between OEMs. Two manufacturers tuned to the same WECC parameters look identical in the study but behave very differently in service, particularly at the limits. This has real implications for how you characterise system-wide aggregate behaviour.
Would impedance scanning still be useful with detailed open models?
Yes, and for reasons that go beyond the access problem. A few that I think are genuinely durable:
- Computational tractability and aggregation. Even with full white-box models for every device, eigenvalue analysis of a regional system with thousands of IBRs is at the edge of practical computation. Impedance methods aggregate naturally - you can parallel-combine impedances at a bus and reduce a complex subsystem to a frequency-dependent equivalent without losing the interaction information. Eigenvalue methods don’t aggregate cleanly without losing modes.
- Analytical vs scanning distinction. With white-box models you don’t need to scan - you can derive impedance analytically (the Sun/Wen/Belkhayat tradition). This is more elegant than perturbation scanning and gives closed-form insight into which control parameters drive which features of the impedance. So the method survives even when the technique changes.
- Frequency-resolved interpretation. A Bode plot of source/grid impedance tells an engineer something an eigenvalue list doesn’t: where the system is fragile across frequency, where the negative-resistance region of the PLL sits, where harmonic resonances align with control bandwidth. The diagnostic value is high in a way eigenvalue tables aren’t.
- Validation against measurement. PMU data, disturbance recordings, and dedicated injection tests produce de-facto impedance measurements. You can validate an impedance model against the real plant directly. Validating a state-space model against measurement is much harder - you need to back out modes from time-series response, which is itself an ill-posed problem.
- Cross-method confidence. Independent methods that agree increase confidence; when they disagree, the disagreement itself is diagnostic. A serious study practice would always run both even with full information, just as you’d never rely solely on RMS or EMT.
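The aggregation point above is simple enough to show directly: admittances at a common bus combine in parallel (they add), so a population of plants reduces to one frequency-dependent equivalent. A scalar sketch with made-up first-order admittances (real studies would use 2×2 dq matrices; scalars keep the sketch short):

```python
import numpy as np

f = np.linspace(5, 100, 500)
s = 2j * np.pi * f

# Illustrative per-plant admittances seen at a common bus:
Y_plants = [0.5 / (1 + s / 80.0),
            0.3 / (1 + s / 120.0),
            0.4 / (1 + s / 60.0)]

# Admittances in parallel simply add, so the whole population reduces
# to one frequency-dependent equivalent that preserves the interaction
# information over frequency.
Y_agg = sum(Y_plants)
Z_agg = 1.0 / Y_agg
print(f"|Z_agg| at {f[0]:.0f} Hz: {np.abs(Z_agg[0]):.3f}")
```

The contrast with eigenvalue methods is the point: an eigendecomposition of the reduced equivalent does not exist in the same sense, because modal information is lost the moment you aggregate states.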
What would change with full open models is the role: impedance methods would become more about diagnosis, aggregation, and validation, and less about coping with vendor opacity. The participation-factor weakness of impedance methods would become the main reason to keep eigenvalue analysis in the toolkit - it’s still the best method for answering “which control loop in which device is causing this mode.”
The framing I’d push you towards
It’s not “open models would solve the problem.” It’s “open models would let you choose the right method for the question, instead of being forced into impedance scanning by lack of access.” The methods are complementary by nature - black-boxing has just made one of them mandatory and stripped you of the ability to use the other to cross-check it. The deeper issue is that no single linearised method, however well-informed, captures limiter-driven and mode-switched behaviour in low-SCR networks. That ultimately requires EMT, which is why even the best small-signal study should be treated as a screening step, not a verdict.