How LiveSPICE works: numerically solving circuit equations

March 28, 2014

The purpose of LiveSPICE is to simulate circuits for audio signals in real time. Simulating a circuit requires solving for the new state of the circuit given its current state and a new input signal. This post describes the algorithm LiveSPICE uses to evaluate this relationship quickly enough to run in real time.

My hope is that this post fills what I found to be a gap in the literature on solving this kind of system. It took me several months to converge on this solution (numerical analysis pun!); maybe this approach is supposed to be obvious, but it certainly was not obvious to me.

Problem statement

To be more specific, the problem this post addresses is solving the system of equations produced by analyzing a circuit via modified nodal analysis (MNA) or other similar techniques. The basic result of such an analysis is a differential algebraic equation (DAE) of the following form, describing the behavior of the circuit:

\mathbf{A y}'(t) + \mathbf{B y}(t) + \mathbf{F}(\mathbf{y}(t)) = \mathbf{c}(t)\tag{1}

Note that this representation has split the algebraic part of the system into linear \mathbf{B y}(t) and explicitly non-linear \mathbf{F}(\mathbf{y}(t)) parts. This allows the system to be statically analyzed further than is possible with a single term representing both linear and non-linear algebraic equations.

Also note that this representation requires that \mathbf{A} is constant w.r.t. t, i.e. the differential terms of the system are linear. This is the case when working with the standard circuit components.
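
To make (1) concrete, here is a small hand-derived example (a sketch I've added for illustration, not LiveSPICE's actual output; the variable ordering and sign conventions are one choice among several). Consider a voltage source V_{in}(t) driving a resistor R into a node loaded by a capacitor C and a diode to ground, with unknowns \mathbf{y}(t)=\left[\begin{array}{ccc}v_{in}(t)&v_{out}(t)&i_s(t)\end{array}\right]^T, where i_s is the branch current through the source. MNA gives:

\mathbf{A}=\left[\begin{array}{ccc}0&0&0\\0&C&0\\0&0&0\end{array}\right],\quad\mathbf{B}=\left[\begin{array}{ccc}\frac{1}{R}&-\frac{1}{R}&1\\-\frac{1}{R}&\frac{1}{R}&0\\1&0&0\end{array}\right],\quad\mathbf{F}(\mathbf{y}(t))=\left[\begin{array}{c}0\\I_S\left(e^{v_{out}(t)/V_T}-1\right)\\0\end{array}\right],\quad\mathbf{c}(t)=\left[\begin{array}{c}0\\0\\V_{in}(t)\end{array}\right]

The first two rows are KCL at the two nodes (with the diode contributing the non-linear Shockley term in \mathbf{F}), and the third row is the voltage source constraint v_{in}(t)=V_{in}(t). Note that \mathbf{A} has only one non-zero row, so it is singular; this is exactly the situation dealt with below.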

The other part of the problem is that we need to find these solutions very quickly. Our overall strategy for making this fast is to break the problem into two pieces: work that can be done once at analysis-time, and work that must be done at run-time for each sample we process. Our goal is to minimize the amount of work done at run-time, even if that means moving expensive work to analysis-time.

Discretizing the system

The goal of a numerical simulation of the system is to determine values for \mathbf{y}(t) satisfying (1) at some regular sampling of t. The first step in solving this problem is to eliminate the differential equations. To do this, we apply a numerical integration technique, such as the trapezoidal method used in LiveSPICE:

\mathbf{y}(t) = \mathbf{y}(t-h) + \frac{h}{2}\left(\mathbf{y}'(t-h) + \mathbf{y}'(t)\right)

This introduces a relationship between the state of the circuit at the previous timestep, \mathbf{y}(t-h), and at the current timestep, \mathbf{y}(t). To use it directly, we have to solve (1) for \mathbf{y}'(t). It is tempting to do this directly:

\mathbf{y}'(t) = -\mathbf{A}^{-1}\left(\mathbf{B y}(t) + \mathbf{F}(\mathbf{y}(t)) - \mathbf{c}(t)\right)\tag{2}

However, this solution is not valid, because \mathbf{A} is not invertible; only some of the equations in (1) are differential. Even worse, the non-zero rows of \mathbf{A} (the rows corresponding to differential equations) may not be independent (there may be more differential equations than unknown differentials). To work around this problem, consider this representation of (1):

\left[\begin{array}{ccc}\mathbf{A}&\mathbf{B}&\mathbf{F}(\mathbf{y}(t))\end{array}\right] \left[\begin{array}{c}\mathbf{y}'(t)\\\mathbf{y}(t)\\1\end{array}\right]=\mathbf{c}(t)

Row reduction of the system above results in a new system of the form:

\left[\begin{array}{ccc}\mathbf{A}_1&\mathbf{B}_1&\mathbf{F}_1(\mathbf{y}(t))\\\mathbf{0}&\mathbf{B}_2&\mathbf{F}_2(\mathbf{y}(t))\end{array}\right] \left[\begin{array}{c}\mathbf{y}'(t)\\\mathbf{y}(t)\\1\end{array}\right]=\left[\begin{array}{c}\mathbf{c}_1(t)\\\mathbf{c}_2(t)\end{array}\right]

Note that in the above, the number of equations remains the same; we've just split them into two groups. The first group gives us an ordinary differential equation:

\mathbf{A}_1\mathbf{y}'(t)+\mathbf{B}_1\mathbf{y}(t)+\mathbf{F}_1(\mathbf{y}(t))=\mathbf{c}_1(t)

where the rows of \mathbf{A}_1 are linearly independent, meaning we can solve for \mathbf{y}'(t) similarly to (2), and apply the trapezoidal rule (or another numerical integration scheme). We also get an algebraic equation from the second group:

\mathbf{B}_2\mathbf{y}(t)+\mathbf{F}_2(\mathbf{y}(t))=\mathbf{c}_2(t)

Combining the above algebraic equations with the equations resulting from numerically integrating \mathbf{y}'(t) yields a fully determined set of algebraic equations that can be solved for \mathbf{y}(t).
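
To spell this out (nothing new here, just the trapezoidal rule rearranged and substituted): solving the trapezoidal rule for \mathbf{y}'(t) gives \mathbf{y}'(t)=\frac{2}{h}\left(\mathbf{y}(t)-\mathbf{y}(t-h)\right)-\mathbf{y}'(t-h), and substituting this into the first group yields

\left(\frac{2}{h}\mathbf{A}_1+\mathbf{B}_1\right)\mathbf{y}(t)+\mathbf{F}_1(\mathbf{y}(t))=\mathbf{c}_1(t)+\mathbf{A}_1\left(\frac{2}{h}\mathbf{y}(t-h)+\mathbf{y}'(t-h)\right)

Everything on the right hand side is known from the previous timestep, so together with the second group this contains no derivatives of \mathbf{y}(t).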

Solving the system

At this point, we have a fully algebraic system of potentially non-linear equations (rewritten from the above two sets of equations):

\mathbf{Dy}(t)+\mathbf{H}(\mathbf{y}(t))=\mathbf{0}\tag{3}

to which we can apply a non-linear solver such as Newton's method. Begin the iteration with initial guess \mathbf{y}_0=\mathbf{y}(t-h) (the state of the circuit at the previous timestep), and iterate using \mathbf{y}_{n+1}=\mathbf{y}_n+\Delta\mathbf{y}_n until convergence, where \Delta\mathbf{y}_n is found by solving

\mathbf{J} \Delta\mathbf{y}_n + \mathbf{D}\mathbf{y}_n+\mathbf{H}(\mathbf{y}_n)=\mathbf{0}

where \mathbf{J} is the Jacobian of the system in (3), evaluated at \mathbf{y}_n. For a particular value of \mathbf{y}_n, this is a well-defined system of linear equations that can easily be solved with any of the standard methods.
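
Here is a minimal sketch of this iteration in Python/NumPy (not the actual C# implementation; D, H and dH are placeholders for the matrix, non-linear function, and its Jacobian produced by the analysis above, and the tolerance and iteration cap mirror the values mentioned in the comments below):

import numpy as np

def newton_solve(D, H, dH, y_prev, tol=1e-4, max_iter=8):
    # Solve D y + H(y) = 0 for y, starting from the previous timestep's state.
    y = y_prev.astype(float)
    for _ in range(max_iter):
        f = D @ y + H(y)              # residual of (3) at the current iterate
        J = D + dH(y)                 # Jacobian of (3): d/dy [D y + H(y)]
        dy = np.linalg.solve(J, -f)   # Newton update step
        y += dy
        if np.all(np.abs(dy) <= tol * np.abs(y)):
            break                     # every variable satisfies |dy| < |y|*epsilon
    return y

The np.linalg.solve call inside the loop is the expensive step; shrinking it is the subject of the next section.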

Optimizing Newton's method for real time performance

Running the simulation in real time for non-trivial circuits requires significant optimization of the Newton's method iteration described in the previous section. The most effective way to reduce the cost of Newton's method is to reduce the dimensionality of the problem. Our goal is to solve as much of the system as possible at analysis-time, minimizing the dimensionality of the system that must be solved at run-time during simulation.

To this end, observe that most of the equations in a system derived from a typical circuit are linear, even when viewed symbolically at analysis-time. We can rearrange the system so that the linear variables are solved for at analysis-time (as symbolic expressions of the non-linear variables), leaving a lower-rank system to be solved during simulation. To do this, rearrange the columns of the Jacobian matrix \mathbf{J} (and the corresponding rows of \mathbf{y}(t)) such that the entries of \mathbf{J} that contain non-linear terms appear in the last columns:

\mathbf{J}=\left[\begin{array}{cc}\mathbf{A}&\mathbf{B}(\mathbf{y}(t))\end{array}\right]

Row reduction gives:

\mathbf{J}_{opt}=\left[\begin{array}{cc}\mathbf{A}_1&\mathbf{B}_1(\mathbf{y}(t))\\\mathbf{0}&\mathbf{B}_2(\mathbf{y}(t))\end{array}\right]

To restate the observation more precisely: only the components of \Delta\mathbf{y}_n corresponding to \mathbf{B}_2 need to be solved for numerically during simulation. \mathbf{A}_1 can be reduced to upper triangular form at analysis-time, so the remaining components of \Delta\mathbf{y}_n can be solved for directly in terms of the non-linear components via back-substitution.

This optimization works by moving variables from an iterative solver with steps that are O(n^3) (Newton's method) to a closed form solution that is O(n^2) (back-substitution).
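
As a rough sketch of what one run-time Newton update looks like with this split (Python/NumPy again, not the actual implementation; A1 is the constant upper triangular block prepared at analysis-time, while B1, B2 and the reduced residuals f1, f2 stand for quantities assumed to be evaluated at the current iterate \mathbf{y}_n):

import numpy as np
from scipy.linalg import solve_triangular

def partitioned_newton_step(A1, B1, B2, f1, f2):
    # Solve [[A1, B1], [0, B2]] @ [dy_lin, dy_nl] = -[f1, f2] for the update.
    # Only the (small) non-linear block needs a general linear solve at run-time.
    dy_nl = np.linalg.solve(B2, -f2)
    # The linear block then falls out by back-substitution through the triangular A1.
    dy_lin = solve_triangular(A1, -f1 - B1 @ dy_nl)
    return np.concatenate([dy_lin, dy_nl])

The dimension of B2 is the number of non-linear variables, which for a typical circuit is much smaller than the full system, so the O(n^3) solve shrinks accordingly.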

To quickly illustrate the potential of this optimization, I observed a 33x speedup on a particular amplifier circuit, with practically identical numerical results. On my machine, this brings the performance from well below real time to nearly 7 times faster than real time. In other words, without this optimization, the simulation could not be run in real time.

Numerical stability

A naive, run-time-only approach to evaluating Newton's method can use the usual numerically robust techniques. For example, if Gaussian elimination is used to solve each iteration step, partial or full pivoting can be used.

We cannot be as careful when rearranging the system of equations during analysis for performance, because selecting the largest pivots and selecting pivots corresponding to linear variables are competing goals. In addition, the candidate pivots are often symbolic expressions whose magnitudes cannot be computed at analysis-time. It is possible that, by applying the rank reduction optimization to the Newton's method system at circuit analysis-time, Gaussian elimination uses a pivot that would not have been the maximum pivot found at run-time, resulting in a less numerically stable solution.

While this is a concern, I have not yet found a test circuit that produces a numerically unstable simulation when it should be stable. Unfortunately, I don't have much more I can say about this issue.

Conclusion

This is a relatively simple technique that works well for general circuits containing the usual set of components, and produces highly efficient numerical solutions that can be run in real time at audio sampling rates. It is probably also useful for other system simulation problems. I hope someone out there found this useful! Please don't hesitate to let me know if you find a mistake, or something is unclear.

The implementation of this method in LiveSPICE can be found in TransientSolution.cs.

15 thoughts on “How LiveSPICE works: numerically solving circuit equations”

  1. Jay Thomas

    Thank you for writing this. I am very interested in learning more about what you have done here. I am having trouble following your explanation when it gets to the section right before "Solving the system". By the time you get into this section, I am completely lost. Is it possible you could provide a concrete simple example of the equations that result from a simple circuit? I would be very grateful if you could help me. Thanks.

    1. Dillon Post author

      I'll make a quick edit to that section to maybe make it a bit more clear (it's possibly unclear that the result of row reduction is referring to just the system matrix from the previous step).

      I'll work on a concrete example to add (that's a good idea... I should have thought of that in the first place 🙂 ).

  2. Jay Thomas

    Thanks very much. After reading the article again, it is making more sense. Thanks again for writing it as this is helping me gain insight into a design problem I've been working on. I look forward to seeing your concrete example.

  3. Jay Thomas

    As I've been working through the math, a question came up that I thought I'd ask. Let's say you had a two node circuit and had a differential equation as your first equation: dv1(t)/dt + K1*v1(t) + K2*v2(t) = 0. When you put that into the trapezoidal equation, you eliminate the derivative and are left with terms in the resulting equation that have v1(t-h) and v2(t-h). My question is how to take the partial derivative of that when computing the Jacobian. Are these types of terms just regarded as constants when taking the partial derivatives? Also, in your estimation, is the trapezoidal approximation good enough for solving circuits that have feedback loops? Thanks for any more help.

    1. Dillon Post author

      Yes, when taking the partial derivative of a function, if that function is not a function of the variable you are differentiating with respect to, it is treated as a constant. Note that unless the other equation is non-linear, the result of applying the trapezoid rule to the differential equation is a linear algebraic equation, and you just have a system of linear equations. So, the Jacobian should be constant (and Newton's method would converge in one step).

      I've used the trapezoid rule for everything in LiveSPICE with no issues. The Qucs technical papers (http://qucs.sourceforge.net/tech/node24.html) have a very detailed analysis of the performance of various integration methods if you really want to dig into it. The basic summary of that page is: implicit methods are stable for the classes of equations we care about, while explicit methods are not. The trapezoid rule is a nice balance of complexity and accuracy. In my experience, backwards Euler needs much smaller timesteps for similar accuracy, while the higher order methods are overkill.

  4. Jay Thomas

    OK, great. So you mentioned the issue of convergence. After an initial startup where some number of input samples are used to calculate the v1(t) and v2(t) states from the set of non-linear equations, has the algorithm converged so that we can now always go through a single calculation cycle to calculate those states from our new vin sample? How do we know if the algorithm has somehow fallen into a non-converged state?

    1. Dillon Post author

      This is a really big topic 🙂

      Initial conditions are hard. The simplest thing to do is start all the nodes at 0 and start iterating. However, this can fail for circuits where 0 is very far from the typical operating condition of the circuit. For some circuits, it can fail to converge for any input, meaning it's impossible to start simulating.

      The best option available is likely to solve for the steady state of the circuit, and use that as the initial guess whenever you start a new simulation. The steady state of the circuit is the state of the circuit when all of the inputs are held constant. This is done by treating all derivatives as zero (i.e. capacitors are open circuits, inductors are short circuits), setting all time variables to some fixed constant, and just using some reasonable values for any voltage/current sources or other inputs. This is also sometimes known as a "DC analysis". You can see how LiveSPICE implements this here: https://github.com/dsharlet/LiveSPICE/blob/master/Circuit/Simulation/TransientSolution.cs#L92,L119

      Once you've started running, you usually need to iterate at each timestep (usually this corresponds to a new input sample) until convergence. The number of iterations needed to converge can vary widely. LiveSPICE iterates until all of the variables satisfy the condition |dv| < |v|*epsilon, where dv is the Newton's method update step, v is the node voltage, and epsilon is 1e-4. I'm not sure how many iterations this translates to in practice. I think it is typically less than 4-5, but can be many more for stiff/highly distorted circuits. LiveSPICE limits it to 8 iterations to make sure the simulation remains real-time, though it can still noticeably pop due to slowing down when running many iterations at each sample when you send large inputs into a stiff circuit (i.e. you mash the guitar with a distortion circuit running).

      Note that this 8 iterations per timestep is in addition to oversampling: LiveSPICE upsamples the input by a factor of 8 (so the simulation runs at 8*48 kHz), and downsamples back to the normal sample rate after simulation. This is necessary because high frequency signals are valid intermediate circuit state that affect the audible range.

      Usually, if the circuit diverges, some of the state variables shoot off to infinity, which quickly leads to all your floating point numbers turning into Inf/NaN, which is easy to detect 🙂

  5. Jay Thomas

    Cool, thanks again. I managed to successfully use this math to simulate a relatively simple circuit in MATLAB. It's amazing to me that this works so well and that it is not that hard to optimize for real time.

  6. Jay Thomas

    Yes, indeed. Thanks again for your help. Does your LiveSPICE program have a feature that allows one to look at the equations that have been deduced from the schematic, optimized and ready for real-time simulation? If it does, I say kudos... that would be an amazing feature. Either way, I'm also going to check out your program for doing some experimentation.

    1. Dillon Post author

      It does. Run the simulation and turn the log output to "verbose", as in this screenshot: http://livespice.org/img/marshallbluesbreaker.png

      You'll need to set it to verbose, and then restart the simulation (the green "play" button on the toolbar) so it re-runs with the verbose option on.

      It dumps out lots of info, including the results of solving the circuit equations. However, it doesn't format them very nicely, so they can be hard to read. It should dump out the equations from all the steps: the initial MNA equations, solving the differential equations, and optimizing Newton's method (separating the solution into linear and non-linear Newton's method updates).

  7. Ilia Sivkov

    Hello,

    We develop plugins for DAWs and are currently interested in circuit simulation. It would be very nice to know more about some of the technical details of your method. We would like to implement it in C++ with some parallelization techniques and also try other solvers for acceleration. We would really appreciate it if you could help us or take part in some development.

    Regards,
    Ilia Sivkov

