MoCaX accelerates many XVA calculations by orders of magnitude: pre-trade, overnight batch and sensitivities
XVA pricing - CVA, DVA, FVA...
XVA pricing is one of the biggest technology challenges the financial industry has ever faced.
MoCaX helps a lot in that space because it tackles precisely where the bottleneck of the calculation lies: the pricing step.
Most of the XVA computational effort is spent on re-pricing the portfolio around 1 million times. Indeed, many institutions can only run Monte Carlo XVA for simple vanilla products.
MoCaX expands XVA coverage in two ways:
- It enables ultra-fast (and ultra-accurate) pricing of non-vanilla products
- It accelerates existing pricing jobs, hence enabling new calculations that are difficult to imagine with standard technologies (e.g., XVA stress testing) and/or freeing computing bandwidth for other calculations (e.g., XVA sensitivities).
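To see where the roughly one million re-pricings come from, consider a toy unilateral CVA estimator: the pricer is called once per path per exposure date, so paths × dates pricing calls dominate the cost. All names and parameters below are illustrative, not MoCaX's; the stand-in pricer is a plain Black-Scholes call.

```python
import math
import random

def slow_pricer(spot, strike=100.0, vol=0.2, t=1.0, r=0.01):
    """Stand-in for a slow pricer: a Black-Scholes call (illustrative only)."""
    if t <= 0:
        return max(spot - strike, 0.0)
    norm_cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(spot / strike) + (r + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-r * t) * norm_cdf(d2)

def toy_cva(n_paths=1000, n_dates=100, spot0=100.0, vol=0.2, r=0.01,
            hazard=0.02, lgd=0.6, seed=7):
    """Toy unilateral CVA: note that (n_paths * n_dates) pricer calls are made."""
    random.seed(seed)
    dt = 1.0 / n_dates
    calls, cva = 0, 0.0
    for _ in range(n_paths):
        s = spot0
        for i in range(1, n_dates + 1):
            # simulate the risk factor one step forward (GBM)
            s *= math.exp((r - 0.5 * vol ** 2) * dt
                          + vol * math.sqrt(dt) * random.gauss(0.0, 1.0))
            epe = max(slow_pricer(s, t=1.0 - i * dt), 0.0)  # positive exposure
            calls += 1
            # default probability in (t_{i-1}, t_i] under a flat hazard rate
            dp = math.exp(-hazard * (i - 1) * dt) - math.exp(-hazard * i * dt)
            cva += lgd * dp * math.exp(-r * i * dt) * epe / n_paths
    return cva, calls
```

With 10,000 paths and 100 exposure dates, that inner call fires one million times, which is why the pricing step, not the path simulation, is the bottleneck.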
The APA algorithm in MoCaX is based on your existing pricing functionality, so there is no need to re-implement anything. Indeed, you can revisit your existing pricers because there is no longer any need for shortcuts, now that MoCaX can speed up the run time no matter how slow your pricer is. All this without loss of accuracy.
Implementation is straightforward, as the vast majority of your existing Monte Carlo engine remains untouched.
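APA's internals are not shown here, but the general idea of replacing a slow pricer with an accurate function approximation built from a handful of pricer calls can be sketched with Chebyshev interpolation (an illustrative technique under our own assumptions, not necessarily MoCaX's exact algorithm; all function names are ours):

```python
import math

def chebyshev_nodes(n, a, b):
    """Chebyshev-Lobatto points x_k = cos(pi*k/n), mapped from [-1,1] to [a,b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * k / n)
            for k in range(n + 1)]

def build_proxy(pricer, a, b, n=32):
    """Call the slow pricer once per node, then return a fast interpolant
    (barycentric formula, second-kind Chebyshev weights)."""
    xs = chebyshev_nodes(n, a, b)
    fs = [pricer(x) for x in xs]          # the only slow calls, done once
    ws = [(0.5 if k in (0, n) else 1.0) * (-1) ** k for k in range(n + 1)]

    def proxy(x):
        num = den = 0.0
        for xk, fk, wk in zip(xs, fs, ws):
            if x == xk:                   # exactly on a node: return stored value
                return fk
            t = wk / (x - xk)
            num += t * fk
            den += t
        return num / den

    return proxy
```

For smooth pricers this kind of approximation converges extremely fast in the number of nodes, which is why a few dozen calls to the original pricer can be enough; each subsequent evaluation is then just a short loop instead of a full pricing run.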
XVA pricing - MVA, the special case
MVA is a special case in the XVA spectrum, because it requires calculating the future Initial Margin inside a Monte Carlo simulation. This is quite a task. The AGA method within MoCaX is the only known way to handle this computation exactly.
Since September 2016, several financial institutions have been required to post Initial Margin to each other. The industry has chosen the SIMM for this, a sensitivity-based VaR computation. In order to compute MVA, we need to simulate SIMM inside the XVA Monte Carlo engine, which means we need to dynamically simulate the first-order sensitivities of our portfolio. This is precisely what AGA provides.
When a pricing function is passed through MoCaX, APA (Algorithmic Pricing Acceleration) creates an ultra-fast and ultra-accurate version of the pricer. AGA (Algorithmic Greeks Acceleration) then differentiates the APA representation of the pricer, so that its derivatives are also available, ultra-fast and ultra-accurate too.
In this way we can dynamically simulate SIMM (or Initial Margins like those of CCPs) inside the Monte Carlo engine. Computing MVA from there is a fairly trivial step.
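Differentiating a polynomial representation of a pricer is mechanical, which is what makes simulated first-order sensitivities cheap once the representation exists. The sketch below (illustrative only, on [-1, 1]; function names are ours, not MoCaX's) builds a Chebyshev series from pricer values and differentiates it by a standard coefficient recurrence:

```python
import math

def cheb_coeffs(f, n):
    """Coefficients c_j with f(x) ~ sum_j c_j T_j(x) on [-1, 1],
    from values at the Chebyshev-Lobatto points x_k = cos(pi*k/n)."""
    fk = [f(math.cos(math.pi * k / n)) for k in range(n + 1)]
    c = []
    for j in range(n + 1):
        s = 0.5 * fk[0] + 0.5 * fk[n] * (-1) ** j
        s += sum(fk[k] * math.cos(math.pi * j * k / n) for k in range(1, n))
        c.append(2.0 * s / n)
    c[0] *= 0.5   # halve the first and last coefficients
    c[n] *= 0.5
    return c

def cheb_derivative(c):
    """Coefficients of the derivative series via the standard recurrence
    d_{j} = d_{j+2} + 2*(j+1)*c_{j+1}, with the T_0 coefficient halved."""
    n = len(c) - 1
    d = [0.0] * (n + 2)
    for j in range(n - 1, -1, -1):
        d[j] = d[j + 2] + 2.0 * (j + 1) * c[j + 1]
    d[0] *= 0.5
    return d[:n + 1]

def cheb_eval(c, x):
    """Clenshaw evaluation of sum_j c_j T_j(x)."""
    b1 = b2 = 0.0
    for cj in reversed(c[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + c[0]
```

The key point: the derivative is obtained exactly from the coefficients, with no bumping and no extra pricer calls, and it is evaluated just as fast as the value itself.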
Below you can see the SIMM simulation of an at-the-money swaption. Paths that end up in-the-money cluster around a delta-one VaR, while those that end up out-of-the-money cluster around zero (as the swaption delta is then zero).
For pre-trade XVA analysis, speed is crucial. This is one of the features that MoCaX excels at.
MoCaX works beautifully for a vast range of products. Pricing time can be a few nanoseconds, which means you can price a trade 1 million times in a few milliseconds. And all of this with very high accuracy.
For example, MoCaX can price analytical options 100-200 times faster, swaps 2,000 times faster, American options 240,000 times faster, and Bermudan swaptions or barrier options up to 5,800,000 times faster*.
MoCaX opens up a new gate for the capabilities of pre-trade XVA analysis.
(*) Tests performed in C++ using QuantLib pricers on an i5 Intel processor; 1-dimensional case.
There are two methods for XVA sensitivities:
- Bump-and-reprice (BaRP)
- Adjoint Algorithmic Differentiation (AAD)
BaRP is easy to implement, but it requires a new Monte Carlo run per sensitivity. AAD can provide as many sensitivities as needed with very limited computational effort; however, it is difficult to implement.
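The cost structure of BaRP is easy to see in code: each sensitivity needs two additional full XVA runs (for a central difference). A minimal sketch, where the XVA function and the market dictionary are illustrative placeholders of our own, not a real API:

```python
def bump_and_reprice(xva_fn, market, key, bump=1e-4):
    """Central finite-difference sensitivity of an XVA number to one
    market input: two full XVA runs per risk factor (hence BaRP's cost)."""
    up = dict(market)
    up[key] = market[key] + bump
    dn = dict(market)
    dn[key] = market[key] - bump
    return (xva_fn(up) - xva_fn(dn)) / (2.0 * bump)
```

With hundreds of risk factors this means hundreds of extra Monte Carlo jobs, which is exactly where accelerating each individual run pays off.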
One of the beauties of APA in MoCaX is that it works with both approaches; this solution is “orthogonal” to them.
If you use BaRP, MoCaX opens a new space for you, as it can accelerate XVA by several orders of magnitude, so you can get all needed sensitivities via BaRP in a fraction of the original time.
If you use AAD, MoCaX will speed up the run of the actual Monte Carlo simulation, so you can implement MoCaX before AAD or vice versa; in both cases the computation will be greatly accelerated.
That said, MoCaX delivers the most substantial speed gains with a very limited implementation effort, whereas it takes substantially longer to benefit from AAD.