The methods are based on Machine Learning architectures

Heavy-to-compute functions are replaced by Chebyshev Tensors or Deep Neural Nets
Find more details in our book

Machine Learning methods are at the core of the solutions we propose.

The idea is that in risk calculations, the computational bottleneck often resides in one (or a number of) functions that need to be evaluated repeatedly, with very small changes in their inputs between evaluations. These functions tend to be derivative pricers; the computational bottleneck is often in the revaluation step of the risk calculation.

We can use this to our advantage. We can create accurate and fast-to-evaluate replicas of those functions and use them instead of the original functions in the risk calculation.

To do that optimally, it is best to take advantage of the specifics of the risk calculation, so that the sampling of the function being replicated is optimal.

The combination of Dimensionality Reduction techniques with Chebyshev Tensors or Deep Neural Nets is the optimal way to do this.
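
The following is a minimal sketch of the idea, assuming a hypothetical one-dimensional pricer slow_pricer and a simple Monte Carlo risk loop; a plain Chebyshev fit from numpy stands in here for the full Chebyshev Tensor or Deep Neural Net replica.

```python
# Minimal sketch of the proxy idea: sample the expensive pricer once on a small
# grid, build a cheap replica, and run the many scenario revaluations through it.
import numpy as np
from numpy.polynomial import Chebyshev

def slow_pricer(spot):
    # Placeholder for an expensive derivative pricer (e.g. a PDE or MC pricer).
    return np.maximum(spot - 100.0, 0.0) * np.exp(-0.05)

# 1. Sample the pricer once on a modest grid covering the relevant domain.
grid = np.linspace(50.0, 150.0, 33)
values = np.array([slow_pricer(s) for s in grid])

# 2. Build the fast-to-evaluate replica (a degree-20 Chebyshev fit here).
proxy = Chebyshev.fit(grid, values, deg=20)

# 3. Use the replica instead of the pricer inside the risk calculation.
scenarios = np.random.lognormal(mean=np.log(100.0), sigma=0.2, size=100_000)
pnl = proxy(scenarios) - proxy(100.0)      # thousands of cheap evaluations
var_99 = np.percentile(pnl, 1)             # e.g. a 99% VaR figure
```

The expensive function is called only 33 times; the 100,000 scenario revaluations go through the replica.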

Polynomial interpolation can deliver exponential convergence to the original function when the interpolating points are the Chebyshev points. This is a remarkable property.

Think of N equidistant points on the unit circle in the complex plane, and project them onto the real line. Those projections are the Chebyshev points.

These points of interpolation are ideal to replicate a function, as a result of the ultra-fast convergence of the interpolating polynomial towards the interpolated function.
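
As a quick illustration, here is how these points can be generated on [-1, 1] (a sketch; the number of points is arbitrary).

```python
# Chebyshev points: equally spaced angles on the upper half of the unit circle,
# projected down onto the real axis.
import numpy as np

def chebyshev_points(n):
    k = np.arange(n + 1)
    return np.cos(np.pi * k / n)      # x_k = cos(k*pi/n), k = 0, ..., n

print(chebyshev_points(4))            # approximately [1, 0.707, 0, -0.707, -1]
```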

Chebyshev Tensors have the following properties:

    • They converge to the function if it is Lipschitz continuous, and converge exponentially if it is analytic.
    • They can be evaluated easily with the Barycentric Interpolation formula on Chebyshev points (see the sketch below).
    • They can be extended to high dimensions.
    • The error of the approximation can be estimated ex-ante.
    • The derivatives of the polynomial converge exponentially to the derivatives of the function.

For these reasons, they are ideal candidates to replicate pricing functions in risk calculations.
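
The sketch below illustrates the Barycentric Interpolation formula on Chebyshev points in one dimension, using exp(x) as a stand-in analytic function; a Chebyshev Tensor applies the same formula dimension by dimension.

```python
# Barycentric interpolation on Chebyshev points in 1-d.
import numpy as np

def chebyshev_points(n):
    return np.cos(np.pi * np.arange(n + 1) / n)

def barycentric_eval(x_eval, nodes, values):
    n = len(nodes) - 1
    w = (-1.0) ** np.arange(n + 1)    # simplified barycentric weights for Chebyshev points
    w[0] *= 0.5
    w[-1] *= 0.5
    diff = x_eval - nodes
    if np.any(diff == 0.0):           # x_eval coincides with an interpolation node
        return values[diff == 0.0][0]
    tmp = w / diff
    return np.sum(tmp * values) / np.sum(tmp)

nodes = chebyshev_points(20)
values = np.exp(nodes)                # interpolate f(x) = exp(x), an analytic function
print(barycentric_eval(0.3, nodes, values), np.exp(0.3))
```

With 21 nodes the interpolant already matches this analytic function to around machine precision, which is the exponential convergence mentioned above.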

Find more about them here.

The Universal Approximation Theorem says that, for a given well-behaved function and a given error threshold, there exists a Neural Network that approximates the function to within that error.

This makes Deep Neural Networks (DNNs) very useful instruments to replicate heavy-to-compute functions inside risk engines.

An added advantage of DNNs is their high degree of flexibility. They can be configured with different numbers of layers, different numbers of neurons per layer, different activation functions, and so on.

For this reason, they are strong candidates to approximate functions inside risk calculations, leading to a computational optimisation of the risk engine.
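
As an illustration only, here is a minimal sketch of such a replica, assuming a hypothetical two-input pricing function; scikit-learn's MLPRegressor stands in for a full deep-learning stack, and the layer sizes and activation are arbitrary choices.

```python
# Minimal sketch of a DNN replica of a heavy-to-compute pricing function.
import numpy as np
from sklearn.neural_network import MLPRegressor

def pricer(spot, vol):
    # Placeholder for a heavy-to-compute pricing function.
    return np.maximum(spot - 100.0, 0.0) + 10.0 * vol * np.sqrt(spot)

# Sample the pricer over the relevant input domain.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(50, 150, 5000), rng.uniform(0.1, 0.5, 5000)])
y = pricer(X[:, 0], X[:, 1])

# Configure and train the replica: two hidden layers, tanh activations.
dnn = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   max_iter=2000, random_state=0).fit(X, y)

# Fast revaluation of many scenarios through the replica.
scenarios = np.column_stack([rng.uniform(50, 150, 100_000),
                             rng.uniform(0.1, 0.5, 100_000)])
proxy_prices = dnn.predict(scenarios)
```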

Let’s say that we want to explore the input space of a pricing function, in order to build its replica via Chebyshev Tensors or Deep Neural Nets. One of the inputs to the pricing function is a yield curve.

Taking the exploration algorithm naively, we would explore all the possible combinations of inputs. However, we know that it does not make sense to explore the region where, say, the 1y and 3y swap rates are 1% while the 2y rate is -0.1%.

Composition techniques take care of this. They are algorithmic solutions that optimise the exploration of the input space, given a target risk calculation where the proxy function is going to be used.

They work by defining the sub-domain of the full “naive” domain of the pricing function, seen as a computational object, where we want to explore the inputs, and by translating points in that sub-domain into entries that the original function can take and operate on.
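
One common way to build such a sub-domain, sketched below with hypothetical historical yield-curve data, is a PCA-style dimensionality reduction: the exploration happens in a low-dimensional space of realistic curve moves, and each explored point is mapped back to a full curve that the pricer can consume. This is an illustration of the idea, not the specific composition algorithm described in the book.

```python
# Restrict the exploration domain of a yield-curve input via PCA.
import numpy as np

# hist_curves: hypothetical (n_scenarios x n_tenors) matrix of historical yield curves.
hist_curves = np.random.default_rng(1).normal(0.02, 0.005, size=(1000, 10)).cumsum(axis=1)

mean = hist_curves.mean(axis=0)
centered = hist_curves - mean
# Principal components of the historical moves: typically 2-3 factors
# (level, slope, curvature) explain most of the variance.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:3]                           # reduced 3-d exploration space

# Explore the reduced space (this is where the Chebyshev Tensor / DNN sampling
# happens), then map each explored point back to a full yield curve.
def to_full_curve(reduced_point):
    return mean + reduced_point @ components  # an entry the pricer can operate on

sample_curve = to_full_curve(np.array([0.01, -0.002, 0.0005]))
```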

Tensors (Chebyshev or otherwise) suffer from a strong curse of dimensionality. This problem can be mitigated by Tensor Extension Algorithms.

The number of points in a tensor grows exponentially with the number of dimensions. However, in many practical cases such tensors can be expressed in TT-format, which can reduce the dependency on the number of dimensions from exponential to linear.

The application of these algorithms was explored, for example, in Low-Rank Tensor Approximation for Chebyshev Interpolation in Parametric Option Pricing. These algorithms extend the use of Chebyshev Tensors in risk calculations substantially.
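
A back-of-the-envelope comparison makes the point; the rank below is an illustrative assumption, and the TT count is an upper bound of the form d·n·r².

```python
# Storage comparison: full Chebyshev Tensor versus a TT-format representation
# (parameter counts only; the TT-rank is an illustrative assumption).
n, d, r = 11, 10, 5                           # points per dimension, dimensions, TT-rank

full_tensor_points = n ** d                   # grows exponentially in d
tt_parameters = d * n * r ** 2                # grows linearly in d for fixed rank

print(f"full tensor : {full_tensor_points:,} points")     # 25,937,424,601
print(f"TT-format   : {tt_parameters:,} parameters")      # 2,750
```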


The Sliding Technique can be understood as an extension of Taylor series to the world of Tensors. It is a way to deal with the curse of dimensionality that has proven to give good results, for example, in the context of IMA FRTB.

As an illustration, let’s say that we have a 3-dimensional tensor. Under certain conditions, that tensor can be approximated by the sum of one 2-d and one 1-d tensor, or by the sum of three 1-d tensors. The Sliding Technique formulates how to do this.
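
The toy sketch below conveys the flavour with a hypothetical 3-d function split around a base point into a 2-d slice and a 1-d slice; the actual Sliding Technique described in the book is more general.

```python
# Simplified illustration of splitting a 3-d function into lower-dimensional
# slices around a base point (x0, y0, z0).
import numpy as np

def f(x, y, z):
    # Hypothetical 3-d function with mild interaction between (x, y) and z.
    return np.sin(x) * np.cos(y) + 0.5 * z ** 2

x0, y0, z0 = 0.0, 0.0, 0.0        # base point of the expansion

def sliding_approx(x, y, z):
    # One 2-d slice in (x, y) plus one 1-d slice in z, corrected for double counting.
    return f(x, y, z0) + f(x0, y0, z) - f(x0, y0, z0)

x, y, z = 0.3, -0.2, 0.4
# Exact here because f separates additively; an approximation in general,
# with quality driven by the size of the cross-terms.
print(f(x, y, z), sliding_approx(x, y, z))
```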