In finance, computational efficiency can sometimes be converted directly into trading profits. Quants face the challenge of trading off research efficiency against computational efficiency: Python yields succinct research code, which improves research efficiency, but vanilla Python is notoriously slow and poorly suited to production.
In this technical article, Yi Dong, a Deep Learning Solutions Architect at NVIDIA who delivers AI solutions for the financial services industry, explores how to use Python GPU libraries to achieve state-of-the-art performance in exotic option pricing.
This post is organized in two parts, with all the code hosted in the gQuant repo on GitHub:
- In Part 1, Dong introduces a Monte Carlo simulation implemented with Python GPU libraries, combining the benefits of the CUDA C/C++ and Python worlds. In the example shown, the Monte Carlo simulation runs at close to raw CUDA performance, while the code remains simple and easy to adopt.
- In Part 2, Dong experiments with the deep learning derivatives method. Deep neural networks can learn arbitrarily accurate functional approximations of the expected value computed by Monte Carlo techniques, and first-order Greeks (risk sensitivities) can then be calculated by a backward pass through the network. Higher-order Greeks can be obtained by iterating the same backward-pass process.
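To make the backward-pass idea in Part 2 concrete, here is a minimal, CPU-only NumPy sketch (not the post's actual trained network): a toy one-hidden-layer network with random placeholder weights stands in for the pricing network, and a hand-written backward pass recovers the first-order sensitivity of the output to the input, cross-checked against a central finite difference.

```python
import numpy as np

rng = np.random.default_rng(42)
# Random placeholder weights standing in for a trained pricing network.
w1 = rng.standard_normal((8, 1)); b1 = rng.standard_normal(8)
w2 = rng.standard_normal((1, 8)); b2 = rng.standard_normal(1)

def forward(x):
    """Forward pass: price estimate plus cached hidden activations."""
    h = np.tanh(w1 @ x + b1)
    return w2 @ h + b2, h

def backward(x):
    """First-order sensitivity d(output)/d(input) via the chain rule."""
    _, h = forward(x)
    return (w2 * (1.0 - h**2)) @ w1  # tanh'(z) = 1 - tanh(z)^2

x = np.array([1.0])
autodiff_delta = backward(x)[0, 0]

# Cross-check against a central finite difference.
eps = 1e-6
fd_delta = (forward(x + eps)[0][0] - forward(x - eps)[0][0]) / (2 * eps)
print(autodiff_delta, fd_delta)
```

In practice a deep learning framework's autograd performs this backward pass automatically; differentiating the backward pass again yields the higher-order Greeks mentioned above.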
The method introduced in this post places no restrictions on the exotic option type: it works for any option pricing model that can be simulated with Monte Carlo methods.
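As a rough illustration of that Monte Carlo flexibility, the sketch below prices one hypothetical exotic (a down-and-out barrier call under geometric Brownian motion) by path simulation. It is a CPU-only NumPy sketch with illustrative parameters, not the post's GPU implementation; one common way to move code like this to the GPU is to swap `numpy` for the array-compatible `cupy` library.

```python
import numpy as np

def mc_barrier_call(s0, k, barrier, r, sigma, t, n_steps, n_paths, seed=0):
    """Monte Carlo price of a down-and-out barrier call under GBM."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    # Simulate all log-price increments at once (vectorized over paths).
    z = rng.standard_normal((n_paths, n_steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    paths = s0 * np.exp(np.cumsum(increments, axis=1))
    # Payoff is knocked out if the path ever touches the barrier.
    alive = paths.min(axis=1) > barrier
    payoff = np.where(alive, np.maximum(paths[:, -1] - k, 0.0), 0.0)
    return np.exp(-r * t) * payoff.mean()

# Illustrative parameters: spot 100, strike 100, barrier 80, 1y maturity.
price = mc_barrier_call(100.0, 100.0, 80.0, 0.05, 0.2, 1.0, 252, 100_000)
print(price)
```

Changing the payoff line is all it takes to price a different exotic, which is why the approach generalizes across option types.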