Computation and Economic Risk
by Sebastian Benthall
I’m excited to be working with Prof. John Chuang this semester on an independent study in what we’re calling “Economics of Data and Computation”. For me, it’s an opportunity to explore the literature in the area and hunt for answers to questions I have about how information and computation shape the economy, and vice versa. As my friend and mentor Eddie Pickle would put it, we are looking for the “physics of data”: what rules does data obey? How can its flows be harnessed for energy and useful work?
A requirement for the study is that I blog weekly about our progress. If these topics interest you, I hope you will stay tuned and engage in conversation.
To get things going, I dipped a toe into computational finance. Since I have no background in this area, I googled it and discovered Peter Forsyth’s aptly titled Introduction to Computational Finance without Agonizing Pain. What I found there surprised me.
The first few chapters of Forsyth’s work involve the pricing of stock options. A stock option is an agreement in which the owner has the right, but not the obligation, to buy or sell a stock at a set price at a future time. Options can be valued by imagining them as part of a portfolio together with the underlying stock, and determining the option price at which all the risk in that portfolio is hedged away.
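For readers who want the gist of the hedging argument, here is the standard sketch, in my own notation rather than Forsyth’s exact symbols: hold the option and short some number of shares of the stock, chosen so that the randomness cancels out. A portfolio with no risk must earn the risk-free rate, and that constraint pins down the option’s value.

```latex
% Sketch of the hedging argument (my notation; Forsyth develops it carefully).
% V(S,t): option value, S: stock price, r: risk-free rate, \sigma: volatility.
% Form the portfolio \Pi = V - \Delta S. Choosing \Delta = \partial V / \partial S
% cancels the random term in d\Pi, so no-arbitrage forces d\Pi = r \Pi \, dt,
% which yields the Black-Scholes equation:
\[
  \frac{\partial V}{\partial t}
  + \tfrac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}}
  + r S\,\frac{\partial V}{\partial S}
  - r V = 0 .
\]
```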
Since stock prices are dynamic, evaluating the price of a stock option requires quick adaptation to change. As a model for changes in stock prices, Forsyth uses geometric Brownian motion, in which the price moves according to a combination of drift and random noise. The accuracy of the option’s estimated value then depends on how accurately we can estimate the expected value of this random stock price.
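Written out in standard notation (mine, not a quotation from Forsyth), the model says the stock price S follows

```latex
% Geometric Brownian motion: drift \mu, volatility \sigma, dW_t a Brownian increment.
\[
  dS_t = \mu\, S_t\, dt + \sigma\, S_t\, dW_t ,
\]
% which can be simulated exactly over a time step \Delta t as
\[
  S_{t+\Delta t} = S_t \exp\!\Big( \big(\mu - \tfrac{1}{2}\sigma^{2}\big)\Delta t
                   + \sigma \sqrt{\Delta t}\, Z \Big), \qquad Z \sim \mathcal{N}(0,1).
\]
```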
How do you estimate the expected value of a stock price that is subject to a complex stochastic process, such as Brownian motion? Forsyth starts by recommending a Monte Carlo method. This involves running lots of randomized simulations based on the model of stock price fluctuation and averaging the results.
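As a rough illustration, here is a minimal sketch in Python of what such a Monte Carlo valuation might look like; it is my own toy version, not Forsyth’s code, and the parameter values are made up. (One subtlety Forsyth also covers: for pricing, the simulation is run under the risk-neutral measure, with the risk-free rate standing in for the drift.)

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration.
S0 = 100.0       # initial stock price
K = 100.0        # strike price
r = 0.05         # risk-free rate
sigma = 0.2      # volatility
T = 1.0          # time to expiry, in years
n_sims = 100_000

rng = np.random.default_rng(0)

# Simulate terminal prices under risk-neutral geometric Brownian motion.
Z = rng.standard_normal(n_sims)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Payoff of a European put, discounted to today and averaged over simulations.
payoffs = np.maximum(K - S_T, 0.0)
price = np.exp(-r * T) * payoffs.mean()
std_err = np.exp(-r * T) * payoffs.std(ddof=1) / np.sqrt(n_sims)

print(f"Estimated put price: {price:.4f} +/- {std_err:.4f}")
```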
This is great, but there’s a catch: Monte Carlo methods are computationally expensive. Forsyth goes into detail warning about how to tune the parameters to account for the time it takes these Monte Carlo simulations to converge on a result. Basically, the more iterations of simulation, the more accurate the estimate will be.
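The reason it is expensive is the convergence rate. By the central limit theorem, the standard error of a Monte Carlo average shrinks only with the square root of the number of simulations, so as a rough rule of thumb each additional decimal digit of accuracy costs about a hundred times more computation:

```latex
% Monte Carlo error: with N independent simulated payoffs whose sample
% standard deviation is s, the standard error of the estimated price is
\[
  \mathrm{SE} \approx \frac{s}{\sqrt{N}} ,
\]
% so halving the error takes four times as many simulations.
```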
This is a very promising early result for us, because it suggests a link between computational power and economic risk. Even when all the parameters of the model are known, deriving useful results from the model requires computation. So we can in principle derive a price of computation (or, at least, of iterations of Monte Carlo simulation) as a function of the risk aversion of the stock option trader.
Is that counter-intuitive? This would be consistent with some findings that high-frequency trading reduces market volatility. It also suggests a possible economic relationship between finance, insurance, and the cloud computing market.
One question I’d like to look into is to what extent computational power can be seen as a strategic advantage to adversaries–for example, in a stock trading situation–and what the limits of that power are. At some point, the effects of computation are limited by the amount of data one has to work with. But too much data without the computational means to process it is a waste.
See where this is going? I’m interested in hearing your thoughts.
That is interesting. Are there any firms aiming to provide datacenters specifically for finance? At least for high-frequency trading, it seems the big providers (Amazon, Google with App Engine, etc.) would not be attractive, because they have optimized largely (I guess) for electricity costs rather than for proximity to the market. Also, uploading the data to them would introduce extra latency, and the speed/reliability tradeoffs they offer seem likely to be suboptimal.
If the model is not realistic, getting more precise estimates from it by running it longer (or with more random starts) may not really reduce risk. (I guess you can do additional computations, though, to assess the accuracy of your model and maybe account for it.) You usually don’t go wrong by adding more data, I’d guess, and the computation required naturally goes up with the amount of data as well.
Great research topic. A few points from the world of finance. First, MCS (Monte Carlo simulation) is virtually free given the immense computing power of even a lowly PC. In addition to lots of freeware and open-source MCS tools, there are plenty of Excel add-ins. So the cost per simulation is virtually zero.
MCS is a great and necessary tool that’s in virtually every options trader’s toolbox.
One can run 500,000 samples or runs in seconds or minutes. I’d be glad to discuss this with you if you’d like. My other email address is biobot@gmail.com. I check that one every few minutes. Ethermadness is more for private communications to select colleagues.
Good luck on your study. Wiki the subject if you haven’t already. Wikipedia is great for current thinking on and uses of MCS. By its nature it’s way more current than any dead-tree writings.
Btw I’m a math and physics freak turned quant in training. I’m going to check out that book if it’s cheap enough.