Computation and Economic Risk

by Sebastian Benthall

I’m excited to be working with Prof. John Chuang this semester on an independent study in what we’re calling “Economics of Data and Computation”. For me, it’s an opportunity to explore the literature in the area and hunt for answers to questions I have about how information and computation shape the economy, and vice versa. As my friend and mentor Eddie Pickle would put it, we are looking for the “physics of data”: what rules does data obey? How can its flows be harnessed for energy and useful work?

A requirement for the study is that I blog weekly about our progress. If these topics interest you, I hope you will stay tuned and engage in conversation.

To get things going, I dipped a toe into computational finance. Since I have no background in this area, I googled it and discovered Peter Forsyth’s aptly titled Introduction to Computational Finance without Agonizing Pain. What I found there surprised me.

The first few chapters of Forsyth’s work involve the pricing of stock options. A stock option is an agreement in which the holder has the right, but not the obligation, to buy (a call) or sell (a put) a stock at a fixed price at a future time. These can be valued by imagining them as part of a portfolio with the underlying stock, and determining the price at which all the risk in the portfolio is hedged away.
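For reference, this is the hedging argument behind the Black-Scholes equation. In the standard derivation (I’m summarizing the textbook result here, not Forsyth’s exact notation), the value V(S, t) of the option satisfies

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0,$$

where S is the stock price, σ is its volatility, and r is the risk-free interest rate. Notably, the stock’s drift drops out entirely, which is the payoff of the hedging trick.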

Since stock prices are dynamic, evaluating the price of a stock option requires modeling how those prices change. As a model for changes in stock prices, Forsyth uses geometric Brownian motion, in which the price moves according to a combination of drift and noise. The accuracy of the estimated value of a stock option then depends on how accurately we can estimate expectations over this random stock price.
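Concretely, geometric Brownian motion says the stock price S evolves according to the stochastic differential equation (standard notation, with μ the drift, σ the volatility, and dZ an increment of a Wiener process):

$$dS = \mu S\, dt + \sigma S\, dZ$$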

How do you estimate the expected value of a stock price that is subject to a complex stochastic process like geometric Brownian motion? Forsyth starts by recommending a Monte Carlo method. This involves running many randomized simulations based on the model of stock price fluctuations and averaging the results.
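Here is a minimal sketch of what this looks like in Python. This is my own illustration, not Forsyth’s code; it prices a European put under risk-neutral geometric Brownian motion, and all the parameter values at the bottom are made up:

```python
import numpy as np

def mc_put_price(s0, strike, r, sigma, T, n_sims, seed=0):
    """Monte Carlo estimate of a European put price under risk-neutral GBM.

    Draws terminal stock prices using the exact GBM solution
    S_T = S_0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z), Z ~ N(0, 1),
    then averages the discounted payoff max(strike - S_T, 0).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_sims)
    s_T = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoffs = np.maximum(strike - s_T, 0.0)
    discounted = np.exp(-r * T) * payoffs
    # Return the estimate and its standard error, which shrinks like 1/sqrt(n_sims).
    return discounted.mean(), discounted.std(ddof=1) / np.sqrt(n_sims)

# Illustrative (made-up) parameters: an at-the-money put, one year to expiry.
price, stderr = mc_put_price(s0=100.0, strike=100.0, r=0.05,
                             sigma=0.2, T=1.0, n_sims=100_000)
print(f"put price ~ {price:.4f} +/- {stderr:.4f}")
```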

This is great, but there’s a catch: Monte Carlo methods are computationally expensive. Forsyth goes into detail warning about how to tune the parameters to account for the time it takes these Monte Carlo simulations to converge on a result. Basically, the more iterations of simulation, the more accurate the estimate will be.
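To make the cost concrete (this is the standard Monte Carlo error analysis, not anything specific to Forsyth): with N independent simulations, the standard error of the estimate scales as

$$\text{error} \sim \frac{\sigma}{\sqrt{N}},$$

where σ here is the standard deviation of the simulated payoff. Halving the error means quadrupling the number of simulations, so accuracy gets expensive fast.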

This is a very promising early result for us, because it suggests a link between computational power and economic risk. Even when all the parameters of the model are known, deriving useful results from the model requires computation. So we can in principle derive a price of computation (or, at least, of iterations of Monte Carlo simulation) as a function of the risk aversion of the stock option trader.
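As a toy version of what such a derivation might look like (my own back-of-the-envelope, not something from Forsyth): if a trader will only act on an estimate with standard error at most ε, then the error scaling above implies they need

$$N \gtrsim \left(\frac{\sigma}{\epsilon}\right)^2$$

simulations. At any fixed cost per simulation, the computation a trader must buy grows like 1/ε² as their tolerance for estimation risk shrinks.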

Is that counterintuitive? A link like this would be consistent with some findings that high-frequency trading reduces market volatility. It also suggests a possible economic relationship between finance, insurance, and the cloud computing market.

One question I’d like to look into is to what extent computational power can be a strategic advantage to adversaries (in a stock trading situation, for example) and what the limits of that power are. At some point, the effects of computation are limited by the amount of data one has to work with. But too much data without the computational means to process it is a waste.

See where this is going? I’m interested in hearing your thoughts.