Understanding Diversification: Networks and Nobel Prizes
Mr. Smith walks into a financial advisor’s office. After a good discussion about his goals, the advisor asks what investments he’s using today to reach his financial objectives. Mr. Smith tells the advisor he has CDs at Bank of America, Wachovia, the corner bank, and his local credit union.
The advisor asks, “Why do you have the CDs spread around at so many banks?”
To which Mr. Smith proudly answers, “I’m diversified!”
It’s an old Wall Street joke, and not a particularly good one, I’ll admit, but it does drive home a point.
Diversification is the process of allocating resources across assets in a way that reduces the exposure to any one particular asset or risk. A common path towards diversification is to reduce risk or volatility by investing your resources in a collection of uncorrelated assets.
Basically, diversification is the financial equivalent of “Don’t put all your eggs in one basket.” Although the concept of spreading your risk around has been with us for a long time (at least as far back as early Mesopotamian crop strategies), it wasn’t until the 1950s that diversification became a formal term.
Harry Markowitz and the Efficient Frontier
In 1952, at the ripe old age of 25, Harry Markowitz wrote a paper simply titled Portfolio Selection.
While at the RAND Corporation, where he worked alongside George Dantzig, he was able to finalize his work into a simple matrix form that could be used to easily obtain the optimal resource allocation across a selection of assets. The formula is so simple and elegant that many have called it the E=mc² of finance.
When Markowitz submitted his final thesis in 1955, the model was so novel that renowned economist Milton Friedman argued his contribution was “not economics!”
But the HM model (as it would be known for a few decades) was a profound success. So successful that Markowitz won the John von Neumann Theory Prize (the Nobel for operations research) for it in 1989 … and followed that up with the actual Nobel Prize in Economics in 1990.
In 1959, Markowitz published the full work in book form as Portfolio Selection: Efficient Diversification of Investments, and ever since, diversification has been a formally defined and measurable quantity.
Markowitz Mean-Variance in 60 seconds or less
What young Harry formalized was the risk reducing nature of diversification. Specifically, the Markowitz model (which we will now call the mean-variance model) states that:
- A given asset has an expected return, which is the mean (average) of its historical returns.
- A given asset has an expected risk, which is the variance of its historical returns (the standard deviation is just the square root of the variance).
- An allocation of capital among assets yields an expected return and an expected risk, weighted by the co-movement (covariance) of the assets among themselves.
It’s a very simple model (in hindsight) because it says (formally) what we all know to be true:
If you have $10 to spread across Coca-Cola, Pepsi, and Microsoft stock, pick either Coke or Pepsi (whichever has performed better) but not both, because they’re basically the same.
A Markowitz-efficient portfolio is one in which no further diversification benefit can be had from adding more assets. The most efficient portfolio is the one that maximizes the risk-adjusted return (a.k.a. the portfolio where expected return / expected risk is greatest).
The efficient frontier is then defined as the set of best portfolios: those with the minimal risk for each level of expected return.
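A tiny sketch of the frontier idea with two hypothetical assets (all numbers invented): sweeping long-only weights traces out (variance, return) pairs, and the frontier is the low-risk edge of that curve. Notice the minimum variance falls below the variance of either asset alone, which is the diversification benefit in one number.

```python
import numpy as np

# Two hypothetical assets: expected returns and covariance matrix
mu = np.array([0.08, 0.12])
cov = np.array([[0.040, 0.006],
                [0.006, 0.090]])

# Sweep long-only weights for asset 1 and record (variance, return) pairs
points = []
for w1 in np.linspace(0, 1, 101):
    w = np.array([w1, 1 - w1])
    points.append((w @ cov @ w, w @ mu))

# The lowest-variance point on the curve; the efficient frontier is the
# set of points where no other mix gives the same return at lower risk.
min_var, ret_at_min = min(points)
print(min_var, ret_at_min)
```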
What the heck does this have to do with networks?
For many years I’ve been a skeptic as to the value of large-scale multi-CDN implementations. The argument seems very sound: what if one CDN provider goes down? We need a backup! I get that, and it makes sense. So which one should you pick? And when people get 3, 4, 5, or 6 CDN providers in multiple regions, I have to stop and ask:
Does adding more CDN providers actually reduce your risk?
There’s a dirty secret in large-scale networks: unless you buy private fiber (or lease very expensive direct fiber), all networks are the same at the limit. Yes. Seriously.
Is Akamai better than Cloudflare? Maybe, but they aren’t diversified at all. What about Fastly? Nope … zero diversification benefit. Don’t believe me? Ok, let’s go Harry Markowitz on this for a moment. Head on over to the awesome team at Cedexis and get set up with access to their Radar data (for independent 3rd-party data). If you are already an edgemesh customer, feel free to use our API to access our version of the data.
What we’ve got here is a time series of the response time (latency) across a collection of CDN providers. I’ve selected these providers for no particular reason (seriously, they’re all the same) other than that they are the largest and most well known.
Now the first thing we want to do is calculate their average response time (the return) and the variability of that response time (the risk). You can use either VAR (variance) or STDEV (standard deviation, which is just the square root of the variance); the model works the same. I prefer variance.
Next we add a column called “slope”, which is the mean divided by the variance (a.k.a. avg latency / variance of latency). The closest finance analogue is the Sharpe ratio, which divides excess return by the standard deviation of return.
Now the latency here is effectively a negative return, so we want to maximize this number (e.g. AWS’s CloudFront has the best latency in this set). The best risk-adjusted choice would be Akamai, which, although it has the worst latency (292ms), has a fantastically low variance. I’ve used conditional formatting to highlight our metrics.
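To make the slope computation concrete, here’s a minimal sketch. The latency samples and provider names below are invented placeholders; real data would come from the Cedexis Radar or edgemesh APIs mentioned above. Following the post’s slope = mean/variance definition, the low-variance provider wins on risk-adjusted terms even though its average latency is worse.

```python
import numpy as np

# Hypothetical response-time samples in ms; names are placeholders.
latency = {
    "stable_cdn": np.array([292.0, 293, 291, 292, 292]),  # slow but steady
    "fast_cdn":   np.array([180.0, 260, 150, 310, 200]),  # fast but noisy
}

slopes = {}
for name, series in latency.items():
    mean = series.mean()
    var = series.var(ddof=1)       # sample variance; STDEV is sqrt(var)
    slopes[name] = mean / var      # the post's "slope" metric
    print(f"{name}: mean={mean:.1f}ms var={var:.1f} slope={slopes[name]:.3f}")
```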
The question is: can we combine N of these CDNs to get an EVEN BETTER solution?
Let’s start by seeing how correlated they are. Remember, correlation is a value from -1 to 1 (or -100% to 100%). A correlation of 100% means that two items move exactly together (they are perfectly linear; square it and you get what you may have seen denoted on a chart as R²). In finance, a correlation above 70% between two assets would imply you could use one to hedge the other (e.g. buy Coca-Cola, sell Pepsi). For reference, Pepsi and Coke have a correlation of about 91% today, and that’s the best example in finance. They’ve been that way forever.
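Computing a pairwise correlation takes one numpy call. The two latency series below are invented so that one is roughly a scaled copy of the other, which is exactly the pattern the CDN tables show.

```python
import numpy as np

# Two hypothetical latency series (ms) that move almost in lockstep
a = np.array([100.0, 120, 90, 140, 110, 130])
b = np.array([210.0, 238, 195, 280, 225, 262])

# Pearson correlation, a value in [-1, 1]
r = np.corrcoef(a, b)[0, 1]
print(f"correlation: {r:.2%}, R^2: {r**2:.2%}")
```

A correlation this close to 100% means holding both series gives you essentially no diversification benefit.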
Ok how’s our CDN diversity?
Yeah. So Akamai and Fastly: correlated at 92.11%. Cloudflare and Level3: 97.5% … well, you see the table. Let’s see a chart?
Ok, so those are all definitely the same. Maybe we’re unlucky? Here are a few more CDNs, and this time over 90 days.
What about uptime? Maybe performance (latency) is correlated but surely there’s no correlation in availability?
Here are some availability numbers (shown in nines format, e.g. 99.9 means up 99.9% of the time).
And our correlation matrix …
In order to diversify, you need a difference
Since edgemesh runs within the user’s last-mile network (e.g. Comcast Cable, Verizon FiOS, etc.), it should be different. Adding edgemesh should give you an uncorrelated asset, which should help diversify your network risk. Don’t believe me? Let’s ask Mr. Markowitz.
Let’s start by adding in three more data series: latency within the Comcast network itself, latency within the Verizon FiOS network itself, and finally edgemesh-observed latencies over the same time frame and same region.
What we see is that the residential networks are correlated to each other, but NOT to the CDN networks. Akamai and Edgemesh are correlated at 12.55%.
What about uptime?
We still have some correlation, although certainly nowhere near what we had before.
Finally, let’s ask the Markowitz question:
What combination of networks yields the best variance adjusted latency?
Basically: spread the traffic around the CDNs, but allocate as much to the peer-to-peer network as you do to the largest CDN (Akamai).
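One way to pose that question in code is the global minimum-variance portfolio, whose closed form is w = C⁻¹1 / (1ᵀC⁻¹1). The covariance numbers below are invented to mirror the pattern in the tables: two CDNs highly correlated with each other, plus a peer-to-peer network nearly uncorrelated with both.

```python
import numpy as np

# Hypothetical latency covariance across three networks; the two CDNs
# co-move strongly, the peer-to-peer network barely co-moves with either.
cov = np.array([
    [4.0, 3.6, 0.2],   # cdn_1
    [3.6, 4.0, 0.2],   # cdn_2
    [0.2, 0.2, 4.0],   # p2p
])

# Global minimum-variance allocation: w = inv(C) @ 1 / (1' inv(C) 1)
ones = np.ones(3)
inv = np.linalg.inv(cov)
w = inv @ ones / (ones @ inv @ ones)
print(np.round(w, 3))
```

Even though the peer-to-peer network has the same variance as the CDNs, its lack of correlation earns it a large share of the traffic, which is the same qualitative answer the post’s tables give.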
And let’s see our Mean Variance style graph:
Next up, uptime:
What combination of networks yields the best variance adjusted uptime?
Harry Markowitz says allocate ~40% of your traffic to an uncorrelated peer to peer network.
Diversification is a key tool for minimizing downtime and maximizing performance. It’s important to not just look at the average of something, but also to consider that something’s stability and its relation to your other choices (correlation). The model here (mean-variance optimization) can be used for any system where you want to decide the optimal allocation across a set of choices given a risk-minimizing goal.
Some other fun examples are real-time load allocation across system instances (especially in Cloud environments), optimal data-center locations for supporting workloads, and of course … stocks.
Until next time!