Boomtime: Risk as Economics (Notes)

Thank you, internets, for all the feedback I’ve gotten on BoomTime: Risk As Economics. Of course my slides are nigh indecipherable without my voiceover, and my notes didn’t make it to the slideshare, so here are some notes to fill in (some) of the blanks until the video hits YouTube (SiRA members will get early access to SiRAcon15 videos via the SiRA Discourse forum, BTW). (You will want to look at the notes and the slides side by side, probably, as one doesn’t make sense w/o the other.)

An intro here is that in addition to being a product manager specializing in designing large-scale, data-driven security/anti-fraud/anti-abuse automation (yep, that’s a thing), I’m also an economics nerd. (Currently working on an MS in Applied Econ at JHU). Given my background in payments, and a general penchant for “following the money”, framing technology problems on platforms through an economic/financial lens is second nature.

Themes of Security Economics

A list of typical themes one hears when discussing information security & economics:

  • Within businesses we are asked to talk about exposures and threats in terms of financial impact, or to consider the financial (money) drivers.
  • Information asymmetries (the Market for Lemons) are a big theme of information economics and of software markets in general: when information about the quality of a product is difficult to find, that lack of transparency drives down prices, and we get fewer incentives to improve quality. (Ask me questions about market signals as a mechanism for correcting information asymmetries.)
  • “Make it more expensive for the attacker” or “don’t outrun the bear, outrun the guy next to you” is also an idea that gets raised.
  • Game theory, concepts of quantifying “risk” (exposure, tolerance), and markets for exploits & vulns are hot topics at the moment.
  • So are behavioral economics and all things related to incentive design: gamification is perhaps the most buzzwordy example, but framing as a method for improving consumers’ ability to make good choices about privacy preferences is also something that has come up a bit lately in security economics research.

Anyway, these are some themes that tend to be repeated in recent research literature.

Microeconomics

Which brings me to microeconomics, which is the core. Micro gives us a framework/model for understanding (estimating) consumption, given preferences, under a budget constraint. I repeat to emphasize what we’re doing here: given a consumer, what do they consume, given their preferences among options, within the bounds of a budget. All key points are relevant, because consumers are expected to maximize their utility — more is always better, but given limited resources (budget) tradeoffs must be made. So preferences affect consumption in the tradeoffs consumers make – they can trade off good A against good B, they can trade off a preference for labor (which leads to greater income, which leads to greater ability to consume) versus leisure (yes workaholics: leisure). To the right we see a few curves: utility maximization goes up and to the right. Up because more is better, curving off because going from zero units to one unit of consumption is super awesome! But at some point the next unit of consumption will generate less marginal utility. One chocolate chip cookie is awesome. Two is better. Three is better still. But at some point you will enjoy the Xth cookie less than the (X-1)th cookie. Savvy?
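If you want the cookie math in code form, here’s a minimal sketch. The square-root utility function is my stand-in for “any concave utility”; the exact shape doesn’t matter, only the curving off.

```python
import math

def utility(cookies: float) -> float:
    """A stand-in concave utility function (diminishing marginal utility).
    sqrt() is an illustrative assumption; any concave function behaves the same way."""
    return math.sqrt(cookies)

# Marginal utility of each additional cookie: U(x) - U(x - 1)
for x in range(1, 6):
    marginal = utility(x) - utility(x - 1)
    print(f"cookie #{x}: total utility {utility(x):.2f}, marginal utility {marginal:.2f}")

# Each extra cookie adds less utility than the one before:
# cookie #1 adds 1.00, cookie #2 adds ~0.41, cookie #3 adds ~0.32, and so on.
```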

The other curves show consumption/preference/tradeoffs, with a straight line representing a budget constraint. The math is, given preferences and a budget, the consumer will choose to consume at the point where the budget line is tangent to the (indifference) curve. Given a bigger budget, the consumer gets to hop to a parallel-ish but greater utility curve. Any point on the budget line that isn’t the tangency point sits UNDER that utility curve, i.e. it is not utility maximizing given the budget.
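For the formula-inclined, the textbook version of that tangency condition looks roughly like this (a generic two-good setup, not the specific curves on the slide):

```latex
% Generic two-good consumer problem: choose x and y to maximize utility
% subject to the budget constraint
\max_{x,\,y} \; U(x, y) \quad \text{s.t.} \quad p_x x + p_y y = m

% At the optimum the budget line is tangent to the indifference curve,
% i.e. the marginal rate of substitution equals the price ratio:
\frac{MU_x}{MU_y} = \frac{p_x}{p_y}
```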

The Consumer Model: Expanded

What we just went over is basic microeconomics, and will look somewhat familiar to anyone who took Econ 101 in college. Consumers, preferences, and the tradeoffs they make. However, the consumer model is extensible all the way to macroeconomics: just like consumers, firms make choices about production and have constraints to deal with. And then the framework expands to inform competition and pricing, and provides the roots of game theory (this is where uncertainty gets added to the mix). Aggregating allows us to get to markets, and adding the public sector as a participant unto itself gets us firmly into macro, at which point we start to discuss the dynamics of economies. That’s a TL;DR for: the roots of macro are in micro.

The Language of Risk

Which gets us to the concept of risk: wherein we take our preferences and constraints and add in the element of uncertainty. Sidebar: In Information Risk, or more specifically, risk as applied to information security, we hook into the probabilistic aspects of uncertainty (statistical methods), but I feel like in many ways we ignore the underlying drivers. Sometimes because they’re obvious, other times because they are too complex to deal with. I like the behaviorist approach of focusing on the concrete, measurable results from which behavior can be assessed – great for those of us in the business of creating predictive algorithms, but the economic aspect is fascinating, especially when we move beyond point probabilities and consider system dynamics as a whole. Not many of us can focus exclusively on point probability issues: defense has a fluid and ever expanding surface area.

Risk Aversion…An Example…An Example Continued…What this Looks Like

This is a simple exercise; in fact it was a homework problem for me. The concept is: given multiple options with the same expected value/payoff, the risk averse individual will choose the one with the lowest uncertainty, aka variance. Three examples are given related to a potential loss: no insurance, partial insurance, and full insurance. Utility maximization is achieved where the individual is offered full insurance. Expected value should look familiar: the sum of (probability * payoff) over the options. (There’s a small numeric sketch of this after the key points below.)

Sidebar: For haters of expected value: it’s easier and more rational to calculate expected value when things start off as money and end as money, and you have probabilities to work with. High * Level 2 is not the same thing as 50% * $20. Yeah. Qualitative risk assessments are a PITA.

Key points to note:

  • The visual includes a standard utility curve (ever upward to the right, with diminishing marginal utility); the curve doesn’t move no matter what the payoff.
  • Utility is the Y-axis, Payoff aka wealth is the X-axis. NOTE utility is not the same as straight-up payoff.
  • A rational consumer wants MAX utility
  • The diagonal lines connect the utility of potential payoffs, i.e. for “no insurance” (green), where the payoff could be 20, the point on the red curve is one end of the diagonal line…the other potential payoff in that scenario is 10, and the diagonal line connects to where payoff = 10 hits the utility curve
  • Where the diagonal line intersects the expected value (vertical line at 15), we see what that corresponds to on the Y axis, which is the utility associated with that choice
  • The expected value aka payoff aka “wealth” is the same for each option
  • The variance is lowest where full insurance is provided, as the payoff and corresponding utility are fixed (blue)
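To put toy numbers on the chart: the payoffs of 20 and 10 and the expected value of 15 come from the visual (which implies 50/50 odds); the square-root utility function and the particular partial-insurance split are assumptions I’m making purely for illustration.

```python
import math

def expected_value(outcomes):
    return sum(p * x for p, x in outcomes)

def variance(outcomes):
    ev = expected_value(outcomes)
    return sum(p * (x - ev) ** 2 for p, x in outcomes)

def expected_utility(outcomes, u=math.sqrt):
    # sqrt is an illustrative concave (risk-averse) utility function
    return sum(p * u(x) for p, x in outcomes)

# (probability, payoff) pairs; all three options have the same expected value of 15
scenarios = {
    "no insurance":      [(0.5, 20.0), (0.5, 10.0)],
    "partial insurance": [(0.5, 17.5), (0.5, 12.5)],
    "full insurance":    [(1.0, 15.0)],
}

for name, outcomes in scenarios.items():
    print(f"{name:18s} EV={expected_value(outcomes):.1f} "
          f"Var={variance(outcomes):5.2f} E[U]={expected_utility(outcomes):.3f}")

# Same expected value everywhere, but variance falls and expected utility rises
# as coverage increases; the risk-averse consumer picks full insurance.
```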

How to Win at Risk

Understanding risk aversion gives us a little insight into what we’re really talking about when we talk about “risk appetite” or “risk tolerance”. It’s about the variance we’re willing to deal with, not necessarily the expected value. There are some other practical examples/applications of this we could pull from finance and portfolio theory, but this is one of my main points: in finance/economics, we don’t overfit to the expected value of a particular instrument or risk…we pool risk, we hedge our bets, we consider the expected value AND variance of a portfolio when trying to optimize risk.
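To make the portfolio point concrete, here’s a toy sketch: two hypothetical risks with identical expected values, where pooling them leaves the expected value untouched but cuts the variance. The numbers and the zero-correlation assumption are mine, purely for illustration.

```python
# Two hypothetical, independent risks with the same expected value and variance.
# Splitting the exposure 50/50 leaves the expected value alone but halves the variance.
mu_a, mu_b = 10.0, 10.0      # expected values
var_a, var_b = 16.0, 16.0    # variances
w = 0.5                      # weight on risk A (the rest on risk B)
cov_ab = 0.0                 # assumed uncorrelated

portfolio_mu = w * mu_a + (1 - w) * mu_b
portfolio_var = (w ** 2) * var_a + ((1 - w) ** 2) * var_b + 2 * w * (1 - w) * cov_ab

print(f"single risk: E={mu_a}, Var={var_a}")
print(f"50/50 pool:  E={portfolio_mu}, Var={portfolio_var}")   # E=10.0, Var=8.0
```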

Sidebar: What a great troll for the finance industry to describe “high-risk” aka “high variance” investment strategies as “aggressive”. It definitely sounds much more Wall Street The Movie sophisticated than some of the other more technical descriptors.

In any case, we talk sometimes about how to “win” at defense, yet I think many of our current strategies towards managing risk are more about “payoff maximization” (expected value) and ignore variance as the key measure of uncertainty. The “defender’s dilemma” is, at its heart, a habit of treating each adverse event (point event) as part of an aggregated failure condition, as opposed to looking at the system as a whole, as we do in portfolio theory.

(Game theorists: yes, I know uncertainty is not excluded in our frameworks, but the simple game theory exercises presented as examples in security tend to focus on payoff maximization; we can get into this more if any of you take serious umbrage at this simplification for illustration purposes.)

So maybe we can look to economics for a better framework for understanding success? Get our systems booming with better risk capacities?

Winning at Economics

Some pictures I took at Diwali in Delhi: SO MANY FIREWORKS. I took some snaps, but I kept getting distracted by the loose snarl of power cables hanging about 20 feet away from the open stalls stacked floor to ceiling with explosives.

This is not what I mean by “boom”, but as a risktopian I surely am not going to forget the scenario. By “boom” I mean growth, increasing utility, innovation, across an entire system.

A Bit About Economics

But I have to admit that, although “Economics” as a theory & course of study has many merits, economists as a group are not necessarily seen as successful themselves. In fact they may be suffering from a bit of an economic defender’s dilemma of their own. Part of this may be that their contributions are invisible in the good times, but everyone sees them as the economic puppeteers in the bad times (sound familiar, defenders?). And TBH it doesn’t help that the language of the dismal science is steeped in advanced calculus and statistics that are incomprehensible to anyone who isn’t an economist themselves. Observe, on the right, optimization using Lagrangian multipliers. For a few months I’ve been seeing these equations in my sleep. (An aside to my micro professor: that equation that you put on the midterm, that wouldn’t simplify down, and ended up being 3 pages of calculations long, was a real chuckle generator).
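For the curious, the kind of thing that haunts my sleep looks roughly like this: the consumer problem from earlier, set up as a Lagrangian. This is the generic textbook form, not my professor’s three-page midterm special.

```latex
% The consumer problem, set up as a Lagrangian
\mathcal{L}(x, y, \lambda) = U(x, y) + \lambda \, (m - p_x x - p_y y)

% First-order conditions
\frac{\partial \mathcal{L}}{\partial x} = \frac{\partial U}{\partial x} - \lambda p_x = 0, \qquad
\frac{\partial \mathcal{L}}{\partial y} = \frac{\partial U}{\partial y} - \lambda p_y = 0, \qquad
\frac{\partial \mathcal{L}}{\partial \lambda} = m - p_x x - p_y y = 0

% Dividing the first two conditions recovers the tangency condition MU_x / MU_y = p_x / p_y
```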

And in addition to being intensely abstract, it’s not like economists agree with each other. So that’s helpful.

Meta on Macro

On the other hand, let’s look at the storyline that got us here. (BTW, this explanation is a summary of the brilliant interpretation of macro history provided by my macro professor, which is handily written up in his textbook, which is further handily available online: Modern Macroeconomics, Sanjay Chugh.)

Before macro became a “thing,” the government didn’t intervene in “managing” the economy. And then the Great Depression was Great enough that the government decided to try and step in, and that was around the time that data on economic indicators was starting to be gathered. This is the “counting” phase. Then came the Keynesians, who in addition to data gathered over a decent period of time also had access to computing technology that allowed them to create models leveraging said data & technology. And the frameworks they used *seemed* to work. But then their efficacy either degraded or it turned out the models weren’t as useful as originally expected, probably due to the phenomenon summarized in the Lucas critique (for which Lucas won a Nobel prize), which said (basically): the alphas used as coefficients in macroeconomic models should themselves be thought of as a function of policy – since they were based on historical observations, they yielded no info about how behavior would change/adapt as policies were changed.

Supermodels

So, an abstracted example of what some of these models might look like, math-wise. Perhaps think of them as the output of giant regression analyses, where the alphas are the coefficients. The coefficients tell us the correlation observed between a change in an independent variable (on the RHS) and the change in the dependent variable (on the LHS). The models show us correlations observed in the real world. But the Lucas critique tells us that the alphas (coefficients) *depend on the underlying policies themselves* – they are not independent, and so while they may be statistically significant in an evaluation of historical correlation — they are not useful for predicting the effect of *making changes to the underlying policies*.
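Schematically, something like this (my own abstraction, not any real macro model): the dependent variable on the left, observed drivers on the right, alphas estimated from historical data.

```latex
% A schematic "supermodel": dependent variable on the left, observed drivers
% on the right, alphas estimated from historical data
y_t = \alpha_0 + \alpha_1 x_{1,t} + \alpha_2 x_{2,t} + \dots + \alpha_k x_{k,t} + \varepsilon_t

% The Lucas critique in this notation: the alphas are not structural constants;
% they are themselves functions of the policy regime
\alpha_i = f_i(\text{policy})
```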

The Lucas critique is kind of a mind-bender. I had to meditate on it. And it’s possibly what moved macro into post-Keynesianism.

Aside on this post-Keynesianism — Two schools of thought that we talked about in class: the New Keynesians (who believe pricing can still be a tool, that it’s dynamic, but prices & wages are sticky so there’s a lag in response to a policy/pricing change) and RBC (Real Business Cycle) economics (who believe there are no price makers, prices “arise” out of supply/demand, and that policy changes are irrelevant…to change the system, applying SHOCKS is needed, for example, change in energy price, or major innovation).

So back to the Lucas critique. One of the things I love about it is this:

Positive vs Normative Economics

It brings us right back to the difference between positive and normative economics. Positive economics is mainly…economics to understand how things work: what it is, describing it, evaluating it. Normative economics has an implied value judgement: what it should be. The output tends to be recommendations or policy changes.

Almost every econ professor I’ve had has warned of the siren song of policy: that economists get lured in. I think the reason they describe this tendency with such portent is that policy is tricky. It is one thing to analyze what has happened, it is entirely another to design policies that will be effective at making the systemic changes desired by (well-meaning) policy makers. No kidding.

Currency of Risk

So while our systems are different from those managed by economists, risk scientists have a similar set of items to consider, constraints, and dynamic elements (like competition, or adversaries). Ultimately we’re both looking to maximize the returns of our systems. Since the language of risk is contained within the language of economics, with a few custom elements, it really isn’t a big leap to think there may be lessons to be learned from looking more deeply within economics to understand our own system dynamics.

BoomTime

Immediately, there are a few concepts in micro/game theory that are relevant. I’ve blogged my idea re: inferior goods & a security CPI previously…discussions of a “security poverty line” or “maturity models” are related. Within game theory, incentive design and coalitional game theory already have some traction with security economics researchers. In general I suspect we will see more proper policy analysis in the area of cybersecurity; graph theory is still in the early stages of being applied; and we may be able to create more dynamic, and more realistic, threat modeling techniques by incorporating uncertainty and bigger data. Speaking of big data, and technology: those are all key elements in creating the next generation of insurance, econometrics, and classification as applied to cybersecurity. Oh yes. Security econometrics: it will be a thing, and it is going to sit right between insurance and classification in more places than on this slide.
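A hand-wavy sketch of what “threat modeling with uncertainty” could mean in practice: simulate a loss distribution instead of quoting a single expected-loss number. The Poisson frequency and lognormal severity below are convenience assumptions of mine, and every parameter is invented for illustration.

```python
import math
import random
import statistics

random.seed(7)  # reproducible illustration

def simulate_annual_loss(freq_mean=3.0, sev_mu=9.0, sev_sigma=1.2):
    """One simulated year of losses: Poisson-ish event count, lognormal severity
    per event. All parameters are illustrative, not calibrated to anything real."""
    # Poisson draw via the Knuth method (keeps the sketch dependency-free)
    count, threshold, p = 0, math.exp(-freq_mean), 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            break
        count += 1
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(count))

losses = sorted(simulate_annual_loss() for _ in range(10_000))

print(f"expected annual loss: {statistics.mean(losses):,.0f}")
print(f"standard deviation:   {statistics.stdev(losses):,.0f}")
print(f"95th percentile:      {losses[int(0.95 * len(losses))]:,.0f}")
# The point: the same expected value can hide very different variances,
# and it's the spread (and the tail) that actually describes the risk.
```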

How to Win [Risk] Friends & Influence [Investment] People

Wrapping up this stream of consciousness:

  • “Defender’s Dilemma” as a concept is tired. The binary win/loss mindset is over: it’s BoomTime, time to frame things up as expanding our capacity to absorb and manage risk/uncertainty, not sitting around like big bullseyes. Ditch adding up point exposures to nonsense levels. Portfolio theory. System-level view. Embiggen risk capacity! It’s the long game, people.
  • Expected value is good for what it’s good for; it’s not a terrible view for understanding what’s being aimed at from an investment perspective. But if we’re talking about RISK, we’re talking about UNCERTAINTY, not just expected exposure/value, and we need to incorporate VARIANCE.
  • Positive vs Normative analysis: Positive is simply “what happened”, maybe a bit of “why”. Normative analysis has inherent value judgements, i.e. the addition of “what SHOULD happen”. And there, my friends, be dragons.

Questions? You know where to find me. Boom on, Riskafarians.

