Thank you, internets, for all the feedback I’ve gotten on BoomTime: Risk As Economics. Of course my slides are nigh indecipherable without my voiceover, and my notes didn’t make it to the slideshare, so here are some notes to fill in (some) of the blanks until the video hits YouTube (SiRA members will get early access to SiRAcon15 videos via the SiRA Discourse forum, BTW). (You will want to look at the notes and the slides side by side, probably, as one doesn’t make sense w/o the other.)
A list of typical themes one hears when discussing information security & economics: within businesses we are asked to talk about exposures and threats in terms of financial impact, or to consider the financial (money) drivers. Information asymmetry (the Market for Lemons) is another big theme, both in information economics and in software markets generally: when information about the quality of a product is difficult to find, that lack of transparency drives down prices, and we get weaker incentives to improve quality. (Ask me questions about market signals as a mechanism for correcting information asymmetries.) “Make it more expensive for the attacker” or “don’t outrun the bear, outrun the guy next to you” is also an idea that gets raised. Game theory, concepts of quantifying “risk” (exposure, tolerance), and markets for exploits & vulns are hot topics at the moment, as is behavioral economics and all things related to incentive design – gamification being the most buzzwordy example, perhaps, but framing as a method for improving consumers’ ability to make good choices about privacy preferences is also something that has come up a bit lately in security economics research. Anyway, these are some themes that tend to be repeated in recent research literature.
I spend a lot of time thinking about how to use economics to create safer, more secure systems. That’s what’s been driving my forays into seeing whether the way economists deal with grey markets might work in infosec, what we as system designers can learn from game theory, how to connect secure networks using graph theory (haha), why I submitted a paper to WEIS, and why, now, I’ve gone back to school (again) to study economics in more depth. I’m taking microeconomic theory now. It’s just like micro the last two times around, with fewer folksy examples and more calculus.
So. What I want to talk to you about is a little idea I had regarding inferior goods as they may relate to a firm’s level of maturity, and how that might be interesting both on its own and if we had the concept of a CPI (consumer price index) for security. Let’s call this @selenakyle’s Security CPI, in case anyone wants to adopt this idea into the pantheon of the Hutton Security Mendoza Line or Corman’s HD Moore’s Law.
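To make the CPI analogy concrete, here’s a minimal sketch (in Python) of how a Laspeyres-style price index could be computed over a purely hypothetical basket of security products and services. The item names, prices, and quantities are invented for illustration only – this is not real market data, just the arithmetic a Security CPI would need.

```python
# Hypothetical sketch of a Laspeyres-style "Security CPI".
# Basket items, base-period quantities, and prices are invented for illustration.

base_period = {  # item: (base-period price, base-period quantity)
    "endpoint_agent_per_seat": (30.0, 1000),
    "managed_siem_per_month": (8000.0, 12),
    "pentest_engagement": (25000.0, 2),
}

current_prices = {  # prices observed in the current period
    "endpoint_agent_per_seat": 27.0,
    "managed_siem_per_month": 9500.0,
    "pentest_engagement": 28000.0,
}

def laspeyres_index(base, current):
    """Cost of the base-period basket at current prices, relative to base cost (x100)."""
    base_cost = sum(price * qty for price, qty in base.values())
    current_cost = sum(current[item] * qty for item, (price, qty) in base.items())
    return 100.0 * current_cost / base_cost

print(round(laspeyres_index(base_period, current_prices), 1))  # ~111.9 for this toy basket
```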
Some background.
What’s an inferior good?
The simple answer is: an inferior good is one for which demand decreases as consumer income rises. (Period. “Inferior goods” as a concept is totally distinct from information asymmetry and conversations about lemon markets.)
More detail on inferior goods:
Utility curves: preferences between Good A & Good B, at different levels of utility (indifference curves U1, U2, U3). Thanks investopedia!
Start with the assumption that consumers seek to maximize their utility given a fixed budget, i.e. they have an income and they spend it in a way that gets the most for their money, given their individual preferences. When consumers experience an increase in income, they will consume *more* of most goods (rational utility maximization plus non-satiation) but will purchase fewer “inferior” goods – potentially because they can now afford better.
A classic example is potatoes within a food budget: when income goes up, many consumers will purchase fewer potatoes…and more meat, or higher-end food items. The effect of a change in prices can also depend on the mix of normal vs inferior goods in the bundle. One example – when prices go up and income stays flat, a consumer may change their mix to include more inferior goods. Another – when prices are flat and income goes up, a consumer may shift their mix to include fewer inferior goods. In any case, the consumer will shift their consumption to maximize their utility, adjusting to new prices or income levels.
The key here is what happens as income rises: does the mix of products in the bundle consumed change (preferences shift) or is it just *more* of the products (same preferences)?
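In textbook terms (standard micro notation, nothing security-specific), the distinction comes down to the sign of the income elasticity of demand:

```latex
% Consumer problem: maximize utility subject to a budget constraint
\max_{x_A, x_B} \; U(x_A, x_B) \quad \text{s.t.} \quad p_A x_A + p_B x_B \le I

% Income elasticity of demand for good A
\eta_A = \frac{\partial x_A}{\partial I} \cdot \frac{I}{x_A}

% Normal good:   \eta_A > 0  (demand rises as income rises)
% Inferior good: \eta_A < 0  (demand falls as income rises)
```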
I just finished giving a third version of a presentation that I put together on lessons Infosec/Risk/Platform owners can learn from classic Operations Research/Management Science work. The talk (“Operating * By the Numbers”) was shared in Reykjavik (Nordic Security Conference), Seattle (SIRACon 2013), and in Silicon Valley (BayThreat). Thanks to everyone who attended, especially those of you who asked questions and provided feedback.
A few folks have asked for reading lists. Some asked for the quick run-through sample from my bookshelf, others want some further reading. Here’s the quick run through:
And I also want to give another shout-out to Combat Modeling, by Alan Washburn and Moshe Kress, of the Naval Postgraduate School. It’s a pricey text, but take a look at the table of contents & the topics they cover. Really interesting work to consider for control system designers.
Also, I haven’t read these personally but they are on my “to read” list as they came recommended by fellow quant/risk nerds:
You’re all just nuts. Well. Ok. Some of you are legumes.
I was distracted earlier this week by a thread on the SIRA mailing list. I found myself reacting to a comment suggesting that maybe quantitative risk mgmt is “just” plain ol’ SIEMs plus some stats/machine learning. That ended up being a bit of a hot button for a few folks on the list, and a very interesting discussion got going about data architecture options versus how common security-industry-tuned tools work, which is worth a whole dedicated discussion. In any case it put me into a contemplative mood about SIEMs, since I am of two minds about them depending on what environment I’m working in: it’s the “any port in a storm” vs “when you have a hammer everything looks like a nail” thing. But regarding SIEM vs databases, or anomaly detection vs ML, or whatever:
While acknowledging that apples and pears are both fruit, some people prefer to cut their fruit (agnostic to apple-ness or pear-ness) with very sharp ceramic knives vs, say, good ol’ paring knives, depending on the dish being prepared.
That said, the bowl you put fruit salad into may need to be different (waterproof, airtight, bigger) than a bowl one puts whole fruits in.
Also, in an even more Zen-like tangent: no matter what bowl or what fruit or what knife is being selected, if you’re making fruit salad you’re going to have to spend some time cleaning the fruit before cutting and mixing it. If the bowl the whole fruits were in is especially dirty, or say, a crate – or a rusty bucket – you may want to spend more time cleaning.
I was going for something Zen.
But I’m not very Zen, I’m pedantic, so here’s some explanation of the analogy:
Apples & Pears are both fruit
System logs are data that is usually stored in logfiles. Security devices generate system logs, and so do other devices. Errors are often logged, or system usage/capacity. Servers, clients, applications, routers, switches, firewalls, anti-virus systems — all kinds of systems generate logs.
Financial records, human resource records, customer relationship management records are data that are usually stored in databases. Some may be generic databases, others may be built specifically for the application in question.
There are also data types that are kind of a cross between the two, for example – a large consumer facing website may have account data. You are a customer, you can login and see information associated with your account – if it’s an email service, previous emails. If it’s an e-commerce site, maybe you can see previous transactions. You can check to make sure your alma mater or favorite funny kitten gif is listed correctly on your account profile. It’s not system logs, and it’s not internal corporate records – it’s data that’s part of the service/application. This type of data is usually stored in a database, though there might be metadata associated with the activity stored in logs.
In another mood, I might delve further into this criss-cross category, which often results in a “you’ve got your chocolate in my peanut butter…you’ve got your peanut butter in my chocolate” level of fisticuffs.
But, it’s all DATA.
People have different tool preferences when it comes to cutting fruit
Some capabilities of data-related tools:
Basic capabilities tend to be common, or directly comparable, across tools. For example, here’s an article that compares some of the commands that can be used in a traditional SQL database to similar functions in Splunk, a popular SIEM.
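As a toy illustration of that overlap (my own sketch, not taken from the article): the same basic “count events by status” aggregation in SQL, with a roughly analogous Splunk search noted in a comment. The table, data, and index name are made up for illustration.

```python
import sqlite3

# Toy illustration: a basic aggregation in SQL.
# A roughly analogous Splunk search would be something like:
#   index=web_logs | stats count by status
# (hypothetical index name; SPL shown only for comparison)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (host TEXT, status INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("web01", 200), ("web01", 500), ("web02", 200), ("web02", 200)],
)

for status, count in conn.execute(
    "SELECT status, COUNT(*) FROM events GROUP BY status ORDER BY status"
):
    print(status, count)
```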
The point is, while many tools have many of the desired features, there may be tradeoffs. A product might make it really easy to conduct filtering (via an awesome GUI and pseudocode) and still have limitations when it comes to extracting a set of events across multiple tables that meets ad hoc-developed, but still quite technically specific, criteria. Or, a tool might excel at rapid access to recent records, but crash if there’s a long-term historical trend to analyze. Or, it might be a gem if you’re trying to do some statistical analysis of phenomena, but too resource-intensive to be used in a production environment.
People have different use cases for cutting fruit
In some cases data is kept only to diagnose and resolve a problem later
In some cases data is kept in order to satisfy retention requirements in case someone else wants to diagnose/confirm an event later
In some cases data is kept because we’re trying to populate a historic baseline so that in the future we have something against which to compare current data
In some cases data is kept so that we can analyze it and predict future activity/behavior/usage
In some cases data is kept because it is part of the service / product being supported
Ops is different from Marketing. Statisticians are not often the same people doing system maintenance on a network. Etc.
Hadoop architecture
About bowls
The container for your data only matters if the container has special properties that facilitate the tools you’re going to apply, your use case for storing the data, or your use cases for processing/manipulating the data. A big use case in the era of always-on web-based services is special containers designed to allow for rapid manipulation and recall of Very Large amounts of data.
An example of traditional data architecture: RDBMS architecture
SIEM architecture – “SIEM” is a product category rather than a description of an architecture; different products may have different architectures, and here are a few examples. Typically a SIEM accepts feeds from devices generating logs, and then has functions to consolidate, sort, search, and filter.
Arcsight architecture
Here’s how Splunk describes itself:
“Splunk is a distributed, non-relational, semi-structured database with an implicit time dimension. Splunk is not a database in the normative sense …but there are analogs to many of the concepts in the database world.”
Which architecture is best is a silly question; they are architected differently on purpose. Pick a favorite if you must, but if you work with data, be prepared: you’ll probably not often find yourself in homogeneous environments.
About working with data
No matter where your data is sourced, if you want to do something snazzy like use it to train a neural net, or do a fun outlier analysis, then you’re going to have to spend a great deal of time prepping your data, including cleaning it. While many database architectures claim to make this process easier (I’ve yet to meet an analyst who has ever described this part of analysis as fun or easy), what’s definitely true is that some data storage formats and practices make it harder.
If your data is unstructured – like you might find in key-value-pair or document stores – you might have significant work to get it into a more structured format, depending on what research methods you are going to use to conduct your analysis.
Even with relatively structured data, you might find that the formatting works for one purpose but needs further simplification when you get to the analysis stage.
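As a small illustration of that prep work (the record below is made up, not any particular product’s log format): flattening a semi-structured, document-style record into a flat row that an analysis tool can digest.

```python
import json

# Hypothetical semi-structured record, e.g. from a document store or key-value export.
raw = '{"event": "login", "user": {"id": 42, "geo": {"country": "IS"}}, "ts": "2013-12-01T10:22:03Z"}'

def flatten(d, parent_key="", sep="."):
    """Recursively flatten nested dicts into dotted column names."""
    items = {}
    for k, v in d.items():
        key = f"{parent_key}{sep}{k}" if parent_key else k
        if isinstance(v, dict):
            items.update(flatten(v, key, sep))
        else:
            items[key] = v
    return items

row = flatten(json.loads(raw))
print(row)
# {'event': 'login', 'user.id': 42, 'user.geo.country': 'IS', 'ts': '2013-12-01T10:22:03Z'}
```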
The cooler things we might discover require working with more complex (i.e. less structured) data, which is why advances in manipulation of less structured data, and algorithms that are forgiving of different types of complexity are fun. Sometimes it’s the analytic technique that’s new, sometimes it’s the technology for applying it, but often the “coolness”, or at least the nerdy enthusiasm, is from applying existing techniques & tech to a new data source, OUR data source, to answer OUR question – in a way that hasn’t quite been done before. That’s kind of how research is.
And so
Stop worrying so much about your bowls. Unless the lid is on so tight that you can’t get your fruit salad out.
Risk modeling, while it sounds specific, is actually super-contextual. I think my own perspective on the topic (the different types of modeling, what they are good for) was best summed up in a paper/presentation combo I worked on with Alex Hutton for Black Hat & SOURCE Barcelona in 2010. The video from Barcelona is probably the best reference if you want to look that up (yes, lazy blogger is lazy), but let me summarize what are, from my perspective, the three general purposes of risk models:
Design: Aligned most with systems theory. These models try to summarize the inputs (threats, vulns, motives, protections) and the outputs (generally loss, and in some cases “gains”) of a system, based on some understanding of the mechanisms in the system that will allow or impede inputs as catalysts/diffusers of outputs. I would generally lump attack tree modeling and threat modeling into this family – just a different perspective on a system, whether that system is a network architecture or the design of a protocol, software, or a network stack. Outside of risk/security, a general “business model” is the equivalent: it attempts to clarify the scope, size, cost, and expected performance of the project.
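As a toy example of the attack-tree flavor of design modeling (the tree structure and cost estimates below are invented purely for illustration), here’s a sketch that computes the cheapest attacker path through a small AND/OR tree:

```python
# Toy attack tree: OR nodes take the cheapest child, AND nodes sum their children.
# Node names and cost estimates are invented for illustration only.

attack_tree = ("OR", [
    ("LEAF", "phish an admin", 500),
    ("AND", [
        ("LEAF", "buy exploit for exposed service", 2000),
        ("LEAF", "pivot to database host", 800),
    ]),
])

def min_attacker_cost(node):
    kind = node[0]
    if kind == "LEAF":
        return node[2]
    costs = [min_attacker_cost(child) for child in node[1]]
    return min(costs) if kind == "OR" else sum(costs)

print(min_attacker_cost(attack_tree))  # -> 500 (phishing is the cheapest path here)
```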
Management: Aligned most with the security/risk metrics movement, and (to some extent) with “GRC”-type work. Management-focused risk models are set up to measure and estimate performance, i.e. to answer questions like “how well are controls mitigating risk?” or “how much risk are we exposed to?”. One could think of the output of the design phase as a view of what types of outcomes to expect, while the management phase provides a view of what outcomes are actually being generated by a system/organization. Outside of risk/security, a good example of a management model is the adoption of annual/quarterly/ongoing quality goals, and regular review of performance against targets.
Operations: Operational models are a different beast. And my favorite. Operational models aren’t trying to describe a system; they are embedded into the system, and they influence the activities taking place in the system, often in real time. I suppose any set of heuristics could be included in this definition, including ACLs. I prefer to focus on models that take multiple variables into consideration – not necessarily complex variables – and generate scores or vectors of scores. Why? Because the quality of the resulting decisions (model fit, accuracy, performance, cost/benefit trade-off) will generally be better. Outside of risk/security, a good example is dynamic traffic routing used in intelligent transport systems.
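To make “multiple variables in, score out” concrete, here’s a minimal sketch of an operational scoring function. The features, weights, and threshold are entirely made up; a real model would be fit to data and tuned to the cost/benefit trade-off of the decision it drives.

```python
import math

# Hypothetical weights for a toy login-risk score; in practice these would be
# fit to historical data, not hand-picked.
WEIGHTS = {"new_device": 1.6, "geo_velocity_kmh": 0.002, "failed_logins": 0.7}
BIAS = -3.0

def risk_score(event):
    """Logistic-style score in [0, 1] from a handful of event features."""
    z = BIAS + sum(WEIGHTS[k] * event.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

event = {"new_device": 1, "geo_velocity_kmh": 900, "failed_logins": 2}
score = risk_score(event)
print(round(score, 3), "step-up auth" if score > 0.5 else "allow")
```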
“Framework” is another term that I’ve heard used in a number of different ways, but it seems to really be an explanation of a selected approach to modeling, plus some bits on process – how the models and processes will be applied on an ongoing basis to administer the system. Even Wikipedia shies away from an over-arching definition; the closest we get is “conceptual framework”, described as an outline of possible courses of action or a way to present a preferred approach to an idea or thought. They suggest we also look at the definition of scaffolding: “a structure used as a guide to build something” (yes, thank you, I want us to start discussing risk scaffolding when we review architecture, pls).
So, it’s been about two years since I added anything to this blog. I’ve been busy!! The awesome folks at SOURCE gave me a speaking slot at SOURCE Boston 2010, and that kicked off a series of talks on methods consumer-facing companies/websites use to protect customers from online threats. Then, later in 2010, I was able to participate in some discussions on different types of threat modeling and the situations in which modeling techniques can be useful.
In 2011 I wanted to talk about some more concrete topics, and so spent some time researching how threats/impacts can be better measured. This is an area I’d like to spend more time researching, because there’s still a gap between what we can do with high-frequency/lower-impact events (which seem to be easier to instrument, measure, and predict) and lower-frequency/high-impact events (which are very difficult to instrument, measure, or predict). I think the key is that high-impact events usually represent a series or cascade of smaller failures, but there’s more research into change management and economics to be done.
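One common way to frame that measurement gap (my sketch here, not anything from the talks) is a frequency/severity simulation: the high-frequency/low-impact regime is dominated by the event count, while the low-frequency/high-impact regime is dominated by the heavy tail of the severity distribution. The parameters below are invented purely for illustration, not calibrated estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_annual_loss(freq_mean=12, sev_mu=8.0, sev_sigma=2.0, years=10_000):
    """Monte Carlo of annual losses: Poisson event count x lognormal severity.
    All parameters are invented for illustration, not calibrated estimates."""
    counts = rng.poisson(freq_mean, size=years)
    losses = np.array([rng.lognormal(sev_mu, sev_sigma, n).sum() for n in counts])
    return np.median(losses), np.percentile(losses, 99)

median, p99 = simulate_annual_loss()
print(f"median annual loss ~{median:,.0f}, 99th percentile ~{p99:,.0f}")
```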
Later in 2011 I switched over to describing how analytics can be used to study and automate security event detection. I hope in the process I didn’t blind anyone with data science. (haha…where’s that cowbell?) So here’s what I did: