Boomtime: Risk as Economics (Notes)

Thank you, internets, for all the feedback I’ve gotten on BoomTime: Risk As Economics. Of course my slides are nigh indecipherable without my voiceover, and my notes didn’t make it to the slideshare, so here are some notes to fill in (some) of the blanks until the video hits YouTube (SiRA members will get early access to SiRAcon15 videos via the SiRA Discourse forum, BTW). (You will want to look at the notes and the slides side by side, probably, as one doesn’t make sense w/o the other.)

An intro here is that in addition to being a product manager specializing in designing large-scale, data-driven security/anti-fraud/anti-abuse automation (yep, that’s a thing), I’m also an economics nerd. (Currently working on an MS in Applied Econ at JHU). Given my background in payments, and a general penchant for “following the money”, framing technology problems on platforms through an economic/financial lens is second nature.

Themes of Security Economics

A list of typical themes one hears when discussing information security & economics:

  • Within businesses, we are asked to talk about exposures and threats in terms of financial impact, or to consider the financial (money) drivers.
  • Information asymmetries (the Market for Lemons) are a big theme of information economics and of software markets in general: when information about the quality of a product is difficult to find, that lack of transparency drives down prices, and we get fewer incentives to improve quality. (Ask me questions about market signals as a mechanism for correcting information asymmetries.)
  • “Make it more expensive for the attacker” – or “don’t outrun the bear, outrun the guy next to you” – is an idea that also gets raised.
  • Game theory, and concepts of quantifying “risk” (exposure, tolerance).
  • Markets for exploits & vulns, a hot topic at the moment.
  • Behavioral economics and all things related to incentive design: gamification is perhaps the most buzzwordy example, but framing as a method for improving consumers’ ability to make good choices about privacy preferences has also come up a bit lately in security economics research.

Anyway, these are some themes that tend to be repeated in recent research literature.


Inferior Goods & the Security CPI

I spend a lot of time thinking about how to use economics to create safer, more secure systems. That’s what’s been driving my forays into seeing whether the way economists deal with grey markets might work in infosec, what we as system designers can learn from game theory, how to connect secure networks using graph theory (haha), why I submitted a paper to WEIS, and why, now, I’ve gone back to school (again) to study economics in more depth. I’m taking microeconomic theory now. It’s just like micro the last two times around, with fewer folksy examples and more calculus.

So. What I want to talk to you about is a little idea I had regarding inferior goods as they may relate to a firm’s level of maturity, and how that might be interesting both on its own, and if we had the concept of a CPI (consumer price index) for security. Let’s call this @selenakyle’s Security CPI, in case anyone wants to adopt this idea into the pantheon of the Hutton Security Mendoza Line or Corman’s HD Moore’s Law.


Some background.

What’s an inferior good?

The simple answer is: an inferior good is one for which demand decreases as consumer income rises. (Period. “Inferior goods” as a concept is totally distinct from information asymmetry and conversations about lemon markets.)

More detail on inferior goods:

spare a util, brother?

Utility curves: preferences between Good A & Good B, at different levels of utility (U1, U2, U3). Thanks, Investopedia!

Start with the assumption that consumers seek to maximize their utility given a fixed budget, i.e. they have an income, and they spend it in a way that gets the most for their money, given their individual preferences. When consumers experience an increase in income, they will consume *more* of most goods (due to rational utility maximization and non-satiation) but will purchase fewer “inferior” goods – potentially because they can afford better.

A classic example is potatoes within a food budget; when income goes up, many consumers will purchase fewer potatoes…and more meat, or higher-end food items. So, the effect of changes in prices may also be affected by the mix of normal vs inferior goods in the bundle. An example – when prices go up and income stays flat, a consumer may change their mix to include more inferior goods. Or another example – when prices are flat and income goes up, a consumer may shift their mix to include fewer inferior goods. In any case, the consumer will shift their consumption to maximize their utility, adjusting to new prices or income levels.
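To make the sign of the income effect concrete, here is a toy sketch in Python – the demand functions are invented for illustration, not estimated from any real consumption data:

```python
# Toy demand functions (invented numbers) contrasting a normal good
# with an inferior good: as income rises, demand for the normal good
# rises while demand for the inferior good falls.

def demand_meat(income):
    # Normal good: quantity demanded increases with income.
    return 0.002 * income

def demand_potatoes(income):
    # Inferior good: quantity demanded decreases with income
    # (floored at zero so the toy function stays sensible).
    return max(0.0, 50 - 0.001 * income)

for income in (20_000, 40_000, 60_000):
    print(f"income={income}: meat={demand_meat(income):.0f} kg, "
          f"potatoes={demand_potatoes(income):.0f} kg")
```

The formal marker of an inferior good is a negative income elasticity of demand, which the potato function above has by construction.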

The key here is what happens as income rises: does the mix of products in the bundle consumed change (preferences shift) or is it just *more* of the products (same preferences)?


2014.11 (ISC)2 Election Time!

Hi everyone,
I am running for the (ISC)2 Board of Directors this fall.

Basically – I have been a CISSP for almost 15 years and would like the opportunity to help out (ISC)2 more directly. I’d like to spend some time building out the (ISC)2 foundation, and also work on clarifying the strategy and growth plans for the certification/training programs. In addition to my experience in the infosec/risk management space, I have leadership experience with non-profit and volunteer-driven organizations that will be useful. If you are interested in the election and have questions – ask away. Otherwise I feel like I’m talking to myself (more than usual).

About the election (includes Board slate, timeline, & process)

Of course, I will keep updating this with additional content.

About me

I’m adding this section from my statement on the (ISC)² website because it gets to the core of what I’ve been thinking about and discussing with colleagues when it comes to (ISC)². Check out my full statement on the election website for more details, and come back to me with questions!

(ISC)²’s ambitious vision is to “Inspire a safe and secure cyber world,” and we are strongly positioned to lead the industry forward, as our organization has both the expertise (our membership) and the reach (through the Foundation) to up-level security for businesses AND consumers globally. Since most of my career has been dedicated to protecting consumers and end-users from online threats, I am both keenly interested and uniquely qualified to help the organization refine and achieve this vision.

To pursue this larger long-term vision for tomorrow, today we need to address a few key questions related to the future of (ISC)²’s core program components: certification, membership, and training. The bottom-line is that (ISC)² members need:

  • Confidence in the credibility of (ISC)² certifications
  • A clear value proposition to ongoing affiliation with the organization
  • Access to useful training and education opportunities

While what we expect from (ISC)² is straightforward, what the industry expects from us is a little more complex. Market needs for infosec are evolving, and successful certification/training programs must find a better way to meet practitioner requirements for both specialization (e.g. application security or quantitative risk analytics) and generalization (a broad base of “basics,” fluency in companion domains like network operations, law enforcement, software development, or management/business strategy). Professionals already require credibility (and potentially certifications) across several dimensions. With demand for critical skills continuing to increase, raising the level of our game means that, as professionals, certification is the beginning – not the end – of our practice.

Payment Risk: DSS & Close Range Combat

When it comes to PCI-DSS, it is easy to get confused about whether or not it’s working. And part of the reason why is that it has never been very clear what problem the PCI-DSS is attempting to solve. Is it trying to prevent fraud, or ensure a dependable minimum level of security in the payment system? My answer so far is neither.

Fraud has always been a problem of payment systems. Cards, like cash, can be counterfeited – and as technology to make counterfeiting more difficult advances, so too does the technology with which anti-counterfeiting methods can be defeated. In card payments, liability for fraudulent transactions is defined within their operating rules (for example the Visa Operating Regulations), and tends to be determined on a transaction-by-transaction basis. To prevent fraud the card issuer that authorizes the transactions needs as much information as possible to detect off behavior, and the merchant that accepts the transaction needs to take some basic precautions at the point-of-sale (in the U.S., at the simplest level this is swipe the card and check the signature). Liability for fraud in the face-to-face environment (when the merchant follows the correct operating procedures) usually rests with the Issuing bank. Liability for fraud in the Card Not Present (CNP) world often rests with the merchant, because the merchant *can’t* follow the existing procedures — no signature.

The point here is that liability is determined on a case-by-case basis, applying the operating rules to the details of each transaction, as evidenced by the data that has gone back and forth between the merchant and cardholder, and then from the merchant through their acquirer/processor to the issuing bank and back again. Transactional liability is both well-defined and, given the scenario, relatively easy to assign.
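As a sketch of the case-by-case flavor of those rules – with the caveat that the field names and logic here are heavily simplified stand-ins, not the actual operating regulations – transactional liability assignment looks roughly like:

```python
# Heavily simplified sketch of transaction-level liability assignment.
# Real network operating regulations are far more detailed; the fields
# and rules here are illustrative stand-ins only.

def assign_liability(card_present: bool, followed_procedures: bool) -> str:
    if card_present and followed_procedures:
        # Face-to-face: merchant swiped the card and checked the
        # signature, so liability typically rests with the issuer.
        return "issuer"
    # Card-not-present (no signature possible) or procedures not
    # followed: liability typically shifts to the merchant.
    return "merchant"

print(assign_liability(card_present=True, followed_procedures=True))   # -> issuer
print(assign_liability(card_present=False, followed_procedures=True))  # -> merchant
```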

However when payments started going online, something interesting happened. It became fairly obvious that the information needed to process a payment online (16-digit PAN, expiration date, address information of the cardholder) was also (obviously) being transmitted online and (not so obviously, but, in the early 2000’s kind of terrifying) being stored online. This opened up the possibility that an entity could get popped and lose not just a week’s worth of transactions at one point-of-sale — but hundreds, thousands, *millions* of cards in a single swoop, and fraudsters could use those cards downstream.

Let’s review that scenario: an online retailer gets popped and then those cards get used…at OTHER online retailers. Or face-to-face retailers that allow key-entered (manually typed in) transactions. Maybe across many retailers. And across many Issuers. Not a “local” merchant, like a gas station, where it would be fairly obvious to connect the fraud cases that follow. How long would it take to detect it? How many downstream participants in the system, following the operating procedures as designed, would have to deal with the negative aftershocks coming from that one compromise event? And since the party that “should” be accountable is not actually part of the manifesting fraud transactions, how can liability be shifted?

Historical note (fraud prevention infrastructure): In the early 2000’s the discussion was mostly focused on the CNP environment because a) it was new, b) it was the wild west, and c) a number of high profile web companies got hax0red. Payment/fraud geeks can think of this as the era of: Address Verification System (AVS) was *pretty* well established at this point, SET was finally acknowledged as totally DOA, the drumbeat for 3D Secure was on but *nobody* was adopting yet (it was before the big push in the EU)…and early days for chip. Also: still on regular DES in the PIN infrastructure.

Historical note (fraud prevention & security strategy): One may also remember the era in this way: Issuing banks owned authorization strategy (meaning, they were making the approval decisions on transactions) and very few merchants had made investments in fraud screening. All of the banks were getting a little spooked that databases full of juicy card details were sitting outside the payment system and that so many of them were accessible from the internet. Merchants of yore never needed to store card details — just receipts.

Back to the scenario: one of those wild west outposts full of juicy card data gets popped, downstream participants (Issuers, merchants, cardholders) feel the pain, the compromised party may or may not be known. The network operating guidelines’ transactional rules don’t adequately assign liability back to the accountable party: what are the banks going to do? Well, there are pretty much two options: adjust the system to assign liability back to an accountable party OR go outside the system to demand restitution. The former is difficult and the latter takes issues of the payment system outside standard channels which for several reasons is not ideal for the payment systems themselves, who have elaborate systems set up for arbitration and compliance to address issues between participants.

It is out of this alchemy that the card network data protection programs were born. They are liability plays all the way, and I give them the benefit of the doubt that they were meant to be incentives to encourage merchants to “do the right thing” and secure payment card data. MasterCard and Visa developed slightly different programs. My paraphrase is: MasterCard’s SDP essentially said to merchants — we trust you to secure the data, but if you get hacked we are going to levy fines to high heaven. And Visa’s CISP/AIS programs essentially said — we want to see some proof in advance — so get audited by a Qualified Security Assessor (QSA) who will attest you’re cool, and then if something happens but you were compliant, we’ll work with you.

Both approaches are sticks, neither is a carrot. The subsequent merger and morph into the PCI-DSS sort of also merged the compliance program approaches. Merchants must both get attestations of compliance AND, if breached, there are programs for providing remuneration to downstream system participants affected. (BTW: PCI has more than one standard in its purview, though most people when referring to PCI mean the Data Security Standard…poor PIN/PED requirements, always subordinated to DSS…)

The PCI-DSS requirements themselves were developed as a set of reasonable industry best practices (I can only speak w/authority on intentions behind the Visa program progenitors, but let’s just go with it). At their best, they are meant to provide guidance to merchants who otherwise would have no idea how to protect cardholder data. The criticisms of the DSS itself are wide-ranging, but I personally find the DSS simply basic, not bad: a narrow view (focused on payment card data and systems) and yet still very general (security is contextual and needs to be tailored). Really the DSS should have been left as guidance, but unfortunately there’s this business of being assessed and a whole industry that has grown up around QSA-dom. I find the process of getting assessed to be much more objectionable than the requirements themselves: “compensating controls” could be a book in and of itself, but QSAs are auditors first and foremost (ROC on), and generally asked to interpret or design strategic security as a secondary concern, if at all.

Now, while in the early 2000’s the focus of all this angst was on CNP merchants, since then the scope of the PCI-DSS problem has embiggened. The DSS quickly expanded to include non-CNP merchants, payment processors, and even the banks themselves. So a program of best practices designed to secure payment card data *outside* of the payments infrastructure got some retrofitting to also cover payment card data ostensibly *inside* the payments infrastructure. You can’t see me right now, but I’m raising an eyebrow, because the payment infrastructure is so interdependent, and has so many legacy components, that a step-level upgrade to the security of payments infrastructure is impossible to manage without some serious planning, a boatload of direct economic impacts, and a hell of a lot of specificity. *Upgrade to triple-DES, I’m looking at you.* All I’m saying is, if you’ve got payments infrastructure requirements – don’t bring a knife to a gun fight.

Speaking of knife fights, let’s chat about fraud. Yes, fraud may go up after a major compromise. If counterfeiters flood the market with bad cards, it may take a while for issuer fraud screening to kick in. Card re-issuance is expensive, so some issuers take a risk, leave potentially compromised cards open, and then miss some fraud transactions. However, if major compromises go down, will the fraud rate also go down? That is less clear. Motherlode-sized compromises are a recent phenomenon, and while fraud rates have ticked up in the past two years, from 2003-2010 they were hovering near historical lows. Note: fraud losses in total dollars continue to climb, but global fraud rates (i.e. the portion of total payment volume that ends up as fraud) have been relatively steady over the past decade, at less than 6 basis points – hundredths of a percent, i.e. under $6 of fraud per $10,000 of volume. (If you’re a U.S. CNP merchant you’re laughing out loud that people would be in a huff over 6 bp). The Nilson Report is a good place to get some data. I also like the CyberSource report, but I’m writing this all in notepad.

My point here is that major compromises (which have been getting a LOT of press and attention) are only ONE method the fraud economy uses to operate. All of the other methods, like skimming, social engineering, insider threats, and plain old theft, still exist. And so, with all of this, how is fraud being kept down below 10 bp? That’s a multiple-choice answer: some markets have opted for prevention strategies (Europe loves their chip & PIN, and 3D Secure is working there), others have opted for more advanced detection strategies (in the U.S., both issuers and merchants have adopted more advanced fraud screening technology). There are a lot of influences, but it’s pretty clear that most entities that get hit with transactional fraud losses are a) not waiting around for a panacea, and b) not depending on upstream security to reduce their exposure to fraud. (Fraud counterpoint: if you’ve got fraud prevention requirements, don’t bother with a gun in a knife fight.)

Thus if we are to ask ourselves what the PCI-DSS program (requirements plus compliance program) is set up to solve, the answer is something along the lines of “to provide a benchmark of *NOT negligent*” for individual system participants. And that might actually be an okay scope, as long as everyone’s clear what problem is being solved and that it is in the industry/community’s best interest to solve it. However, to solve problems like “fraud prevention” or “payments infrastructure security”, stronger — or at least more direct — medicine (and economic incentives) will be required.


This particular post was inspired in part by this Business Week Online article. As an industry, if we are looking to make improvements to infrastructure security or fraud management, we need to be asking the right questions. And as we seek to improve defenses and system strategy in general, it’s useful to clarify the different problem spaces of fraud & security, if only to confirm the variety of solution sets (technology, process, economics, compliance) we have to work with. 

Operating [All the Things] By the Numbers

I just finished giving a third version of a presentation that I put together on lessons Infosec/Risk/Platform owners can learn from classic Operations Research/Management Science work. The talk (“Operating * By the Numbers”) was shared in Reykjavik (Nordic Security Conference), Seattle (SIRACon 2013), and in Silicon Valley (BayThreat). Thanks to everyone who attended, especially those of you who asked questions and provided feedback.

A few folks have asked for reading lists. Some asked for the quick run-through sample from my bookshelf, others wanted some further reading. Here’s the quick run-through:


And I also want to give another shout-out to Combat Modeling, by Alan Washburn and Moshe Kress, of the Naval Postgraduate School. It’s a pricey text, but take a look at the table of contents & the topics they cover. Really interesting work for control system designers to consider.

Also, I haven’t read these personally but they are on my “to read” list as they came recommended by fellow quant/risk nerds:

And here’s a link to one of my blog posts (Quant Ops), which includes a few references and some thinking on the topic from a different angle.

Big-SIEM Learning Machinations

Sorting algorithm

You’re all just nuts. Well. Ok. Some of you are legumes.

I was distracted earlier this week by a thread on the SIRA mailing list. I found myself reacting to a comment that suggested quantitative risk mgmt is maybe “just” plain ol’ SIEMs plus some stats/machine learning. That ended up being a bit of a hot button for a few folks on the list, and a very interesting discussion got going about data architecture options versus how common security-industry-tuned tools work – worth a whole dedicated discussion of its own. In any case it put me into a contemplative mood about SIEMs, since I am of two minds about them depending on what environment I’m working in: it’s the “any port in a storm” vs “when you have a hammer everything looks like a nail” thing. But regarding SIEM vs databases, or anomaly detection vs ML, or whatever:

  • While acknowledging that apples and pears are both fruit, some people prefer to cut their fruit (agnostic to apple-ness or pear-ness) with very sharp ceramic knives vs, say, good ol’ paring knives, depending on the dish being prepared.
  • That said, the bowl you put fruit salad into may need to be different (waterproof, airtight, bigger) than a bowl one puts whole fruits in.
  • Also, in an even more Zen-like tangent: no matter what bowl or what fruit or what knife is being selected, if you’re making fruit salad you’re going to have to spend some time cleaning the fruit before cutting and mixing it. If the bowl the whole fruits were in is especially dirty, or say, a crate – or a rusty bucket – you may want to spend more time cleaning.

I was going for something Zen.

But I’m not very Zen, I’m pedantic, so here’s some explanation of the analogy:

Apples & Pears are both fruit

  • System logs are data that is usually stored in logfiles. Security devices generate system logs, and so do other devices. Errors are often logged, or system usage/capacity. Servers, clients, applications, routers, switches, firewalls, anti-virus systems — all kinds of systems generate logs.
  • Financial records, human resource records,  customer relationship management records are data that are usually stored in databases. Some may be generic databases, others may be built specifically for the application in question.
  • There are also data types that are kind of a cross between the two, for example – a large consumer facing website may have account data. You are a customer, you can login and see information associated with your account – if it’s an email service, previous emails. If it’s an e-commerce site, maybe you can see previous transactions. You can check to make sure your alma mater or favorite funny kitten gif is listed correctly on your account profile. It’s not system logs, and it’s not internal corporate records – it’s data that’s part of the service/application. This type of data is usually stored in a database, though there might be metadata associated with the activity stored in logs.
    • In another mood, I might delve further into this criss-cross category, which often results in a “you’ve got your chocolate in my peanut butter…you’ve got your peanut butter in my chocolate” level of fisticuffs.
  • But, it’s all DATA.

People have different tool preferences when it comes to cutting fruit

Some capabilities of data-related tools:

  • Storage
  • Viewing
  • Searching
  • Filtering
  • Alerting
  • Comparing across tables
  • Pattern analysis / visualization
  • Frequency analysis
  • Simple mathematical operations (addition, subtraction, ranking)
  • More advanced mathematical operations (exponential functions, regressions, statistical tests, quantile analysis)
  • Sentiment analysis or text/string mining
  • Blah blah etcer-blah

Basic capabilities tend to be common, or directly comparable, across tools. For example, here’s an article that compares some of the commands that can be used in a traditional SQL database to similar functions in Splunk, a popular SIEM.
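To give the flavor of that comparison, here’s a minimal sketch – the table, fields, and the Splunk search in the comment are invented for the example, and the SPL is approximate rather than authoritative:

```python
# The same "top sources by denied events" question asked of a SQLite
# table, with an approximate Splunk SPL equivalent in the comment.
# Table name and fields are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (src TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("10.0.0.1", "deny"), ("10.0.0.1", "deny"), ("10.0.0.2", "allow")],
)

rows = conn.execute(
    "SELECT src, COUNT(*) AS total FROM events "
    "WHERE action = 'deny' GROUP BY src ORDER BY total DESC"
).fetchall()

# Approximate SPL for the same question, assuming the same events are
# indexed in Splunk:
#   index=events action=deny | stats count as total by src | sort -total
print(rows)  # [('10.0.0.1', 2)]
```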

The point is, while many tools have many of the desired features, there may be tradeoffs. A product might make it really easy to conduct filtering (via an awesome GUI and pseudocode) and still have limitations when it comes to extracting a set of events, across multiple tables, that meets ad hoc-developed but still quite technically specific criteria. Or, a tool might excel in rapid access to recent records, but crash if there’s a long-term historical trend to analyze. Or, it can be a gem if you’re trying to do some statistical analysis of phenomena, but too resource-intensive to be used in a production environment.

People have different use cases for cutting fruit

  • In some cases data is kept only to diagnose and resolve a problem later
  • In some cases data is kept in order to satisfy retention requirements in case someone else wants to diagnose/confirm an event later
  • In some cases data is kept because we’re trying to populate a historic baseline so that in the future we have something against which to compare current data (a minimal sketch of this use case follows this list)
  • In some cases data is kept so that we can analyze it and predict future activity/behavior/usage
  • In some cases data is kept because it is part of the service / product being supported
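For the “historic baseline” case, a minimal sketch (invented numbers, arbitrary cutoff) might look like:

```python
# Minimal baseline-comparison sketch: keep a window of a daily count
# and flag a day that sits far outside it. The data and the z-score
# cutoff are invented for illustration.
from statistics import mean, stdev

baseline = [120, 130, 125, 118, 140, 122, 128]  # e.g., daily failed logins
today = 310

mu, sigma = mean(baseline), stdev(baseline)
z = (today - mu) / sigma
if abs(z) > 3:  # a common, but arbitrary, cutoff
    print(f"anomaly: today={today}, baseline mean={mu:.0f}, z={z:.1f}")
```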

Ops is different from Marketing. Statisticians are not often the same people doing system maintenance on a network. Etc.

Hadoop architecture

About bowls

The container for your data only matters if the container has special properties that facilitate the tools you’re going to apply, your use case for storing the data, or your use cases for processing/manipulating the data. A big use case in the era of always-on web-based services is special containers designed to allow for rapid manipulation and recall of Very Large amounts of data.

  • SIEM architecture – “SIEM” is a product category rather than a description of an architecture; different products may have different architectures – here are a few examples (e.g., the ArcSight architecture diagram). Typically a SIEM accepts feeds from devices generating logs, and then has functions to consolidate, sort, search, and filter. Here’s how Splunk describes itself:

    Splunk is a distributed, non-relational, semi-structured database with an implicit time dimension. Splunk is not a database in the normative sense …but there are analogs to many of the concepts in the database world.

Which architecture is the best is a silly question; they are architected differently on purpose. Pick a favorite if you must, but if you work with data, be prepared: you’ll not often find yourself in homogeneous environments.

About working with data

No matter where your data is sourced, if you want to do something snazzy like use it to train a neural net, or do a fun outlier analysis, then you’re going to have to spend a great deal of time prepping your data, including cleaning it. While many database architectures claim to make this process easier (I’ve yet to meet an analyst who’s ever described this part of analysis as fun or easy), what’s definitely true is that some data storage formats/practices make it harder.

  • If your data is unstructured – like you might find in key-value pair or document stores – you might have significant work to get it into a more structured format, depending on what research methods you are going to use to conduct your analysis (see the sketch after this list).
  • Even with relatively structured data, you might find that the formatting works for one purpose, but needs further simplification when you get to the analysis stage.
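As a small sketch of what that structuring work can look like – record layout and field names invented for the example:

```python
# Flattening semi-structured (JSON-ish, key-value) log records into
# uniform rows before analysis: fill defaults, normalize types.
import json

raw = [
    '{"ts": "2013-06-01T12:00:00", "user": "alice", "bytes": "1024"}',
    '{"ts": "2013-06-01T12:00:05", "user": "bob"}',  # missing a field
]

rows = []
for line in raw:
    rec = json.loads(line)
    rows.append({
        "ts": rec.get("ts"),
        "user": rec.get("user", "unknown"),
        "bytes": int(rec.get("bytes", 0)),  # normalize type, default 0
    })

print(rows)
```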

The cooler things we might discover require working with more complex (i.e. less structured) data, which is why advances in the manipulation of less structured data, and algorithms that are forgiving of different types of complexity, are fun. Sometimes it’s the analytic technique that’s new, sometimes it’s the technology for applying it, but often the “coolness”, or at least the nerdy enthusiasm, comes from applying existing techniques & tech to a new data source, OUR data source, to answer OUR question – in a way that hasn’t quite been done before. That’s kind of how research is.

And so

Stop worrying so much about your bowls. Unless the lid is on so tight that you can’t get your fruit salad out.


Risk: Models, Frameworks, Diagrams, & other Unicorn-lair maps

Risk modeling, while it sounds specific, is actually super-contextual. I think my own perspective on the topic (the different types of modeling, what they are good for) was best summed up in a paper/presentation combo I worked on with Alex Hutton for Black Hat & SOURCE Barcelona in 2010. The video from Barcelona is probably the best reference if you want to look that up (yes, lazy blogger is lazy), but let me summarize the (from my perspective) three general purposes of risk models:

  • Design: Aligned most with system theory. These models try to summarize the inputs (threats, vulns, motives, protections) and the outputs (generally loss, and in some cases “gains”) of a system, based on some understanding of mechanisms in the system that will allow or impede inputs as catalysts/diffusers of outputs. Generally I would lump attack tree modeling and threat modeling into this family – just a different perspective on a system, whether a network architecture or the design of a protocol, software, or network stack. Outside of risk/security, a general “business model” is the equivalent, which attempts to clarify the scope, size, cost, and expected performance of the project.
  • Management: Aligned most with the security/risk metrics movement, and (to some extent) with “GRC”-type work, management-focused risk models are set up to measure and estimate performance, i.e. to answer questions like “how well are controls mitigating risk” or “to how much risk are we exposed”. One could think of the output of the design phase as a view of what types of outcomes to expect; the management phase then provides a view of what outcomes are actually being generated by a system/organization. Outside of risk/security, a good example of a management model is the adoption of annual/quarterly/ongoing quality goals, and regular review of performance against targets.
  • Operations: Operational models are a different beast. And my favorite. Operational models aren’t trying to describe a system; they are embedded into the system, and they influence the activities taking place in the system, often in real-time. I suppose any set of heuristics could be included in this definition, including ACLs. I prefer to focus on models that take multiple variables into consideration – not necessarily complex variables – and generate scores or vectors of scores. Why? Because generally the quality of the decision (model fit, accuracy, performance, cost/benefit trade-off) will be more optimized, i.e. better (a toy sketch follows this list). Outside of risk/security, a good example is dynamic traffic routing used in intelligent transport systems.
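A toy version of that operational-model idea, with invented weights and thresholds (a real model would be fit to data, not hand-tuned like this):

```python
# Toy operational risk score: combine a few simple variables into a
# score and act on it inline. Weights and thresholds are invented.

def risk_score(txn):
    score = 0.0
    score += 2.0 if txn["new_account"] else 0.0
    score += 1.5 if txn["geo_mismatch"] else 0.0
    score += txn["amount"] / 500.0  # larger amounts push the score up
    return score

def decide(txn):
    s = risk_score(txn)
    if s >= 4.0:
        return "deny"
    if s >= 2.0:
        return "review"
    return "allow"

print(decide({"new_account": True, "geo_mismatch": True, "amount": 600}))  # deny
```

The point is the placement: the model runs inline, on every transaction, rather than in a quarterly report.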

“Framework” is another term that I’ve heard used in a number of different ways, but it seems to really be an explanation of a selected approach to modeling, plus some bits on process – how models and processes will be applied in an ongoing way to administer the system. Even Wikipedia shies away from an over-arching definition; the closest we get is “conceptual framework“, described as an outline of possible courses of action or a preferred approach to an idea or thought. They suggest we also look at the definition of scaffolding: “a structure used as a guide to build something” – (yes, thank you, I want us to start discussing risk scaffolding when we review architecture, pls)


QuantOps

Recently, I was interviewed for the ActiveState blog on DevOps & Platform as a Service (PaaS); that interview made it to Wired.com (here). A discussion on the topic was timely, as I’ve been thinking about DevOps and other agile delivery chain mechanisms quite a bit lately, mainly as I am applying them in my current gig, which my colleagues and I describe as “Business Ops”. Next month at Nordic Security 2013 I’ll be presenting “Operating * By the Numbers” (If you’re wondering why there’s no abstract, it’s because I’m still perfecting “Just In Time” deck development…just kidding. Sort of.*)

Anyway, I thought it might be a good idea to explain What I’m Talking About When I Talk About DevOps (apologies to the incomparable Haruki Murakami). This will be my first time trying to explain where I’m going with this whole DevOps thing, so it might get fuzzy. Bear with me. I reserve the right to change my mind later, of course (I’m cognitively agile that way, haha), so if you have comments or criticisms I’m very open to hearing your thoughts.

Connection between DevOps & Risk

DevOps, if you’ve not heard of it before, is a concept/approach to managing large-scale software deployments. It seems to be most popular/effective at software-based or online services, and it is “big” at highly scaled out companies like Google, Etsy, and Netflix. Whether consumer-facing or B2B, these services need to be fast and highly-reliable/available. The DevOps movement is one where deployments and maintenance are simplified (simplicity is easier to maintain than complexity) through standardization and automation, lots of instrumentation & monitoring, and an integration of process across teams (most specifically, Dev, QA & Ops). More on “QA” later.
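Since “instrumentation & monitoring” is doing a lot of work in that sentence, here’s a tiny sketch of the idea – the metric name and the emit function are stand-ins for whatever monitoring stack is actually in place, not any particular product’s API:

```python
# Tiny illustration of instrumented operations: wrap routine steps so
# every run emits a timing metric. emit_metric is a stand-in, printing
# instead of sending to a real metrics system.
import functools
import time

def instrumented(name):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            try:
                return fn(*args, **kwargs)
            finally:
                emit_metric(f"{name}.duration_ms", (time.time() - start) * 1000)
        return inner
    return wrap

def emit_metric(name, value):
    # Stand-in for a real metrics pipeline.
    print(f"metric {name}={value:.1f}")

@instrumented("deploy.push_config")
def push_config():
    time.sleep(0.01)  # placeholder for real deployment work

push_config()
```

The design choice DevOps pushes is that this kind of measurement is built into every routine step, rather than bolted on afterwards.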

But…the thing about DevOps is that, while it is a new concept in the world of online services, it draws heavily from Operations Management, which is not new. The field of Operations Research was forged in manufacturing, but the core concepts are easily applied across other product development cycles. In fact this extension is largely overdue, since a scan through semi-recent texts on operations management shows IT largely described as an enabling function (e.g. ERP) but not a product class in and of itself. (BTW, in some curriculums, Operations Management is cross-listed or referred to as Decision Science, which is a core component of risk/security analytics.)


Read on, readers

Last week I stopped into SOURCE Dublin to give a follow-up to my recent talk in Boston, another foray into game theory (Games We Play: Payoffs & Chaos Monkeys) — this time w/some more advanced mathiness and references back into behavioral economics. Anyway, I still owe some explanatory blog posts to support some of the materials I had to rush through (to get everything into 45 minutes), but the first thing I wanted to share is my working reading list. I’m finishing up some other books, which I’ll post later, but this is a good overview and will get folks interested in the topics headed in the right direction.

shall we play a game?

Over the last year I’ve started reviewing game theory in more depth, looking for some models I can use to understand system management (vis a vis risk) better. Game theory is one of the more interesting branches of economics for me, but I don’t actually have a great intuition for it yet (I really have to work at absorbing the material). Since it doesn’t come super-naturally to me, I’m particularly proud of the presentation I gave at SOURCE Boston last year: Games We Play: Defenses and Disincentives (description here). Luckily, there is a good video of the presentation, because when I wanted to expand out the presentation a few months later, my notes were totally undecipherable. 🙂
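For anyone new to the formalism, here’s a tiny 2x2 defender/attacker game of the general sort such talks work through – the payoffs are invented, purely to show how best responses get enumerated:

```python
# A minimal 2x2 defender/attacker game with invented payoffs. Each
# cell is (defender_payoff, attacker_payoff); we enumerate the
# attacker's best response to each defender strategy.
payoffs = {
    ("patch",  "exploit"): (-1, -2),
    ("patch",  "skip"):    ( 0,  0),
    ("ignore", "exploit"): (-5,  3),
    ("ignore", "skip"):    ( 0,  0),
}

for defender in ("patch", "ignore"):
    best = max(("exploit", "skip"),
               key=lambda a: payoffs[(defender, a)][1])
    print(f"if defender plays {defender!r}, attacker's best response: {best}")
```

Even in this toy, the structure does the interesting work: if defending (“patch”) makes exploiting unprofitable, the attacker’s best response flips to “skip” – the deterrence story in miniature.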

BruCon 2012 -- A Million Mousetraps: Using Big Data and Little Loops to Build Better Defenses

Yes, that is a Pringles can sharing the podium with me. Photo credit (and Pringles credit) go to @attritionorg.

Since I am still a proponent of applied risk analytics (as in my talk at Brucon this year: A Million Mousetraps: Using Big Data and Little Loops to Build Better Defenses (description here), I’ll never be able to escape behaviorally-driven defenses, but even with the power of big data behind us it feels like we defenders often find ourselves playing the wrong game. I don’t disagree the deck might be stacked against us, but we might be able to at least take control of the game board a little better.

Essentially — I am interested in how we might be able to adjust incentives in order to improve risk reduction, whether from a fraud, security, or general operational dynamics perspective. Fraud reduction typically considers incentives and system design rather vaguely (not in a systematic way, except maybe in the case of authentication), and instead relies almost exclusively on behavioralist approaches (as typified by the complex predictive models launched to look for patterns in real time). I have been wondering for a while if we can “change the game” and get improved results.
