Thank you, internets, for all the feedback I’ve gotten on BoomTime: Risk As Economics. Of course my slides are nigh indecipherable without my voiceover, and my notes didn’t make it to the slideshare, so here are some notes to fill in (some) of the blanks until the video hits YouTube (SiRA members will get early access to SiRAcon15 videos via the SiRA Discourse forum, BTW). (You will want to look at the notes and the slides side by side, probably, as one doesn’t make sense w/o the other.)
A list of typical themes one hears when discussing information security and economics: within businesses we are asked to talk about exposures and threats in terms of financial impact, or to consider the financial drivers. Information asymmetry (the Market for Lemons) is a big theme in information economics and in software markets generally: when information about the quality of a product is hard to find, that lack of transparency drives prices down, and there is less incentive to improve quality. (Ask me about market signals as a mechanism for correcting information asymmetries.) "Make it more expensive for the attacker" (or "don't outrun the bear, outrun the guy next to you") is another idea that gets raised. Game theory, approaches to quantifying "risk" (exposure, tolerance), and markets for exploits and vulns are hot topics at the moment, as is behavioral economics and everything related to incentive design: gamification is perhaps the most buzzwordy example, but framing as a way to improve consumers' ability to make good choices about privacy preferences has also come up lately in security economics research. Anyway, these are some themes that tend to be repeated in recent research literature.
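To make the lemons dynamic concrete, here's a toy simulation (my own illustration, not anything from the talk or the slides): buyers can't observe quality, so they will only pay the average quality currently on the market, and sellers whose products are worth more than that price withdraw.

```python
# Toy "market for lemons" simulation: buyers cannot observe quality, so they
# will only pay the average quality on the market. Sellers whose product is
# worth more than that price exit, dragging the average (and the price) down.
import random

random.seed(1)
sellers = [random.uniform(0, 100) for _ in range(1000)]  # true product quality/value

for round_num in range(1, 8):
    price = sum(sellers) / len(sellers)           # buyers offer the expected quality
    sellers = [q for q in sellers if q <= price]  # higher-quality sellers withdraw
    print(f"round {round_num}: price={price:5.1f}, sellers remaining={len(sellers)}")
```

Each round the better products leave and the price falls toward the bottom of the quality range; warranties, certifications, and other market signals are how sellers of good products break that spiral.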
I just finished giving a third version of a presentation I put together on lessons Infosec/Risk/Platform owners can learn from classic Operations Research/Management Science work. The talk (“Operating * By the Numbers”) was shared in Reykjavik (Nordic Security Conference), Seattle (SIRACon 2013), and in Silicon Valley (BayThreat). Thanks to everyone who attended, especially those of you who asked questions and provided feedback.
A few folks have asked for reading lists. Some asked for the quick run-through sample from my bookshelf, others want some further reading. Here’s the quick run through:
And I also want to give another shout-out to Combat Modeling, by Alan Washburn and Moshe Kress, of the Naval Postgraduate School. It’s a pricey text, but take a look at the table of contents & the topics they cover. Really interesting work to consider for control system designers.
Also, I haven’t read these personally but they are on my “to read” list as they came recommended by fellow quant/risk nerds:
Over the last year I’ve started reviewing game theory in more depth, looking for some models I can use to better understand system management (vis-à-vis risk). Game theory is one of the more interesting branches of economics for me, but I don’t have a great intuition for it yet (I really have to work at absorbing the material). Since it doesn’t come super-naturally to me, I’m particularly proud of the presentation I gave at SOURCE Boston last year: Games We Play: Defenses and Disincentives (description here). Luckily, there is a good video of the talk, because when I wanted to expand it a few months later, my notes were totally undecipherable. 🙂
Yes, that is a Pringles can sharing the podium with me. Photo credit (and Pringles credit) go to @attritionorg.
Since I am still a proponent of applied risk analytics (as in my talk at Brucon this year: A Million Mousetraps: Using Big Data and Little Loops to Build Better Defenses, description here), I’ll never be able to escape behaviorally-driven defenses. But even with the power of big data behind us, it feels like we defenders often find ourselves playing the wrong game. I don’t disagree that the deck might be stacked against us, but we might be able to at least take control of the game board a little better.
Essentially, I am interested in how we might adjust incentives in order to improve risk reduction, whether from a fraud, security, or general operational-dynamics perspective. Fraud reduction typically treats incentives and system design rather vaguely (not in a systematic way, except maybe in the case of authentication), and instead relies almost exclusively on behavioralist approaches (as typified by the complex predictive models deployed to look for patterns in real time). I have been wondering for a while whether we can “change the game” and get improved results.
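Here's a minimal sketch of what I mean by changing the attacker's incentives (the numbers are mine and purely illustrative, nothing from a real system): a rational attacker attacks when the expected payoff is positive, and the defender can flip that decision either by lowering the success rate (detection) or by raising the attacker's cost (friction, disincentives).

```python
# Toy attacker-incentive model (illustrative numbers only): the attacker
# attacks whenever the expected payoff is positive. The defender can
# "change the game" by lowering the success rate (detection) or by raising
# the attacker's cost (friction, disincentives).
def attacker_expected_payoff(p_success, gain, cost):
    return p_success * gain - cost

scenarios = {
    "baseline":            dict(p_success=0.30, gain=500.0, cost=50.0),
    "better detection":    dict(p_success=0.08, gain=500.0, cost=50.0),
    "added friction/cost": dict(p_success=0.30, gain=500.0, cost=200.0),
}

for name, s in scenarios.items():
    ev = attacker_expected_payoff(**s)
    decision = "attack" if ev > 0 else "walk away"
    print(f"{name:22s} expected payoff = {ev:7.1f} -> attacker will {decision}")
```

The behavioralist models mostly pull on the first lever; the "change the game" question is whether we are pulling on the second one nearly enough.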
So, it’s been about two years since I added anything to this blog. I’ve been busy!! The awesome folks at SOURCE gave me a speaking slot at SOURCE Boston 2010, and that kicked off a series of talks on methods consumer-facing companies/websites use to protect customers from online threats. Then later in 2010 I was able to participate in some discussions on different types of threat modeling and the situations in which modeling techniques can be useful.
In 2011 I wanted to talk about some more concrete topics, and so spent some time researching how threats/impacts can be better measured. This is an area I’d like to spend more time on, because there’s still a gap between what we can do with high-frequency/lower-impact events (which seem to be easier to instrument, measure, and predict) and lower-frequency/high-impact events (which are very difficult to instrument, measure, or predict). I think the key is that high-impact events usually represent a series or cascade of smaller failures, but there’s more research into change management and economics to be done.
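To illustrate why the two regimes feel so different to measure, here's a toy Monte Carlo comparison (the parameters are made up): two loss processes with the same expected annual loss, one frequent and small, one rare and severe.

```python
# Toy Monte Carlo sketch (made-up parameters): two loss processes with the
# same expected annual loss, one high-frequency/low-impact and one
# low-frequency/high-impact. The second is far harder to estimate from a
# handful of observed years.
import random
import statistics

random.seed(7)

def simulate_annual_losses(events_per_year, mean_impact, years=1000):
    """Approximate a Poisson arrival process with daily Bernoulli trials."""
    p_daily = events_per_year / 365.0
    annual = []
    for _ in range(years):
        n_events = sum(1 for _ in range(365) if random.random() < p_daily)
        annual.append(sum(random.expovariate(1.0 / mean_impact) for _ in range(n_events)))
    return annual

high_freq = simulate_annual_losses(events_per_year=100, mean_impact=1_000)      # many small losses
low_freq  = simulate_annual_losses(events_per_year=0.1, mean_impact=1_000_000)  # rare huge losses

for name, losses in (("high-freq/low-impact", high_freq), ("low-freq/high-impact", low_freq)):
    print(f"{name:22s} mean={statistics.mean(losses):>10,.0f}  "
          f"stdev={statistics.stdev(losses):>12,.0f}  "
          f"zero-loss years={sum(1 for x in losses if x == 0):>4d}")
```

The long-run means come out similar, but the rare/severe process spends most years at zero and then blows far past the average, which is exactly why it resists instrumentation and prediction.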
Later in 2011 I switched over to describing how analytics can be used to study and automate security event detection. I hope in the process I didn’t blind anyone with data science. (haha…where’s that cowbell?) So here’s what I did:
This post is the first in a series I will be exchanging with Ohad Samet (ok, second, he’s a much quicker blogger than I am), one of my esteemed colleagues in Paypal Risk and the mastermind behind the Fraud Backstage blog. Read Ohad’s article here.
Despite best efforts to protect systems and assets using a defense-in-depth approach, many layers of controls are defeated simply by exploiting access granted to users. Thus the industry is trying to determine not only how we protect our platforms from external threats, but also how we keep user accounts from being attacked. With user credentials being the “keys” (haha) guarding valuable access to both user accounts and our platforms, a popular topic among the security-minded these days centers on alternatives to standard authentication methods. Typically, the discussion is not about how an enterprise secures its own assets and users, but about arming consumers who come and go across ISPs, search sites, online banking, and social networks, and who are vulnerable to identity theft and privacy invasions wherever they roam.
How many information security professionals does it take to keep a secret?
While there are a number of alternatives out there, focusing on authentication as if it’s a silver bullet misses the point. When we assume that keeping our users secure means protecting (only, or above all other things) the shared secret between us, it leaves us over-reliant on simple access control (the fortress mentality) when as an industry we already know that coordinating layers of protection working together is a more effective model for managing risk. To clarify our exposure to this single point of failure, let’s consider:
1) How much exposed (public, or near-public) data is needed to carry out reserved (private) activities? Meaning, how much does a masquerader actually need to know, and how much of that is truly private, to approximate an identity? (A back-of-the-envelope sketch follows after these two questions.)
– and –
2) How does our risk model change if we assume all credentials have been compromised?
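One rough way to reason about question 1 (my own illustrative numbers, not a formal model): treat each quasi-public attribute as leaking some bits of identifying information, and compare the total against the roughly 33 bits it takes to single out one person among about eight billion.

```python
# Back-of-the-envelope identity entropy (illustrative cardinalities only):
# each quasi-public attribute narrows the candidate pool, and log2 of the
# attribute's cardinality approximates the bits of identifying info it leaks
# (assuming a roughly uniform distribution, which is generous but simple).
import math

POPULATION = 8_000_000_000
bits_needed = math.log2(POPULATION)   # ~33 bits to single out one person

attributes = {
    "ZIP code (~42k in the US)": 42_000,
    "date of birth (~36.5k per century)": 36_500,
    "gender": 2,
    "first name (rough bucket of ~10k)": 10_000,
}

total_bits = 0.0
for name, cardinality in attributes.items():
    bits = math.log2(cardinality)
    total_bits += bits
    print(f"{name:36s} ~{bits:4.1f} bits")

print(f"total ~{total_bits:.1f} bits vs ~{bits_needed:.1f} bits needed to uniquely identify someone")
```

A handful of “public” attributes already overshoots the 33-bit threshold, which is the uncomfortable answer to question 1: the masquerader may not need anything truly private at all.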
Shall We Play a Game…of Twenty Questions?
Really, all this nonsense began when we started teaching users to treat “items that identify us” as “items that authenticate us”. Two examples: SSNs and credit card numbers. The SSN, we know, has been used by employers, banks, and credit reporting agencies, as well as for its original purpose, to identify participation in social security (this legislation being considered in Georgia may limit use of SSN and DOB as *usernames* or *identifiers*, although it is silent on using SSN/DOB to verify/authenticate identity).
I’d never heard the term data exhaust (or digital exhaust, thank you Wikipedia), but it’s a handy idea. The proliferation of social media and the internet transformed “media” (generally a one-way push system) into “social” (the personalization and intimacy of narrowcast, or at least a many-to-many set of connections). Everyone is set on broadcast, and all broadcasts are stored in perpetuity. If you like data (and I do) and you are interested in how people act, interact, and think (and who isn’t?), the idea that all those feeds and updates and comments would go to waste is, well, heartbreaking.
The basic premise of the panel appealed to the entrepreneurial marketer in all web 2.0 hopefuls — consumers are telling you their preferences (loves, hopes, fears, feedback) faster than can be processed, and companies that analyze and parse the data the best will cash in by designing the best promotions and products for the hottest, most profitable market segments. In addition to the new faddish feeds of facebook, twitter, foursquare, and blippy, some classic sources of consumer preference were mentioned: credit card data, point-of-sale purchase (hello grocery store shopper’s club), and government-held spending data. Everyone knows there’s gold in dem dere hills — like mashing up maps and apartments for sale, or creating opt-in rewards programs so you can figure out not just the best offer for Sally Ann, but also for all of her friends.
Philosophically, the reason we’ve been stuck with an advertising model (broadcast, brand-centric messages rather than customer-centric, tailored promotions) is that we’ve never had a mechanism for scaling up business-to-consumer communication that is actually tuned to the consumer. Now, with the Web and current delivery mechanisms (inbox, feed, banner, search placement, etc.), not only can companies talk to their consumers, the consumers will engage in conversation. The better you know your customers, the better your ability to reach them (promote) and meet their needs (product). As long as you know how to interpret the conversation correctly.
“What people say has no correlation whatsoever with real life, but what they do has every correlation with real life…” Mark Breier, In-Q-Tel
Quickly the conversation moved into some of the techier topics: while there’s a promise of reward for the cleverest analysts, there are some tricky issues too. For one thing, processing exhaust data in a way that makes sense, at scale, requires some serious processing horsepower and an architecture that can accommodate “big data”. While less time was spent on the technical issues than I expected (with DJ from LinkedIn and Jeff from Cloudera on the panel, a full-on Hadoop immersion was a distinct possibility), time was spent both on the computer science research in the area (machine learning, natural language processing) and on the (more interesting to me) current thinking about the most appropriate analytic methods. More time on sentiment analysis and entity extraction would have been great, but those probably require follow-up sessions of their own.
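For anyone who hasn't bumped into the term, here is about the simplest possible lexicon-based sentiment scorer, purely to make "sentiment analysis" concrete (real systems use trained models, negation handling, entity extraction, and so on; this is only a toy with a hand-made word list):

```python
# Simplest-possible lexicon-based sentiment scoring: count positive words,
# subtract negative words. Real systems are far more sophisticated.
POSITIVE = {"love", "great", "awesome", "happy", "recommend"}
NEGATIVE = {"hate", "terrible", "awful", "broken", "refund"}

def sentiment_score(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feed = [
    "love the new checkout flow, awesome work",
    "app is broken again, i want a refund",
    "shipped on time, nothing special",
]
for post in feed:
    print(f"{sentiment_score(post):+d}  {post}")
```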
Magoulas from ORA did a nice job outlining some of the issues consumers are having with social marketing: besides basic privacy and data ownership questions (who owns the pics you’ve uploaded to FB? And how should filters be set up so consumers remain in control of information they want to keep semi-private?), there are issues emerging around security/spoofing/hacking, meaning how much public information is available that lets consumers be targeted. (Marketers can guess your age and who your friends are; more nefarious entities can try the same techniques to learn more private details.) My favorite issue described was the “creepiness factor” (which I can definitely relate to): the idea that consumers may not react happily to services that know them *too* well.
Ultimately, analytic marketing is here to stay. Since consumers create data and leave digital detritus wherever they surf, leveraging that data will continue to pay dividends as service providers nimbly design customer-centric offers and products. Still, concerns about how personal data (whether private or public) is used by unknown intermediaries could either drive withdrawal from social media (if consumers opt to lock down their profiles and share only in private circles) or, a more hopeful thought, drive further innovation in this space. Look for savvy organizations to weave opt-in, consumer-driven agent technology with the power to aggregate (and segment) the behavior and preferences of wide communities of people.