This past fall we hosted a webinar with Dan Geer, CISO at In-Q-Tel and renowned security thought leader. Dan discussed what he considers the top five misconceptions preventing effective data protection and gave advice on how security teams can redirect their resources toward a data-centric security program. Following the webinar we held a Q&A with Dan, and we wanted to share some of his answers that stood out.
You discussed the need for what you call counterparty security. How can a company get its partners and suppliers to adopt its security policies and programs, and what happens if they aren’t willing to?
Well, this raises the question: how much dependence on those partners do you want? If someone says, “I’m not going to participate, take it or leave it,” you have to ask, do you really want them as a supplier?
Several of the larger banks in New York, or at least those where I am familiar with the people who run their data protection regimes, are bearing down pretty hard on their counterparties. Whether that is clearing operations, brokers who trade through them and have access to their desk, or access to trade data and under what circumstances, they’re bearing down hard on all of it. What they are saying is, “We are not going to allow you to use our systems unless you do the following kinds of things.” This goes to source code control and questions of source code veracity; this goes to aspects of the very kinds of data surveillance we’re talking about here. I think that if someone says, “We’re not going to play,” it’s at least time for you to start looking for someone to replace them. That may be harsh, but what’s the alternative? I say that literally: what is the alternative? If someone has access to your data and you can’t keep them from misusing it because they won’t let you see what they’re doing, is that not something you want to escalate to the CIO, if not above?
These are not random preferences, like what colors you have on the walls or how much glass you have in your building. These are matters of life and death, and you have to convince people of that. I think you have to show them what you do. There’s nothing quite like saying, “Well, we do it; here’s what we do. By the way, our other suppliers are doing it; here’s what they do. What’s your problem?” If they still refuse, it might be time to look for somebody else.
Do you see an industry-level common fabric emerging within which data supply chain participants can operate to detect and protect against information security events?
Well, I think we’re well short of an industry-wide consensus, but I’d like to think it will happen. It might happen separately under different regulatory regimes (e.g., the U.S. vs. Germany vs. Japan). There might be a consensus among organizations within each regulatory regime; of course, those of you who have to make peace with every regulatory regime have a different problem.
I think there is some sense that we could arrive at such a consensus. I’m going to speculate here, which is always dangerous, that it might be driven by the needs of insurance, that is, by the insurers themselves. Cyber insurance, of course, has been a mess for quite some time, partly because we don’t have good actuarial data and it’s hard to get. You can predict my life expectancy to four-digit accuracy (or whatever it is) based on three or four aspects of who I am and price a policy accordingly. We have not had anything like that in the cyber realm. I think there are going to be demands for it, because the rate at which cyber events occur does not seem to be slackening, and the reporting of them in the public press is certainly not slackening. Hence the question of what to do about it.
Consider California S.B. 1386, the data breach law with which we’re all familiar. If you define a state of security as the absence of surprises that you can’t mitigate, California S.B. 1386 actually created a state of security, because it said, “If you lose people’s data, here’s what you do,” and “If you do that, that is the right thing and that is what’s required.” That idea, that failure is inevitable and this is how you process it, does create enough of an actuarial trail that you could get plausible cyber insurance.
At the same time, the insurance companies are looking up and down, left and right, asking, “How do we price this stuff?” At the moment, the risks they’re taking probably far exceed the revenue they’re getting from their premiums. In other words, at the current level of knowledge, they’re probably underpricing. On the other hand, if it’s that current level of knowledge you want to attack, by settling what constitutes a reporting regime (S.B. 1386 or you name it) and what the forensic readiness of the firm should be if there is a bad event, then you might be talking about something that insurance could drive. That’s only one possibility.
I would be reluctant to recommend that government drive this, because government can’t keep up and never will. It shouldn’t be expected to, and those who expect it to need to get a life. The regulatory part has to be goal-based as opposed to methodology-based, but passing off your risks to an insurer might be a place where a methodology basis could have a role. I think we have an opportunity at this time, if for no other reason than that we’re embarking on two rather large things, the electronic health record revolution and the smart grid revolution, both of which bring a level of interdependence that is at the same time surveillable in ways that might let us approach the best answer to this question.
It’s hard enough to get data on cybersecurity incidents that have succeeded, but will obtaining data on near misses ever become a reality? Most criminals fail first, and this could be a decent early-warning indicator.
Oh, I love this question. I recently gave a policy talk in which I asked for a two-level reporting regime. One level is much like the CDC’s and state-level authorities’ requirements that certain diseases be reported. For instance, if I show up at the hospital with smallpox, or the plague, or anthrax, my medical privacy is put aside in the interest of public health. In the same way, there are certain events that must be reported, and I think that is a policy decision we have to make, not just for California S.B. 1386 but at the national level.
At the same time, there is a well-established regime for airlines to share near-miss data with each other, not with the government but rather with a consortium run by MITRE, and it has proven very useful. First, near misses are more frequent than actual accidents. Second, they require a greater degree of analysis to understand exactly what happened. Third, if I discover that somebody else is having the same problems that I do, then necessarily I will want to talk to them about what they’re doing about it, what they’ve seen, and so on. In fact, quite a few security operations do exactly that: if I see something bad, has anybody else seen it?
I’ll channel Dr. Seuss here: if I ran the zoo, I would have a still-to-be-negotiated threshold above which mandatory reporting is required and below which some sort of consortium for near misses is available. I think you’re on to something, and I think this is a great idea.