Is there a policy answer to Heartbleed?

Monika Ermert, Heise, Intellectual Property Watch, VDI-Nachrichten, Germany

PUBLISHED ON: 25 Apr 2014

Rarely has a computer security vulnerability made such big headlines as ‘Heartbleed’, and rarely have there been so many calls to change potentially compromised user passwords. The prevalence and seriousness of the ‘bug’ in OpenSSL implementations all over the net were major reasons for that, as was a fairly straightforward communication strategy. With operators worldwide still patching their servers and ordinary users changing their Gmail, Yahoo Mail, Facebook and GoDaddy passwords - while learning quite a bit about secure sockets layer/transport layer security (SSL/TLS) and open source software development - the questions still to be answered are: how can such failures be prevented in the future, and are there any policy answers?

Technically, Heartbleed stems from a 2012 version of the OpenSSL encryption library, and in particular from one of its features, called Heartbeat (RFC 6520). Heartbeat, written by German OpenSSL programmer Robin Seggelmann, allows connections between servers to be kept open over time by exchanging ‘heartbeats’. Yet alongside the pure beat, other information was also leaked from deeper in the server’s memory, including even private keys. Connections thought to be encrypted and secure could therefore be tapped by an attacker.
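To illustrate the flaw, the sketch below (simplified, with hypothetical names - not the actual OpenSSL source) shows the core of the problem: the server builds its heartbeat response from a payload length claimed by the peer, without checking that claim against the data it actually received.

```c
/* Simplified sketch of the flawed logic (hypothetical names, not the
 * real OpenSSL code). A heartbeat record carries a one-byte type, a
 * two-byte claimed payload length, the payload, and padding. */
#include <stdlib.h>
#include <string.h>

unsigned char *build_heartbeat_response(const unsigned char *record,
                                        size_t record_len)
{
    /* record[0] is the message type; bytes 1-2 carry the peer's
     * claimed payload length. */
    unsigned int payload_len = (record[1] << 8) | record[2];
    const unsigned char *payload = record + 3;

    unsigned char *response = malloc(payload_len);
    if (response == NULL)
        return NULL;

    /* BUG: payload_len is never compared with record_len, so a claim
     * of up to 64 KB reads past the real payload into adjacent heap
     * memory and echoes those bytes back to the sender. */
    memcpy(response, payload, payload_len);
    return response;
}
```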

It's human, silly

The technical community quickly established that it was possible to extract even private encryption keys this way, resulting in the worst case scenario: people thought they had a secure channel for their communication when they had not. The technical problem was explained nicely and quickly to the public, and long lists of affected services (from mail providers to the Canada Revenue Agency and Apple’s AirPort Extreme hardware) were provided, as were recommendations on what end users should do.

The discussion about how to go forward, once the last server has (hopefully) been fixed, has only just started. Crypto guru Bruce Schneier pointed it out clearly: “This may be a massive computer vulnerability, but all of the interesting aspects of it are human.” Issues that had to be considered were, to name some, “the auditing of open-source code, how the responsible disclosure process worked in this case, the ease with which anyone could weaponize this with just a few lines of script, (…) and our certificate issuance and revocation process.”

Funding software audits and testing

The fact that the vulnerability was part of an open-source and free library certainly helped with regard to the prevalence of the bug. Open source tools are hailed as vendor-independent and more transparent than proprietary code, and they come cheaper.

Afnic1 scientist Stéphane Bortzmeyer, who tweeted about the bug early, told us that the general rule still is that “free software can improve security, by allowing many independent reviews.” Yet at the same time, the important word in this rule is ‘can’. “Many free software were never checked or audited” and, on top of that, the OpenSSL bug was ‘only’ two years old, he added.

To avoid bugs, serious audits and testing by capable programmers are a must, and the latter are a scarce resource, Bortzmeyer said, “especially if you don't pay them and the majority of the openSSL team is not payed to work on OpenSSL. They have day jobs.” The OpenSSL Foundation used the Heartbleed failure to ask for donations to shore up its work. “There should be at least a half dozen full time OpenSSL team members,” Steve Marquess, the “financial guy” of the OpenSSL Foundation, wrote, “not just one”.

Critical software – critical infrastructure

With a bigger team, the code provided by Seggelmann could have been vetted much better – and perhaps the bug would have been found. Marquess also appealed to ‘big users’ to consider donations: “The ones who should be contributing real resources are the commercial companies and governments who use OpenSSL extensively and take it for granted.”

The call was echoed by quite a number of security experts, including Dan Kaminsky, who had found a vulnerability in the design of the Domain Name System a few years ago. The community, Kaminsky wrote, needed to “start getting serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code.”

Bortzmeyer also had another appeal with regard to what governments could do: in the future, proper security testing should come with “no legal hassles such as the prohibition of reverse-engineering in the US”. Bortzmeyer’s plea: please get rid of anti-hacker articles like the one in the French criminal code, “which forbids even the simple possession of hacking tools.” Anti-hacking laws, by the way, are not a French or US speciality.

National cyber security authorities – where have they been?

Calls for government funding of cyber security might result in the classical pointers to growing budgets for national cyber security agencies or national Computer Emergency Response Teams (CERTs). Yet many of these reacted only slowly and rather meekly to the Heartbleed news – let alone having been involved beforehand in checking the ‘critical software’ OpenSSL for potential vulnerabilities.

A week after the news broke, the European Network and Information Security Agency (ENISA) made an attempt to analyse the root causes of the problem, listing: “a programming mistake by lack of validation of input submitted by users, in addition the use of insecure memory management routines, which is a problem caused by the development environment and a failure in the quality assurance process.”

The first recommendation following Heartbleed from ENISA therefore was “to use secure development methodologies and increase the awareness of developers about programming mistakes and how to avoid them.”
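As a minimal sketch of the kind of input validation ENISA is pointing to - and which the eventual fix essentially boiled down to - the check below (hypothetical names, not the actual patch) compares the claimed payload length with the length of the record that was really received, and discards the message if the claim does not fit.

```c
/* Hypothetical validation helper (not the actual OpenSSL patch):
 * reject a heartbeat whose claimed payload, together with the header
 * and mandatory padding, exceeds the bytes actually received. */
#include <stddef.h>

int heartbeat_length_is_valid(size_t claimed_payload_len,
                              size_t record_len,
                              size_t padding_len)
{
    /* 3 bytes of header (type + length field) + payload + padding
     * must all fit inside the received record. */
    if (record_len < 3 + claimed_payload_len + padding_len)
        return 0;   /* inconsistent claim: silently discard */
    return 1;
}
```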

Integrating the development, standardisation and implementation communities

Besides the call to rethink the funding of open-source software, there was also an interesting policy answer from former Internet Architecture Board (IAB) member Hannes Tschofenig, now a standardisation expert at ARM (Advanced RISC Machines Ltd.). Tschofenig, discussing the failure to realise the impact of information leakage through governments’ mass surveillance programmes, wrote that the cooperation between standardisation bodies, implementers and operators was not good enough.

While outsiders would assume there is a strong link, in fact the deployment of new security protocols or updates often takes a long time. “When a security vulnerability has been discovered it may take years before many products and services include the updated code. As an example, today many TLS deployments use insecure cryptographic algorithms, like RC4.” RC4 has been considered an insecure stream cipher – and possibly broken by the National Security Agency – for years.
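On the operator side, dropping such algorithms can be a one-line configuration change. The snippet below is a sketch, assuming an OpenSSL-based server; it uses the standard SSL_CTX_set_cipher_list call to exclude RC4, with the surrounding context setup and error handling omitted.

```c
/* Sketch: restrict an OpenSSL server context to strong cipher suites
 * and explicitly exclude RC4 (and other weak options). */
#include <openssl/ssl.h>

int configure_ciphers(SSL_CTX *ctx)
{
    /* "HIGH" keeps only strong suites; "!RC4", "!aNULL" and "!MD5"
     * remove RC4, unauthenticated and MD5-based suites.
     * Returns 1 on success, 0 if no cipher could be selected. */
    return SSL_CTX_set_cipher_list(ctx, "HIGH:!RC4:!aNULL:!MD5");
}
```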

Tschofenig therefore calls on the technical community to consider a more general problem: nobody feels responsible for the entire chain from development through implementation to operation. He recommended bringing the different communities – developers and implementers – much closer together.

Footnotes

1. Afnic is a non-profit association and the incumbent manager of the .fr top-level domain (TLD). It also operates several other top-level domains corresponding to the national territory of France.
