Random Acts of Architecture

Wisdom for the IT professional, focusing on chaos that is IT systems and architecture.


Theresa May vs Encryption vs Solutions

Theresa May

Theresa May’s speech in response to the recent terrorist attacks in London has, once again, mentioned cracking down on cyberspace “to prevent terrorist and extremist planning” and starving “this ideology the safe space it needs to breed.” World leaders, including Australia’s Prime Minister Malcolm Turnbull, supported her, saying US social media companies should assist by “providing access to encrypted communications.”

Cory Doctorow and others make valid points about how impractical and difficult these dictates are to implement. Politicians mistakenly assume that weakened encryption or backdoors would only be available to authorized law enforcement and underestimate how interdependent the global software industry is.

However, presenting this as a binary argument is a “sucker’s choice”. Law enforcement is likely concerned because it cannot access potential evidence it has a legal right to see. While the same laws arguably impinge on personal freedoms, is it the role of technology or technologists to police governments?

Meanwhile, modern cryptography cannot protect data and simultaneously allow law enforcement access without being weakened. Consequently, technologists lambast politicians as ignorant and motivated by populism, a charge that is not unreasonable considering Brexit and similar recent political events.

As technologists, we know what technology can and, more relevantly, cannot do. While current technology defines our short-term options, it does not limit them in the long term. The technology industry needs to use the intelligence and inventiveness it prides itself on to solve both problems.

I do not know what forms these solutions will take. However, I look to technologies like homomorphic encryption or YouTube’s automated ability to scan its nearly uncountable number of videos for copyright infringements. There is certainly challenge, profit and prestige to be found.
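To make the homomorphic encryption idea concrete, here is a minimal sketch using the multiplicative property of unpadded “textbook” RSA, with tiny, insecure parameters chosen purely for illustration. Real systems would use a proper scheme such as Paillier or a lattice-based fully homomorphic scheme; the point is only to show computation on ciphertexts without decryption.

```python
# Toy demonstration of homomorphic encryption via textbook RSA's
# multiplicative property. The parameters are deliberately tiny and
# insecure; they exist only to illustrate the idea.

n, e, d = 3233, 17, 2753   # n = 61 * 53; e * d = 1 (mod (61-1)*(53-1))

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts: the result decrypts to the product of the
# plaintexts, even though neither plaintext was ever exposed.
c_product = (c1 * c2) % n
print(decrypt(c_product))  # prints 77, i.e. 7 * 11
```

A third party could perform that multiplication on the ciphertexts alone, which is the property that makes searching or filtering encrypted data without reading it at least conceivable.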

The threat of criminal or terrorist action is not new. Mobile phones, social media and other phenomena of the digital age grant criminals and terrorists the same protections as everyone else. Dismissing proposals as coming from the ignorant does not make the underlying problems go away. If the technology industry does not solve them, politicians may soon do it for them and, as Cory Doctorow and others point out, that will be the real tragedy.

Image credit: https://www.flickr.com/photos/number10gov/32793567693

A Ticket to Security via Obscurity

A group of Australian university students recently “cracked” the “encryption” used by public transport tickets in New South Wales (NSW), Australia. The public transport ticket format’s security relied on people not knowing the format of data on the ticket’s magnetic stripe and not having access to equipment to create tickets. In other words, it relied on “security through obscurity”.

Security through obscurity is not a new concept. In 1883, Auguste Kerckhoffs argued that a system (computerised or otherwise) should remain secure even if everything about it, except the key, falls into enemy hands. This became known as Kerckhoffs’s principle. Over half a century later, Claude Shannon rephrased it as assume “the enemy knows the system”, now called Shannon’s maxim. Both are as true in computer security today, particularly in cryptography and encryption, as they were then.
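A system that follows Kerckhoffs’s principle keeps only the key secret, so publishing the data format and algorithm costs nothing. The sketch below shows one way ticket-style records could be protected with an HMAC; the field names and key are illustrative assumptions, not the actual NSW ticket format.

```python
# Sketch of Kerckhoffs's principle applied to a ticket-like record.
# The format and algorithm (HMAC-SHA256) are assumed public; the only
# secret is the issuer's key. Field names and key are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"issuer-private-key"  # the only secret in the system

def issue_ticket(fields: str) -> str:
    """Append an authentication tag to the (public-format) ticket fields."""
    tag = hmac.new(SECRET_KEY, fields.encode(), hashlib.sha256).hexdigest()
    return f"{fields}|{tag}"

def verify_ticket(ticket: str) -> bool:
    """Recompute the tag; constant-time compare to resist timing attacks."""
    fields, _, tag = ticket.rpartition("|")
    expected = hmac.new(SECRET_KEY, fields.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

ticket = issue_ticket("zone=1;expiry=2013-12-31;fare=adult")
assert verify_ticket(ticket)

# An attacker who knows the entire format and algorithm still cannot
# forge or alter a ticket without the key.
assert not verify_ticket(ticket.replace("adult", "child"))
```

Under this design, the students’ reverse engineering would have revealed the format but yielded no ability to forge tickets, which is exactly the property obscurity-based designs lack.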

Ultimately, poor computer security for the public transport ticket system is not the programmers’ or engineers’ fault. The customer and management decided the security provided was sufficient. It is similar to the Year 2000 issue. The programmers knew that two digit years were not going to work after the century changed but why should the manager responsible spend resources to fix it when he or she is unlikely to be around in the year 2000 to take credit for the work?

When people initially approach computer security, they start with cryptography and software security. If we build more secure encryption systems; if we fix every buffer overflow and every injection attack; if every object has the appropriate authentication, access control, auditing and so on, the “bad guys” will be helpless.

A lot of progress has been made in improving computer security. Organizations like OWASP and SAFECode provide software security guidance for software developers. Many vendors provide frameworks, such as Microsoft’s Security Development Lifecycle, and software patching is a regular occurrence. The US government has FIPS 140-2, a standard for hardware or software that performs cryptography, and Common Criteria is an attempt by multiple governments to provide a framework for software security assurance. A plethora of software security tools are also available with interesting and technical-sounding names like “static analysis tools” and “fuzzers”.

That said, computer security practitioners eventually realise it is a matter of economics instead – as Allan Schiffman said in 2004, “amateurs study cryptography; professionals study economics.” It is not that we do not know how to build secure systems. However, if the customer cannot differentiate the security (or quality) of two otherwise seemingly identical products, the cheaper (and likely less secure) product will win. Similarly, most software is sold “as is”, meaning the customer assumes all the risk. See Bruce Schneier’s “Schneier on Security” or “Freakonomics” by Steven D. Levitt and Stephen J. Dubner for more discussion.

Others take a different direction and focus on the sociology and behavioural patterns behind security. Although most people recognise computer security is important, the first reaction of most when confronted with a security warning is to ignore or bypass it. After all, social engineering attacks are behind many of the headline-grabbing computer security incidents of the last few years, such as Operation Aurora and Stuxnet. See Bruce Schneier’s “Liars and Outliers” for more discussion.

That said, there is little impetus to improve computer security without people willing to push the boundaries of systems and reveal flaws. Kudos and credit to the students for not only discovering the public transport ticket format and “encryption” mechanism (if true encryption was ever actually used) but also revealing it in a responsible (or “white hat”) manner, such as giving the targeted state government the opportunity to respond first. Remember that just because the students discovered the ticket format does not mean others have not already done so and used it for less than honest purposes.

Interestingly, the NSW public transport ticket system is approximately 20 years old, introduced when computers were less powerful and less accessible and the required security standards were much lower. I wonder how many systems being built today will be considered secure in 20 years’ time, when computers will presumably be exponentially more powerful and more ubiquitous and the security standard much higher.