Random Acts of Architecture

Wisdom for the IT professional, focusing on the chaos that is IT systems and architecture.

Category Archives: Information Security

Twitter will never outsource its algorithm

Elon Musk sparked controversy with his recent attempt to take over Twitter. Many support him, citing Twitter’s relatively poor revenue and Musk’s record of turning seemingly unprofitable ventures, like electric vehicles and space exploration, into successes.

However, his recent Twitter poll caught my interest, where 82% of over one million responses voted that Twitter should open source its algorithm.

Musk explained further during interviews at the TED 2022 conference. The “Twitter algorithm” refers to how tweets are selected and then ranked for different people. While some human intervention occurs, social networks like Twitter replace a human editorial team’s accountable moderation with automation. Humans cannot practically and economically manage and rank the estimated 500 million tweets sent per day.

By “open source”, Musk means “the code should be on Github so people can look through it”. Hosting software code on github.com is common practice for software products. Third parties can examine the code to ensure it does what it claims to. Some open sourced products also allow contributions from others, leveraging the community’s expertise to collectively build better products.

Musk says “having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization.” People frequently demonize social networks for heavy-handed or lax “censorship”, depending on their side in a debate. Pundits claim social networks limit “free speech”, conveniently forgetting that “free speech” protections restrict governments, not private platforms. Pundits cite examples of the algorithm prioritizing or deprioritizing tweets, authors or topics. They also cite account suspensions and cancellations, sometimes manual and sometimes automated.

Musk assumes that explaining this algorithm will increase trust in Twitter. He called Twitter “a public platform”, implying not just public access but collective ownership and responsibility. If people understand how tweets are included and prioritized, the focus can move from social networks to the conversations they host.

Unfortunately, understanding and trust are two different things. Well understood and transparent processes, like democracies’ elections or justice systems, are not universally trusted. No matter the intentions or execution of a system, some people will accuse it of bias. These accusations may be made in ignorance but good faith, may seize on real but rare failures, or may be malicious and subversive.

Twitter’s algorithm is not designed to give equal exposure to conflicting perspectives. It is designed primarily to maximize engagement and, therefore, revenue. It is not designed to be “fair”. Social networks are multibillion dollar companies that can profit from the increased exposure controversy brings. Politicians alienate few and resonate with many when they point the finger of blame at Twitter.
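As a purely illustrative sketch (the class, signals and weights below are invented, not Twitter’s actual code), an engagement-driven ranker optimizes a score like this:

```java
import java.util.Comparator;
import java.util.List;

// Invented, minimal sketch of engagement-driven ranking.
// Twitter's real system is far more complex and not public.
public class EngagementRanker {

    public record Tweet(String id, double predictedLikes,
                        double predictedReplies, double predictedDwellSeconds) {}

    // Score by predicted engagement; "fairness" appears nowhere.
    static double score(Tweet t) {
        return 1.0 * t.predictedLikes()
             + 2.0 * t.predictedReplies()        // replies drive conversation
             + 0.5 * t.predictedDwellSeconds();  // time spent looking at it
    }

    // Highest predicted engagement first.
    static List<Tweet> rank(List<Tweet> candidates) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(EngagementRanker::score).reversed())
                .toList();
    }
}
```

Note that controversy, which drives replies and dwell time, scores well under any such function.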

Designing an algorithm for fairness is practically impossible. You can test for statistical bias in a numeric sample set but not across the near entirety of human expression. Like the philosophers opposing the activation of Deep Thought in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, debating connotation, implication and meaning across linguistic, moral, political and all other grounds is an almost endless task.

Even assuming transparency can ensure trust and that fairness is possible, open sourcing the Twitter algorithm assumes the algorithm is readable and understandable. The algorithm likely relies on complex, doctorate-level logic and mathematics. It likely includes machine learning, whose trained models follow no human-readable algorithm. It likely depends on custom databases and communication mechanisms, which may also have to be open sourced and explained.

This complexity means few will be able to understand and evaluate the algorithm. Those that can may be accused of bias just like the algorithm. Some may have motivations beyond judging fairness. For example, someone may exploit a weakness in the algorithm to unfairly amplify or suppress a tweet, individual or perspective.

Musk’s plan assumes Twitter has a single algorithm that takes a list of tweets and ranks them. It is more likely a combination of different algorithms. Some work when tweets are displayed. Some run earlier, for efficiency, when tweets are posted, liked or viewed. Different languages, countries or markets may have their own algorithms. To paraphrase J.R.R. Tolkien, there may not be one algorithm to rule them all.

Having multiple algorithms means each must be verified, usually independently. It multiplies the already large effort and problems of ensuring fairness.

Musk’s plan also assumes the algorithm changes infrequently. Once verified, it is trusted and Twitter can move on. However, experts continue to improve algorithms, making them more efficient or engaging. Hardware improves, providing more computation and storage. Legal and political landscapes shift. Significant events like elections, pandemics and wars force tweaks and corrections.

Not only do we need to have a group of trusted experts evaluating multiple complex algorithms, they need to do so repeatedly.

Ignoring potentially reduced revenue from algorithm changes, open sourcing Twitter’s algorithm also threatens Twitter’s competitive advantage. Anyone could take that algorithm and implement their own social network. Twitter has an established brand and user base in the West, but its market share is far from insurmountable.

There are other aspects to open sourcing. For example, if Twitter accepts third party code contributions, it must review and incorporate them. This could leverage a broader pool of contributors than Twitter’s employees but Twitter probably does not need the help. Silicon Valley tech companies attract good talent easily. Some contributions could contain subtle but intentional security flaws or weaknesses.

If the goal is to have a choice of algorithms, is this choice welcome or does it place more cognitive load on people just wanting a dopamine hit or information? TikTok succeeded by giving users zero choice, just a constant stream of engaging videos.

Evaluating an algorithm’s effectiveness is more than just understanding the code. It requires access to large volumes of test data, preferably actual historical tweets. Only Twitter has access to such data. Ignoring the difficulty of disseminating such a huge data set, releasing it all would violate privacy laws. Providing open access to historical blocked or personal tweets would also erode trust.

Elon Musk has demonstrated an uncanny ability to succeed at previously unprofitable enterprises like electric vehicles and space travel. Perhaps there is more to Musk’s Twitter plan than is apparent. Perhaps he is saying what he needs to say to ensure public support for his Twitter takeover.

While open sourcing Twitter’s algorithm appeals to the romantic notion that information is better free, increased transparency will not create a “maximally trusted and broadly inclusive” Twitter. Social networks like Twitter coalesce almost unbelievable amounts of data almost instantly into our hands. They have difficulty with contentious issues and, therefore, trust because they reflect existing contention back at us. It is easier to blame the mirror than ourselves.

The image is a cropped version of https://commons.wikimedia.org/wiki/File:Programming_code.jpg. Used under Creative Commons Attribution-Share Alike 4.0 International license.

Information security meets scaled Agile

Australian Cyber Security Magazine Issue 8 2019 Cover

Australian Cyber Security Magazine published an article of mine titled “Information Security meets scaled Agile”, describing how large scale Agile processes work and how Information Security can best adapt to it and succeed. The article is on pages 18 to 20.

Rebranding Corporate Politics

The term “corporate politics” conjures up images of sycophantic, self-serving behavior like boot-licking and backstabbing. However, to some IT professionals’ chagrin, we work with humans as much as computers. Dismissing humans is dismissing part of the job.

The best way to “play” corporate politics is to solve big problems by doing things you enjoy and excel at.

“Big problems” means problems faced not just by your team but by your boss’s boss, your boss’s boss’s boss and so on. If you don’t know what they are, ask (easier than it sounds). Otherwise, attend all hands meetings, read industry literature or look at your leaders’ social network posts, particularly internal ones.

This is not just for those wanting promotions into management. Individual contributors still want better benefits and higher profile or more challenging projects. These come easiest to those known to provide value, not through the strict meritocracy some IT professionals think they work in.

Start by solving small problems as side projects. Choose something impacting more than your own team and minimize others’ extra work. Build up to bigger problems once you have demonstrated ability and credibility.

You need not be the leader. Assisting others making an effort can be just as effective. You can own part of the work or bask in the halo effect. If neither, recognize those who are making the effort. This creates a culture of recognition that may recognize you in the future.

While some IT professionals solve big problems every day, communicating and evangelizing their work “feels” wrong. This is what salespeople do, not IT professionals. Many also think their work is not interesting.

Being successful requires people knowing what you do. This may be as simple as a short elevator chat, a brown bag talk or a post on the corporate social network. It also helps get early feedback and build a like-minded team. Others will be interested if you are working on the right things.

What about the potentially less savory aspects of corporate politics like work social events, sharing common interests with management, supporting corporate charities and so on? These are as much an art as a science. Focus on common goals and building trust, internally and externally. People like to deal with people at their level and contact builds familiarity.

However, this is no substitute for solving big problems. If you are delivering value, interactions with senior decision makers and IT professionals with similar goals should occur naturally. Build on that.

Be aware that problems change over time. Problems get solved by others. The market changes. Competitors come and go. Understanding organizational goals is an ongoing process.

Also realize decision makers are human. They make mistakes. They want to emphasize their achievements and not their failures, just like software developers’ fundamental attribution error bias for their own code and against others’.

However, if your organization makes decisions regularly on “political” grounds, leave. Culture is rarely changed from the ground up and many organizations are looking for good IT staff.

Ignoring the worst case scenario and IT professionals’ bias against self-evangelism, the biggest problem with “corporate politics” is actually its name. The concepts behind “agile” and “technical debt” came into common usage once the right metaphor was found. Corporate politics needs rebranding from something to be avoided into a tool IT professionals use to advance themselves. It badly needs a dose of optimism and open-mindedness.

Image credit: http://thebluediamondgallery.com/p/politics.html. Usage under CC BY-SA 3.0.

InfoSec: Not just for hackers

I recently read Troy Hunt’s blog post on careers in information security. Troy makes good points about information security as a potential career and the benefits of certifications like the Certified Ethical Hacker. Hackers are getting increasingly sophisticated, requiring specific knowledge to counter, and cryptography is hard. We need more information security specialists.

However, one criticism of the post, and indeed the information security industry, is its implication that hacking is the sole information security career path. This binary viewpoint – you are either a security person or not, and there is only one “true” information security professional – does more harm than good.

Hacking is technology focused. However, security’s scope is not just technical. Information security needs people that can articulate security issue impact, potential solutions and their cost in terms non-security people can understand. This requires expertise and credibility in multiple disciplines from individual contributor level to management to boardrooms.

Security solutions are not just technical. We live in societies governed by laws. These can be standardized government security requirements such as FedRAMP or IRAP. These can be contractual obligations like PCI-DSS, covering credit card transactions. These can hold organizations accountable, like mandatory breach disclosure legislation, or protect our privacy, like the European Union’s data protection laws. Effective legislation requires knowledge of both law and information security and the political nous to get it enacted.

We are also surrounded by financial systems. Financial systems that punish those with weak security and reward those with good security will only evolve if we (consumers and investors) value security more. Cyber insurance has potential. Cryptographic technologies like Bitcoin and blockchain algorithms threaten to disrupt the financial sector. Information security has impacted finance and will continue to do so.

The list goes on. Law enforcement needs to identify, store and present cybercrime evidence to juries and prosecute under new and changing laws. Hospitals and doctors want to take advantage of electronic health records.

Security’s technology focus drives away non-technology people. In a world crying out for diversity and collaboration, the last thing information security needs is people focusing solely inward on their own craft, reinforcing stereotypes of shady basement dwellers, instead of on the systems security enables.

Bringing this back to software, many organizations contract or hire in information security experts. Unfortunately, the OWASP Top 10 changed little from 2010 to 2013 and, some say, is unlikely to change with the 2016 call for data. According to the Microsoft Security Intelligence Report, around half of serious, industry-wide problems are in applications. Developers make the same mistakes again and again.

Education is one solution – security literate developers will avoid or fix security issues themselves. A better solution is tools and libraries that are not vulnerable in the first place, moving security from being reactive to proactive. For example, using an Object-Relational Mapping library or parameterized queries instead of string substitution for writing SQL.
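As a minimal sketch of that second point (JDBC here; the table and column names are invented for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // Vulnerable: attacker-controlled input is concatenated into SQL.
    // name = "' OR '1'='1" returns every row.
    static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        String sql = "SELECT * FROM users WHERE name = '" + name + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Safe: a parameterized query treats the input as data, never as SQL.
    static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt =
            conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```

The parameterized version removes the whole class of injection rather than catching instances of it after the fact.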

Unfortunately, security people often lack skills to contribute to development and design beyond security. While information security touches many areas, information security expertise is not development (or networking or architecture or DevOps) expertise.

Information security needs different perspectives to succeed. As Cory House, a Pluralsight author like Troy Hunt, says in his course Becoming an Outlier, one route to career success is specialization. Information security is a specialization for everyone to consider, not just hackers.

Image credit: https://www.flickr.com/photos/adulau/8442476626

CCSP Review


After passing the exam, I wanted to capture my thoughts on the Certified Cloud Security Professional (CCSP), the latest certification from (ISC)2 (known for the CISSP certification) and the Cloud Security Alliance.

The CCSP is a vendor-neutral certification covering cloud security, including infrastructure, risk management, cloud applications, legal and compliance. Like the CISSP, the syllabus is broad rather than deep and provides a good foundation in cloud security issues.

The CCSP is best suited to junior or intermediate IT security staff working in cloud security, although junior staff may struggle with the sheer breadth without experience to ground it. It is also useful for senior IT security staff that want to move into cloud security quickly, people that delegate specifics to others (like IT security management and auditors) or those in related roles looking for cloud security context (like architects).

The CCSP is not intended to give technical or hands-on skills. This means the certification is not outdated quickly when the next product is released. However, candidates looking for hands-on skills common to junior or intermediate positions will need additional experience, training or certifications.

The exam is 125 multiple choice questions in 4 hours, administered by computer at a testing center. The exam is quite new, with a few typographical and editing errors. There is a lot of reading and people with poor English or reading difficulties may struggle.

The exam contains a mix of good questions, like scenarios asking for the best security control or first task, and less good ones, like examples of specific technologies. Scenario based questions require understanding a large body of information, extracting the relevant portions then making a decision. This mirrors the real world. Specific technology examples, while showing real world relevance, tend to date quickly and can be industry specific.

In terms of training material, (ISC)2 provides a textbook, online training (live webinars) and self-paced training (recorded sessions). The (ISC)2 material is often the best way to determine the actual content of the exam, as the official outline is very high level. However, it is expensive, has more than a few editing errors and the activities/self tests could be improved. The recorded videos also need an option to play faster, as YouTube and Pluralsight offer, because merely skipping can miss important points.

Looking ahead, cloud concepts and technology are changing rapidly. The current CCSP material focuses on moving existing on-premises security solutions, e.g. event monitoring (SIEM) and network monitoring (NIDS), to the cloud. As new and cloud-native products and concepts emerge, e.g. cloud access security brokers (CASB), or evolve, e.g. identity services, it will be challenging to keep the CCSP relevant and up-to-date.

I was also glad to see an increasing focus on software development and application security. Automation is driving software to be written by non-developers and outside traditional security programs. This is another area that will likely become more important in the future.

Note: At the time of writing, while I have passed the exam, I have not completed the checks and endorsement required to be awarded the certification. Sitting the exam requires the signing of an NDA so exam specifics are intentionally omitted.

Information Security vs Software Developers: Bridging the Gap

Builder versus Defender

One of the biggest challenges in information security is application security. For example, Microsoft’s Security Intelligence Report estimates that 80% of software security vulnerabilities are in applications and not operating systems or browsers.

Software security has improved significantly over the years. For example, groups like OWASP promote awareness and provide concrete solutions for common issues. Software developer security certifications like the CSSLP have emerged. SANS have an increasing breadth and depth of software security courses.

Nevertheless, libraries and best practice rarely protect a whole application. There may be application-specific vulnerabilities (like poorly implemented business logic or access control) or something libraries and frameworks commonly omit (like denial of service prevention). The issues might be even bigger, like not considering software security at all.

Information security professionals often fill this gap. After all, securing the organization’s IT assets is their role. Information security professionals have a security-first or defender mindset. They are usually the first line of defense against threats and the more they know about the applications they protect, the easier that defense becomes.

However, developers are creators and builders and that different mindset can cause friction. This was apparent at a recent static analysis tool training event. We were given the OWASP WebGoat app (a sample Java web site with dozens of security vulnerabilities), a static analysis tool to find vulnerabilities and instructions to start fixing them.

Two different approaches emerged to solve the first vulnerability found: an HTML injection. The first group searched the web for HTML injection fixes. They read recommendations from OWASP and other well-regarded sources. Most found a Java HTML escaping library, used it in the application then modified the static analysis rules to accept the escaping library as safe.

The second group reviewed the code to see how the application created HTML elsewhere. A few lines above the first instance of HTML injection was a call to an escaping function already in the code. The static analysis tool did not flag this as vulnerable. This group then reused that function throughout the code to remove the vulnerabilities.

Which group’s solution is better? The first approach is more technically correct – escaping strings is actually quite complex. For example, although not required by WebGoat, the escaping method included in the application did not handle HTML attributes correctly.
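To illustrate why attribute context matters, here is a minimal sketch using the OWASP Java Encoder (one library of the kind the first group found; the malicious input is contrived for the example):

```java
import org.owasp.encoder.Encode;

public class EscapingContexts {
    public static void main(String[] args) {
        // Input crafted to break out of an HTML attribute.
        String input = "\" onmouseover=\"alert(1)";

        // A naive, hand-rolled escaper that only handles <, > and &
        // leaves the quote intact, so the attribute is still injectable.
        String naive = input.replace("&", "&amp;")
                            .replace("<", "&lt;")
                            .replace(">", "&gt;");
        System.out.println("<div title=\"" + naive + "\">");  // vulnerable

        // The OWASP Java Encoder offers context-specific methods.
        System.out.println("<div title=\""
                + Encode.forHtmlAttribute(input) + "\">");    // safe in attributes
        System.out.println("<p>" + Encode.forHtml(input) + "</p>");
    }
}
```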

However, the second approach was much quicker to implement: search, replace, verify and move on. Most of the second group had fixed several vulnerabilities before the first group had fixed one. While not technically as correct as the first, is the second group’s approach good enough?

Perceptive readers would have guessed the first group were the information security professionals and the second were software developers. Information security people want to reduce the frequency and severity of security issues. Software developers quickly understand large bodies of code, find solutions and move on. The training exercise highlighted the defender versus builder mindsets.

The two mindsets are slowly reconciling. For example, OWASP, traditionally very defender oriented, has released its proactive top 10, using terminology familiar to software developers, not just information security professionals. The IT architecture community is also starting to tackle software security issues. For example, security is one of the four groups of the International Association of Software Architects’ quality attributes.

However, many information security professionals look at software like WebGoat as a typical application, full of easily rectified security issues caused by ignorance. Most developers I have worked with write relatively secure code but security is only a small part of writing applications.

Developers need frameworks and libraries where common security vulnerabilities are not possible. For example, escaping libraries are great but if you are constructing HTML or SQL by string concatenation and risking injection attacks, you are doing it wrong in the first place! Use parameterized queries for SQL and data binding for HTML, common in both server- and client-side frameworks.

Meanwhile, addressing security at the requirements and design phases – where real security issues lie – comes in at numbers 9 and 10 in the proactive top 10. As software developers will tell you, the earlier issues are identified and fixed, the cheaper the fixes are. Unfortunately, software security is still too focused on point issues at the end of the development cycle.

In fairness to the OWASP proactive top 10, there are still many developers unfamiliar with secure coding practices. Parameterizing SQL queries (number 1), encoding data (number 2) and input validation (number 3) are relatively cheap and easy to implement. All three give a big pay-off, too.

Addressing security design and requirements is also hard. The people involved usually lack the experience and ability to articulate them. Meanwhile, information security professionals rarely have the skills or access to contribute to the early phases of software development. This means software developers must also bear responsibility for software security.

Hopefully we can rise above the distractions of point issues and work together on the bigger issues soon enough. In a world where the hackers (breakers) get the glory, we need to remember that both builders and defenders are the ones keeping the software that we rely on working.

A Ticket to Security via Obscurity

A group of Australian university students recently “cracked” the “encryption” used by public transport tickets in New South Wales (NSW), Australia. The public transport ticket format’s security relied on people not knowing the format of data on the ticket’s magnetic stripe and not having access to equipment to create tickets. In other words, it relied on “security through obscurity”.

Security through obscurity is not a new concept. In 1883, Auguste Kerckhoffs theorised that a system (computerized or otherwise) should remain secure even if everything about it, except its keys, becomes public knowledge. This became known as Kerckhoffs’s principle. Over half a century later, Claude Shannon rephrased it as assume “the enemy knows the system”, called Shannon’s Maxim. These are as true now in computer security, particularly cryptography and encryption, as they were then.

Ultimately, poor computer security for the public transport ticket system is not the programmers’ or engineers’ fault. The customer and management decided the security provided was sufficient. It is similar to the Year 2000 issue. The programmers knew that two digit years were not going to work after the century changed but why should the manager responsible spend resources to fix it when he or she is unlikely to be around in the year 2000 to take credit for the work?

When people initially approach computer security, they start at cryptography and software security. If we build more secure encryption systems; if we  fix every buffer overflow and every injection attack; if every object has the appropriate authentication, access control, auditing and so on, the “bad guys” will be helpless.

A lot of progress has been made in improving computer security. Organizations like OWASP and SAFECode provide software security guidance for software developers. Many vendors provide frameworks, such as Microsoft’s Security Development Lifecycle, and software patching is a regular occurrence. The US government has FIPS 140-2, a standard for hardware or software that performs cryptography, and Common Criteria is an attempt by multiple governments to provide a framework for software security assurance. A plethora of software security tools are also available with interesting and technical sounding names like “static analysis tools” and “fuzzers”.

That said, computer security practitioners eventually realise it is a matter of economics instead – as Allan Schiffman said in 2004, “amateurs study cryptography; professionals study economics.” It is not that we do not know how to build secure systems. However, if the customer cannot differentiate the security (or quality) of two otherwise seemingly identical products, the cheaper (and likely less secure) product will win. Similarly, most software is sold “as is”, meaning the customer assumes all the risk. See Bruce Schneier’s “Schneier on Security” or “Freakonomics” by Steven D. Levitt and Stephen J. Dubner for more discussion.

Others take a different direction and focus on the sociology and behavioural patterns behind security.  Although most people recognise computer security is important, the first reaction of most when confronted with a security warning is to ignore or bypass it. After all, social engineering attacks are behind many of the headline grabbing computer security incidents over the last few years, such as Operation Aurora and Stuxnet. See Bruce Schneier’s “Liars and Outliers” for more discussion.

That said, there is little impetus to improve computer security without people willing to push the boundaries of systems and reveal flaws. Kudos and credit to the students for not only discovering the public transport ticket format and “encryption” mechanism (if true encryption was ever used) but revealing it in a responsible (or “white hat”) manner, such as by allowing the targeted state authority to disclose the flaw itself. Remember that just because the students discovered the ticket format does not mean others have not already done so and used it for less than honest purposes.

Interestingly, the NSW public transport ticket system is approximately 20 years old, introduced when computers were less powerful and less accessible and the required security standards were much lower. I wonder how many systems being built today will be considered secure in 20 years’ time, when computers will presumably be exponentially more powerful and more ubiquitous and the security standard much higher.

Privacy

Privacy is one of those oft-misused terms people throw around, particularly in discussions of the SOPA or PROTECT-IP acts in the US. This is an ethical question and, unfortunately, arguments on either side usually degenerate into straw man arguments about “big brother” versus “pirates, criminals or terrorists”.

Taking a step back, privacy can mean one of three things in the context of information technology. First, privacy can mean anonymity, where people can contribute to discussions or other activities without having those comments attributable to them directly. Outside large organizations, this is usually accomplished by adopting a new identity such as a forum user or an online game character. Inside organizations, anonymity is rare. Authority and accountability require real names to be used and identity can be centrally managed even if single sign on or federated authentication are still rare.

Many arguments over anonymity descend into questions of what information can individually identify people, called Personally Identifiable Information (PII). For example, is the IP address you use to access the Internet PII? If you are the only person accessing the Internet from that IP address and you use it for a long time, it may be. However, if you are behind a NAT, firewall or similar measure, this may not be the case.

The problem is these discussions often consider potential PII in isolation. For example, my company regularly performs employee surveys. As the only Australian employee in my business group, if I select my country or office, I immediately lose anonymity. Few will argue that someone’s country is PII but it is more complicated in this case. Add that to easy inference and access to analytics and the situation becomes even more complicated.

Second, privacy can mean confidentiality, where people want to restrict access to information. This is usually enforced by access control (e.g. file permissions) or encryption (controlling who has access through protecting and distributing keys). A common example is a person’s medical records being available to medical professionals treating them but not to others. These records may be available to a wider audience of medical professionals for research or statistics as long as PII is removed.
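As a minimal sketch of the first mechanism, file permissions, here is how Java’s NIO API restricts a file to its owner on a POSIX system (the file name is invented for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class RestrictAccess {
    public static void main(String[] args) throws IOException {
        // Hypothetical record; readable and writable by the owner only.
        Path record = Path.of("medical-record.txt");
        Files.createFile(record);
        Files.setPosixFilePermissions(record,
                PosixFilePermissions.fromString("rw-------"));
    }
}
```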

However, confidentiality alone is not sufficient for privacy. Continuing the medical example, just because you want your doctor to see your medical records does not mean you want him or her to send them to a local newspaper to print potentially embarrassing stories about you. The doctor is permitted to use the records for treating you only. In other words, privacy can mean restricting information use, usually defined via laws or consent from the subject (the person the information describes or identifies).

Privacy in all three forms is clearly important for software dealing with external customers, particularly in areas with heavy legislation such as the medical or financial industries. Information on these could fill novels, is usually jurisdiction specific and is better covered elsewhere.

Some would argue enterprise software targeted at employees is less concerned with privacy. Most organizations’ policies state that employees using organization-provided computers or systems submit to scanning for malware, logging of actions, and indexing and retention for later retrieval, and compliance with these policies is usually a condition of employment. Some countries’ governments also require access to otherwise confidential information, such as the recent issues with BlackBerry devices being too secure.

However, this is not the case everywhere. Many countries, Europe in particular, have strong privacy laws. These benefit the subject by restricting the collected information’s use to that consented at the time of collection. However, well-meaning privacy legislation can impact IT in unintended ways. For example, if an application logs the path to a file “c:\users\joe\my documents\doc.txt” (Windows) or “/home/joe/Documents/doc.txt” (Mac, *nix) for usage statistics or supportability, has it inadvertently captured the user’s name (clearly PII) in the path and should the application remove or obfuscate that directory in the path?
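One hypothetical mitigation (the patterns and placeholder below are my own illustration, not a standard) is to redact the per-user path segment before logging:

```java
import java.util.regex.Pattern;

// Hypothetical sketch: obfuscate the per-user directory in a path
// before it is written to usage or support logs.
public class PathRedactor {

    // Matches the user segment in common Windows and Unix home paths.
    private static final Pattern WINDOWS_HOME =
            Pattern.compile("(?i)(c:\\\\users\\\\)[^\\\\]+");
    private static final Pattern UNIX_HOME =
            Pattern.compile("(/home/|/Users/)[^/]+");

    static String redact(String path) {
        String result = WINDOWS_HOME.matcher(path).replaceAll("$1<user>");
        return UNIX_HOME.matcher(result).replaceAll("$1<user>");
    }

    public static void main(String[] args) {
        System.out.println(redact("c:\\users\\joe\\my documents\\doc.txt"));
        // c:\users\<user>\my documents\doc.txt
        System.out.println(redact("/home/joe/Documents/doc.txt"));
        // /home/<user>/Documents/doc.txt
    }
}
```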

Many countries also limit the movement of PII across country borders. Consent can permit it but the consent must be specific and given beforehand. This creates challenges when aggregating data across an enterprise, in systems with ad hoc reporting or in those that share data. It is particularly challenging with cloud based systems where the location of data is unclear or data is replicated across multiple locations for redundancy.

The target market of a software architect’s products influences or dictates their privacy needs. Indeed, as the individual responsible for non-functional requirements, software architects should understand which form(s) of privacy apply and for whom. They need not be experts (legal departments are for that) but knowing what to work around and what to work with is important, particularly for bigger sales. If SOPA/PROTECT-IP is passed, software architects may have even more to learn and apply.


AISA National Conference 2011

I attended the Australian Information Security Association (AISA) National Conference for 2011 on Wednesday 9th November. For an event free to AISA members, the speakers were excellent and I recommend this to anyone involved with or interested in IT security in Australia. What follows is a summary of important or interesting points and how I believe they relate to software architecture.

Using anti-virus, firewalls, intrusion detection systems and so on is best practice but attackers are always finding new attacks that circumvent these controls. IT security is slow to adapt and, as Einstein said, “doing the same thing over and over again and expecting different results” is the definition of insanity. With IT increasingly forced to deliver sensitive data to users via convenient rather than secure methods, such as via social networking or on consumer-owned mobile devices, the focus needs to shift from protecting devices and systems to protecting information. As Michael Jones from Google pointed out, organizations are being forced to be both open and private and to “deliver sensitive information to the edge”.

Moreover, as John Stewart from Cisco said (paraphrased because I missed the exact quote), “there are three types of organizations: those that have been hacked once, those that have been hacked twice and those that don’t know (hacked three or more times)”. IT security needs to move from a preventative model, aimed at stopping all intrusions, to a containment model, one which recognises that compromises occur and aims to preemptively contain or localize the damage. Ruth Marshall from Novartis compared IT security to healthcare, noting that sometimes “the cure [IT security and the constraints it places on users] can be worse than the cause [the security issues mitigated]” from the user’s perspective and that maybe a “palliative care” model is better.

Indeed, this second point highlighted the security disconnect between organizations and individuals. Individuals care little for security, as shown by the continued use of social networking and similar sites despite recent privacy and security concerns. Organizations do care and, instead of holding ill-equipped individuals accountable, assign this responsibility to the CISO and the IT security department. However, individuals then blame IT security when it does its job, such as pointing out the lack of regulatory compliance and privacy with commonly used social networking sites or the lack of control when consumers use their own devices for work (as mentioned above).

As security and privacy are non-functional requirements, the software architect is often responsible for them. The software architect also influences or dictates where data is stored and how it is consumed. There is no accepted formula or standard for what security measures a product should use. One approach is to offer two or three choices for the most sensitive parts of the product, complete with the pros and cons of each, the architect’s recommendation and references to requirements (particularly integration, compliance or regulation). Work with the product manager and director on a solution and use it as a guide through the rest of the feature or product. This increases communication and spreads the responsibility, too.

Another interesting point raised at the conference concerned the potential introduction of mandatory Internet filtering in Australia. None of the presenters were in favour because it simply will not work, and Bruce Schneier argued that technical groups, such as IT security, need to become more politically vocal. However, he said these groups should argue the technical pros and cons while leaving the policy details to others. Technical groups have expertise and experience, and therefore credibility, in technical areas (“Technical reasons A, B and C mean this will not solve the problem”). Adding judgement or policy to this (“Problem X is not the real problem” or “Problem X sucks anyway”) weakens this stance because listeners may assume the technical details are manipulated to support a political view. Vice versa is also true.

Software architects are often required to defend their decisions or critique others’ for non-technical people. Sticking to the technical details and drawing technical conclusions from them is a very credible way of presenting or establishing your point of view. Also, coming from an engineering-style background, many new software architects view decisions as black or white affairs where the correct answer is “obvious”. Taking time to build a case with supporting facts can avoid miscommunication and frustration while opening the architect’s mind to other solutions. This is not to say a software architect cannot or should not care about non-technical issues, rather that he or she needs established credibility to do so.