Random Acts of Architecture

Tales of an architect trying to bring order to the chaos that is modern information technology.

Rebranding Corporate Politics

The term “corporate politics” conjures up images of sycophantic, self-serving behavior like boot-licking and backstabbing. However, to some IT professionals’ chagrin, we work with humans as much as computers. Dismissing humans is dismissing part of the job.

The best way to “play” corporate politics is to solve big problems by doing things you enjoy and excel at.

“Big problems” means problems faced not just by your team but by your boss’s boss, your boss’s boss’s boss and so on. If you don’t know what they are, ask (easier than it sounds). Otherwise, attend all hands meetings, read industry literature or look at your leaders’ social network posts, particularly internal ones.

This is not just for those wanting promotions into management. Individual contributors still want better benefits and higher-profile or more challenging projects. These come easiest to those known to provide value, not through the strict meritocracy some IT professionals think they work in.

Start by solving small problems as side projects. Choose something impacting more than your own team and minimize others’ extra work. Build up to bigger problems once you have demonstrated ability and credibility.

You need not be the leader. Assisting others’ efforts can be just as effective; you can own part of the work or benefit from the halo effect. If you cannot contribute directly, recognize those who do. This creates a culture of recognition that may recognize you in the future.

While some IT professionals solve big problems every day, communicating and evangelizing their work “feels” wrong. That is what salespeople do, not IT professionals. Many also think their work is not interesting.

Being successful requires people knowing what you do. This may be as simple as a short elevator chat, a brown bag talk or a post on the corporate social network. It also helps get early feedback and build a like-minded team. Others will be interested if you are working on the right things.

What about the potentially less savory aspects of corporate politics like work social events, sharing common interests with management, supporting corporate charities and so on? These are as much an art as a science. Focus on common goals and building trust, internally and externally. People like to deal with people at their level and contact builds familiarity.

However, this is no substitute for solving big problems. If you are delivering value, interactions with senior decision makers and IT professionals with similar goals should occur naturally. Build on that.

Be aware that problems change over time. Problems get solved by others. The market changes. Competitors come and go. Understanding organizational goals is an ongoing process.

Also realize decision makers are human. They make mistakes. They want to emphasize their achievements and not their failures, just like software developers’ fundamental attribution error bias for their own code and against others’.

However, if your organization makes decisions regularly on “political” grounds, leave. Culture is rarely changed from the ground up and many organizations are looking for good IT staff.

Ignoring the worst-case scenario and IT professionals’ bias against self-evangelism, the biggest problem with “corporate politics” is actually its name. The concepts behind “agile” and “technical debt” came into common usage once the correct metaphor was found. Corporate politics needs rebranding from something to be avoided into a tool that IT professionals use to advance themselves. It badly needs a dose of optimism and open-mindedness.

Image credit: http://thebluediamondgallery.com/p/politics.html. Usage under CC BY-SA 3.0.

InfoSec: Not just for hackers

I recently read Troy Hunt’s blog post on careers in information security. Troy makes good points about information security as a potential career and the benefits of certifications like the Certified Ethical Hacker. Hackers are getting increasingly sophisticated, requiring specific knowledge to counter, and cryptography is hard. We need more information security specialists.

However, one criticism of the post, indeed the information security industry, is its implication hacking is the sole information security career path. This binary viewpoint – you are either a security person or not and there is only one “true” information security professional – does more harm than good.

Hacking is technology focused. However, security’s scope is not just technical. Information security needs people that can articulate the impact of security issues, potential solutions and their costs in terms non-security people can understand. This requires expertise and credibility in multiple disciplines, from individual contributor level to management to boardrooms.

Security solutions are not just technical. We live in societies governed by laws. These can be standardized government security requirements such as FedRAMP or IRAP. These can be contractual obligations like PCI-DSS, covering credit card transactions. These can hold organizations accountable, like mandatory breach disclosure legislation, or protect our privacy, like the European Union’s data protection laws. Effective legislation requires knowledge of both law and information security and the political nous to get it enacted.

We are also surrounded by financial systems. Financial systems that punish those with weak security and reward those with good security will only evolve if we (consumers and investors) value security more. Cyber insurance has potential. Cryptographic technologies like Bitcoin and blockchain algorithms are threatening to disrupt the financial sector. Information security has impacted finance and will continue to do so.

The list goes on. Law enforcement needs to identify, store and present cybercrime evidence to juries and prosecute under new and changing laws. Hospitals and doctors want to take advantage of electronic health records.

Security’s technology focus drives away non-technology people. In a world crying out for diversity and collaboration, the last thing information security needs is people focusing solely inward on their own craft, reinforcing stereotypes of shady basement dwellers, rather than on the systems security enables.

Bringing this back to software, many organizations contract or hire in information security experts. Unfortunately, the OWASP Top 10 changed little from 2010 to 2013 and, some say, is unlikely to change with the 2016 call for data. According to the Microsoft Security Intelligence Report, around half of serious, industry-wide problems are from applications. Developers make the same mistakes again and again.

Education is one solution – security-literate developers will avoid or fix security issues themselves. A better solution is tools and libraries that are not vulnerable in the first place, moving security from reactive to proactive. For example, using an Object-Relational Mapping (ORM) library or parameterized queries instead of string substitution when writing SQL.
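To illustrate the difference, here is a minimal sketch using Python’s built-in sqlite3 module. The table, data and injection payload are invented for the example; the point is that string substitution lets attacker input rewrite the query, while a parameterized query treats it as a plain value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: string substitution lets the payload alter the SQL itself.
vulnerable = conn.execute(
    "SELECT email FROM users WHERE name = '%s'" % user_input
).fetchall()

# Safe: the placeholder treats the payload as a literal string value.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # returns the row despite the wrong name
print(safe)        # returns no rows
```

An ORM achieves the same protection by generating parameterized queries under the hood, which is why it removes this whole class of mistake rather than relying on developers to remember the safe form.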

Unfortunately, security people often lack skills to contribute to development and design beyond security. While information security touches many areas, information security expertise is not development (or networking or architecture or DevOps) expertise.

Information security needs different perspectives to succeed. As Cory House, a Pluralsight author like Troy Hunt, says in his course Becoming an Outlier, one route to career success is specialization. Information security is a specialization for everyone to consider, not just hackers.

Image credit: https://www.flickr.com/photos/adulau/8442476626

Systems > Goals


It is the time of year when people evaluate their previous year’s goals and plan for the next. It is the time when New Year’s resolutions are made. It is also the time where people lament ones they failed to keep.

Setting goals is beneficial. They are how we demonstrate commitment and achievement. They motivate us to better ourselves.

Take learning a new skill, like a programming language or library. This requires acquiring tools, reading or watching tutorials and/or working with teachers, then practicing the new skill until proficiency is reached.

People approach goals in different ways. For example, learning the basics of a new programming language can be crammed into a weekend, fitting into our “busy” lives and short-term focus.

This may be sufficient if the need is urgent. However, this is not possible with larger or sustained goals.

A few years ago I realized I needed to lose weight. Superficial attempts at exercise or the occasional healthy meal were insufficient. I needed a sustainable system, not just to reach an arbitrary weight target.

First, I had to want to lose weight. There is a difference between imagining oneself attaining the goal and the often underestimated effort required to achieve it. For example, in his book “The Element”, Ken Robinson compliments a keyboard player saying he’d love to play the keyboard that well. The keyboard player disagrees:

“You mean you like the idea of playing keyboards. If you’d love to play them, you’d be doing it.”

Second, I had to create a system that would make me succeed: “No excuses!” My schedule was unpredictable so gym memberships and other organized activities were out. I had always enjoyed running so I purchased a treadmill. Diet was solved by subscribing to a calorie- and portion-controlled food delivery service. I enjoyed running and the food so it became almost harder not to follow the plan.

Third, I had to make time to exercise and find the discipline to stick to the diet. My unpredictable schedule meant exercising at regular times was not possible. I fell back on priorities: other things had to fit around exercise, like Stephen Covey’s “big rocks” analogy.

Fourth, I weighed myself morning and night to track progress. Many weight loss programs recommend weighing less frequently but, as long as the downward trend continued, the raw measurements were less important than the accountability – the scales were always there looking back at me and never lied.

Yes, I occasionally ate too much or missed a run or three but I just picked myself up and resumed. Patience and persistence conquered the dreaded weight plateaus.

I eventually reached my target weight and celebrated my success. I lost a quarter of my body weight over eight months.

More importantly, I developed habits for keeping my weight down and increasing fitness. Reaching my target had become both inevitable and irrelevant. I kept going afterwards. A year later I ran a sub-96-minute half marathon. During lunchtime at work. For fun.

Without realizing it, I had stumbled upon thinking of achievement as systems or habits, as in Charles Duhigg’s “The Power of Habit: Why We Do What We Do in Life and Business” or Scott Adams’s “How to Fail at Almost Everything and Still Win Big”. Goals are only milestones. Systems or habits allow you to achieve them.

I now look at goals differently. First, is the goal important enough to change my habits? I cannot do everything. I try to pick what I will fail at; otherwise, others will pick it for me.

Second, do I want the goal enough to change my habits? I try to separate what I want from what others want. Failing that, I look for sources of fun or rewards for doing so. Motivation is half the battle.

The Potential of Cosmos: Containers


Cosmos is an operating system construction kit in development since 2006. At first glance, it appeals to the “Internet of Things” (IoT) crowd. One could control home automation or run a Raspberry Pi or Arduino in C#. Cosmos is also interesting technically, as Scott Hanselman describes. .Net languages are rarely used for lower level programming and this project is an interesting case study.

However, there is a whole other angle to consider – a competitor to containers. Containers, single-application virtual machines, provide the hardware independence of virtual machines but are smaller and use an operating system’s existing isolation and switching mechanisms instead of a hypervisor.

If Cosmos or a system built on it supports a reasonable set of APIs, such as an early version of .Net Standard, these could be run like containers. The components and functionality would be minimal, reducing the surface area of attack and the need for patching. They could be smaller than scratch containers because they are a single binary.

A Cosmos container, for want of a better term, could run on bare metal for maximum performance. It could also run as a “pico virtual machine”, needing only a few MB of RAM and storage, to minimize costs.

Of course, there is more to containers than just the image format and hosting engine. Docker, the most common container engine, has a whole ecosystem of orchestration, management and monitoring tools. Many of these are open source and have high contribution rates, so adding Cosmos container support is not unreasonable.

Supporting Cosmos containers directly on hardware may require hypervisor changes, meaning existing IaaS vendors would not initially support it. That said, Amazon does support Arduino as a cloud platform. Cosmos containers could also run in a “serverless” compute service like AWS Lambda.

Of course, the Cosmos team have spent a long time bringing their original vision to fruition and this is a significant change in direction. However, we live in a world of potential where software is changing so quickly and is often open for anyone to build on.

 

CCSP Review


After passing the exam, I wanted to capture my thoughts on the Certified Cloud Security Professional (CCSP), the latest certification from (ISC)2 (known for the CISSP certification) and the Cloud Security Alliance.

The CCSP is a vendor-neutral certification focused on cloud security, including infrastructure, risk management, cloud applications, and legal and compliance. Like the CISSP, the syllabus is broad rather than deep and provides a good foundation in cloud security.

The CCSP is best suited to junior or intermediate IT security staff working in cloud security, although junior staff may struggle with the sheer breadth without experience to ground it. It is also useful for senior IT security staff that want to move into the cloud quickly, people that delegate specifics to others (like IT security management and auditors) or those in related roles looking for cloud security context (like architects).

The CCSP is not intended to give technical or hands-on skills. This means the certification is not outdated quickly when the next product is released. However, candidates looking for hands-on skills common to junior or intermediate positions will need additional experience, training or certifications.

The exam is 125 multiple choice questions in 4 hours, administered by computer at a testing center. The exam is quite new and contains a few typographical and editing errors. There is a lot of reading, and people with limited English or reading difficulties may struggle.

The exam contains a mix of good questions, like scenarios asking for the best security control or first task, and less good ones, like examples of specific technologies. Scenario based questions require understanding a large body of information, extracting the relevant portions then making a decision. This mirrors the real world. Specific technology examples, while showing real world relevance, tend to date quickly and can be industry specific.

In terms of training material, (ISC)2 provides a textbook, online training (live webinars) and self-paced training (recorded sessions). The (ISC)2 material is often the best way to determine the actual content of the exam, as the official outline is very high level. However, it is expensive, has more than a few editing errors and its activities/self-tests could be improved. The recorded videos also need the option to play faster, like YouTube or Pluralsight, because merely skipping can miss important points.

Looking ahead, cloud concepts and technology are changing rapidly. The current CCSP material focuses on moving existing on-premise security solutions, e.g. event monitoring (e.g. SIEM) and network monitoring (e.g. NIDS), to the cloud. As new and cloud-native products and concepts emerge, e.g. cloud access security brokers (CASB), or evolve, e.g. identity services, it will be challenging to keep the CCSP relevant and up-to-date.

I was also glad to see an increasing focus on software development and application security. Automation is driving software to be written by non-developers and outside traditional security programs. This is another area that will likely become more important in the future.

Note: At the time of writing, while I have passed the exam, I have not completed the checks and endorsement required to be awarded the certification. Sitting the exam requires the signing of an NDA so exam specifics are intentionally omitted.

Agile: It does not mean what you think it means

Many organizations adopt Agile development methodologies, or just Agile, for the right reasons. They want a software development methodology that welcomes change. They want something to give management better visibility on team progress and teams better visibility into the longer-term product plans. They want something to give motivated, competent individuals the opportunity to take more ownership and build value.

However, many organizations adopt agile for the wrong reasons. The bandwagon effect and general hype have made Agile a panacea. Unfortunately, Agile exacerbates many problems instead of fixing them.

The biggest problem Agile exacerbates is lack of trust and respect. Management needs to trust software developers to estimate accurately, cooperate with other team members and ensure non-functional requirements are met, such as automated testing and performance/scalability. Team members need to trust management to not use Agile to push more work onto the team, not to blame the team for poor or late management decisions, not to use the increased visibility for performance management and to promptly address hurdles the team encounters.

Agile only works if people are willing to change. For example, if software developers are unwilling to “waste” time on daily stand-ups, estimations, automated testing or code reviews then they are missing the point. While Agile, specifically Lean, allows decision making to be delayed to the last reasonable moment, decisions still must be made, communicated and supported.

Poor communication makes Agile harder. Technical team members often have difficulty translating technical details into something the business can understand. Product owners, usually senior decision makers, often have many demands on their time and a team of software developers can be culturally easy to deprioritize.

Remote team members, particularly in different time zones, make face-to-face communication harder. Technology can partially compensate, such as teleconferencing and Internet chat applications, but communication occurs as much in incidental conversations as it does in meetings.

Agile requires customer involvement. Scrum, for example, has the product owner role where a customer is actively involved in the process by identifying and prioritizing work and being available and accountable. Agile emphasizes regular delivery of working software to customers.

However, this works against the contractual nature of most outsourced software development. Some products, such as enterprise software, have six month or longer sales cycles and delivering software more frequently just burdens support. Some customers lack the expertise or desire to be actively involved. Successful agile requires an agile-capable customer.

Successful agile also requires an agile-capable team. Not all team members are proactive, Kaizen-embracing go-getters. Some people are happy to be led. Frequent iterations and regular delivery require deep and broad technical skills that some individuals or teams may lack. Team members need to be focused on value, not solving technical problems.

Without addressing these problems beforehand, Agile adoption causes a lot of pain and suffering. While the Agile manifesto is easily read and understood, the underlying wisdom is less so. Agile is a tool that can allow an organization hamstrung by poor process to excel, but only once those underlying problems are addressed. Otherwise, Agile is blamed when the real cause is the underlying culture or structure.

That said, Agile can be an effective organizational diagnostic tool. It shows problems that people often did not see or did not want to see. Therefore, it is important to clarify intentions and understanding before adopting Agile, as Inigo Montoya recommended.

Blog post image is a modified version of the image by Froztbyte – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1121735. 

Thanks to Adam LG Ring (@adamlgring) for the title.

The Future of Product Management in Enterprise Software

PM Challenges

In traditional enterprise software, product managers represent the customer to software development teams. They draw on industry experience to propose and prioritize features, usually as the scrum product owner. They liaise between development and the rest of the business, particularly marketing and sales.

However, the product management role faces three main challenges. The first is new business models, particularly the move from on premise software to the cloud.

Cloud software presents increased non-functional challenges. For example, cloud systems are expected to scale further than on premise software. The burden of service level agreements (SLAs), upgrades, patching and backup/restore falls on the vendor, not the customer. Cloud systems are usually Internet accessible, missing the protection of an organization’s perimeter security measures like firewalls. Multi-tenant systems and resource pooling mean quality issues and downtime can affect many or all customers, not just one.

Where product management previously dictated product direction, it now shares responsibility with architects (enterprise, solution, infrastructure and application) and business analysts (responsible for internal business processes). While IT commoditization has made IT cheaper and more accessible, capable technical people are still required, albeit with a different skill set. The increasing emphasis on integration, such as exposing web service APIs to third parties, and on multiple platforms, such as mobile and tablet, exacerbates this.

However, this gives a product manager more opportunities. With the move from capital expenditure (initial purchase + support) to operational expenditure (monthly charge), many organizations are liberated from a fixed purchasing cycle. Unburdened by the three to six month customer upgrade period, releases can be as frequent as the development team can accommodate. Product managers can offload technical problems rather than shoulder the responsibility of the whole product.

The second challenge confronting product management is the increasing use of analytics and metrics. Many industries, such as media, have been using analytics and metrics for some time but many traditional enterprise market segments are only now getting access to accurate usage information. For example, few organizations previously consented to on premise products sending usage data back to the vendor.

Many product managers rely on experience or recent customer conversations to make decisions. However, the on-demand provisioning, self-service and customer customization (as opposed to vendor customization) aspects of cloud products reduce customer contact. Analytics and metrics can help fill this gap but it is a different type of customer relationship.

Moreover, the quality and quantity of decision-impacting data is increasing. Tools and expertise to extract useful information are becoming cheaper and more prevalent. Intuition and experience are always useful but will be focused more on choosing which metrics to gather and on interpreting them. Decisions will have a grounding (or at least a rationalization) in data. Making a decision solely on a “gut feeling” will be less acceptable.

Also note that good analytics and metrics can easily be applied to tactical problems like improving a user interface or prioritizing between two features based on actual use. It is harder to apply analytics and metrics to strategic questions like market direction or new product acceptance. This is where product management insight will be most valuable.

Metrics also help mitigate the “squeaky wheel” effect, where vocal customers monopolize product management time and the product backlog. For example, it is easier for a product manager to dissuade a customer demanding improvements to a certain feature with evidence the feature is rarely used.
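As a small illustration of the kind of evidence described above, a few lines of Python can turn raw usage events into feature usage shares. The event records and feature names here are invented for the example:

```python
from collections import Counter

# Hypothetical usage events: one record per feature invocation.
events = [
    {"feature": "bulk_export", "user": "u1"},
    {"feature": "dashboard", "user": "u1"},
    {"feature": "dashboard", "user": "u2"},
    {"feature": "dashboard", "user": "u3"},
    {"feature": "dashboard", "user": "u2"},
]

# Count invocations per feature and report each feature's share of use.
usage = Counter(event["feature"] for event in events)
total = sum(usage.values())
for feature, count in usage.most_common():
    print(f"{feature}: {count / total:.0%} of usage")
```

Even a summary this simple changes the conversation: a demand to improve `bulk_export` can be weighed against evidence that it accounts for a small fraction of actual use.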

The third challenge is rapid change. Many product managers come from a business role. For example, an HR application product manager may previously have been a manager in HR. Others come from customer-facing technical roles like support, QA or sales engineering.

Unfortunately, in some industries previous industry experience is quickly outdated. While product management’s customer-focused perspective is vital, being removed from the industry can quickly atrophy understanding of customer processes and priorities. Exposure to their own software product(s) and current organization threatens to limit their thinking to what is probable, not what is ideal.

This is particularly important for product managers that come from customer-facing technical roles. They usually come to product management with specific product improvements in mind but, once these are in the product, may be at a loss.

Instead, a product manager needs to build ways to learn about and predict industry changes – developing a new feature often takes months and the feature must be competitive when it is released, not just today. These sources could include key customers, industry contacts, thought leaders, peers at competitors or the industry press. Building personal relationships, such as at conferences and industry meet-ups, is crucial.

Moreover, many information sources available to product management, like metrics or competitor analysis, are available to others in the software development team. It is these alternate information sources and relationships that will differentiate a good product manager.

Product management can no longer hoard this information. For example, architects need accurate customer usage predictions for scalability planning and infrastructure provisioning. Management needs to allocate staff. Operations needs to ensure they are looking for likely threats or issues. By the time these are included in a release’s requirements or a sale occurs, it may be too late. Hoarding assumes product managers are the only ones making strategic decisions. Product managers are often poor conduits for technical requirements, too.

Meanwhile, product managers have less available time. They are dragged into sales opportunities to demonstrate the product or show commitment, support calls to placate unhappy customers and marketing discussions for feature commitments. Agile software development methodologies, like scrum, involve the product manager more frequently. This creates a “Red Queen” effect, where product managers spend much of their energy merely keeping pace with the industry, their competition and their own products.

Product management has always been a challenging role – often all responsibility and no authority. While many technical people incorrectly equate product knowledge with industry knowledge, prior experience and a customer perspective are no longer sufficient to be a good product manager. Successful product managers will adapt to the new business models (e.g. cloud) and leverage the new tools (e.g. analytics and metrics) to be more effective. In the future, those that rely on outdated experience and intuition are likely to fail while those that learn and adapt quickly will succeed.

Indeed, the end goal of product management is to impart customer perspective and industry knowledge. There will always be a place for a coordinating customer voice but it will lead by teaching, not a requirements document. Those involved in development should not need to consult product managers for every new feature or for a customer perspective – product management should have already taught them to think like a customer.

Treating Enterprise Software like Game Design

In 2005, Robin Hunicke, Marc LeBlanc and Robert Zubek wrote an academic paper titled “MDA: A Formal Approach to Game Design and Game Research”. It was and is an influential attempt at quantifying game design and theory.

The “MDA” acronym stands for “Mechanics, Dynamics and Aesthetics”. Mechanics refers to the algorithms and data structures that drive the game, such as how running characters animate or the arc of a character’s jump. Dynamics refers to the run-time interaction between the mechanics and the player, such as pressing a button to jump or showing the character’s health as a bar at the top left of the screen. Aesthetics refers to how the player enjoys the game and what the player gets out of it.

Aesthetics is often the hardest to describe to non-gamers. Some games offer multiplayer, where players enjoy the social and competitive aspects, like an online game of “Call of Duty” or “Doom”. Other games offer an easy way to pass the time, like “Angry Birds” or “Candy Crush”. Others provide intense challenge, like chess. Most games focus on a few core aesthetics, and this is reflected in the different audiences for each game.

As the paper points out, game designers and developers approach games from the mechanics side then dynamics, which hopefully impart the desired aesthetics. Game players, however, experience the aesthetics through the dynamics. Outside of statistic-heavy role-playing games and sports simulations, players rarely encounter the mechanics. Game designers should always keep aesthetics in mind, if possible.

Recognizing different layers and viewpoints gives game designers a nomenclature for understanding games’ inner workings and highlighting shortcomings. For example, a game aimed at a social aesthetic needs some form of multiplayer or social network integration. A game aimed at competition needs a visible score or ranking and consistent, well communicated rules.

How does this relate to enterprise software? The MDA framework layers have equivalents. Mechanics refers to the code and database queries software developers create along with business processes. Dynamics is unchanged, referring to user experience and interaction with the software. Aesthetics refers to the business value.

Also like game design, enterprise software customers and users approach the benefits in the opposite direction to software developers. Like game designers, software developers tend to start with the mechanics and work up to the dynamics. Management aims for the aesthetics and, for those that use the software directly, the dynamics. While some software developers may enjoy the technical challenges of enterprise software, they must not lose sight of the business value.

As with any classification or taxonomy, the MDA framework provides a way of dissecting and comparing different applications. For example, two applications can aim for the same aesthetic (business benefit) but use different dynamics (user experiences). One might be a touch-heavy mobile application. One might be a web site storing its data in the cloud.

The MDA framework can point out where a business need (aesthetic) is not supported through user experience (dynamics) or a user experience does not relate to any of the defined business needs. Software developers and architects could also create a reusable mapping of dynamics to aesthetics or mechanics to aesthetics, like linking tactics to quality attributes.
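Such a mapping need not be complicated. A minimal sketch in Python, with hypothetical business goals and user experiences, shows how declared business needs without supporting user experience could be flagged:

```python
# Hypothetical mapping of business goals (aesthetics) to the user
# experiences (dynamics) intended to support them.
aesthetics_to_dynamics = {
    "faster order approval": ["one-click approve", "mobile notifications"],
    "fewer data entry errors": ["inline validation"],
    "audit compliance": [],  # declared goal with no supporting UX yet
}

# Flag business needs not yet supported by any user experience.
unsupported = [
    goal for goal, dynamics in aesthetics_to_dynamics.items() if not dynamics
]
print(unsupported)  # ['audit compliance']
```

The inverse check, a user experience that maps to no declared business need, would work the same way over a dynamics-to-aesthetics mapping.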

Software developers have traditionally split systems into different layers or components. The aim was to improve maintainability by localizing the effects of changes. However, the MDA framework reminds us that changes in one layer can and do affect other layers. For example, a database query change (mechanics) may affect the results shown in the UI (dynamics) and the business value (aesthetics). Conversely, new or different aesthetics may require changes to both dynamics and mechanics.

The MDA framework also reminds us of requirement compartmentalization. For example, problems occur when management or business users specify dynamics (user experience) instead of aesthetics (business requirements). Management and business users should have opinions and input but user experience designers are experts.

With the increasing popularity of IT consumerization and gamification, game design has already encroached on enterprise software. The MDA framework goes deeper by identifying things important to the target audience (whether they be players or management) and a structured way of providing them. The fact that a closely related field has also produced something similar to existing software architecture and design best practices reinforces them.

Indeed, despite the fact that games are also created under time and resource constraints, enterprise software has a poor record of user experience design. There is probably a lot more game designers can teach software developers about improving enterprise software, considering games succeed or fail purely on their ability to satisfy users.

Information Security vs Software Developers: Bridging the Gap

Builder versus Defender

One of the biggest challenges in information security is application security. For example, Microsoft’s Security Intelligence Report estimates that 80% of software security vulnerabilities are in applications and not operating systems or browsers.

Software security has improved significantly over the years. For example, groups like OWASP promote awareness and provide concrete solutions for common issues. Software developer security certifications like the CSSLP have emerged. SANS has an increasing breadth and depth of software security courses.

Nevertheless, libraries and best practice rarely protect a whole application. There may be application-specific vulnerabilities (like poorly implemented business logic or access control) or something libraries and frameworks commonly omit (like denial of service prevention). The issues might be even bigger, like not considering software security at all.

Information security professionals often fill this gap. After all, securing the organization’s IT assets is their role. Information security professionals have a security-first or defender mindset. They are usually the first line of defense against threats and the more they know about the applications they protect, the easier that defense becomes.

However, developers are creators and builders and that different mindset can cause friction. This was apparent at a recent static analysis tool training event. We were given the OWASP WebGoat app (a sample Java web site with dozens of security vulnerabilities), a static analysis tool to find vulnerabilities and instructions to start fixing them.

Two different approaches emerged to solve the first vulnerability found: an HTML injection. The first group searched the web for HTML injection fixes. They read recommendations from OWASP and other well-regarded sources. Most found a Java HTML escaping library, used it in the application then modified the static analysis rules to accept the escaping library as safe.

The second group reviewed the code to see how the application created HTML elsewhere. A few lines above the first instance of HTML injection was a call to an escaping function already in the code. The static analysis tool did not flag this as vulnerable. This group then reused that function throughout the code to remove the vulnerabilities.

Which group’s solution is better? The first approach is more technically correct – escaping strings is actually quite complex. For example, although not required by WebGoat, the escaping method included in the application did not handle HTML attributes correctly.
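The attribute problem is easy to demonstrate. A minimal sketch in Python (the training used Java; the payload and function names here are invented for illustration):

```python
import html

# A naive escaper of the kind described above: safe for element
# content, but it ignores quotes.
def escape_content(value):
    return (value.replace("&", "&amp;")
                 .replace("<", "&lt;")
                 .replace(">", "&gt;"))

# Attribute values must also have quotes escaped, or an attacker can
# break out of the attribute and inject event handlers.
def escape_attribute(value):
    return html.escape(value, quote=True)  # escapes & < > " and '

payload = '" onmouseover="alert(1)'
unsafe = '<img alt="{}">'.format(escape_content(payload))
safe = '<img alt="{}">'.format(escape_attribute(payload))
# unsafe now carries a live onmouseover handler; safe does not.
```

The naive version looks correct in element content, which is exactly why the gap goes unnoticed until someone injects into an attribute.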

However, the second approach was much quicker to implement: search, replace, verify and move on. Most of the second group had fixed several vulnerabilities before the first group had fixed one. While not technically as correct as the first, is the second group’s approach good enough?

Perceptive readers would have guessed the first group were the information security professionals and the second were software developers. Information security people want to reduce the frequency and severity of security issues. Software developers quickly understand large bodies of code, find solutions and move on. The training exercise highlighted the defender versus builder mindsets.

The two mindsets are slowly reconciling. For example, OWASP, traditionally very defender oriented, has released its proactive top 10, using terminology familiar to software developers, not just information security professionals. The IT architecture community is also starting to tackle software security issues. For example, security is one of the four groups of the International Association of Software Architects’ quality attributes.

However, many information security professionals look at software like WebGoat as a typical application, full of easily rectified security issues caused by ignorance. Most developers I have worked with write relatively secure code but security is only a small part of writing applications.

Developers need frameworks and libraries where common security vulnerabilities are not possible. For example, escaping libraries are great but if you are constructing HTML or SQL by string concatenation and risking injection attacks, you are doing it wrong in the first place! Use parameterized queries for SQL and data binding for HTML, common in both server- and client-side frameworks.
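The difference is easy to show with Python’s built-in sqlite3 module (the table and attacker input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Hypothetical attacker-supplied input.
attacker = "x' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# returning every row even though the name never matches.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker + "'").fetchall()

# Safe: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker,)).fetchall()
```

With the parameterized version, the injection attempt simply matches no rows; there is nothing to escape because the input never becomes SQL.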

Meanwhile, addressing security at the requirements and design phases – where real security issues lie – comes in at numbers 9 and 10 in the proactive top 10. As software developers will tell you, the earlier issues are identified and fixed, the cheaper the fixes are. Unfortunately, software security is still too focused on point issues at the end of the development cycle.

In fairness to the OWASP proactive top 10, there are still many developers unfamiliar with secure coding practices. Parameterizing SQL queries (number 1), encoding data (number 2) and input validation (number 3) are relatively cheap and easy to implement. All three give a big payoff, too.

Addressing security design and requirements is also hard. The people involved usually lack the experience and ability to articulate security requirements. Meanwhile, information security professionals rarely have the skills or access to contribute to early phases of software development. This means software developers must also bear responsibility for software security.

Hopefully we can rise above the distractions of point issues and work together on the bigger issues soon. In a world where hackers (breakers) get the glory, remember that builders and defenders are the ones keeping the software we rely on working.

Unit Testing: The 20/70/0 Rule

20/70/0 Rule

Automated unit testing, one of the most important foundations of software quality, is still a struggle for many software development teams. Justifying the extra upfront time to business is difficult, particularly when the team is under deadline or resource pressure. Many teams give up when confronted by huge amounts of untested, untestable legacy code. However, avoiding or delaying unit testing hurts everyone.

Many misunderstand automated unit testing, making the case for it inconsistent or less convincing. For example, automated unit tests do not initially reduce the number or severity of code defects. Good developers should already test their code thoroughly by hand, stepping through it in a debugger where possible, and manually check error conditions and corner cases.

Many concerns are also unfounded. For example, automated unit tests do not replace QA (testers). QA check software developers’ work and test at the functional level. Their different perspective can help write better automated unit tests, too.

Many complain about brittle unit tests only to find brittle “unit tests” are usually functional or integration tests, such as calling web services on external systems or accessing a shared database. Since these are not segregated, the unpredictable actions of others cause tests to fail.
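Segregating the dependency fixes this. A minimal Python sketch (the function and rate source are invented for illustration):

```python
# The rate source is injected, so the unit test controls it completely
# instead of depending on a live service or shared database.
def price_in_usd(amount, rate_source):
    return round(amount * rate_source(), 2)

def test_price_in_usd():
    fixed_rate = lambda: 1.5  # test double standing in for a live feed
    assert price_in_usd(10, fixed_rate) == 15.0

test_price_in_usd()
```

Because the test supplies its own rate, no one else’s actions can make it fail; the external system only appears in separate integration tests.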

Indeed, the biggest barrier to automated unit testing is software design. If a method or function cannot be unit tested, the design is incorrect. For systems outside the software development team’s control, see Michael Feathers’ work on legacy code. Testable code tends to be better designed code, too.

The main benefit of automated unit tests is that they capture the expected behavior of a single unit of code, such as a method or function. These tests can be repeated quickly and regularly with little manual effort, identifying when code changes, refactoring or experiments break the expected behavior.

Software developers also forget important details as they move to other features or projects. Capturing expectations as automated unit tests retains this experience and knowledge.
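For example, a test capturing a function’s expected behavior, including a corner case, might look like this (the function is invented for illustration):

```python
import unittest

# The tests below capture normalize_username's expected behavior:
# trim whitespace, lowercase, and reject empty input. Any later
# change that breaks these expectations fails immediately.
def normalize_username(name):
    cleaned = name.strip().lower()
    if not cleaned:
        raise ValueError("empty username")
    return cleaned

class NormalizeUsernameTest(unittest.TestCase):
    def test_trims_and_lowercases(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_rejects_blank_input(self):
        # Corner case: whitespace-only input must fail loudly.
        with self.assertRaises(ValueError):
            normalize_username("   ")
```

The corner-case test is the knowledge that would otherwise leave with the original developer.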

Nothing mentioned above is novel. However, questions remain once agreement to add automated unit tests is reached. For example, how much unit testing do developers add? How much extra time is needed? How do you explain this to non-technical stakeholders? The 20/70/0 rule answers these questions:

First, spend 20% of development time writing automated unit tests. A day’s worth of testing each week is a good compromise. This is part of the development task, not extra effort. Otherwise non-technical stakeholders will demand skipping it when under pressure.

Second, aim for 70% code coverage. This excludes third-party or generated code, so make sure code coverage tools can exclude this. Interestingly, technical people tend to think this is high, especially if no automated unit tests exist. Less technical people ask why the remaining 30% cannot be covered.
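Most coverage tools support such exclusions directly. A sketch assuming Python’s coverage.py (JaCoCo and similar tools offer equivalent options; the paths are illustrative):

```ini
# .coveragerc -- keep generated and third-party code out of the 70% target
[run]
omit =
    */generated/*
    */third_party/*
    */migrations/*
```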

Third, ensure 0 failing tests. Running automated unit tests after an automated build is a critical part of continuous integration. Fix failing tests immediately.

The first rule, 20% development time on tests, tells project managers and stakeholders the extra time to add initially. It also allows project managers to compare the up front costs with time savings later (ideally greater than 20%).

The second rule, 70% code coverage, tells developers what the team expects, particularly when code reviews highlight missing or poor unit tests. In an agile process, automated unit tests are part of “done” for development tasks.

Code coverage is an imperfect metric and heavily debated. Ideally, the team should target functional coverage; Behavior Driven Development (BDD) is one option. However, for a team without unit tests or a superior metric, code coverage is unambiguous, automatable and easily explained to less technical people.

The third rule, 0 failing tests, reinforces that quality is critical, especially to less technical people once again.

Software developers often get caught up in technical debate, and unit tests and quality are no different. However, projects can rarely wait for perfect understanding. The 20/70/0 rule is unambiguous and understandable, even to less technical people. Attaining it or, more specifically, the quality goal it represents is still a challenge, but it is now a challenge measured in metrics instead of gut feel and hand waving.
