Random Acts of Architecture

Experiences and musings of Anthony Langsworth, a passionate software architect and technologist from Sydney, Australia.

Treating Enterprise Software like Game Design

In 2004, Robin Hunicke, Marc LeBlanc and Robert Zubek wrote an academic paper titled “MDA: A Formal Approach to Game Design and Game Research”. It was, and remains, an influential attempt at quantifying game design and theory.

The “MDA” acronym stands for “Mechanics, Dynamics and Aesthetics”. Mechanics refers to the algorithms and data structures that drive the game, such as how running characters animate or the arc of a character’s jump. Dynamics refers to the run-time interaction between the mechanics and the player, such as pressing a button to jump or showing the character’s health as a bar at the top left of the screen. Aesthetics refers to how the player enjoys the game and what the player gets out of it.

Aesthetics is often the hardest to describe to non-gamers. Some games offer multiplayer, where players enjoy the social and competitive aspects, like an online game of “Call of Duty” or “Doom”. Other games offer an easy way to pass the time, like “Angry Birds” or “Candy Crush”. Others provide intense challenge, like chess. Most games focus on a few core aesthetics and this is reflected in the different audiences for each game.

As the paper points out, game designers and developers approach games from the mechanics side, then the dynamics, which hopefully impart the desired aesthetics. Game players, however, experience the aesthetics through the dynamics. Outside of statistic-heavy role-playing games and sports simulations, players rarely encounter the mechanics. Game designers should therefore keep aesthetics in mind wherever possible.

Recognizing different layers and viewpoints gives game designers a nomenclature for understanding games’ inner workings and highlighting shortcomings. For example, a game aimed at a social aesthetic needs some form of multiplayer or social network integration. A game aimed at competition needs a visible score or ranking and consistent, well communicated rules.

How does this relate to enterprise software? The MDA framework layers have equivalents. Mechanics refers to the code and database queries software developers create along with business processes. Dynamics is unchanged, referring to user experience and interaction with the software. Aesthetics refers to the business value.

Also like game design, enterprise software customers and users approach the benefits the opposite way to software developers. Like game designers, software developers tend to start with the mechanics and work up to the dynamics. Management aims for the aesthetics and, for those that directly use the software, the dynamics. While some software developers may enjoy the technical challenges of enterprise software, they must not lose sight of the business value.

As with any classification or taxonomy, the MDA framework provides a way of dissecting and comparing different applications. For example, two applications can aim for the same aesthetic (business benefit) but use different dynamics (user experiences). One might be a touch-heavy mobile application. One might be a web site storing its data in the cloud.

The MDA framework can point out where a business need (aesthetic) is not supported through user experience (dynamics) or a user experience does not relate to any of the defined business needs. Software developers and architects could also create a reusable mapping of dynamics to aesthetics or mechanics to aesthetics, like linking tactics to quality attributes.
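
To make this concrete, such a mapping could start as a simple lookup table. Below is a minimal, hypothetical sketch in Java; the tactic and quality attribute names are illustrative only, not drawn from any particular catalog.

    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: a reusable mapping from architectural tactics
    // (mechanics) to the quality attributes (aesthetics) they support.
    public final class TacticCatalog {
        static final Map<String, List<String>> TACTIC_TO_QUALITIES = Map.of(
                "caching", List.of("performance", "scalability"),
                "redundancy", List.of("availability"),
                "input validation", List.of("security"),
                "audit logging", List.of("security", "compliance"));

        public static void main(String[] args) {
            // Look up which business-visible qualities a tactic serves.
            System.out.println(TACTIC_TO_QUALITIES.get("caching"));
        }
    }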

Software developers have traditionally split systems into different layers or components. The aim was to improve maintainability by localizing the effects of changes. However, the MDA framework reminds us that changes in one layer can and do affect other layers. For example, a database query change (mechanics) may affect the results shown in the UI (dynamics) and the business value (aesthetics). Conversely, new or different aesthetics may require changes to both dynamics and mechanics.

The MDA framework also reminds us of requirement compartmentalization. For example, problems occur when management or business users specify dynamics (user experience) instead of aesthetics (business requirements). Management and business users should have opinions and input, but user experience designers are the experts.

With the increasing popularity of IT consumerization and gamification, game design has already encroached on enterprise software. The MDA framework goes deeper by identifying what is important to the target audience (whether gamers or management) and a structured way of providing it. The fact that a closely related field has produced something similar to existing software architecture and design best practices reinforces them.

Indeed, although games are also created under limited time and resource constraints, enterprise software has a poor record of user experience design. Considering games succeed or fail purely on their ability to satisfy users, there is probably a lot more game designers can teach software developers about improving enterprise software.

Information Security vs Software Developers: Bridging the Gap

Builder versus Defender

One of the biggest challenges in information security is application security. For example, Microsoft’s Security Intelligence Report estimates that 80% of software security vulnerabilities are in applications and not operating systems or browsers.

Software security has improved significantly over the years. For example, groups like OWASP promote awareness and provide concrete solutions for common issues. Software developer security certifications like the CSSLP have emerged. SANS have an increasing breadth and depth of software security courses.

Nevertheless, libraries and best practices rarely protect a whole application. There may be application-specific vulnerabilities (like poorly implemented business logic or access control) or something libraries and frameworks commonly omit (like denial of service prevention). The issues might be even bigger, like not considering software security at all.

Information security professionals often fill this gap. After all, securing the organization’s IT assets is their role. Information security professionals have a security-first or defender mindset. They are usually the first line of defense against threats and the more they know about the applications they protect, the easier that defense becomes.

However, developers are creators and builders and that different mindset can cause friction. This was apparent at a recent static analysis tool training event. We were given the OWASP WebGoat app (a sample Java web site with dozens of security vulnerabilities), a static analysis tool to find vulnerabilities and instructions to start fixing them.

Two different approaches emerged to solve the first vulnerability found: an HTML injection. The first group searched the web for HTML injection fixes. They read recommendations from OWASP and other well-regarded sources. Most found a Java HTML escaping library, used it in the application then modified the static analysis rules to accept the escaping library as safe.

The second group reviewed the code to see how the application created HTML elsewhere. A few lines above the first instance of HTML injection was a call to an escaping function already in the code. The static analysis tool did not flag this as vulnerable. This group then reused that function throughout the code to remove the vulnerabilities.

Which group’s solution is better? The first approach is more technically correct - escaping strings is actually quite complex. For example, although not required by WebGoat, the escaping method included in the application did not handle HTML attributes correctly.
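
To see why escaping is harder than it looks, consider the hypothetical sketch below (illustrative only, not WebGoat’s actual code). A naive escaper protects element content but, because it does not escape quotes, fails inside an attribute value:

    // Hypothetical sketch of a naive HTML escaper: safe for element
    // content, unsafe for attribute values because quotes pass through.
    public final class NaiveEscaper {
        static String escape(String input) {
            return input.replace("&", "&amp;") // first, to avoid double-escaping
                        .replace("<", "&lt;")
                        .replace(">", "&gt;");
        }

        public static void main(String[] args) {
            String payload = "\" onmouseover=\"alert(1)";
            // Harmless in element content...
            System.out.println("<p>" + escape(payload) + "</p>");
            // ...but exploitable in an attribute: the payload closes the
            // value and injects an event handler because " is not escaped.
            System.out.println("<input value=\"" + escape(payload) + "\">");
        }
    }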

However, the second approach was much quicker to implement: search, replace, verify and move on. Most of the second group had fixed several vulnerabilities before the first group had fixed one. While not as technically correct as the first, is the second group’s approach good enough?

Perceptive readers would have guessed the first group were the information security professionals and the second were software developers. Information security people want to reduce the frequency and severity of security issues. Software developers quickly understand large bodies of code, find solutions and move on. The training exercise highlighted the defender versus builder mindsets.

The two mindsets are slowly reconciling. For example, OWASP, traditionally very defender oriented, has released its proactive top 10, using terminology familiar to software developers, not just information security professionals. The IT architecture community is also starting to tackle software security issues. For example, security is one of the four groups of the International Association of Software Architects’ quality attributes.

However, many information security professionals look at software like WebGoat as a typical application, full of easily rectified security issues caused by ignorance. Most developers I have worked with write relatively secure code but security is only a small part of writing applications.

Developers need frameworks and libraries where common security vulnerabilities are not possible. For example, escaping libraries are great but if you are constructing HTML or SQL by string concatenation and risking injection attacks, you are doing it wrong in the first place! Use parameterized queries for SQL and data binding for HTML, common in both server- and client-side frameworks.
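
As a minimal sketch of the difference, using plain JDBC and a hypothetical users table:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public final class UserLookup {
        // Vulnerable: attacker-controlled input becomes part of the SQL text.
        // A name of "' OR '1'='1" returns every row.
        static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
            return conn.createStatement()
                       .executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // Safe: the driver sends the value separately from the SQL text,
        // so it can never be interpreted as SQL.
        static ResultSet findUser(Connection conn, String name) throws SQLException {
            PreparedStatement statement =
                    conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            statement.setString(1, name);
            return statement.executeQuery();
        }
    }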

Meanwhile, addressing security at the requirements and design phases – where real security issues lie – comes in at numbers 9 and 10 in the proactive top 10. As software developers will tell you, the earlier issues are identified and fixed, the cheaper the fixes are. Unfortunately, software security is still too focused on point issues at the end of the development cycle.

In fairness to the OWASP proactive top 10, there are still many developers unfamiliar with secure coding practices. Parameterizing SQL queries (number 1), encoding data (number 2) and input validation (number 3) are relatively cheap and easy to implement. All three give a big payoff, too.

Addressing security design and requirements is also hard. The people involved usually lack the experience and ability to articulate them. Meanwhile, information security professionals rarely have the skills or access to contribute to the early phases of software development. This means software developers must also bear responsibility for software security.

Hopefully we can rise above the distractions of point issues and work together on the bigger issues soon enough. In a world where hackers (breakers) get the glory, we need to remember that builders and defenders are the ones keeping the software we rely on working.

Unit Testing: The 20/70/0 Rule

20-70-0 Rule

Automated unit testing, one of the most important foundations of software quality, is still a struggle for many software development teams. Justifying the extra upfront time to business is difficult, particularly when the team is under deadline or resource pressure. Many teams give up when confronted by huge amounts of untested, untestable legacy code. However, avoiding or delaying unit testing hurts everyone.

Many misunderstand automated unit testing, making the message inconsistent or less convincing. For example, automated unit tests do not initially reduce the number or severity of code defects. Good developers should already manually test their code thoroughly, stepping through it in a debugger where possible. Good developers also manually check error conditions and corner cases.

Many concerns are also unfounded. For example, automated unit tests do not replace QA (testers). QA check software developers’ work and test at the functional level. Their different perspective can help write better automated unit tests, too.

Many complain about brittle unit tests only to find brittle “unit tests” are usually functional or integration tests, such as calling web services on external systems or accessing a shared database. Since these are not segregated, the unpredictable actions of others cause tests to fail.
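
One common remedy is to put the external system behind an interface and substitute a predictable test double. Below is a minimal sketch using JUnit 5; RateSource and InvoiceCalculator are hypothetical names, not from any particular codebase:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // In production this would call a web service; in tests it is faked.
    interface RateSource {
        double rateFor(String currency);
    }

    class InvoiceCalculator {
        private final RateSource rates;
        InvoiceCalculator(RateSource rates) { this.rates = rates; }
        double totalIn(String currency, double amountUsd) {
            return amountUsd * rates.rateFor(currency);
        }
    }

    class InvoiceCalculatorTest {
        @Test
        void convertsUsingSuppliedRate() {
            // The fake removes the network and the shared database, so the
            // test is fast, deterministic and a true unit test.
            RateSource fixedRate = currency -> 2.0;
            assertEquals(20.0, new InvoiceCalculator(fixedRate).totalIn("AUD", 10.0), 0.001);
        }
    }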

Indeed, the biggest barrier to automated unit testing is software design. If a method or function cannot be unit tested, the design is incorrect. For systems outside the software development team’s control, see Michael Feathers’ work on legacy code. Testable code tends to be better designed code, too.

The main benefit of automated unit tests is that they capture the expected behavior of a single unit of code, such as a method or function. These tests can be repeated quickly and regularly with little manual effort, identifying when code changes, refactoring or experiments break the expected behavior.

Software developers also forget important details as they move to other features or projects. Capturing expectations as automated unit tests retains this experience and knowledge.
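
For example, a test as small as the hypothetical JUnit 5 sketch below pins down corner-case behavior (whitespace and punctuation handling) that is easily forgotten and easily broken by later refactoring:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class SlugTest {
        // Hypothetical unit under test: turns a title into a URL slug.
        static String slugify(String title) {
            return title.trim()
                        .toLowerCase()
                        .replaceAll("[^a-z0-9]+", "-")
                        .replaceAll("^-|-$", "");
        }

        @Test
        void collapsesPunctuationAndWhitespace() {
            // Captures the expected behavior so a future refactoring
            // cannot silently change it.
            assertEquals("hello-world", slugify("  Hello, World!  "));
        }
    }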

Nothing mentioned above is novel. However, questions remain once agreement to add automated unit tests is reached. For example, how much unit testing do developers add? How much extra time is needed? How do you explain this to non-technical stakeholders? The 20/70/0 rule answers these questions:

First, spend 20% of development time writing automated unit tests. A day’s worth of testing each week is a good compromise. This is part of the development task, not extra effort. Otherwise non-technical stakeholders will demand skipping it when under pressure.

Second, aim for 70% code coverage. This excludes third-party or generated code, so make sure code coverage tools can exclude this. Interestingly, technical people tend to think this is high, especially if no automated unit tests exist. Less technical people ask why the remaining 30% cannot be covered.

Third, ensure 0 failing tests. Running automated unit tests after an automated build is a critical part of continuous integration. Fix failing tests immediately.

The first rule, 20% development time on tests, tells project managers and stakeholders how much extra time to allow initially. It also allows project managers to compare the up front costs with the time savings later (ideally greater than 20%).

The second rule, 70% code coverage, tells developers what the team expects, particularly when code reviews highlight missing or poor unit tests. In an agile process, automated unit tests are part of “done” for development tasks.

Code coverage is an imperfect metric and heavily debated. Ideally, the team should target functional coverage. Behavior Driven Development (BDD) is one option. However, for a team without unit tests or a superior metric, code coverage is unambiguous, automatable and easily explained to less technical people.

The third rule, 0 failing tests, reinforces that quality is critical, once again especially to less technical people.

Software developers often get caught up in technical debate. Unit tests and quality are no different. However, projects can rarely wait for perfect understanding. The 20/70/0 rule is unambiguous and understandable, even to less technical people. Attaining it or, more specifically, the quality goal it represents is still a challenge but attaining it is now about metrics instead of gut feel and hand waving.

Architect/Stakeholder Inversion

Architect/stakeholder inversion occurs when non-technical stakeholders tell software architects how a system should work, not what it should do. Without the “what”, software architects are left trying to guess or reverse engineer it. The resulting system may not solve the customer problem or may bloat with features attempting to do so.

Architect/stakeholder inversion is not a stakeholder wanting to move a system into the cloud to reduce costs. It is not wanting a mobile app to reach a different, younger market or offer a better user experience. It is not marketing pushing for a better analytics tool. They have business justifications.

Architect/stakeholder inversion is wanting two products integrated without saying what data to share or tasks to provide. It is creating a report engine without knowing the reports it will run. It is any framework created solely to handle nebulous requirements.

Architect/stakeholder inversion occurs for one of three reasons. First, non-technical stakeholders feel they need to give low-level, technical requirements. Usually a sign of inexperience or frustration, the stakeholder bypasses discussion by jumping to technical details.

Alternatively, software developers may be used to implementing what they are told. This is common in environments with many ancillary roles (user experience, visual design, business analysis, copywriting, solution architecture, application architecture, agile coach, project manager, scrum master, team leader, etc.) and stakeholders may take advantage of this.

Second, stakeholders often make technical assumptions and present those assumptions as solutions. They may not even realize they made assumptions.

Technical people may miss the business impacts of technical choices. However, non-technical stakeholders may miss technical impacts of business choices, too. For example, while the ongoing costs of moving to an “Infrastructure as a Service” (IaaS) or “Platform as a Service” (PaaS) provider may be lower, non-technical stakeholders may not consider the transition cost and impacts on compliance, security, jurisdiction, privacy, bandwidth and latency. The stakeholder might not have considered other benefits, such as elasticity (rapid scale up or scale down), built-in monitoring and management tools and cheap creation of test and staging environments, either.

Stakeholders with technical backgrounds may exacerbate the problem. While the technical solution requested may be good, the business context is still needed. Software architects are part of the checks and balances for the business requirements and stakeholder technical knowledge does not negate this.

Third, stakeholders may not yet know the business goals of the system. This may be driven by schedule (“We need to start coding now so that we will hit the deadline”), a misunderstanding of agile processes (“We will work it out as we go”) or a lack of preparation.

Architect/stakeholder inversion is usually solved by highlighting assumptions or providing alternate solutions. Forming these into questions (“Have we considered doing X instead of Y?”) and prototypes/spikes are effective. However, if software architects are kept on a “need to know” basis, stakeholders set direction solely by intuition instead of evidence, or stakeholders take offence at challenges or questions, there may be wider organizational problems.

Architects and stakeholders should cooperate and respectfully challenge each other, providing greater understanding to both sides. Software architects can make better informed design decisions and glean insight into wider and future direction. The stakeholder can get a better understanding of and confidence in the solution.

That said, there are no sides here. Both the stakeholders and architects are working toward the same goal. If the organization has appointed stakeholders and architects, it realizes the value of each. Architect/stakeholder inversion contradicts this and produces a lower quality product.

Update: This post is featured in a discussion in the International Association of Software Architects (IASA) group on LinkedIn.

Big Design Up Front versus Emergent Design

(This post is in response to Hayim Makabee’s posts on emergent design and adaptable design, along with some of the follow-up discussions, such as the thread in the “97 Things Every Software Architect Should Know” LinkedIn group and Gene Hughson’s post on emergence vs evolution.)

One argument software architects regularly encounter is that time spent designing systems is wasted. Many say that “big design up front” is not the agile way and “emergent design” is more effective. This cuts straight to the value proposition of an architect. If up front design has no place in the Agile world, are architects redundant?

To most people, “big design up front” (BDUF), sometimes called “big up front design” (BUFD), means a lengthy, detailed design created at the start of a project. It works on three assumptions. First, one can create requirements for a project. Second, one can create a design to meet those requirements. Third, the design’s suitability for meeting the requirements can be evaluated without implementing it. In other words, there can be good designs and bad designs.

Meanwhile, emergent design means minimal or no design up front (NDUF). It works on the assumption that both the requirements and design must be discovered, so the team starts developing the product and iterates as it learns more about the problem and the solution. The process finishes at a predetermined time or when “good enough” requirements and design “emerge”.

By inference, emergent design assumes designs are often highly problem/solution specific, so adapting existing designs may create more work than it saves. Up front design can also shift the focus from providing value to following the design.

Emergent design is quite popular among Agile and Lean practitioners. They argue emergent design reduces some waste (unnecessary work) by not creating lengthy documents that people may never read. Of the design documents that are read, few are updated as changes are made. Many developers are so cynical they refuse to read documentation and jump straight to the code to answer questions.

Big design up front may encourage over-design. Unnecessary features may be added (violating the YAGNI principle) or the system may be unnecessarily complex (violating the KISS principle). Emergent design, particularly when coupled with Test Driven Development (TDD), can produce the minimum code required to meet a requirement and no more.

Big design up front may create an illusion the team knows more than they do. This may prompt decisions when the team knows the least about the problem, meaning big design up front can become big commitment up front. Meanwhile, a team that delays making decisions until necessary may discover different features are needed.

Big design up front’s assumptions are also not always true. Every project has a goal but it may not be clear how to get there. Most startups, for example, do not have quantifiable requirements; coding is more experimenting than implementing. New technologies may supersede old techniques or require new ones, meaning designs are either too difficult to create or cannot be evaluated without implementation.

However, proponents of big design up front point out that designing is often more useful than design documents. The design exercise validates and challenges requirements, explores edge cases and discovers mistakes. Without it, developers often dive straight into low level details and even a short time thinking about the problem can expose assumptions or alternate solutions they would otherwise miss.

Emergent design assumes change is cheap, and a lot of effort and attention has gone into making it so. Continuous integration and continuous delivery aim to make releasing easy. Test Driven Development (TDD) and automated testing aim to find regressions quickly. Agile methodologies like Scrum provide visibility and guidance on how to manage change.

However, not every change is cheap. Hardware can be difficult or impossible to change once manufactured. Network infrastructure changes need to be scheduled to minimize impact to others. Engaging external vendors may require lengthy contract negotiations. Legacy code may lack sufficient automated unit test coverage. Aspects like security, compliance and scalability are difficult to retrofit.

Similarly, software development must be accountable to the organization. Required skills and teams must be hired or contracted. Budgets must be determined. Progress is usually tracked against milestones and must be approved based on return on investment (ROI) estimations. Risks must be identified and mitigated. Early designs (as part of architectures) can help drive all of this.

Designs present abstracted views of the system, emphasizing important decisions and removing noise. This means designs can also be reviewed by others before the more expensive and time consuming implementation to find weaknesses or suggest improvements. Designs of notable projects can teach others, either by following or avoiding them.

The problem with comparing big design up front against emergent design is that the comparison usually devolves into straw man arguments. Neither is an absolute. Good big design up front recognizes some design and details are filled in during development. Good emergent design must start with some idea of how the system will work.

Both big design up front and emergent design can be done badly. Poor big design up front can miss important factors, provide a poor solution or communicate good ideas badly. Poor emergent design can waste time rewriting code, introduce regressions and impede governance. Both can create a big ball of mud. However, big design up front need not be change averse. Emergent design need not be chaotic and unpredictable.

Big design up front and emergent design are process agnostic. Big design up front originated in waterfall processes. As mentioned above, emergent design is common with agile development methodologies. However, emergent design can be used within a waterfall design phase (prototyping) or for defined components during development (spikes). A team using agile development methodologies may do some design inside, outside or between iterations.

Both approaches can be combined. For example, adaptable design is a technique where the parts of the system that anticipate change, such as unknown or changing requirements, are designed to accommodate it.

Looking at the comparison from a different angle, what does “design” mean? Is it thinking about how to approach the system or is it documenting and communicating it? A small system may be something a developer can completely understand and describe in a few sentences. It has an implicit, undocumented design and can be iterated over time using emergent design. However, a large or complex system using a mix of legacy and new components whose development is split across different teams may need a different approach. In other words, the benefit of up front design increases as the system complexity increases.

Different approaches require different skills. Big design up front requires thinking about a system in abstract terms. It is a skill that not every developer has, requiring breadth rather than depth, and is often why democratizing design fails. By contrast, emergent design embraces a detail and code focus, particularly with the emphasis on unit testing and small, incremental changes. This is one reason emergent design is more attractive to software developers.

Both big design up front and emergent design are tools a software development team can use. Rather than being excluded, software architects are in a unique position. They can help determine which approach is best for a situation. The real challenge for a software architect is knowing the right amount of design for a system and when to do it.

What makes a “good” software developer?

Spend any time with software developers and the question of whether a software developer is “good” invariably arises. Want to hire a new software developer? Want to promote someone to a lead developer position? Need someone to refactor or redevelop a piece of code? Need someone to work with a different team? You always want a “good” software developer. The challenge is “good” has many different meanings and few can be more specific. What does “good software developer” actually mean?

The first skill of a “good” software developer is delivery: writing code quickly with as few bugs as possible. Some software developers are faster than others and some create slightly more bugs than others, but a “good” developer balances the two.

The second skill is product knowledge: knowing where to fix or improve something, the rationale for the current design and impacts of any change – often the hardest parts of software development. This includes the technical ecosystem (what products it integrates or interacts with and how).

This list excludes skills like communication and building a network of contacts. They are important but are not software development specific.

This list also excludes business knowledge because most software developer “business knowledge” is just product knowledge. Software developers with business knowledge exist and some product owners lack the business knowledge they should have. However, few software developers can step into a non-technical role and perform as well or better than someone hired specifically for the role.

Both delivery and product knowledge provide strong business value. However, does that mean the best software developers are those that have worked on the product the longest? Is “good” synonymous with “experience”?

Similarly, with little design up front, regular pivots, increased visibility and pressure to deliver, Agile practices tend to encourage finding the minimal effort to solve a given problem. The constant focus on delivery may dissuade developers from stepping out of their immediate task and learning how to improve themselves, their process and their team.

Consequently, there are other skills to consider. For example, optimizers examine a process or product, identify its deficiencies and make concrete improvements. Epitomized by Sam Saffron’s recent post, these developers excel at improving nonfunctional aspects (e.g. performance, scalability, security, usability) or automation (e.g. build process). These are the developers constantly pushing for new tools and libraries because they are better, not just because they are shiny and new.

Software developers may also focus on good design. For new products or features, designers focus on how the software works as much as what it does and usually head for the whiteboard first and not an IDE. For established products, designers seek technical debt, fix it and move to the next problem.

Software developers may also be quality-focused. Quality-focused developers consider code incomplete until it is both testable and has automated tests. They often espouse the benefits of Test Driven Development (TDD) and code coverage (an imperfect but useful metric) not necessarily for themselves but as easy ways to encourage (or require) others to focus on quality.

This list is hardly exhaustive, either. Some software developers may focus on security, others on user interface design and so on.

Optimizers, designers and quality-focused software developers are disruptive and may detract from delivery by focusing too much on improvement. For example, they may spend too much time and effort on switching tools, redesign and over testing. The disruption may extend outside the development team, too. For example, new processes may impact other areas of the business and purchasing new tools may put pressure on budgets.

However, these disruptive developers can drive positive change and be a source of innovation. Without them, software developers would still be writing code in assembler using waterfall processes on computers that filled buildings.

The first consideration is focusing disruptive software developers on important problems, which is often difficult for managers not used to software developers driving initiatives from the ground up. Changes must be driven to completion and implementation, not just the initial research and proof of concept. Managers also need to make sure all team members can follow and benefit from the improvements, not just the team members driving changes.

The second consideration is managing expectations. A disruptive developer brought into the team may expect or be expected to drive change. Setting boundaries and goals beforehand can avoid problems before they occur.

However, having the skills mentioned above is not necessarily the best answer to “What makes a good software developer?”. Yes, “good” software developers deliver and amass product knowledge. Some software developers are disruptive and those that drive improvements while delivering may be “great” software developers. However, these are goals. How does a software developer become “good” or “great”?

Great software developers strive for continual improvement. A software developer who truly wants to improve, reads widely, experiments and keeps what works will eventually become “good” or “great”, even one who starts out “poor”.

Continual improvement is often mentioned when discussing Agile development processes but it also applies down to the individual level, such as scripts, key bindings, macros or IDE add-ins. It also applies to the organization level, although this is harder for individual software developers to influence.

To look at it another way, great software developers focus on goals and motivations, not just rituals. A good developer may follow Test Driven Development or the SOLID principles but a great developer will know the benefits they bring and when to bend the rules. This is also why great developers learn new languages quickly and apply concepts from one language to others.

That said, choosing a great developer is not always the best choice for every role or task. Great developers expect more from an organization. This is not just a higher salary and benefits. They expect to make more of an impact but not every organization can provide or allow that.

The key is to match the software developer to the task, not just find a “good” or even “great” software developer. This requires understanding what the team needs and how developers improve or lose their edge over time, which requires “good” management. However, that is a whole other blog post.

The Software Development Employment Jungle

When I first looked for software development jobs after leaving university, most employers were looking for candidates with the right attitude, right aptitude and good marks. However, as the years passed and my positions were increasingly senior, I found employers wanted the right person for the role, not just someone with the right skills. Having recently left Symantec (where I spent almost 13 years), waded through the employment jungle (as one of my peers called it) and started a new job, I wanted to capture my experiences and observations. Hopefully this applies beyond just software architect and software development roles in Sydney, Australia.

Recruiters


Many have negative opinions of recruiters and employment agencies. They try to provide candidates from the limited pool they can attract for clients whose requirements they sometimes do not fully understand. However, such an opinion is unconstructive and unfair. Many recruiters work long hours to fill roles whose remuneration sometimes dwarfs the recruiter’s in a rapidly changing technology and business landscape. Instead, consider the following:

Recruiters are paid by employers so candidates are the product, not the customer. Recruiters are very interested in candidates while applying and going through interviews and work hard to sell positions to candidates and candidates to employers. Rejected candidates are rarely worth recruiters’ time (“no news” is generally “bad news”) while successful candidates are pursued, particularly for recurring contract roles.

Recruiters find candidates for positions, not positions for candidates. Candidates are responsible for finding their next job, not recruiters. Candidates need to market themselves (such as joining job search sites, talking to hiring managers, attending conferences and user groups), generate as many good leads as possible (such as applying for jobs and distributing their resume), follow up the leads, understand the employer’s needs (such as emphasizing particular skills on a customized resume) then sell themselves in interviews. This process changes over time, as the candidate learns and the market changes, and recruiters and employment agencies are one part of the process.

Preparation


Start working toward the next job or promotion now, even for those happy with their current position. Why now? Because you cannot control when new job opportunities will arise or when your current position may end or change for the worse. Starting now means you will be prepared.

For example, identify activities that look good on a resume or will help for that next promotion, even if they are not immediately appealing. Plan conversations in advance, such as introducing yourself to peers at networking events, thinking of an insightful question for the boss’s boss and answering the inevitable “Why do you think you would be the right person for X?”

Start networking with people that could help you find a job in the future, such as peers and managers within the industry or related industries. Many are put off by the time commitments others recommend but simply reaching out, getting contact details and occasionally (once every few months) commenting on a post or tweet is usually more than sufficient. Aim for mutually beneficial relationships but realize people’s needs change over time. Beyond getting the next role, networking also gives context across the industry, identifying the skill level of peers, the challenges they solve and gaps in your own skills.

Create an online presence. It can be as simple as a LinkedIn profile that lists your employment history, important projects and key skills (and is a great way to keep your resume up-to-date). It can be as complex as a lengthy blog, strong social networking presence, a high Stack Overflow reputation and contributions to multiple open source projects. Start small, work up and do not be afraid to experiment.

Some fear others will infer the worst from making achievements visible – Dunning-Kruger is rife in software development – but being able to answer an interview question thoroughly because it was the topic of your recent blog post, or pointing to your project on GitHub when asked about a library or framework, is invaluable. Unless you are aiming for a thought leadership position, most people simply do not care enough to discover others’ mistakes and having a demonstrable history is better than not, all other things being equal.

Some are deterred by the implicit pressure to maintain a blog or social networking activity. While thought leadership may be best maintained by a steady stream of content, a different strategy is to post fewer but better articles. A small number of insightful, relevant articles can be more useful than a regular stream of retweets, for example, because the articles show original work. This reduces the time commitment and interviewers will see the better articles when they browse your content, too.

Be open minded. What you think you need may not be what you actually need and the people you think you should be talking to may not be who you should actually be talking to. Talking to those outside software development can be insightful and there are always things to learn and different perspectives to respect. Do not neglect soft skills, either.

Focus the resume. Customize it for the role to emphasize relevant skills. Write a cover letter but do not expect anyone to read it.

Interviews


Employers and interviewers come with their own preconceptions. They have their own backgrounds and experiences, their immediate need for a new hire and their vision of a suitable candidate for the role. This leads many interviewers to grill the candidate about the candidate’s fit for the interviewer’s idea of the role, to see if the candidate “fits in the box”.

The “box” approach works well for software development roles where the programming languages and frameworks are known but interviewer skills vary. Bad technical interviews fixate on minutiae under the misplaced assumption good software developers will have touched those areas. Good technical interviews involve writing code, explaining technical concepts or defending decisions.

However, candidates with broad or unusual experiences may not easily fit the box, encouraging interviewers to label candidates (“Are you an X or Y? You cannot be both!”). The box approach also succeeds less often as roles become more senior, particularly for leadership roles like management and software architect positions where strategic and broader thinking is required, because the candidates it prefers are less likely to have new ideas and different perspectives.

Understanding the interview strategy can help the candidate gauge the sincerity of the role. For example, is the role advertised as a technical leadership role, like a software architect position, while the interview follows the “box” style? This hints at an impressive title being used to attract people for an otherwise straight coding job. Alternatively, a software development role containing lots of open questions may hint at a more senior role or a higher expected standard of candidate.

Moreover, what many interviewers forget is interviews are as much about the interviewer as the candidate. Even without asking questions, the candidate learns about the types of problems the interviewer thinks are important, their priorities and the interviewer’s communication skills. Is the interviewer asking questions about challenges you are interested in or have experience with? Is it something you enjoy talking about? Are they happy with high level answers or do they want detail? How did they react to your last answer? Did they ask for clarification or move to the next question on the list?

The usual recommendations about interview preparation apply. For software development and particularly software architect roles, understand your last few projects, the important design decisions and why they were made. Have scenarios prepared for behavioral questions on leadership, dealing with difficult stakeholders or working under pressure. Focus on the interviewer’s business first and talk about the candidate’s benefits at the end. Good questions for the interviewer include how to be successful in the role and what challenges they expect, but do not be afraid to ask questions throughout the interview if they are relevant at the time.

Final Thoughts

Much is written about things candidates can do to improve their chances of finding a job, like resume writing or interview practice. However, a key part is patience. All the hard work will help you if and only if the job you want is available – the Australian IT employment market hibernates over summer, for example. Some people need a new job due to financial or other pressures and, by all means, adapt to the market’s needs, but getting a new role does not have to be a question of choosing what to sacrifice. Be good at what you do, have faith in yourself and the jungle will not seem so bad.

Intellectual Property Ownership

Whenever anyone involved in intellectual property starts a new job, particularly in software development, the employment contract usually includes a lengthy intellectual property agreement where the new hire assigns ownership of all intellectual property created over to the employer. Many new employees balk at this, concerned that the employer will claim any side projects they are working on. With recent court battles over software patents worth billions of dollars, intellectual property ownership is a complex area worth exploring.

Employment contracts are usually “work for hire” contracts. Although the legal requirements and obligations vary from country to country, it usually means the employer owns any works created instead of the employee, in return for payment or salary. Few software developers dispute this. However, unlike authors and artists usually contracted for a specific work, developers are often contracted for all intellectual property created during their employment, irrespective of whether it is performed during working hours on work equipment.

Software developers are increasingly working on side projects like open source software or apps for mobile or tablet app stores. Many developers code in their spare time, whether it be tweaking the website for a friend’s business or experimenting with new libraries or languages. It is also increasingly common for students to bolster their resume this way or for software developers to teach themselves new languages or libraries.

This differs from traditional “working under the table” or “moonlighting” in that much of the work is not paid. The software developer may want to capitalize on the work in the future but usually just does not want others intruding or demanding ownership of the work, akin to a lawyer doing “pro bono” work. Software developers are also producing assets other than code. For example, software patents produced during software development can be more valuable than the code.

The barrier of entry for software development has never been lower. Thirty years ago, software development was difficult, usually performed on expensive, centralized computers and proprietary software. This is a far cry from today where one can create complex websites using free tools running on free operating systems hosted on commodity priced servers. Compare that to chemists, physicists or engineers that may require thousands or millions of dollars of equipment and dedicated teams of support staff to perform their research and yet more to monetize it.

Indeed, the problem with software patents and intellectual property ownership in software development is that development is only part of the cost. There is the marketing and sales required to turn products into revenue, for example, and the IT infrastructure, HR and accounting structures required to support all of this.

Patents are similar. Beyond software development, legal expertise to file patents is required along with the time and resources needed to find and deal with infringements. Patents are also often cross licensed, either earning additional revenue or allowing access to other organizations’ patents. Patents may also become more valuable over time as products they are used in become more widespread.

The word “ownership” also carries many mis- and preconceptions. If an employed software developer (“inventor”) wants to own all or part of his or her inventions, what does this mean? Does the inventor want royalties, like an actor may receive for a movie? Does the inventor want the option to use it in their own work, possibly for a job at a competitor? What about an open source project the inventor contributes to in his or her spare time? Does the inventor want the option to stop others using it, like competitors or those the inventor disagrees with? Will the developer help fund the sales, marketing and legal infrastructures required?

Even if those in software development can make a case for increased “ownership” of their products, it is not in employers’ interest to allow this. The creative process cannot be constrained to occur within work hours or on work equipment – many have inspiration when asleep, exercising or in the shower – and increasingly flexible working arrangements further blur the distinction. Work for hire contracts are also well understood and widespread, making them lower risk.

Some would argue software development is like a painting selling and reselling for increasing amounts while the original painter sees none of the profit. However, a better analogy is performers selling music. Do they go through a record label where they get greater exposure and marketing but sacrifice income, or do they produce the songs themselves and sell them through iTunes, where they make a greater cut on a much reduced sales volume?

Many employers are also quite reasonable. If the idea is unrelated to current or likely projects and the employee is not going to make much money, pursuing it is not worth the expense. Organizations that file patents usually reward developers with bonuses for doing so. Other employers take the opposite position, so talking to the employer before producing anything important is prudent.

It will be interesting to see what the future holds. Younger generations, those that have grown up with social networking, are used to sharing their lives on social media and regularly blur the lines between professional and social. With software development tools more accessible than ever and collaborative source code repositories like GitHub gaining in popularity, will developers from younger generations look at coding the same way? While they may have different politics from their GNU and GPL espousing forefathers, will they see “social coding” or “social software development” as an obvious direction? If so, what compromises will be made?

Software Development: An Overtime Culture

Software development is rife with overtime. Driven by passion and perfectionism, many developers throw everything into it, reveling in a culture of coding in darkened rooms late at night. Coupled with customers pushing for more or larger deliverables with fewer staff and a legal exemption from overtime pay in many circumstances, developers often shoulder the burden. Unfortunately, consistently working late nights and weekends is not sustainable but many seem unable to break free of the rut.

To put this in perspective, overtime is not unique to software development or even IT – just ask accountants around the end of financial year or salespeople in danger of missing their quota. There is also nothing wrong with flexible working hours and arrangements. Nevertheless, a work/life balance is important particularly for those with families or other commitments outside work.

It is easy to blame overtime on poor management. While true, saying “manage better” does not solve the problem. Indeed, there are many causes of software development overtime, including: (1) poor estimation or underestimation, (2) too few or under-skilled developers, (3) perfectionism, (4) redoing work due to past mistakes or (5) changing requirements/poor communication with other parts of the business. Specifically:

Poor estimation or underestimation: Few cling to the Newtonian belief that estimation is merely substituting initial measurements into well-understood formulas to predict the delivery date. The issue is one of negotiation – experienced software developers often know how much features will really cost in time and resources but software developers are often outclassed by customers or management used to arguing their case and, once agreed, deadlines are traditionally immutable, like a salesperson’s quota.

Too few staff or under-skilled developers: No development team has “enough” developers and, with the current trend of outsourcing and offshoring, developers working on a project may be under-skilled or unfamiliar with the business problem. “You have to go to war with the army you have”, as they say, but the Dunning-Kruger effect and poor planning rarely take this (and Murphy’s Law) into account.

Perfectionism: Unfortunately, software developers can be their own worst enemies. Well architected, readable, tested software is the goal but “production quality code” is really a tautology - real developers ship. Include time for refactoring but time box it. Include time for design but realize it will change. Include time for testing but test important features first.

Correcting development mistakes: Unfortunately, mistakes happen but software developers must take responsibility for their own mistakes. To put this in context, at least development has the option of working late to fix their mistakes, unlike sales or management.

Changing requirements or poor communication: Many developers think of this as an “us versus them” situation - the “villainous” business “demands” changes and the developers are “helpless victims.” Instead, work on negotiation and trade-offs. After all, if the project is not delivered, everyone suffers, not just development. Make it a business problem or a business risk.

Note the steady addition of new requirements is a smell, possibly indicating the release cycle is too long. If possible, consider multiple smaller releases rather than a single large one.

There is no silver bullet to prevent overtime. Software development is a profession and there will always be work to do. However, it is possible to reduce it. Apart from focusing on soft skills, developers need to:

  1. Improve negotiation skills. For example, treat estimates as immutable. Trade features rather than agree to reduce estimates; otherwise the estimates lose credibility. Treat everything else (deadlines, delivery mechanisms, support, documentation, maintenance releases and so on) as negotiable.
  2. Improve estimation. Each development team faces its own challenges in this area but a few suggestions include:
    1. Improve prioritization. Many customers want everything but estimating the revenue or customers for each feature (or loss if not implemented) is usually the best prioritization strategy. Deprioritize features that have no customer impact (the lean software development definition of waste), too.
    2. Estimate with 10-20% fewer resources and less time. Customers often spot padded estimates (reducing their credibility and leading customers to press for reductions) but reducing the resources and time instead is usually seen as prudent.
    3. Present estimates using the least accurate unit of measure. Instead of saying “172.3 hours”, say “about 1 month”. Defer more detailed estimates until the problem is better understood later in the project.
  3. Track previous work estimates against actual time and resources. Use that data to support current estimates and improve future estimation and negotiation (see the sketch after this list).
  4. Focus on delivering. Many developers love learning new things and striving for the ultimate solution. There is nothing wrong with this but a critical project is usually not the place for it. Temper or time box this experimentation and perfectionism.
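
To make point 3 concrete, the hypothetical sketch below (task names and figures are illustrative; records require Java 16 or later) computes how far past estimates ran over and uses that ratio to calibrate new ones:

    import java.util.List;

    public final class EstimateCalibrator {
        record Task(double estimatedHours, double actualHours) {}

        // Ratio of actual to estimated effort across past tasks;
        // greater than 1.0 means the team tends to underestimate.
        static double calibrationFactor(List<Task> history) {
            double estimated = history.stream().mapToDouble(Task::estimatedHours).sum();
            double actual = history.stream().mapToDouble(Task::actualHours).sum();
            return actual / estimated;
        }

        public static void main(String[] args) {
            List<Task> history =
                    List.of(new Task(40, 55), new Task(16, 20), new Task(8, 9));
            // (55 + 20 + 9) / (40 + 16 + 8) = 1.31, so scale new raw
            // estimates up by roughly 30% when negotiating.
            System.out.printf("Multiply new estimates by %.2f%n", calibrationFactor(history));
        }
    }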

Agile development methodologies might also be useful but suggest them tentatively. Many have presented ill-defined “agile development” practices as a panacea, so some consider the term “agile” a less than credible buzzword. Agile development methodologies also require wider organizational change, like the appointment and recognition of a single product owner in Scrum, and the business may need more trust before allowing this.

The challenge with much of this is not sacrificing the status of development within the organization or being seen as an impediment. The business may expect overtime and changing this perception can be hard. Metrics that demonstrate how things get worse as overtime increases, like lines of code produced and bugs introduced, might help, particularly against those that espouse the “pressure makes diamonds” fallacy.

Reducing overtime requires developers to recognize their part in the problem. There is nothing wrong with developers striving to create the best solution and putting in the extra effort when they want to, but required and expected overtime is problematic. Ignoring extreme cases like those described by EA_Spouse, the best solution is usually a mutually constructive one, particularly with developers improving their negotiation skills.

Should Software Architects Write Code?

Much has been written and debated on whether software architects should write code. Many argue the more architects understand the language, tools and environment they are designing for, the more effective they are and this is best achieved by implementing some or all of the design. Non-coding architects, sometimes called “PowerPoint architects”, “astronaut architects” or “ivory tower architects”, may use archibabble and talkitecture to convince non-technical stakeholders of their expertise while delegating the unsolved, real problems to developers, so much so that it has become an organizational pattern (“Architect Also Implements”) and corresponding anti-pattern (“Architects Don’t Code”). Others argue that architects responsible for implementing their architectures lose focus on the bigger issues and longer term vision. Understanding does not necessarily require knowledge of the minutiae and, as systems scale up and diversify, implementing it requires too much time or spreads the architect too thin. Therefore, should software architects write code?

As with many difficult questions, the problem starts with the question itself. “Should a software architect write code?” can mean “Should a software architect always prototype or implement their own architectures?”, “Should a software architect write production code most of the time?” or “Should a software architect be able to write code?”. It could also mean “Is coding the best or only way to become a software architect?” or “Can non-coders be good architects?” but that is best left to another blog post.

It also depends on the definition of “software architect”. The Canadian architect (of buildings rather than IT) Witold Rybczynski wrote in his 1989 book “The Most Beautiful House in the World”:

“For centuries, the difference between master masons, journeymen builders, joiners, dilettantes, gifted amateurs, and architects has been ill defined. The great Renaissance buildings, for example, were designed by a variety of non-architects. Brunelleschi was trained as a goldsmith; Michelangelo, as a sculptor; Leonardo da Vinci, as a painter; and Alberti, as a lawyer; only Bramante, who was also a painter, had formally studied building. These men are termed architects because, among other things, they created architecture — a tautology that explains nothing.”

This is exactly the same issue for software architects. Without a clearly defined and segregated role, anyone designing software or IT related systems can rightly be called an architect, including many developers and technical leads. For the sake of argument, this post uses Simon Brown’s definition, where software architects are responsible for high level design, non-functional requirements and technical vision.

Should a software architect be able to write code? Yes. Architects should be able to read and write code because doing so:

  1. Verifies the code written by developers matches the design and identifies deviations.
  2. Helps the architect learn about changes or new features. If the architect has been assigned to a new project, he or she can learn the product sooner by looking at the code, too.
  3. Allows the architect to write a proof of concept or prototype. A working demo is much more convincing than an architecture diagram and will usually facilitate better estimates. As with any prototype, however, care must be taken to prevent non-technical stakeholders from attaching too much credibility to it.
  4. Provides another pair of capable hands during project crunch periods.
  5. Makes the architect more forgiving of bugs because the architect has likely made similar mistakes in the past. At the least, the architect should have a better understanding of what types of issues to expect.

Writing code may also help the architect earn the respect of developers. Developers can be notoriously dismissive, and a software architect producing some of his or her own code, even if it is just a proof of concept, or providing good feedback in a code review can make developers feel the architect is one of them. Having a working development environment and access to source code also means the architect can try out new versions without waiting for a build or release, and any significant build, development environment or source control issues become apparent to the architect.

Note that code reviews do not replace talking to developers; regular discussions between developers and software architects help build mutual respect. Without them, developers may see the architect as a constraint or threat to be circumvented. Also, developers can often find problem areas faster than the architect can by reading the code, but there needs to be a balance between architect self-sufficiency and squandering developer time.

Software architects are often required to settle disputes between developers, such as when one team discovers a better way of solving a problem or that the proposed design will be harder to implement than first thought. Software architects are also sometimes mentors or coaches for developers or may be used as internal consultants to examine process, quality, automation or similar issues. Understanding code means the architect can use his or her judgment more effectively rather than rely on which developer is more persuasive.

Should a software architect write production code most of the time, usually implementing his or her own architecture? Doing so ensures the design is implementable with the tools and environment used, which can lead to new insights, improved designs and more accurate estimates. The design is also implemented by the person most familiar with it, minimizing miscommunication.

However, architects may jump to implementation (depth thinking) before exhausting other solutions (breadth thinking). Existing implementations may overly influence the architect, or the architect may become attached to his or her code, fighting against needed improvements. Implementation can also distract the architect from higher level tasks such as longer term planning, communicating with stakeholders and reviewing other developers’ code.

Also, part of the role of an architect is to fight for reuse, security and other non-functional requirements. Being forced to prototype or implement the design may encourage compromises the team need not make. It is not that an architect does not make compromises (design is the art of compromise, as many have said) but that it is the architect’s job to make the right compromises rather than those forced on the architect by creating the initial implementation. The developers will likely rewrite much of the architect’s code anyway.

Indeed, the more an architect focuses on communication, requirements analysis, stakeholder management and other non-technical activities, the more his or her development skills may atrophy. As long as the architect provides value by other means, this is not an issue. However, an architect should maintain his or her development skills, whether by extensive research, working on personal projects or contributing where possible, focusing on capabilities, limitations and edge cases rather than speed or complete understanding.

Problems may occur when organizations promote their strongest developers into a software architect role rather than good communicators who are capable of working at higher levels of abstraction. Friction arises when these architects try to “lead from the front” by implementing their architecture rather than facilitating others to do so. Organizations should instead promote developers with better soft skills.

Many confuse not writing code with a lack of feedback. An unprototyped architecture may, hypothetically, be difficult to implement or otherwise problematic. However, a senior developer or technical lead can prototype the architecture if required, which also lets architects and developers work together and ensures the design is communicated well. Alternatively, the architecture can be shared with others who have implemented similar systems or with architects and developers integrating products. Requiring an architect to implement his or her own architecture beyond a proof of concept also does not scale, particularly for large or complex products.

Similarly, many confuse an architect not writing code with a lack of accountability. Architects must produce designs that are approved, whether formally or informally, not only by stakeholders but also by developers, and developers should not approve a document that does not meet their needs. Issues or errors in the designs should be noted. Some change is expected, but major or expensive errors should be attributed to the architect. An architect implementing his or her architecture in code does not guarantee an issue-free project, either.

With increased use of agile development methodologies, architects are no longer creating an architecture and “throwing it over the wall” to developers. Even previously ivory tower architects are more involved with lower level issues since less critical decisions are deferred until later in the process and design is iterative. For example, architects in organizations using Scrum should attend at least the planning, review and retrospective meetings. (Some architects may move to other projects or otherwise not see the project through, the “Architects Play Golf” pattern. This is an organizational issue and unrelated to whether architects code.)

Many developers also look down on “PowerPoint architectures”. However, many forget the role of a software architect is as much communication as development, and a completed, implemented architecture does not help non-technical stakeholders, QA, localization, documentation writers and so on. Of course, these stratospheric PowerPoint architectures are no substitute for high-level designs developers can implement, but the architect represents the developers and products to outsiders, and developers often feel any time not spent developing is unproductive. Ultimately, PowerPoint architectures have their place; developers are as much the architects’ customers as the stakeholders.

So, should architects write code? The question is loaded and is best answered by each team on a case-by-case basis. Architects may prototype high risk projects, experiment with new libraries or try out new tools. Architects may completely delegate the design and implementation of well understood, low risk components. The real question is “How can an architect be successful?”, and that is a question of managing and mitigating risk. Architects are often good coders, but good coders are not necessarily good architects.

Update: There is a large discussion about this post on the IASA (International Association of Software Architects) LinkedIn group: http://www.linkedin.com/groups/Should-Software-Architects-Code-1523.S.188454845
