Fueled by the advent of social media, opinions now surround us. Seemingly everyone has them and broadcasts them to the broader world. Untethered from context and expertise, such expression usually leads to validation or ridicule instead of healthy debate. Uncritical opinion sharing can drive tribalism, becoming a foundation of identity as adherents one-up each other with more extreme views.
Professional environments face different challenges with opinions. They are smaller and simpler than worldwide social media and rarely emphasize extremes. However, organizations frequently need to make quick decisions on incomplete or uncertain information, such as when soliciting feedback on a proposal or design.
Opinion gathering can be a helpful tool. Requesting opinions can reveal similar initiatives, valuable metrics and new perspectives. However, it can also attract contradictory, irrelevant, ill-informed or otherwise unhelpful input from the wrong people.
Effective opinion gathering requires preparation then focusing on relevant motivation, experience and facts.
Preparation means answering some key questions yourself before seeking feedback from others.
Creating and sharing a succinct document or presentation clarifies the problem, sets its scope and answers common questions. It elicits better-quality opinions sooner.
Defining the audience ensures it is complete without being excessive. It also helps set an end point for opinion gathering.
Preparation should be proportional. Not every issue needs documentation and a plan. However, taking a few moments to think is almost always beneficial.
Unfortunately, opinions are sometimes unsolicited or off-topic. Even on-topic opinions may be outside the scope or imply impractical changes.
While perfect alignment is rare, the most useful and actionable opinions come from those with benign motivations. Opinions from those with different views or incentives need careful consideration.
For example, salespeople are often commission-motivated. They are likely to favour whatever gets them their next sale. However, they also have frequent close contact with customers, giving unique insights.
In IT, non-technical stakeholders are not accountable for warranty (e.g. speed, security, reliability, maintainability), only for utility (e.g. correctness, completeness). They will often neglect or downplay quality issues. However, they may also represent a broader or longer-term view than just the immediate technical details.
Stakeholders may have conflicting motivations. Managers have to weigh short-term targets against long-term goals. The decision-fatigued or overworked may suggest the easiest path over the best one.
Even well-intentioned opinions can be problematic. For example, aspiring experts may give unsolicited ideas, trying to appear knowledgeable in the guise of assistance. Some merely repeat others’ opinions they found compelling. Confidence and credibility are different things.
Potentially the most dangerous opinions are those with an unknown motivation. Actual dishonesty is rare in professional settings. However, self-promotion and self-preservation evolve from helpful to essential as you climb the management ladder.
The opinion giver’s experience is also significant. It prevents foreseeable mistakes and imparts acquired wisdom.
However, there is relevant and less relevant experience. “The last time we did this” may have been under different market conditions, finances or staff.
Experience does not imply expertise. Seemingly capable people may have knowledge gaps, poor tools or incomplete awareness.
People may not adapt or improve, repeating poor practices or old mistakes. Sometimes years of experience are just repeating the same experience.
Alongside motivation and experience, facts matter enormously. These include measurements, dates, budgets, quotes and anything most would agree is hard to dispute. They are more effective at forming or changing opinions in professional settings than on social media.
The source’s trustworthiness is often the most crucial factor in disputes. However, as Eric Ries argues in “The Lean Startup”, anything auditable, actionable and accessible should be convincing.
Facts should be complete and, like experience, relevant. Statistics or definitions can ignore inconvenient data. Averages imply an often inaccurate or oversimplified modal distribution.
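The point about averages can be made concrete with a toy example. Using made-up salary figures, a single outlier drags the mean well away from the median, so quoting only the average misrepresents the typical case:

```python
import statistics

# Hypothetical team salaries: one outlier skews the distribution.
salaries = [52_000, 54_000, 55_000, 56_000, 58_000, 250_000]

mean = statistics.mean(salaries)      # pulled upward by the outlier
median = statistics.median(salaries)  # robust to the outlier

print(f"mean:   {mean:,.0f}")    # mean:   87,500
print(f"median: {median:,.0f}")  # median: 55,500
```

An "average salary" of 87,500 here describes nobody; five of the six people earn far less. Reporting both figures, or the distribution itself, is the more complete fact.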
Fact-producing experiments are generally more effective than expertise. They consider local conditions and capabilities. Repeatable experiments are even better because they can become goals to achieve or metrics to track success or failure.
Changing your organization or peers to give better opinions is challenging, particularly amid the self-entitlement that social media enables.
However, you can change the way you give and receive opinions. While everyone is entitled to opinions, not all opinions are equal. Thinking critically about those given and received helps professionally and in the broader world.
Elon Musk sparked controversy with his recent attempt to take over Twitter. Many support him, citing Twitter’s relatively poor revenue and Musk’s record of turning seemingly unprofitable ventures, like electric vehicles and space exploration, into successes.
However, his recent Twitter poll caught my interest: 82% of over one million respondents voted that Twitter should open source its algorithm.
Musk explained further during interviews at the TED 2022 conference. The “Twitter algorithm” refers to how tweets are selected then ranked for different people. While some human intervention occurs, social networks like Twitter replace a human editorial team’s accountable moderation with automation. Humans cannot practically and economically manage and rank the estimated 500 million tweets sent per day.
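Twitter's real ranking logic is not public, so any concrete example here is purely hypothetical. Still, the "select then rank" idea can be sketched as a function that scores tweets by weighted engagement counts and sorts them:

```python
# Hypothetical, simplified engagement ranker -- illustrative only;
# Twitter's actual algorithm is far more complex and not public.
def rank_tweets(tweets, w_like=1.0, w_retweet=2.0, w_reply=1.5):
    def score(tweet):
        return (w_like * tweet["likes"]
                + w_retweet * tweet["retweets"]
                + w_reply * tweet["replies"])
    # Highest engagement score first.
    return sorted(tweets, key=score, reverse=True)

tweets = [
    {"id": 1, "likes": 10, "retweets": 1, "replies": 0},  # score 12.0
    {"id": 2, "likes": 3, "retweets": 8, "replies": 4},   # score 25.0
]
print([t["id"] for t in rank_tweets(tweets)])  # [2, 1]
```

Even in this toy version, the weights are editorial choices: doubling the retweet weight changes what "rises to the top", which is exactly the kind of decision an open-sourced algorithm would expose to scrutiny.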
By “open source”, Musk means “the code should be on GitHub so people can look through it”. Hosting software code on github.com is common practice for software products. Third parties can examine the code to verify it does what it claims. Some open-sourced products also accept contributions from others, leveraging the community’s expertise to collectively build better products.
Musk says “having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization.” People frequently demonize social networks for heavy-handed or lax “censorship”, depending on their side in a debate. Pundits claim social networks limit “free speech”, conveniently forgetting “free speech” means no government intervention. Pundits cite examples of the algorithm prioritizing or deprioritizing tweets, authors or topics. They also cite account suspensions and cancellations, sometimes manual and sometimes automated.
Musk assumes that explaining this algorithm will increase trust in Twitter. He called Twitter “a public platform”, implying not just public access but collective ownership and responsibility. If people understand how tweets are included and prioritized, the focus can move from social networks to the conversations they host.
Unfortunately, understanding and trust are two different things. Well understood and transparent processes, like democracies’ elections or justice systems, are not universally trusted. No matter the intentions or execution of a system, some people will accuse it of bias. These accusations may be made in ignorant but good faith, observe real but rare failures or be malicious and subversive.
Twitter’s algorithm is not designed to give equal exposure to conflicting perspectives. It is designed primarily to maximize engagement and, therefore, revenue. It is not designed to be “fair”. Social networks are multibillion dollar companies that can profit from the increased exposure controversy brings. Politicians alienate few and resonate with many when they point the finger of blame at Twitter.
Designing an algorithm for fairness is practically impossible. You can test for statistical bias in a numeric sample set but not across the near entirety of human expression. Like the philosophers opposing the activation of Deep Thought in Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, debating connotation, implication and meaning across linguistic, moral, political and all other grounds is an almost endless task.
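To illustrate the contrast, testing a numeric sample for bias is routine statistics. A minimal sketch (using a standard normal approximation, not any method Twitter is known to use) checks whether an observed proportion deviates significantly from an expected one:

```python
import math

def bias_z_score(successes, trials, expected_p=0.5):
    """Z-score of an observed proportion against an expected one."""
    observed_p = successes / trials
    std_err = math.sqrt(expected_p * (1 - expected_p) / trials)
    return (observed_p - expected_p) / std_err

# 5,600 "heads" in 10,000 trials: |z| far exceeds 1.96, so the
# sample is biased at the conventional 5% significance level.
z = bias_z_score(5600, 10_000)
print(round(z, 1))  # 12.0
```

No equivalent few-line test exists for "is this ranking of political speech fair?", which is the heart of the problem.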
Even assuming transparency can assure trust and fairness, open sourcing the Twitter algorithm assumes the algorithm is readable and understandable. The algorithm likely relies on complex, doctorate-level logic and mathematics. It likely includes machine learning, whose behaviour comes from learned model weights rather than readable code. It likely depends on custom databases and communication mechanisms, which may also have to be open sourced and explained.
This complexity means few will be able to understand and evaluate the algorithm. Those that can may be accused of bias just like the algorithm. Some may have motivations beyond judging fairness. For example, someone may exploit a weakness in the algorithm to unfairly amplify or suppress a tweet, individual or perspective.
Musk’s plan assumes Twitter has a single algorithm and that algorithm takes a list of tweets and ranks them. Instead, it is likely a combination of different algorithms. Some work when tweets are displayed. Some run earlier, for efficiency, when tweets are posted, liked or viewed. Different languages, countries or markets may have their own algorithms. To paraphrase J.R.R. Tolkien, there may not be one algorithm to rule them all.
Having multiple algorithms means each must be verified, usually independently. It multiplies the already large effort and problems of ensuring fairness.
Musk’s plan also assumes the algorithm changes infrequently. Once verified, it is trusted and Twitter can move on. However, experts continue to improve algorithms, making them more efficient or engaging. Hardware improves, providing more computation and storage. Legal and political landscapes shift. Significant events like elections, pandemics and wars force tweaks and corrections.
Not only do we need to have a group of trusted experts evaluating multiple complex algorithms, they need to do so repeatedly.
Ignoring potentially reduced revenue from algorithm changes, open sourcing Twitter’s algorithm also threatens Twitter’s competitive advantage. Anyone could take that algorithm and implement their own social network. Twitter has an established brand and user base in the West, but its market share is far from insurmountable.
There are other aspects to open sourcing. For example, if Twitter accepts third party code contributions, it must review and incorporate them. This could leverage a broader pool of contributors than Twitter’s employees but Twitter probably does not need the help. Silicon Valley tech companies attract good talent easily. Some contributions could contain subtle but intentional security flaws or weaknesses.
If the goal is to have a choice of algorithms, is this choice welcome or does it place more cognitive load on people just wanting a dopamine hit or information? TikTok succeeded by giving users zero choice, just a constant stream of engaging videos.
Evaluating an algorithm’s effectiveness is more than just understanding the code. It requires access to large volumes of test data, preferably actual historical tweets. Only Twitter has access to such data. Ignoring the difficulty of disseminating such a huge data set, releasing it all would violate privacy laws. Providing open access to historical blocked or personal tweets would also erode trust.
Elon Musk has demonstrated an uncanny ability to succeed at previously unprofitable enterprises like electric vehicles and space travel. Perhaps there is more to Elon’s Twitter plan than is apparent. Perhaps he is saying what he needs to say to ensure public support for his Twitter takeover.
While open sourcing Twitter’s algorithm appeals to the romantic notion that information is better free, increased transparency will not create a “maximally trusted and broadly inclusive” Twitter. Social networks like Twitter coalesce almost unbelievable amounts of data almost instantly into our hands. They have difficulty with contentious issues and, therefore, trust because they reflect existing contention back at us. It is easier to blame the mirror than ourselves.
It seems every organization wants to transform itself to become more agile. They want to respond to opportunities quickly and cost-effectively. They want to adapt faster than their competition in the Darwinian corporate landscape. COVID-19, for example, required a quick shift to remote working for their employees and remote interactions with clients and suppliers. Geopolitical changes alter supply chains and increase legislation. Cloud, IoT and similar internal IT trends accelerate.
An agile transformation often starts with consultant-led adoption of agile processes. These range from methodologies like Scrum at the small scale to the Scaled Agile Framework (SAFe) at the largest. Despite some negativity in the agile community aimed at the larger-scale methodologies, these frameworks help by providing structure, vocabulary and expectations.
However, following an agile process is the least important part of being agile. An organization attempting to increase agility solely by adopting a new process usually creates superficial changes that foster change fatigue at best, or failure at worst. This failure is often then incorrectly attributed to the process without any more profound and helpful introspection, leading to the negativity mentioned earlier.
An agile organization needs agile systems. “Agile systems” does not refer to using a tool like JIRA to track work. Agile systems, both IT and business processes, are easily changed and provide feedback for timely validation or correction.
Modern software development practices are a good example. Frequent demos to users and stakeholders and automated testing and deployment, for instance, ritualize change and create tighter feedback loops.
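As a minimal sketch of such a feedback loop, a few lines of automated tests can validate a business rule on every change, in seconds, long before any formal review. The discount rule below is invented purely for illustration:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business rule: reduce a price by a percentage."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_percent_changes_nothing(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

# In CI, running `python -m unittest` on every commit turns each
# change into immediate pass/fail feedback rather than a late review.
```

The value is less in any single test than in the ritual: every change, however small, is validated automatically, so corrections happen while the change is still cheap to make.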
However, an organization’s systems are not agile if only their software development teams are agile. For example, consultants often build organizations’ back office and operational systems by customizing third-party tools. Organizations then maintain them with skeleton teams and, therefore, usually lack the skills or the environment to change them safely and cost-effectively.
Product Owners are not the answer. Product Owners can shield their team’s backlog to encourage agility. However, systems are more extensive than just the team or teams that maintain or implement them. Product Owners are also usually experts in the business area and in handling internal politics. They are rarely also design experts or incentivized to think strategically.
Agility extends to data. Organizations are under increasing pressure to collect and monetize data. Agility requires knowing where that data is (“systems of record”), ensuring its quality and integrating it with other systems (“systems of engagement” or “systems of transformation”).
Governance systems must adapt. Regulatory, financial, legal, risk, IT security, privacy and similar teams must work in smaller batches. A lengthy, formal review of a completed system is often too late.
Unfortunately, articulating the benefits of agile systems is difficult. Pruning teams to their minimums has clear short-term financial benefits. Executives often overstate their systems’ capabilities through ignorance or self-promotion. Product teams may fall into the trap of a short-term, sales-oriented focus under the guise of being customer-focused. A system’s agility often depends on an executive’s skill at shielding budgets or convincing stakeholders.
These problems may extend to company culture. Business processes sometimes lack an agreed, empowered business owner. IT is sometimes “seen and not heard”. Insufficient executive representation or ownership means minimal focus and support, making agility practically impossible.
An agile organization also needs an agile structure. This statement may sound tautological. However, an otherwise agile organization that cannot leverage and benefit from that agility wastes that effort.
A good example is what SAFe terms the “network” versus the “hierarchy”. Most organizations structure themselves hierarchically around similar skills for ease of management. For example, an organization often has a legal department for lawyers, a sales department for salespeople and an IT department for developers and system administrators.
However, work frequently requires people across different teams to cooperate, called the “network”. Identifying value chains then building and supporting these multidisciplinary teams to implement and enhance them increases flow, ownership, and individuals’ agency. Effective use of “networks” requires different management techniques and incentivizing outcomes, not throughput. These often fail without executive support.
Organizations are rarely homogeneous. Some teams or systems may be more agile than others, such as through mergers/acquisitions or pockets of conscientious staff. Tracking and aligning these is vital. Otherwise, some teams may adopt inconsistent processes or tools, be omitted from the transformation or optimize at others’ expense.
This all assumes the organization’s bottleneck is the lack of agility. Organizations frequently want to decrease “friction” but cannot articulate it beyond the highest level. Defining friction by actionable, measurable metrics is a prerequisite. Increasing agility also only highlights any poor prioritization or lack of focus.
The “transformation” concept is also a misnomer. Agile thinking encourages constant self-evaluation and improvement, and this “transformation” does not end when the consulting engagement does. From the executive viewpoint, this can scarily move the fulcrum of structural control into the middle and lower levels of the organization.
The chosen agile methodology’s principles best guide agile transformations. Unfortunately, people easily gloss over these in favour of the more easily implemented, prescriptive processes. However, the prescriptive parts derive from the principles, not vice versa. If you want to be agile, internalize the principles!
Agile transformations are more than superficial changes. Focusing too much on process changes instead of systems and structures often stalls or blocks agile transformations. Instead, these transformations require people to move outside their comfort zones, particularly executives. The problem with agile transformation is it is hard but increasingly necessary.
A few weeks ago, my wife was cleaning the kitchen. She unplugged the toaster that I use every morning, cleaned the bench then plugged it back in. Unfortunately, she plugged it into the left power socket not the right.
I turn off my toaster when I am not using it. When I made my breakfast the next morning, I placed the bread in the toaster then instinctively turned on the right power socket. A few minutes later, I wondered where my toast was. I discovered I had turned on the wrong socket, as you can see in the image above.
Amused at my mistake, I switched on the correct socket and soon after was enjoying my toast. However, I made the mistake the next day and the day after that and the day after that.
Many people emphasize focusing on important decisions and automating or standardizing the rest; Mark Zuckerberg’s famously dull wardrobe is a well-known example. Productivity is often determined more by attention or energy management than by time management or attention to detail.
Without realizing it, I had done a Zuckerberg with my toaster. It got me wondering what other assumptions I had made throughout my day.
Specifically, are there small things I do that have a big impact on me or others? Are there “broken windows” I need to fix?
What further improvements can I make to streamline my day and focus my energy on things that have a greater personal or professional impact?
What if the underlying optimizations or assumptions I use change? What if someone “moves my cheese”?
Similarly, sometimes the small things with systems are actually the big things. As architects, we often prioritize cost and features over things like user experience. However, user experience makes a bigger and more lasting impression on the people who are part of the systems we engineer.
Ironically, the image above was taken some time after the cleaning. The toaster and bench need another clean. I wonder what I will discover about myself next time?
Many things about IT architecture simultaneously attract and repulse people. Architects are technical decision makers but often leave the implementation to others. They translate between the business and technical, taking an economic view of IT.
Fortunately and unfortunately, architects are not managers. While this frees them from budgets, staffing and directly managing people, it also means every decision they make is really a decision-making aid for other managers. Architects are all responsibility but no authority.
Architectural authority is granted only through the authority of the managers responsible. While managers can perform an architecture function, often in smaller teams, architects are usually senior individual contributor roles. Teams implement an architect’s designs because the manager says so.
Acting through managers’ authority also means architects must use influence to ensure their architectures are adhered to. While some architecture teams take a more dictatorial approach, most ensure their designs have clear benefits for all stakeholders and contributors. If a team sees no value in following the architects’ directions, they can often ignore them.
Even governance – ensuring others’ designs are complete, follow the broader architectural vision and are implemented as specified – works via influence. A lower-level manager may baulk at his or her team doing significant work that does not benefit their team, but there are often constraints or impacts outside their team. Higher-level management must step in to ensure teams meet broader business goals, not just their own.
Like managers, architects engage with multiple teams and senior management. They need to communicate at different levels with different strategies (like management), switch frequently (like management) and are ultimately judged by outcomes (like management).
This means the architect depends on others to ultimately implement the systems involved. Like managers, architects need to tailor their output to their teams. A capable team familiar with the problem may need only high-level direction, letting the architect delegate much of the lower-level detail. An inexperienced team working on an unfamiliar problem may need a lot more help. An architect’s failure to adapt can doom the project.
Like management, architects handle ambiguity and conflicting requirements. These require a mix of technical, business and political knowledge to navigate but also allow the architect (or manager) to demonstrate his or her experience and value. Architects, like managers, should be looking at the bigger picture, considering the economic impact and giving non-technical solutions their due.
Of course, there are many things managers need to consider that architects do not. For example, architects can rarely delegate. Architects are individual contributors tasked with ensuring minor, often technical details do not compromise strategic goals.
Architects also need to evangelize their work and value more than managers do because they lack management’s built-in responsibilities. They may be the driving force behind a project, but the success may be attributed elsewhere.
However, the overlap between management and architecture is larger than many realise. This overlap is why architecture is a senior role. When an architect sneezes, their areas of responsibility catch a cold. Architects are not managers, but they are players in the same game, and they do a lot of managing anyway, whether up or down.
Many people are attracted to software development because they love technology and development. Viewing it more as a hobby they are paid to undertake, they gladly spend time outside work solving that nagging problem, mucking around with the newest framework, contributing to open source software or exploring opinions on Twitter.
There is a subset of software developers that takes this to extremes. It is possible for someone that does not “eat and breathe” code to still take pride in their work, to still be a craftsperson or to want to learn more and improve. However, alpha developers make software development part of their identity and their desire for respect drives them to competitiveness.
From a hiring organization’s perspective, these alpha software developers are wonderful. Their pride dictates they produce high-quality work, often at the expense of their personal time. Training costs are minimal because they already know or quickly assimilate new tools, frameworks or techniques. Their competitiveness can force everyone to produce and learn more. They are happy to leave business decisions to others and focus solely on the technical. While these all have downsides, successful companies have learned to temper them.
However, alpha software developers create barriers. Alpha developers’ pride compels them to take technical leadership roles and demand others live up to their standards. Their knowledge of new tools and techniques and almost overriding urge to try them out can shut out discussions of other solutions. For those less enamoured with software development, alpha developers can be intimidating.
When asked to train others, alpha developers feel that owning one’s technical development and career path is a rite of passage. It is not that they look down on people who know less, more that alpha developers made the effort to train themselves so why should others be given special treatment?
Meanwhile, alpha developers feel their performance is judged on their own output and helping others interferes with that. Indeed, alpha developers will work around other developers if they feel they have to “save the project” by rewriting others’ code or taking on others’ work out of impatience.
This problem is exacerbated when alpha developers move into leadership positions. When hiring new developers, they perceive alpha developers as superior and hire them over others. When evaluating others, they reward alpha qualities.
Focusing on alpha software developers creates a monoculture, focused inward on technical prowess and knowledge. Decisions need broad, representative viewpoints. While few companies will have ample members of the target audience on staff, few companies’ target audiences are solely alpha software developers.
This relegates non-alpha developers to permanent “junior” roles. This blocks their career progression even though they may be well suited to roles that software development feeds into like business analysis, user experience, consulting, quality assurance, IT administration or solution architecture.
This also risks the competitiveness between alpha developers boiling over into conflict or burnout. As with sports teams, having too many ego-driven superstars creates problems. Teams work best with people in a variety of roles, and software development is a team sport.
Solving a problem like this, particularly something so deeply ingrained in software development culture, is not simple.
The first reaction is to move away from lines of code and other easily measured metrics as the primary determinants of productivity towards metrics that indicate the success of the project. This encourages a team-centric view of productivity, not an individual-centric one.
However, the problem is deeper than that. Like using the term “craftsperson” instead of “craftsman” at the start of this post, we need specific language to drive specific thinking. It is hard to conceive of ways to drive value without terms to describe them.
For example, a “developer experience” engineer could focus on improving the efficiency of existing developers and hastening the onboarding of new developers. While documentation is part of this role, its focus is more on fixing inconsistent APIs, gathering useful diagnostics, ensuring error messages are complete and descriptive, replacing or fixing buggy libraries and improving internal tool reliability.
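One concrete slice of that role is error-message quality. As a sketch (the config-loading helper below is invented for illustration), compare a bare KeyError with one that tells a developer what is missing, what exists and how to fix it:

```python
def load_config(settings, key):
    """Hypothetical helper: fetch a required configuration value."""
    try:
        return settings[key]
    except KeyError:
        # A descriptive error names the missing key, lists what was
        # found, and suggests a fix -- instead of a bare KeyError.
        raise KeyError(
            f"Missing required config key '{key}'. "
            f"Available keys: {sorted(settings)}. "
            f"Add '{key}' to your settings file."
        ) from None

# load_config({"host": "localhost"}, "port") raises:
# KeyError: "Missing required config key 'port'. Available keys:
# ['host']. Add 'port' to your settings file."
```

Multiplied across every API, log line and tool a team touches, small improvements like this compound into significant developer productivity.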
This role focuses on the productivity of other developers and understanding how they work instead of raw lines of code. This person should not get too involved in the internals of the software. Otherwise, he or she may start to overlook or forgive bad practices.
Another potential role is a “business process integration” engineer. Working on a lower level than a user experience engineer, they look at product customization, integrations and automation/orchestration opportunities. For internal systems, this could be about integrating the system into a Business Process Management (BPM) or workflow solution. For external systems, this is similar to a customer-facing solution architect but works with the code directly to help users customize or leverage the product.
This role requires an understanding of the broader business context, how software is used by the organization and what the organization views as important. It is a good conduit into business analysis or enterprise architecture.
This all boils down to a search for value. While focusing on software is what others would expect software developers to do, focusing on it to the exclusion of some of the software development community is a poor strategy. We need to change how we view and measure our software developers and change who we see as aspirational.
Theresa May’s speech in response to the recent terrorist attacks in London has, once again, mentioned cracking down on cyberspace “to prevent terrorist and extremist planning” and starving “this ideology the safe space it needs to breed.” World leaders, including Australia’s prime minister Malcolm Turnbull, supported her, saying US social media companies should assist by “providing access to encrypted communications.”
Cory Doctorow and others make valid points about how impractical and difficult these dictates are to implement. Politicians mistakenly assume that weakened encryption or backdoors would only be available to authorized law enforcement and underestimate how interdependent the global software industry is.
However, presenting this as a binary argument is a “sucker’s choice”. Law enforcement is likely concerned because it cannot access potential evidence it has a legal right to see. While some laws arguably impinge on personal freedoms, is it technology’s or technologists’ role to police governments?
Meanwhile, modern cryptography protecting data cannot also allow law enforcement access without weakening it. Consequently, technologists lambast politicians as ignorant and motivated by populism, not unreasonable considering Brexit and similar recent political events.
As technologists, we know what technology can and, more relevantly, cannot do. While current technology defines our short-term options, it does not limit them in the long term. The technology industry needs to use the intelligence and inventiveness it prides itself on to solve both problems.
I do not know what forms these solutions will take. However, I look to technologies like homomorphic encryption or YouTube’s automated ability to scan its nearly uncountable number of videos for copyright infringements. There is certainly challenge, profit and prestige to be found.
The threat of criminal or terrorist action is not new. Mobile phones, social media and other phenomena of the digital age grant them the same protections as everyone else. Dismissing solutions from the ignorant does not mean the underlying problems go away. If the technology industry does not solve them, politicians may soon do it for them and, as Cory Doctorow and others point out, this will be the real tragedy.
The IT industry is swamped by certifications. Every conceivable three-, four- or five-letter acronym seems to mean something. However, everyone can recount a story of someone certified but clueless. In a world where answers are often a quick Internet search away, are certifications still relevant?
Certifications aim to show someone knows something or can do something, like configure a device or follow a process. Condensing a complex product, process or industry into a test is hard. Schools and universities, institutions dedicated to learning with far larger budgets, have been grappling with this for some time, and even multi-year degrees are not always good predictors of competence.
Knowledge atrophies and conditions change. While some certifications require periodic recertification or ongoing training to keep candidates current, there is no way to guarantee someone maintains or improves their skill and their knowledge is current.
Certifications risk devaluing experience. For example, the Microsoft Certified Systems Engineer (MCSE, now Solutions Expert) boot camps of the 1990s saw many inexperienced candidates spoon fed the minimum information to pass then unleashed on an industry expecting people more capable. Why hire someone experienced when you can hire a newly minted MCSE at a fraction of the price?
Certifications are no longer the only way to demonstrate competence. Speaking opportunities at user groups, social networks and blogging are open to anyone. Online training websites like Coursera or Pluralsight provide similar or identical material to common certifications at no or minimal cost. For a more specific example, a software developer who wants to demonstrate competency in a library or programming language can contribute to open source software or answer questions on Stack Overflow.
Many candidates complain about excessive certification costs, even from not-for-profit certification bodies. Certifications are expensive to create and administer, particularly minimizing cheating, and to market, because an unknown certification is wasted.
Does that mean certifications are dead? No. Certifications continue to have the same benefits they always had.
Certifications make you more marketable. Many employers look to them as shortcuts for skills. Hiring someone certified decreases risk. Coupled with experience or aptitude, they may lead to increased pay or new positions. They can even be a personal brand. For example, putting a certification next to your name on LinkedIn immediately tells the viewer your career focus.
Certifications open new networking opportunities. Certifications identify people with common interests or solving similar problems. Meetups, conferences and training courses target these. Some give discounts to certification holders, too.
Certifications tend to give rounded and broadly applicable knowledge, including different technologies, business areas or perspectives. They usually reference authoritative information and cover best practice, albeit sometimes abstracted or out of date. This can be harder to Google for because it requires domain knowledge.
Certifications benefit certifying authorities, too. From a vendor’s perspective, certification programs ensure product users are competent by requiring partners and resellers to have certified staff. Periodic recertification or certification expiry forces users to be up to date and creates recurring revenue.
The existence of certifications indicates a product’s or market’s maturity. They can help standardize, unify or legitimize a fragmented or new discipline. Certifications are as much a marketing tool as technical.
They allow vendors to identify and communicate directly with the user base. Vendors often know their customers (who is paying for the software) but not the people using it.
Certifications are not going away and are still relevant for the same reasons they always have been. They can still be a differentiator, and they can still be misconstrued. They are still useful to vendors but expensive. However, the real question is how the current alphabet soup needs to evolve and still be relevant in the constantly changing IT landscape, particularly for areas like software development with a poor certification track record. That is something for the next blog post.
Image credit: http://www.flickr.com/people/bean/. Usage under CC BY-NC 2.0.
The term “corporate politics” conjures up images of sycophantic, self-serving behavior like boot-licking and backstabbing. However, to some IT professionals’ chagrin, we work with humans as much as computers. Dismissing humans is dismissing part of the job.
The best way to “play” corporate politics is to solve big problems by doing things you enjoy and excel at.
“Big problems” means problems faced not just by your team but by your boss’s boss, your boss’s boss’s boss and so on. If you don’t know what they are, ask (easier than it sounds). Otherwise, attend all hands meetings, read industry literature or look at your leaders’ social network posts, particularly internal ones.
This is not just for those wanting promotions into management. Individual contributors still want better benefits and higher profile or challenging projects. These come easiest to those known to be providing value, not through the strict meritocracy some IT professionals think they work in.
Start by solving small problems as side projects. Choose something impacting more than your own team and minimize others’ extra work. Build up to bigger problems once you have demonstrated ability and credibility.
You need not be the leader. Assisting others who are making an effort can be just as effective. You can own part of it or bask in the halo effect. If you cannot contribute, recognize those who do. This creates a culture of recognition that may recognize you in the future.
While some IT professionals solve big problems every day, communicating and evangelizing their work “feels” wrong. This is what salespeople do, not IT professionals. Many also think their work is not interesting.
Being successful requires people knowing what you do. This may be as simple as a short elevator chat, a brown bag talk or a post on the corporate social network. It also helps get early feedback and build a like-minded team. Others will be interested if you are working on the right things.
What about the potentially less savory aspects of corporate politics like work social events, sharing common interests with management, supporting corporate charities and so on? These are as much an art as a science. Focus on common goals and building trust, internally and externally. People like to deal with people at their level and contact builds familiarity.
However, this is no substitute for solving big problems. If you are delivering value, interactions with senior decision makers and IT professionals with similar goals should occur naturally. Build on that.
Be aware that problems change over time. Problems get solved by others. The market changes. Competitors come and go. Understanding organizational goals is an ongoing process.
Also realize decision makers are human. They make mistakes. They want to emphasize their achievements and not their failures, just like software developers’ fundamental attribution error bias for their own code and against others’.
However, if your organization makes decisions regularly on “political” grounds, leave. Culture is rarely changed from the ground up and many organizations are looking for good IT staff.
Ignoring the worst-case scenario and IT professionals’ bias against self evangelism, the biggest problem with “corporate politics” is actually its name. The concepts behind “agile” and “technical debt” came into common usage once the correct metaphor was found. Corporate politics needs rebranding from something avoided to a tool that IT professionals use to advance themselves. It badly needs a dose of optimism and open mindedness.
However, one criticism of the post, indeed of the information security industry, is its implication that hacking is the sole information security career path. This binary viewpoint – you are either a security person or not and there is only one “true” information security professional – does more harm than good.
Hacking is technology focused. However, security’s scope is not just technical. Information security needs people that can articulate security issue impact, potential solutions and their cost in terms non-security people can understand. This requires expertise and credibility in multiple disciplines from individual contributor level to management to boardrooms.
Security solutions are not just technical. We live in societies governed by laws. These can be standardized government security requirements such as FedRAMP or IRAP. These can be contractual obligations like PCI-DSS, covering credit card transactions. These can hold organizations accountable, like mandatory breach disclosure legislation, or protect our privacy, like the European Union’s Data Protection laws. Effective legislation requires knowledge of both law and information security and the political nous to get it enacted.
The list goes on. Law enforcement needs to identify, store and present cybercrime evidence to juries and prosecute under new and changing laws. Hospitals and doctors want to take advantage of electronic health records.
The security industry’s technology focus drives away non-technology people. In a world crying out for diversity and collaboration, the last thing information security needs is people focusing solely inward on their own craft, reinforcing stereotypes of shady basement dwellers, and not on the systems security enables.
Bringing this back to software, many organizations contract or hire in information security experts. Unfortunately, the OWASP Top 10 changed little from 2010 to 2013 and some say it is unlikely to change in the 2016 call for data. According to the Microsoft Security Intelligence Report, around half of serious, industry-wide problems come from applications. Developers make the same mistakes again and again.
Education is one solution – security literate developers will avoid or fix security issues themselves. A better solution is tools and libraries that are not vulnerable in the first place, moving security from being reactive to proactive. For example, using an Object-Relational Mapping library or parameterized queries instead of string substitution for writing SQL.
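To illustrate the parameterized-query point, here is a minimal Python sketch using the standard library’s `sqlite3` module and a hypothetical `users` table. The commented-out string-substitution query is the vulnerable pattern; the placeholder version treats the same input strictly as data.

```python
import sqlite3

# In-memory database with a toy users table, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Attacker-controlled input attempting a classic SQL injection.
user_input = "alice' OR '1'='1"

# Vulnerable pattern: string substitution lets the input rewrite the
# query itself, so the condition becomes always true:
# conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

# Safe pattern: the ? placeholder binds the input as a value, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- the malicious string matches no user
```

An ORM gives the same protection by generating parameterized SQL under the hood, which is why the post suggests either as a proactive alternative to developer vigilance.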
Unfortunately, security people often lack skills to contribute to development and design beyond security. While information security touches many areas, information security expertise is not development (or networking or architecture or DevOps) expertise.
Information security needs different perspectives to succeed. As Corey House, a Pluralsight author like Troy Hunt, says in his course Becoming an Outlier, one route to career success is specialization. Information security is a specialization for everyone to consider, not just hackers.
I am a self-motivated, adaptable, outcome-focused enterprise and solution architect that gravitates toward technical leadership roles. My experience covers architecture, management, security and software development roles over 20 years, from multiple startups to global technology companies. I am an inventor of multiple patents; hold a variety of security, IT and agile certifications and contribute to open source software.
I have worked as an enterprise and solution architect at global technology companies like NTT Limited and Symantec. My focus has always been client-facing services, ideally ones that mix software development and IT management.
This blog explores the deeper thinking and processes behind writing software, building IT systems, and how they fit into the wider IT and business landscape.
Opinions expressed in this blog are the author's and not necessarily those of his employer or its affiliates.
Content is published under the Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
The header image and blog icon are from the blog Random Acts of Photography. This is used with kind permission from the author and under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.