Sunday 10 December 2017

Nike Development


When you work in a development team, especially one of a reasonable size, you eventually end up spending a significant amount of time on planning and preparation.

This is necessary to ensure that the output of the team moves in the right general direction and nobody would suggest that chaos is a more desirable alternative.

However there is such a thing as analysis paralysis and sometimes there are advantages to having an attitude of "just do it".

Learn By Doing

Software development is a cerebral activity that requires upfront thought; simply starting to bash out code will very likely result in a slightly incoherent code base with a lack of discernible architecture.

On the flip side any problem of significant complexity will prove difficult to get to grips with purely on the whiteboard. No matter how carefully you try to map out a solution you will always discover problems in your thinking once coding starts.

This is a difficult balance to strike and relies on having a sense of when more thinking is required and when more diagrams won't help and the time has come to try something out.

This has to be coupled with an Agile mindset of expecting bumps along the road and being prepared to adapt and think on your feet when a wrinkle in your thinking is revealed. 

Software Over Documentation

A consequence of trying to move to the coding stage quicker will be that you will naturally accumulate software faster than documentation.

I do believe documentation is important and your thinking behind a system should be available for others to consume; it is simply a matter of detail.

The details of exactly how something works change, and the code that implements that detail will always be the best documentation of it; ultimately it's the single source of truth as it's actually doing the work.

Documentation should provide enough insight to indicate intent to those that read it to guide future enhancements in the right direction.

It shouldn't delve down into the inner workings of the machine and it certainly shouldn't do that before the building of the machine is underway.     

Embrace Failure

The underlying narrative of this thinking is one that people sometimes find difficult to contemplate.

Not everything is going to work the first time you try it. Software development is way too complicated an activity for that to ever be the case.

If you accept that fact of inevitable failure then the choice is a stark one: would you rather fail after months on the drawing board, or fail after a shorter amount of time with at least the possibility that some valuable software has been created, even if some of it is flawed?

There is also an implication that failure has to be managed, both in its impact on timescales and delivery, and in defining what is an acceptable level of failure in a design we still want to ship.

This last part is in essence the nature of an MVP: it's acceptable for a design to be flawed in that it doesn't support all possible features or functionality. What isn't acceptable is for its flaws to prevent those features from ever being implemented in the future.

The role of any software development team is to produce software that addresses a business need. 

That doesn't mean software should be produced at any cost; a lack of thought in what you're doing will always catch you out in the end.

But no design will be perfect and no solution foolproof until they have been proven with working code that implements them.

The sooner you can move on to coding the quicker you will gain a warm feeling that this will work, or the sooner you will receive the feedback that a new approach is required.

If you arrive at the stage where you think "I need to try this out", just do it.

Wednesday 29 November 2017

The Pragmatic Architect


The role of a Software Architect is not a purely technical role within an organisation.

While you are tasked with ensuring technical competence you must also have an appreciation for the business goals of your organisation and understand what is required of you to help achieve them.

The majority of companies are not in the software production business, they are in the solution delivery business.

That means that being a Software Architect is a delicate balance between ensuring software is being built in the right way but also ensuring that it ships to customers so it can deliver value.

This requires a pragmatic outlook taking into account many different factors and influences and finding a way forward on all fronts.

Debt Accounting

Technical debt is incurred whenever a sub-optimal choice is made in the solution to a problem.

This is very different from implementing something badly; instead it can mean not dealing with all edge cases, not optimising performance or not implementing all possible variations of a feature that may be technically possible.

This amounts to trying to rationalise the scope of work and ensuring that something that is viable is achieved within opportune timescales.

Viable in this context means not only that it delivers tangible value to users but that the value is delivered via the application of viable engineering, meaning it is testable, scalable and adaptable.

Whenever you choose to incur technical debt, and you will be faced with situations where you have to, ensure that you understand the nature of the debt, the limitations it will impose and how you will engineer it out of the product when the time comes.

No project is free of tech debt so having an effective strategy for dealing with it is an essential skill.

Maximising Opportunities

As an Architect your primary goal within your organisation is to engineer the solutions it requires, but achieving this by the application of solid engineering principles is no less important.

These dual responsibilities will often clash, which is why you must always be on the lookout for opportunities to advance both causes when they arise.

Your organisation will primarily be concerned with the delivery of features, the details will be left in your hands, so learn to recognise an opportunity to deliver a solution and advance your underlying engineering platform at the same time.

Alternatively by having a close understanding of any previously incurred tech debt you may also be able to advance a solution that addresses these issues as well as delivering the required value.

In this sense you can save others in the organisation from themselves by giving them the solution they need rather than necessarily the one they are asking for.

Pragmatism in knowing which battles to fight would also come under this banner, knowing when the business imperative is too strong to try and turn the situation into an opportunity for refactoring or engineering.

All in the Presentation

Architects often cross the divide in an organisation between the technical and the non-technical.

This often means developing two different forms of communication: you must be able to explain technical concepts to those being asked to implement them, but conversely explain the same items in plain English to those being asked to fund or resource the development.

One set of people want to understand the detail and get to grips with the technicalities of what is being asked for and one does not.

When talking non-technical people through an architecture the conversation has to focus on the benefits that will be realised by taking a particular approach; shaping the conversation like this enables people to make the correct decisions.

As proud as you may be of your solution, the details of your genius will not have the desired impact on your non-technical colleagues, it is much better to talk in terms of outcomes rather than the technicalities of getting there.

Being a valuable member of a team usually involves being well rounded enough to be able to see problems and challenges from multiple perspectives.

As an architect being a zealot who only ever thinks about the technology will ultimately decrease your value to the team and reduce the effectiveness of your solutions.

In any aspect of life finding balance can be difficult but simply being open to the possibility that an organisation has goals beyond simply producing software is a good start.   

Monday 6 November 2017

One to Rule Them All


Software engineers often display a strong preference for compartmentalisation, a place for everything and everything in its place.

With this in mind it might seem like heresy to suggest that all source code for an estate of products should be stored in a single code repository.

Naysayers will scoff at the notion but many organisations that maintain extremely large code bases, namely Google, Facebook and Microsoft, have moved to the concept of a monolithic repository with every line of code in a single place.

Have these development teams taken leave of their senses or could it be that the benefits of having one repository to rule them all are too good to ignore?

A Wonderful View

Having the ability to view an entire code base all at once can enable us to identify patterns across many different areas of code that may otherwise be difficult to spot.

Tools that produce such metrics as duplication, technical debt or even security issues gain a whole new power to deliver insight when they are scanning every line of code you have at once.

Having the whole code base in view will also empower developers to find more ways to share code; by performing even relatively simple searches through the IDE, developers will be able to uncover where problems similar to the one they are facing have already been solved.

This increased capacity for collaboration coupled with a more inclusive attitude to code ownership can reduce the need to re-invent the wheel and ensure re-use becomes a default approach.

Atomic Refactoring

It's frequently the case that we are prevented from doing the right thing because of the potential pain in trying to manage the change across multiple code bases.

Once all the code is in a single place not only is it easier to envision the changes that need to happen, but they can be implemented in a single atomic operation.

Even a simple yet wide reaching change such as re-naming a class or changing a namespace can be achieved across the entire estate in a single operation.

This can also prevent refactoring getting off to a false start where the true impact of a change isn't fully realised before implementation begins.

The freedom this gives a development team to contemplate refactoring that they would never normally be brave enough to attempt can liberate them to do the right thing on a much more regular basis.

Simplified Dependency

A potential headache in maintaining any code base relates to the management of dependencies.

This can be tough enough when dealing with dependencies on 3rd parties but when we create a large set of interconnected internal code bases we only increase the size of this potential nightmare.

By essentially building all our internal dependencies from source we reduce dependency hell to its smallest possible size.

The benefits of atomic refactoring and a complete view of the code base also make the job of updating a dependency easier and more palatable than may normally be the case.

Many reading this might be thinking: isn't this desire for a monolith going against many of our ways of thinking about creating independent pieces of code, such as SOA or micro-services?

This is where I feel we need to draw a distinction between how we organise our source code and how we deploy its output.

A single repository doesn't have to mean a vast single solution or project.

It is possible to realise all the benefits described in this post whilst still on a day to day basis working on a small and specialised area of the code base.

The fact that it is possible to load the entire code base doesn't mean that has to be the normal course of operation, but the fact we can if we choose to see our entire world at once gives us the ability to more effectively shape it.

It would be wrong to suggest that moving to a single repository is pain free, it does place extra stress on tooling and all the companies mentioned that have adopted the approach have had to adapt their ways of working, but the potential benefits it can bring are difficult to ignore.

Monday 30 October 2017

How to Break Agile


When any school of thought, way of thinking or approach to a problem becomes significantly popular and widely adopted then a counter culture starts to grow to oppose it.

Often it is to the benefit of all of us that perceived wisdom is challenged, however I believe most of the criticism of Agile is actually misdirected, in that it relates to flawed implementations of its principles rather than landing a killer blow against the entire philosophy.

Agile is not a procedural approach to development; it is a way of thinking, a mindset to use to maximise the output of a team.

When Agile is seen to not be working it is usually because of a failure to adopt this mindset.

Fix Time and Scope

When trying to manage a software delivery there are two levers that can be pulled to adjust the outcome, we can make changes to the scope of the release or we can change the date on which software will be delivered.

Changes in scope can either increase or decrease the effort required to complete the delivery, and changes in deadline can either increase or decrease the amount of available effort we can spend prior to releasing.

What we have to realise is a relationship exists between those two things, the fact being that they cannot both be fixed at the same time.

To do so either leads to inefficiency that delays the release of working software or tries to arrange a situation where software is delivered without effort being expended.

Deadlines are a fact of life and it would be foolish to imagine a situation where they don't exist, but as soon as a date is fixed then scope is the only lever left to adjust the team's trajectory towards hitting that target; if that isn't an option then the difficult decision of moving a deadline must be faced.

Failure to Iterate

Although not expressly mentioned, an Agile mindset leads to a certain acceptance that software development is an unsolvable problem.

Your product will never be finished and it will never be perfected.

Once we've learnt that lesson then we can gradually come to the realisation that the next best thing is to try and ship frequent incremental updates to ensure the product is the best it can be right now.

The delivery of this incremental change may be imperceptible to users but allows for a direction of travel to be established.

When the apparent big bang of a new feature is discovered by a user this is actually just one more small delivery, providing the cherry on the cake of several previous unseen steps forward. 

A failure to iterate is often the result of a failure to properly prioritise.

A priority is a singular entity that stands alone; to pluralise priorities is to admit to a lack of vision for the direction the product should move in, and it leads to a strategy akin to throwing around features and seeing what sticks.

Speed becomes of the essence as we increasingly fear that users will leave us without the next big thing, when actually users are often happy to have something that works that receives regular attention to ensure that remains the case.  

Breakdown of Trust

Many would find it quite shocking to be asked if they trust their development teams.

But actually it is not uncommon to observe behaviours that indicate that trust has broken down.

First and foremost this is seen during the estimation process; too often there is seen to be a right and a wrong answer when providing an estimate.

Development teams shouldn't be asked to estimate as a courtesy, it is because they are experts in their field.

Repeating estimation without any conversation around scope leads to the madness of asking the same question over and over until the calculation yields a different result.

Trust can also be eroded when a development team's warnings about potential bumps in the road are not heeded.

Too often this is seen as engineers obsessing about technical detail, while on occasion this may be true, this comes from people who understand all the moving parts of the code base and therefore can also envision the areas where the machine may start creaking.

There is more to being Agile than simply adopting its ceremonies and ticking the boxes of its various implementations.

Agile is a philosophical viewpoint on how the unenviable task of software delivery can be approached.

It isn't a formula that can be solved to produce a proof for success.

Many teams start off with good intentions and want to do the right thing but they eventually encounter the pitfalls of allowing their minds to drift to more attractive ways of viewing the problem that promise a solution that will never be delivered.

Learn to recognise the signs of these mindsets creeping in to your team and use these opportunities to re-affirm your commitment to keeping an Agile mindset.

This will inevitably lead to giving up some perceived control over a situation, the first step towards a transition to Agile from more traditional methods is to realise this control is an illusion, the world simply isn't like that.           

Monday 16 October 2017

Always Deploying


Within most development teams we're becoming used to most things being continuous.

The adoption of Continuous Integration (CI) has given us tooling that can ensure the effort of developers can be combined at the earliest opportunity, and an increasing emphasis on shifting left has made it possible for us to ensure quality is a default position.

But ultimately software has to be deployed to be of any value, and despite all the tooling available to smooth this transition into production, for many teams this final push is still something that's feared.

If you're fearful of deploying code this will become a self-fulfilling prophecy: things will go wrong, your fear will grow and you will deploy on an even more infrequent basis.

Break this cycle of fear and move to a situation where deploying to production requires little effort and is something that is a natural consequence of your team delivering.

Not Wrong for Long

The likelihood of an issue occurring after a deployment is proportional to the amount of change being deployed.

If you deploy large change infrequently not only do you need to be confident in your testing regime but it's likely that if you do face difficulties you'll find it harder to roll back your changes to your last known good state.

If you deploy small change frequently the chances of an issue arising are reduced but also the fast route into production you've developed means rolling back is simply another deployment accomplished quickly and with little fuss.

This ensures that if the worst happens you're not wrong for long.

A long drawn out deployment process is an attempt to make sure nothing ever breaks in production. While that's an admirable intention, mistakes will happen and ultimately all this achieves is complicating the recovery from these unavoidable mishaps.

Continuous MVP

A misunderstanding of the concept of a Minimum Viable Product (MVP) is probably the biggest barrier to teams becoming truly Agile in their approach.

Infrequent deployments can exacerbate the tendency to try and squeeze more scope into each release because of the limited opportunities to deliver features.

When features can be deployed at will it becomes much easier to move people to a mindset of shipping a true MVP, they know the next release can be whenever some value has been added to the journey rather than the next scheduled window of opportunity. 

Reducing the time between an idea and a deployment into production also increases the opportunity for user feedback to have a real influence on the direction of a product. 

Feedback is received early, meaning large amounts of effort aren't wasted on functionality that users don't find valuable, and once again if need be removing the functionality entirely is just another deployment.

Don't Break It

Regular deployment of code will also encourage people not to break things.

Large gaps between deployments tends to encourage an acceptance towards parts of the code base being broken, inevitably leading to an increasing feeling of panic when release day does eventually bear down upon us.

When code is to be shipped as soon as it's finished, not only does it have to be maintained in a working state but this continual green light has to be demonstrable; this leads to increasing effort being put into effective automated testing and good documentation of what a working code base actually means.

Here a distinction can be drawn between Delivery and Deployment; even when operating a Continuous Deployment strategy it isn't necessarily the case that every build has to be pushed to production, but it certainly should be the case that it could.

Aside from wanting to deploy new features at the earliest opportunity, we can also never predict when an emergency, such as a security vulnerability, forces us into a deployment we weren't planning on.

In these situations it is definitely advantageous to know everything still works.

Continuous could be described as the mantra for modern development practices, all the activities we value should be happening all the time.

As the last step in the chain that connects us to our users this should also apply to deployment.

As with so many things it's good advice to confront your fears; if release day currently comes with feelings of dread then give yourself the impetus to confront those fears and find a solution by having more and more release days.

Delivering into production should never be taken lightly, and quality should always be maintained, but it's possible to do this while also trying to give users value the instant it's achieved by your development team.

Monday 9 October 2017

Engineering SLAs


As a developer you'll often be asked to explain why you have written certain pieces of code the way you have and some of this feedback may be critical in nature.

When you're faced with this situation please don't use the justification "but it works".

There is so much more to software engineering than simply writing working code; if working is the only quality of code worth mentioning then I would wager it won't stay working for very long.

In order to qualify as an engineering discipline we should be aiming our sights a little higher and look for our code, and the systems we use to produce it, to have many more qualities than simply working.

Ability to Deploy Change

Software engineering probably exhibits the fastest rate of change of any engineering discipline.

A code base under active development by a large team may see thousands of changes being made to code daily.  For this reason the ability to accept change is an important property for a code base to have.

This may sound strange, surely any piece of code can be changed? Actually the ability for a code base to be changed, and not re-written, is something that can only be achieved via careful thought.

Many of the SOLID design principles, such as Single Responsibility, Open-Closed and Dependency Inversion, exist to promote this acceptance of change without having to throw away large portions of code whenever a new feature is requested.
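
As a minimal, hypothetical sketch of how these principles allow change to be absorbed, consider a piece of business logic written against an abstraction rather than a concrete dependency; the names used here (Notifier, OrderService and so on) are invented for illustration only.

    // A hypothetical illustration of Open-Closed and Dependency Inversion:
    // OrderService depends on an abstraction, so new channels can be added
    // without modifying the existing business logic.
    interface Notifier {
        fun send(message: String)
    }

    class EmailNotifier : Notifier {
        override fun send(message: String) = println("Email: $message")
    }

    class SmsNotifier : Notifier {
        override fun send(message: String) = println("SMS: $message")
    }

    // High-level policy: depends only on the Notifier abstraction.
    class OrderService(private val notifier: Notifier) {
        fun confirmOrder(orderId: String) {
            // ... order confirmation logic would live here ...
            notifier.send("Order $orderId confirmed")
        }
    }

    fun main() {
        // Adding a new notification channel is an extension, not a modification.
        OrderService(EmailNotifier()).confirmOrder("A-1001")
        OrderService(SmsNotifier()).confirmOrder("A-1002")
    }

Requesting a new channel then means adding a class, not re-opening code that already works.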

But the ability to change or modify code is only part of the story; change that isn't deployed is largely wasted effort, so a code base must also demonstrate an ability to be deployed easily and without fuss.

This is achieved architecturally via the use of patterns like micro-services but it also requires the confidence given by having well structured automated tests that can be relied upon to identify issues and potential defects.     

Ability to Determine Performance

We deploy code with a purpose in mind and our users expect it to perform that purpose with an acceptable level of performance.

Hearsay and rumour are not effective ways to measure performance; cold-hearted metrics that demonstrate what the code is doing and how well it is doing it must be thought about before deployment and continually measured and reported on.

This should create a feedback loop that when coupled with an effective ability to deploy change will accelerate your product to new heights.

Before the operation of software can be effectively measured it must be clear to everyone what each area of the code base does and how it does it. It must also be clear why certain events are generated and what their importance is to the code achieving its desired outcome.
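
A minimal sketch of this kind of cold-hearted measurement might look like the following; the MetricsRecorder interface and the timing helper are assumptions for the purposes of illustration rather than any particular monitoring library.

    // A hypothetical sketch of measuring an operation and reporting the result,
    // rather than relying on hearsay about how the code performs.
    interface MetricsRecorder {
        fun record(name: String, durationMillis: Long, success: Boolean)
    }

    class ConsoleMetricsRecorder : MetricsRecorder {
        override fun record(name: String, durationMillis: Long, success: Boolean) {
            println("metric=$name duration_ms=$durationMillis success=$success")
        }
    }

    // Times a block of work and reports how long it took and whether it succeeded.
    fun <T> measured(recorder: MetricsRecorder, name: String, block: () -> T): T {
        val start = System.nanoTime()
        return try {
            val result = block()
            recorder.record(name, (System.nanoTime() - start) / 1_000_000, success = true)
            result
        } catch (e: Exception) {
            recorder.record(name, (System.nanoTime() - start) / 1_000_000, success = false)
            throw e
        }
    }

    fun main() {
        val recorder = ConsoleMetricsRecorder()
        val total = measured(recorder, "sum_to_a_thousand") { (1..1000).sum() }
        println("result=$total")
    }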

Ability to Demonstrate Security

If any single quality of code has been elevated in importance in recent years it is that it should be secure.

Any code deployed almost anywhere can expect to be the subject of inspection and attack, whether the intentions of the attackers are mischievous or criminal it is no longer acceptable for security to not be on the minds of developers at all times.

But security doesn't exist on paper or whiteboards, it exists when a system can demonstrate an ability to detect, repel and recover from an attack.

The ultimate manifestation of a code base that doesn't meet this criterion is one that relies on "Security through Obscurity"; there are a lot of smart people out there trying to attack the code they find, and you may find your secrets don't stay obscure for long once they turn their attention to your code.

Security must be demonstrated by a willingness to submit your code to deliberate attack.

There must also be an acceptance that sometimes security flaws will be uncovered that must be fixed; sometimes they will be found by effective monitoring, and once again when coupled with effective deployment of change this can ensure windows of opportunity for attackers are as small as possible and closed as soon as they are found.

The minimum that should be expected from any developer is code that fulfils its original purpose and can be said to be working.

Those that rise to the upper echelons of their profession don't stop when this minimum acceptable state is achieved, they instead realise that code must also demonstrate some other important properties.

In this way they elevate their art beyond the mere production of working code to an engineering discipline with all the rigour and attention to detail that is implied. 

Wednesday 4 October 2017

Concerning Separation


The majority of good engineering and architectural practices in software development can be traced back to the principle of separation of concerns.

A concern may be deemed a reason to change, a functional unit or a set of connected pieces of information but whatever the definition it should be possible to demonstrate clear and distinct separation when it comes to the implementation of particular tasks and actions in a code base.

When this isn't the case we instead see the emergence of spaghetti code, instead of well formed modules of code we are left with interwoven and potentially unintelligible software that is very difficult to pick apart.

It should be possible when explaining the construction of a code base to identify a certain layering along the lines of the concerns that will broadly fulfil common objectives.

Business Logic

Software generally exists to represent a particular business domain, these domains will always have particular rules of operation and practices that define what it is the business does for its users.

This logic when implemented in a code base should be distinct from the implementation of integrations, such as REST APIs, that provide input and also to the mechanism that is used to present the output to the user.

The business logic should be unconcerned with how the data it is using to make decisions is being provided and also to what will be done with the outcomes it is producing, it is purely about logic.

Model

Alongside logic a business domain will also likely have a representation of the data it deals with and how it models its world.

These model classes should be only concerned with representing these entities and be entirely dumb.

It should be very rare to see logical programming constructs in these model classes as the place for logic and model manipulation is in the business logic that represents the rules that govern the management of the model.

A simple example might be that a class representing a bank loan shouldn't contain code to calculate repayment values or fines for late payment; it should simply represent the amount owed by the customer.
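
Sticking with that loan example, a sketch of the separation might look like the following; the class and property names are invented purely to illustrate the layering, and the repayment rule is made up.

    import java.math.BigDecimal
    import java.math.RoundingMode

    // Model: a dumb representation of the domain entity, holding state only.
    data class Loan(
        val customerId: String,
        val amountOwed: BigDecimal,
        val annualInterestRate: BigDecimal   // e.g. 0.05 for 5%
    )

    // Business logic: the rules that govern the model live here, not on the model.
    class RepaymentCalculator {
        // Illustrative rule only: one month's interest plus 1/60th of the balance.
        fun monthlyRepayment(loan: Loan): BigDecimal {
            val monthlyInterest = loan.amountOwed
                .multiply(loan.annualInterestRate)
                .divide(BigDecimal(12), 2, RoundingMode.HALF_UP)
            val capitalPortion = loan.amountOwed.divide(BigDecimal(60), 2, RoundingMode.HALF_UP)
            return monthlyInterest.add(capitalPortion)
        }
    }

    fun main() {
        val loan = Loan("cust-42", BigDecimal("6000.00"), BigDecimal("0.05"))
        println(RepaymentCalculator().monthlyRepayment(loan))   // prints 125.00
    }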

Services

No piece of software is self contained, it generally relies on the flow of information in and out of its own sphere of operation.

These flows might be API calls to servers, to functionality within the underlying OS or to other pieces of software deployed alongside it.

All of these interactions should be encapsulated within a service layer, the responsibility of which is to understand the mechanism for the interaction and how to either retrieve or send the relevant information.

A service layer interacts with outside entities so you don't have to, abstracting the detail so other parts of the code base can concentrate on what can be achieved with the information being provided.

These services should also be small enough that they can be composed to perform more than the sum of their parts.
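
A minimal sketch of such a service layer, using a hypothetical exchange rate service rather than any real API, might look like this; the rest of the code base only ever sees the interface.

    // The service abstracts how the information is fetched; callers only care
    // about what they can do with it. All names here are purely illustrative.
    interface ExchangeRateService {
        fun rate(from: String, to: String): Double
    }

    // One implementation might call a remote API; swapping it for a stub or a
    // cache requires no change to the code that depends on the interface.
    class FixedRateService : ExchangeRateService {
        private val rates = mapOf(
            Pair("GBP", "USD") to 1.25,
            Pair("GBP", "EUR") to 1.15
        )
        override fun rate(from: String, to: String): Double =
            rates[Pair(from, to)] ?: error("No rate available for $from -> $to")
    }

    // Business logic composes services without knowing where the data comes from.
    class PriceConverter(private val exchange: ExchangeRateService) {
        fun convert(amount: Double, from: String, to: String): Double =
            amount * exchange.rate(from, to)
    }

    fun main() {
        val converter = PriceConverter(FixedRateService())
        println(converter.convert(100.0, "GBP", "USD"))   // prints 125.0
    }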

Presentation

Most software at some point must open up a window into its world, both showing the user the current state of its domain whilst also accepting input from the user to change that state.

Any logic in this layer should be solely concerned with choices related to the presentation of data, at no time should it be deciding what data should be displayed only how it should be rendered.

Being the only area of code the user can interact with means the presentation layer is necessarily the first to be informed of a user's input; the code in this area should be the minimum required to provide a linkage between this input and the business logic that will decide the outcome, with nothing filtered, second-guessed or pre-empted.
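
As a hedged illustration of that minimal linkage, a presenter might do no more than forward input to the business logic and decide how to render whatever comes back; again the names are invented for the example.

    // Business logic decides the outcome of the user's input.
    class SearchLogic {
        private val catalogue = listOf("apple", "apricot", "banana")
        fun search(term: String): List<String> =
            catalogue.filter { it.contains(term, ignoreCase = true) }
    }

    // Presentation: forwards the input untouched and only decides how to render results.
    class SearchPresenter(private val logic: SearchLogic) {
        fun onUserTyped(term: String): String {
            val results = logic.search(term)   // no filtering or second-guessing here
            return if (results.isEmpty()) "No matches" else results.joinToString(", ")
        }
    }

    fun main() {
        println(SearchPresenter(SearchLogic()).onUserTyped("ap"))   // apple, apricot
    }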

Most reasonably complicated software will have more distinct layers than have been presented here.

These might cover hardware abstraction, platform abstraction or any other distinct functional units that can be identified.

Whatever they may be it should be possible to draw a ring around them on an architecture diagram rather than a many sided polygon.

A single concern is a single reason to change and a single reason to cause a bug.

The impact this will have on the maintainability and extensibility of a code base is immeasurable and is pretty much the only way to achieve a clean architecture.

The challenges are firstly to clearly identify these concerns and secondly to have the discipline to resist the temptation to mix them.

As with many good software development practices a healthy dose of pedantry and an obsession with order can go a long way to providing good outcomes.       

Monday 18 September 2017

Ruthless Engineering


It can be easy for teams to end up being hampered by two sources of distraction.

One can be an over-emphasis on solving problems that don't yet exist, this might be termed over-engineering or simply unwarranted future proofing.

The other can be becoming too wrapped up in the coolness of what is being built or the softer elements of how customers will interact with the software being produced.

While it's a delicate balancing act, as effort in all of these areas can be valuable, a healthy amount of ruthlessness should be demonstrated in trying to achieve our goals.

Ticket to the Game

The goal of any development team should be to ship software; not only is it demoralising for the fruit of a team's efforts not to be delivered to users, but an organisation gains little or no value when software is written but not given to users.

Shipping software is buying a ticket to the game; trying to perfect your offering to users, while admirable, will often lead to others delivering what may be a sub-optimal experience but one that gets them into the minds of users.

This shouldn't be taken as a remit to ship software without it being properly engineered or without due diligence around its proper operation.

It is a call to remove all obstacles to the delivery of software into production, sometimes these are technical and sometimes process driven, but a ruthless pursuit to reduce the amount of time between coding being completed and code being shipped should be the goal of an effective operation.    

Working the Numbers

Once our software has shipped then we should immediately turn our attention to determining if we have built the right thing.

As much as we want to please our users garnering their feedback can be an imprecise science and we may have additional goals for the code we shipped that we also need to evaluate.

Any data we collect towards this goal has to be actionable, we must be able to define metrics we can accurately measure and where a plan of action can be defined for whatever direction those numbers may move in.

We may feel that some of the value we provide to users is intangible or immeasurable; while that may be true, effective product development must be data driven and progress must be demonstrable.

Change of Direction

When we evaluate our carefully cultivated data we shouldn't just be trying to prove our original hypothesis correct, we should be prepared and almost expectant that it shows we got it wrong.

One of the benefits of an agile approach is the ability to pivot when we are proven to have taken a misstep.

To have an agile mindset is to accept uncertainty, and in accepting uncertainty we're acknowledging the possibility we may get things wrong.

Achieving this ability to pivot is a combination of flexible engineering, constructing a well segmented and organised code base that can be changed without re-writing, and a business mindset to not create plans built on an assumption of always making the right call.

Ultimately engineering must have a purpose; the production of software is not an end in and of itself. It is a tool we use to achieve outcomes, both for our users and to reach our own goals.

This purpose can be reinforced by applying the scientific method: creating a hypothesis, devising an experiment and looking at what the data tells us.

Sometimes we will be right and sometimes we will be wrong, but while we remain in the game we can look to improve our numbers and roll the dice again.   

Monday 4 September 2017

Mobility Issues



Each branch or flavour of software engineering has its own quirks and nuances, whilst the fundamentals of writing good software are fairly universal, experience with a given platform or use case tends to develop over time to adapt these truths to the situation.

Mobile development is no different, many years ago this would have been characterised by a lack of resource, whether that be CPU, memory or data storage, however those concerns have all but disappeared with the advent of powerful smart phones.

But nonetheless mobile development still has some unique challenges; while sometimes this can make an engineer's life difficult, it's also what can make the process more interesting.

Diversity Across the Spectrum

Web development can be complicated by the various different browsers that users may be using to access a site, standards for web technologies help smooth this variation but it still exists.

Mobile development is complicated by much more variation in multiple factors, whether it be software, hardware, screen size, screen resolution, available peripherals or age of device.

All of these factors can vary independently, leading to the number of variations an app could be expected to be compatible with being measured in the hundreds or possibly even thousands.

The counter to this is to identify where all this variation could impact your code's operation and ensure adequate abstractions are in place to enable you to craft some conformity for important pieces of functionality.

It's also important to be realistic about which elements of variation cannot be controlled and whether or not it is realistic to try and target all the platforms all the time.

This may lead to a progressive approach where the more functionality a platform or device is able to offer the richer the app experience will be.
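
One way to picture that progressive approach, as a sketch rather than a prescription, is a small capability abstraction that the rest of the app codes against; the DeviceCapabilities interface below is an invention for illustration.

    // A hypothetical abstraction over device variation: the app asks what the
    // device can do and degrades gracefully rather than assuming every feature.
    interface DeviceCapabilities {
        fun hasCamera(): Boolean
        fun supportsBiometrics(): Boolean
    }

    class ProfileScreen(private val device: DeviceCapabilities) {
        fun availableActions(): List<String> {
            val actions = mutableListOf("Edit details")            // always available
            if (device.hasCamera()) actions += "Update photo"      // richer devices get more
            if (device.supportsBiometrics()) actions += "Enable fingerprint login"
            return actions
        }
    }

    fun main() {
        val basicDevice = object : DeviceCapabilities {
            override fun hasCamera() = false
            override fun supportsBiometrics() = false
        }
        println(ProfileScreen(basicDevice).availableActions())   // [Edit details]
    }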

Lack of Connectivity

Connectivity to APIs and services has become an essential part of most code bases, very few pieces of software are written in or run in isolation.

When developing server side applications the infrastructure to provide this connectivity is virtually guaranteed, outages may happen but shouldn't be a regular occurrence.

Code that runs on a users personal mobile device cannot rely on this robust connectivity.

Environmental constraints related to where users happen to be trying to use your code can cripple an application if no strategy is in place for when API calls cannot be made.

It is very difficult for a complete lack of connectivity to cause no issues but whenever a flow of execution involves a service call consideration should be given to what happens when that call cannot be made.

This may mean implementing a retry strategy, limiting the required number of round trips to the server or pulling down data ahead of when it will be required.
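
A simple retry wrapper, sketched here with invented names and a fixed back-off rather than any specific networking library, shows the kind of consideration being suggested for when a service call cannot be made.

    // A hypothetical retry wrapper for flaky mobile connectivity: retry a call a
    // few times with a pause between attempts, then surface the failure.
    fun <T> withRetries(attempts: Int = 3, delayMillis: Long = 500, call: () -> T): T {
        var lastError: Exception? = null
        repeat(attempts) { attempt ->
            try {
                return call()
            } catch (e: Exception) {
                lastError = e
                if (attempt < attempts - 1) Thread.sleep(delayMillis)
            }
        }
        throw IllegalStateException("Call failed after $attempts attempts", lastError)
    }

    fun main() {
        var callCount = 0
        // Simulated flaky service call that succeeds on the third attempt.
        val result = withRetries {
            callCount++
            if (callCount < 3) throw RuntimeException("network unavailable")
            "profile data"
        }
        println("Got '$result' after $callCount attempts")
    }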

Window of Opportunity

The nature of how people use mobile devices means they rarely use your software with an anticipation of spending a large amount of time interacting with it.

From the moment they launch your app you have a limited timespan to provide the functionality the user wants and to draw their eye to the areas that you want them to notice and interact with.

Users also utilise their devices for many different purposes, so you have to deliver your functionality whilst competing for the user's attention when they are presented with many other distractions and apps trying to fulfil similar needs.

Context, Context, Context

The fact mobile applications run on a device the user has with them pretty much at all times and when on the move means a wealth of additional context is available for your app to leverage.

This may be where the user is, what they are doing, what they are trying to find, where they are trying to get to.

If used properly this enables your app to make itself extremely relevant to the users needs and wants.

It can also enable your app to figure out over time how it can become more relevant to the user by mining all this additional context that not all code is able to take advantage of.

In any area of development the unique challenges it presents should be embraced as opportunities to make a difference; this is where you can differentiate yourself from your competition and make yourself more useful.

Always keep in mind the universal truths of all development but learn how to apply them to the field you are working in, while we may agree on the structure of the cloth, one size does not fit all.

Monday 21 August 2017

Everything as a Service


As the benefits of cloud computing have started to become fully realised we have seen the birth of the As-a-Service model.

Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS).

This combined with the birth of the DevOps mentality has led to a shift in how we approach the provision of the services and functionality that we always require on any new project.

While each variant of this approach has differences in the degree to which you are involved in the detail all carry similar benefits compared to the traditional do-it-yourself attitude.

Details as a Service

While many of us may take enormous pride in managing and maintaining boxes and servers, ultimately the amount of effort required to do this is often out of proportion to the value derived.

Many of us will also have spent many frustrating hours dealing with an issue related to a Java install, a file system problem or a connectivity issue. We aren't in the business of maintaining servers but it is a necessary evil to have a functioning software delivery pipeline.

As-a-Service enables us to abstract ourselves from this necessary yet effort draining detail and focus instead on the business of delivering value to our customers via deploying software to these servers.

By taking advantage of an army of people dedicated to keeping our servers up and running we gain reliability while reducing to almost zero our effort on these day to day activities.

Expertise as a Service

Building and deploying software involves many different tasks, each with their own skill sets; as we become more experienced we will gain a certain level of skill in all these areas, but it is unlikely we could ever call ourselves experts in all of them, and we are also unlikely to be able to dedicate enough time to achieving that level of proficiency across the board.

We therefore do the best we can.

The functionality delivered via As-a-Service is backed by experts in their field, people who have amassed expert knowledge of the service they are offering that it would be impossible, or at least impractical, for us to attain.

This engineers a situation where we are essentially filling our team with experts in every aspect of delivery amounting to years and years of experience being added to our team in specific technologies, domains and functionalities.

Compared to this there is no benefit in developing these systems in house, while it may be fun we cannot hope to improve on what is delivered by experts employed to achieve excellence.

Availability as a Service

When we construct infrastructure for use in deploying and hosting our code there is generally an on-going cost to its availability, maintenance and provisioning.

This cost is usually not in line with our utilisation of the infrastructure, although it will rapidly increase whenever that infrastructure is down for what ever reason.

As-a-Service in most cases is like any other product or service you may buy with it being charged based on your usage.

This can obviously lead to cost savings for the services we need to be available but only use infrequently, but the other consequence is that we have a virtually unlimited supply of services where our need scales with the amount of delivery we are trying to achieve.

As an example, when provisioning our own build system we are likely to be faced with a choice of imposing a cap on the amount of throughput our developers can achieve or hugely over provisioning at great cost.

An As-a-Service build system can start off small but via some tweaks to settings and a proportional increase in cost can be made to achieve more. As developers get into a groove the build system can be made to adapt and ensure every last ounce of effort can be converted into the production of code.

Within software engineering we have a healthy track record of laziness, trying to use our skills and abilities to make our work lives easier.

As-a-Service is the ultimate manifestation of this attitude with the goal of ensuring developers are almost entirely focused on writing code.

While we may be capable of doing or learning to do all these tasks, the question you have to ask yourself is whether that would add any value.

The emergence of several major cloud providers and the competition starting to grow between them means the answer to that is almost certainly no.    

Monday 7 August 2017

Assurance of Quality


It's a truth many software engineers have a difficult time accepting, but all software, no matter who produces it, usually has bugs, defects or areas that don't quite meet requirements.

Once that fact of development has been acknowledged then the need to have some level of quality assurance is undeniable.

The traditional model encourages a left to right approach where coding is completed, code is tested, code is shipped.

This "throw it over the fence" mentality is inefficient and thankfully we have developed modern techniques to ensure quality isn't an after thought.

Shift Left and Automate

The ideal situation to be in is that defects never make it into a code base, they are if you pardon the pun stopped at source.

Realistically this will never be the case, but if we can move much of our quality assurance nearer to the point that the code is written we will realise huge efficiencies in producing effective working code.

This can be achieved in two ways, by introducing more automation and by having a slick and quick deployment pipeline.

Unit testing, functional testing, integration testing, we have many techniques at our disposal to very quickly verify if changes to our code base have had a negative impact in its effectiveness to perform its intended task.

By ensuring our code can be quickly deployed to the environment it is designed to operate in we can create short feedback loops to identify and address any regressions.

Whether this be by quickly making new versions of code available to a test team or by employing UI automation to validate the continued operation of our key journeys, the quicker issues can be identified the quicker they can be fixed.
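
A small unit test is the kind of quick verification being described; the sketch below uses kotlin.test and assumes a standard test runner is on the classpath, with the discount function invented purely as something to test.

    import kotlin.test.Test
    import kotlin.test.assertEquals
    import kotlin.test.assertFailsWith

    // The code under test: a tiny, hypothetical piece of business logic.
    fun applyDiscount(price: Double, percent: Double): Double {
        require(percent in 0.0..100.0) { "percent must be between 0 and 100" }
        return price * (100.0 - percent) / 100.0
    }

    // Fast, automated checks that run on every change and fail the build on regression.
    class DiscountTest {
        @Test
        fun appliesASimplePercentageDiscount() {
            assertEquals(90.0, applyDiscount(100.0, 10.0), absoluteTolerance = 0.0001)
        }

        @Test
        fun rejectsAnImpossibleDiscount() {
            assertFailsWith<IllegalArgumentException> { applyDiscount(100.0, 150.0) }
        }
    }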

Break Things

The advantage of automating and streamlining the testing of the routine is that it releases the valuable resource of test teams to focus on more abstract testing of our system.

The main purpose of this testing should be to break and destroy.

By taking the use of our system to extremes we can uncover inefficiencies or flaws in logic that otherwise would be difficult to find.

A common problem when testing software is the relatively limited spectrum of use we are exposing it to. Often a defect that we may think of as relatively obscure may actually be produced hundreds or thousands of times once it's out in the field being used by a large user base.

By actively trying to break things we focus our attention on the problems and issues that are most likely to affect people.

This destructive approach may take many forms, but the important thing is not to concentrate on validating correct operation but instead to employ a Machiavellian approach and try to cause chaos.

Test in the Field

Even when we implement all the methods and techniques available to us to enforce quality some problems are still going to make it through the net into production.

Because of this inescapable fact it is important we have an effective strategy to properly instrument our application to report back valuable diagnostic information, both when things are working and when they're not.

Whilst it is not a sin to ship a defect, it is at least negligent to not have the ability to detect the effect on users.

Our software should be like a machine we are constantly monitoring and inspecting, whether it be out and out bugs or simply measuring a decline in performance it is vital to pay attention to what is going on out there in the field.

As well as creating a feedback loop to improve your system, analysing how people are using your software will also provide great insight into the direction it should be going in.

As we progress as an industry it's important we develop and improve all aspects of software development, and that includes how we test and validate the code we produce.

Innovation in this area will bring great rewards and help you stand out from the crowd.

Accept that defects and bugs are a fact of life and develop a strategy to seek and destroy as many as possible; also realise that it's everyone's responsibility to play a part in achieving this, as we all bring different skills to the table.

Sunday 23 July 2017

The Forgotten Features



Those of us that work in the technology industry can very easily demonstrate the traits of magpies, becoming distracted by the new and shiny, always wanting to break new ground and be first.

Sometimes this can be at the expense of taking care of what may appear to be the mundane but what is actually the lifeblood that pays the bills.

This is not an argument against innovation, it is instead an acknowledgement that the right to embark on such escapades must be earned by making your core offering bullet proof.

To forget about these features will taint any advances you may make in other areas with the anger and frustration of your users.

Performance

Users are generally very impatient people, the move towards a more mobile world has only increased this tendency.

While it may seem obvious that your site or app should be performant, our excitement at the unveiling of our latest and greatest sometimes blinds us to the fact that the users patience is unaffected by this anticipation.

Set minimum requirements around the performance of certain key aspects of your system, this should especially concentrate on aspects affecting access, the time taken to login or the time taken to checkout.

Also don't become too complacent when your code runs like a rocket during development, when it only has to service you as a single user; a lack of performance at scale is unfortunately all too common.

Available at Scale

Even the coolest of new features carries very little impact when users can't access it.

Scale is a consequence of success; to not have an effective strategy for scalability is to have no plan on how to deal with success.

Scale also costs, even in a cloud computing world throwing more tin at the problem will become very expensive very quickly.

We have never had so many tools for dealing with scale; creating an elastic infrastructure with multiple nines of uptime is within everyone's reach. Being down should be considered a sin, with the penance being disgruntled users who may never come back into the fold.

Pay attention to errors and understand the workload your system is under by unwavering analysis of how and when your users are most active.

Deployability

If you question your users on what features they would like to see it's unlikely any of them would mention deployability.

They may not know it but they want this.

If software isn't easily deployable then it is slow and ineffective at delivering value; no user benefits from code sitting in source control, and effective continuous integration is the mechanism for implementing continuous delivery of value.

This leads to continuous bug fixes, continuous security patches and new features being available the instant they are ready.

Being continuous isn't about cadence, no matter how regularly you deploy; it's about there being no barriers to deployment and having the ability to deploy whenever you want to.

Even though they may not have asked for it your users will thank you for it.

If you ask users if they would like to see this or that feature they will invariably say yes, don't take this to mean that they want your system to resemble a Swiss army knife.

If you ask them what is bugging them about using your site or app, some may mention a potential new feature but most will talk about problems that can be linked back to performance, scalability or availability.

Put simply, users aren't asking for the world they just want things to work. Satisfy this ambition by concentrating on what your core offering to them is and make your system the most effective delivery mechanism for that offering the world has ever seen.

Once you're able to make this claim the time for innovation will come, but never at the expense of any of these too often forgotten features.

Sunday 9 July 2017

The Modern Age


All industries can point to different eras as practices, techniques and perceived wisdom undergo constant evolution.

Many would point to technological industries as taking this to extremes with the pace of change often deemed to be frightening.

While I think this aspect of the industry is often exaggerated, it is undeniable that for such a comparatively young discipline software engineering has evolved and changed many times.

If this is true then what defines this modern era of software development?

Continually Under Attack

There has been a proliferation of the use of technology such that it influences every part of modern life, along with this ever increasing use of technology has come an ever greater understanding of how to harness it.

Never have so many people known enough to be dangerous.

Whilst IT security has always been a concern its importance in any modern day system is now such that it is negligent in the extreme for it not to be given focus.

Expose any piece of technology to the internet and you may be shocked at the speed at which it will start to be probed and investigated. These attempts won't always be malicious but many will be looking for weaknesses and vulnerabilities that may lead to a bounty.

An entire industry now exists solely to facilitate these attacks, newly found exploits being shared in kit form to allow anyone with even the most minimal computing skills to be a potential threat.

In the modern day it pays to be paranoid, everyone is out to get you.

Cloudy Days Ahead

The workhorses of an internet driven world are servers, everything can eventually be traced back to these pieces of tin.

An appreciation for the building and maintenance of these boxes used to be a primary skill for someone wishing to deploy software to them.

The advent of the cloud has removed that requirement, in the modern day only suckers build servers.

The arrival of the cloud should lead us all to a mindset that it's a waste of a valuable and talented resource to have engineers working on already solved problems.

By using the services offered by the cloud, whether this be PaaS, SaaS or IaaS, we can ensure we concentrate on the areas where we can add value. How much value are we really able to add to building infrastructure others have already mastered?

The days of developers having to worry about problems with the JVM, disk IO or registry settings should be very much over.

Although our industry is young we should look to take advantage of the areas of maturity we do have and not think our wheel can be better than the ones already invented.

Always be Deploying

Traditionally we operated in a world of release cycles, set points in time when we released software.

This cadence brought with it a gradual rise in tension culminating in a stressful pushing of the button to deploy to production.

Thankfully we have learnt from these scars and developed technology to ease the process of delivery; this combined with the emergence of the cloud has made our release strategy much simpler, and in the modern world when software is ready we ship it.

The drive to develop this technology wasn't a war on complexity it was a war on fear.

Fear comes from uncertainty and what we have come to realise is human processes foster that uncertainty. The monotony of automation applying unrelenting consistency takes away that fear and allows us to manoeuvre ourselves to a position where deployment is a consequence of work being completed not a conscious and fearful action.

Someone may well look back on this post in years to come and think it quaint, the advances I'm describing having themselves been superseded. But hopefully we are still following a strategy that the best way to deliver technology is with technology.

Problems and threats may change and we have to adapt our practices to suit, but ultimately our core skill set is technological, we are experts in it, and therefore it is probably the route to a solution to our current set of problems.

Friday 9 June 2017

Do More with Less

Software engineering as an industry and a discipline has developed an unfortunate tendency to repeat a flawed approach to scaling the delivery of value.

An initial small team of engineers develops something that proves popular and has great potential; this leads to a hunger to build on top of this success. Unfortunately this hunger is very often not paired with patience, and we fall back on a view that software production can be made to obey the same rules as any other production activity.

More people will produce more code; the fundamental misunderstanding behind that mantra is that value delivered to customers doesn't scale in proportion to the amount of code written.

Too Many Artists

Software development is often more closely aligned to an art than a science, more so than people may realise.

Whilst it is true to say more developers will produce more code, or at least will produce more change in a code base, it is not true to say that this means the value being delivered must be increasing.

If this were true software development would be a solved problem, it would be possible to fully automate, and bugs and defects would be a thing of the past. Anyone that has worked in software development will know that is not a situation that is likely to arise anytime soon.

If instead we recognise that there is a healthy dose of artistry in developing good software then we find ourselves contemplating whether multiple artists can speed up the time to produce an artwork.

The answer may be yes, but do we think the artwork produced would be a pleasure to look upon or more of a confused and incoherent mess?

Frequency over Speed

A strong drive for scaling out software development is to increase perceived speed, unfortunately this very often focuses too much on the speed of development as opposed to the speed of delivery.

Software development can be a tricky business with a multitude of unforeseen circumstances and potential pitfalls, creating consistent and speedy timescales is far from easy.

The delivery of software into production can be made a much more precise science, however quite often not enough effort is put into conquering this more easily understood challenge.

Putting effort into producing a well defined and smooth delivery pipeline provides a mechanism to ship software whenever it is "ready".

During the good times this may be several times a week or even daily; what it ensures is that the effort of software developers is being maximised by ensuring there is very little delay between them producing value and that value being shipped to users.

This will go a long way towards creating a feeling of speed and effectiveness even if the software development process has its normal bumps in the road.

Architect for Change

For all these good intentions around how to manage software delivery there is also a technical aspect that must be in place to make this approach viable.

None of these good practices can be put into place if the code base involved is a monolithic beast.

The only way to create a situation to increase the number of developers working on a code base is to put in place the loose coupling and good organisation that defines clear independent segments of code for a team to work on.

It must be possible for these teams to both work independently but also to deploy independently; the moment large dependencies start to exist between these teams the more they will grind to a halt.

This means an architecture must be in place that describes well defined, de-coupled building blocks whilst also providing a vision of what will be possible when these blocks are grouped together.

This amounts to increasing the delivery of software by increasing the number of distinct blocks of software that can be delivered.

It is a source of enormous frustration to many that the lessons of what does and doesn't work when trying to scale software development have failed to be learnt almost since the industry's inception; this can lead to a feeling that we are doomed to never fully realise our potential.

One of the potential reasons we haven't taken on board these learnings is that it requires us to trust in the skill and professionalism of our development teams, it requires that we hire the best people and allow their talent to deliver.

This can lead to a feeling of inadequate control over outcomes, but if we hold our nerve and trust our teams we will find they will deliver everything we need; the control we think we gain by adding more cooks is an illusion, and eventually the broth is spoilt.