Day 1 at NDC London

This is an old post from December last year:

Below is my summary of the key takeaways from each talk I attended on Day 1 at NDC London.

The creativity imperative – Denise Jacobs

Denise kicked off NDC with this awesome keynote on the creativity imperative and provided some interesting views.

94% of people value creativity over other qualities, yet creativity gets killed more often than it is supported. We all have an inbuilt desire to create. We should encourage flow, as flow invokes creativity and creativity invokes flow. There are four directives everyone should follow to enable the creative mind-set: Unblock, Communicate, Collaborate and Allocate.

Unblock – is about taking down the barriers to creativity, such as time, deadlines, fear of unique ideas and fear of making mistakes. These barriers can be alleviated with an experimentation mind-set: tinker, prototype and encourage stupid ideas; Twitter was once considered a stupid idea (at the time). Most of the time, people's best ideas come to them outside of work.

Communicate – is about becoming an adept listener: pay attention to build an emotional bank account with the other person and let the conversation flow. Until you learn to listen, you can't learn.

Collaborate – None of us is as smart as all of us. Amplify others' ideas. Remember in conversation to use "yes, and" instead of "yes, but"; it encourages more flow in the conversation and reinforces that emotional bank account.

Allocate – is about putting the right amount of time into work and ideas, such as allowing 20% for that personal project; Google, for example, do this. If you can't get this time, communicate and collaborate with the team to petition for it; people work better as a group.

Together the four directives mean we Produce: we become empowered makers, tap into our own existing talent and that of the team, and create an overall culture of betterness.

Finding Service Boundaries – Udi Dahan

Udi started by defining what services are, the typical problems around them and some solutions. Services should have explicit boundaries. A service is the technical authority for a specific business capability. A service that contains only functionality, such as calculations or validation, is not a service; it is a function. A service that contains only data is a database, not a service.

We should focus on slicing an app vertically rather than horizontally to identify the boundaries, for example by looking at UI composition to define micro-views. Ask the business about anti-requirements, things they will never need, to identify boundaries; take price and book title, for example. Is a book whose title begins with "B" ever going to get a 5% discount in the future? If the answer is no, this is probably an anti-requirement and a possible boundary.

Sharing data between services breaks autonomy. Instead, use pub/sub to publish events, for example that an order or product has changed, and share only the ID (such as an orderid or productid) between services.
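
As a rough illustration of that idea, here is a minimal, in-process pub/sub sketch in Python; the bus, the event name and the handler are all made up for illustration and are not Udi's actual tooling.

```python
# Minimal in-process sketch of ID-only pub/sub between services.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, **payload):
        for handler in self._subscribers[event_name]:
            handler(**payload)

bus = MessageBus()

# The sales service owns pricing; it only announces *that* something changed.
def on_product_price_changed(product_id):
    # Another service looks up the data it owns for that ID; no price crosses the boundary.
    print(f"Marketing: refreshing promotions for product {product_id}")

bus.subscribe("ProductPriceChanged", on_product_price_changed)
bus.publish("ProductPriceChanged", product_id="book-42")  # only the ID is shared
```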

Udi then went on to talk about his experience of defining boundaries in the healthcare industry. The problem is that healthcare has many professions, such as doctors, dentists, emergency rooms and physios. They all have different domains but share patient information: the same types of data, used differently, with no clear ownership.

The solution is to go back to the single responsibility principle, looking at rates of volatility and what remains stable. Focus on the differences and how each difference will impact eventual behaviours. Context is king: in healthcare there is a different clinical context for each profession, with the patient acting as the identifier. Finding service boundaries is not trivial; immutability and UI composition can be a starting point for finding them.

The Ultimate Logging Architecture – Michele Leroux Bustamante

Michele started by taking us through the four categories of log: Event, Audit, Activity and History. We need flags in place so that if something goes wrong in production, or logging itself introduces load, we can turn the logging off. Alerts should fire when logs approach capacity.

Event logs should only cater for Windows events, such as IIS exceptions, and ETW is acceptable for this. Audit logs must store no user credentials or personal details, should meet the relevant level of compliance, and should have retention periods (one year is a good baseline). Activity logs should track user activity to give the business insight into trends. History logs should track things like the orders or bookings a user has made.
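
To make the categories concrete, here is a hedged sketch of a structured logger that tags each entry with one of the four categories, strips personal details from audit entries and honours a kill switch. The field names, redaction list and environment flag are my assumptions, not anything prescribed in the talk.

```python
import json
import logging
import os
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("app")

# Flag so logging can be switched off if it starts adding load in production.
LOGGING_ENABLED = os.getenv("APP_LOGGING_ENABLED", "true") == "true"
REDACTED_FIELDS = {"password", "email", "credit_card"}  # never stored in audit logs

def write_log(category, **fields):
    """category is one of: event, audit, activity, history."""
    if not LOGGING_ENABLED:
        return
    if category == "audit":
        fields = {k: v for k, v in fields.items() if k not in REDACTED_FIELDS}
    entry = {"category": category,
             "timestamp": datetime.now(timezone.utc).isoformat(),
             **fields}
    log.info(json.dumps(entry))

write_log("activity", user_id="u-123", action="searched", term="ndc london")
write_log("audit", user_id="u-123", action="login", email="person@example.com")  # email is stripped
```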

Aggregating all this data is hard when it comes to troubleshooting, and the business needs to know what's wrong yesterday. We should consider using two data store areas for logging: one for archiving each data type, and one for hot logs holding the last seven days of data. ElasticSearch is perfect for this, together with visualisation tools like Kibana.
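
A small sketch of that two-store split might look like the following; the index naming and the archive step are my assumptions, as the talk only prescribes a hot store of roughly seven days plus an archive per data type.

```python
from datetime import datetime, timedelta, timezone

HOT_RETENTION = timedelta(days=7)

def target_store(entry_timestamp, category, now=None):
    """Decide whether a log entry belongs in the hot store or the archive."""
    now = now or datetime.now(timezone.utc)
    if now - entry_timestamp <= HOT_RETENTION:
        # Hot store, e.g. a daily ElasticSearch index queried through Kibana.
        return ("hot", f"logs-{category}-{entry_timestamp:%Y.%m.%d}")
    # Older entries move to a cheaper archive, one area per data type.
    return ("archive", f"archive-{category}")

now = datetime.now(timezone.utc)
print(target_store(now - timedelta(days=2), "activity"))
print(target_store(now - timedelta(days=30), "audit"))
```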

We need to evolve log content; it's not something we will get right the first time. That evolution, together with separating the log categories, data stores and retrieval mechanisms, will help reduce IO/socket overhead and provide real-time analytics.

Capability Red – Requirements at Scale – Liz Keogh

In this talk Liz took us through talking about BDD and, more importantly, having the right conversations, which matter more than automating those conversations. Never use "should" when defining a scenario; it creates uncertainty. Make sure to differentiate between scenarios and acceptance criteria.

Hunt the value of a scenario and create a value mapping. Question the scenario's vision in terms of making money, saving money and protecting money, and only do the things that contribute to that vision. We shouldn't try to solve the whole problem upfront, because breaking everything down upfront is more of a waterfall approach.

Liz then introduced us to Cynefin, a way of analysing the complexity of a scenario across four domains: simple, complicated, complex and chaotic. This sits alongside a five-level risk scale for a scenario: nobody has done it; someone outside the company has done it; someone in the company knows about it; someone in the team knows about it; everyone in the team knows about it. Examples of the highest and lowest levels might be building self-driving cars and user registration respectively.

We should focus on breaking down stories deemed too complex, or even potentially chaotic, by prototyping and trying things out; we discover how to do stuff by trying it. Never add uncertainty to the backlog; make sure acceptance criteria are in place beforehand.

Look at the capabilities and abilities in the project: map the capabilities against the complexity points mentioned previously, define the integration points and then narrow these down to create an MVP. Tackle the newest and riskiest things first; if your stakeholders don't trust you, that is the biggest risk. BDD is about finding the examples you haven't yet found and having the conversations that discover them.

Tales of the unexpected (a litany of test failures) – Harry Cummings

In this talk Harry started by looking at some testing anti-patterns, or problems in testing: sensitive data in tests, monolithic test fixtures, indirect control, using tools like Cucumber badly so that intent gets lost in translation, hidden dependencies such as reliance on DateTime (which can change under the test), and intricate logic inside tests.
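
As an example of fixing that hidden DateTime dependency, here is a hedged sketch of injecting a clock so the test controls "now"; the Order class, the weekday rule and the fixed date are invented purely for illustration.

```python
from datetime import datetime, timezone

class Order:
    def __init__(self, clock=lambda: datetime.now(timezone.utc)):
        self.clock = clock  # the hidden DateTime dependency, made explicit and injectable

    def is_express_eligible(self):
        # Hypothetical rule: express delivery is only offered on weekdays (Mon=0 .. Fri=4).
        return self.clock().weekday() < 5

def test_express_delivery_is_not_offered_at_the_weekend():
    saturday = datetime(2024, 1, 6, tzinfo=timezone.utc)  # a known Saturday
    order = Order(clock=lambda: saturday)                 # the test controls "now"
    assert not order.is_express_eligible()

test_express_delivery_is_not_offered_at_the_weekend()
```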

Then we looked at how testing goes wrong: obstacles such as budget or deadlines impact our ability to think about tests. Next, we looked at why we test. We test to allow for future refactoring; tests are a written, coded design and documentation of the system. They become a verification and defect detection capability, and act as regression tests when new features are introduced.

Then we looked at what a test is: a statement about the behaviour of your application. Harry then introduced us to the testing iceberg. Above the water are the business-facing tests, such as load tests, and there shouldn't be many tests at this level; below the water sits the much larger proportion of tests, the technical ones, unit and integration.

Finally, we looked at how to test: use tests to communicate your ideas, stop testing code and start testing behaviour, test each behaviour only once, be absolutely clear about what you are testing, and make sure each test has a single responsibility.

Adopting Continuous Delivery: Adjusting your architecture – Rachel Laycock

In this talk Rachel took us through the steps involved when considering microservices as a way to adopt a continuous delivery architecture, and the considerations around whether or not to use this approach.

We started by understanding Conway's law, how our organisational communication structure influences our eventual architecture, and how we should treat team design as the first draft of the architecture: applying the inverse of Conway's law, base your team design on the architecture you want and create the cross-functional teams we strive for.

We looked at how we can take a monolith replacement strategy by using the strangler pattern to slowly introduce a new system, with branch by abstraction as an enabler. Finding the seams is the first step towards applying the strangler pattern.
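
A minimal sketch of branch by abstraction, assuming a hypothetical billing seam: introduce an abstraction over the seam, put the legacy and new implementations behind it, and use a toggle to strangle the old path gradually.

```python
from abc import ABC, abstractmethod

class BillingGateway(ABC):
    """The abstraction introduced over a seam in the monolith."""
    @abstractmethod
    def charge(self, customer_id: str, amount: float) -> str: ...

class LegacyBilling(BillingGateway):
    def charge(self, customer_id, amount):
        return f"legacy monolith charged {customer_id} {amount:.2f}"

class NewBillingService(BillingGateway):
    def charge(self, customer_id, amount):
        return f"new billing service charged {customer_id} {amount:.2f}"

def billing_gateway(use_new_service: bool) -> BillingGateway:
    # A simple toggle lets the new service strangle the old path gradually,
    # and lets us fall back to the monolith if the new service misbehaves.
    return NewBillingService() if use_new_service else LegacyBilling()

print(billing_gateway(use_new_service=False).charge("cust-1", 9.99))
print(billing_gateway(use_new_service=True).charge("cust-1", 9.99))
```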

Rachel spoke about the volatility of code and designing for it, something Udi Dahan had also spoken about in an earlier talk. Try to reduce coupling between systems and teams, as you can only be as fast as your slowest dependency.

We need to keep things small, simple and lightweight, have smart endpoints and use the right tool for the job. Only embrace polyglot services where there is a genuine need, and be careful: introducing a new technology can have a negative impact if the developer who wrote the system in the new language leaves the company.

We need to build resilience into a microservice architecture for when things go wrong, so that if something fails we can replace it quickly.

Remember the last responsible moment principle and ask what constraints we have right now. Design for failure: with microservices you take on a lot of overhead monitoring many systems. Microservices can end up pushing the problem into your infrastructure and integration, which becomes evident through network latency.
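
As one small illustration of designing for failure, here is a hedged sketch that time-boxes a call to a slow dependency and degrades gracefully; the service call, the timeout value and the empty fallback are all assumptions rather than anything from the talk.

```python
import concurrent.futures
import time

def call_recommendations(customer_id):
    time.sleep(2)  # stand-in for a remote call suffering from network latency
    return ["book-1", "book-2"]

def recommendations_with_fallback(customer_id, timeout_seconds=0.5):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(call_recommendations, customer_id)
    try:
        return future.result(timeout=timeout_seconds)
    except concurrent.futures.TimeoutError:
        return []  # degrade gracefully rather than cascading the slowness upstream
    finally:
        pool.shutdown(wait=False)

print(recommendations_with_fallback("cust-1"))  # prints [] after ~0.5s instead of hanging
```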

Microservices are not a silver bullet, and certainly not for greenfield applications. They require a certain level of maturity and understanding of the wider domain; once you understand that domain, you can start breaking things up. Microservices are the first architectural style developed post continuous delivery.

The experimentation mind-set – Doc Norton

In this talk Doc started by looking at two different mind-set approaches to learning, growth vs. fixed, and how adopting a growth approach is far better. How you word things can be the start of encouraging a growth mind-set: say "you did great, you must have worked hard" instead of the fixed approach's "you did great, you must be very smart", which is like saying you don't need to improve.

Doc then spoke about the Dreyfus model of skill acquisition: novice, advanced beginner, competent, proficient, expert. In this model most of us only ever get to proficient; to get to expert, you need to experiment and explore different ways of doing things. The proficient level is known as the implementation mind-set: it is enough to do the job, and perhaps do it right by following best practices. But best practices, or following a script, may not always be the correct approach. You need to experiment; "best" practices are really just leading practices, until something better comes along.

We should encourage double-loop learning over single-loop learning. Single-loop learning is making an assumption, acting on it, looking at the outcome, then tweaking the actions and looking at the outcome again, repeatedly. Double-loop learning goes one step further back: we re-examine our assumptions and, if we need to, figure out entirely different ways of doing things.

Doc then went on to talk about Woody Zuill, someone who is constantly trying out new things, such as mob programming and no estimates on projects. Some may find these ideas surprising and dismiss them, but unless you experiment with these approaches you may never know; the key is to try new things.

Doc spoke of techniques he uses at Groupon to encourage innovation, such as interest leagues, internal hack fests and 360 feedback, and how these can be used to keep an eye on the culture.

Finally, Doc left us with 4 key concepts to keep in mind when trying to encourage innovation.

  1. Know your purpose, and challenge your assumptions.
  2. Make failures acceptable, failure is an opportunity.
  3. Think big, but start small.
  4. Be tolerant of mediocrity, sometimes best practices hold us back.

Thanks for reading and keep building awesome.

Until next time!

Ian B

P.S. You can also find me on Twitter: @ianaldo21
