Monday, August 25, 2014

Let's Put the Mastery Back Into the Scrum Master Role

Sorry for another rant, but I am getting annoyed at how often the ScrumMaster role gets watered down. I want to see the mastery put back into the role, as its name suggests.

The title ScrumMaster is indicative of what the role should be: a person who has mastery of the Scrum framework, which I translate into a deep understanding of Scrum and how to implement and apply it. The word “master” indicates a master of one's craft, not a master of other people. Given that this is the intention of the role and the responsibilities associated with it, here are the predominant smells and anti-patterns that I am seeing applied to this role, followed by what I would like to see more of in it.

ScrumMaster Anti-patterns

The Secretary

This is the most common anti-pattern I've seen: the SM becomes a team administrator. I think it comes mostly from companies that are still thinking in terms of waterfall and waterfall project management.

The Event Scheduler

Yes, scheduling events is part of the role, but just having the event doesn't mean you are doing Scrum. You need to be getting value out of each event, too. And that requires mastery, which the ScrumMaster can demonstrate by ensuring that the goals of the scheduled event are met and that participants did, in fact, get value from it. For example, the goal of the sprint review meeting is to inspect and adapt. If you are holding a demo and retrospective meeting each boundary, is there any adaptation going on, or are you just ticking the Scrum checkbox to say that the meeting happened?

The Tool Administrator

"Wow, this tool will make us Agile! If we do everything the tool is capable of and keep it up to date, we will be Agile!" The SM who falls into this trap spends each day updating the tool with hours burned. Yes, keeping track of sprint progress is part of Scrum, but nowhere does it say that it is the job of the ScrumMaster. Ongoing tool maintenance does not fall into the category of removing impediments; it falls into creating work for yourself. Beware of busy work.

As an aside, I'd advise ditching the tool at the team level if you can at all and using a physical board instead. But that’s a broader topic for another day…

The Status Taker

Teams that use stand-ups to give status updates to the ScrumMaster are not self-organizing. You give status to all your team members, not just the SM. If you as the ScrumMaster find yourself taking status, stop it. That’s not what stand-up is for. If a stakeholder wants to know the "status" of the current project, they should be able to get it from the Product Owner. (And if the PO doesn't know how to do this, I know some excellent coaches who can help.)

The Boss

The ScrumMaster is not there to tell the team what to do. They are there to guide and lead them, from the back in most cases. That is why it is called a “servant-leader” position. The delivery team should want you around for support and guidance, but they shouldn’t be afraid of you in any way. You are not responsible for the project: the whole team is. Encourage them to take responsibility for and ownership of its success. As the ScrumMaster, you are only there to grease the wheels in whatever way you can, whilst not being a wheel yourself, or the driver. Leave the driving to the Product Owner.

The Training Wheels ScrumMaster

This happens when a company decides to transition to Agile and realizes it needs a person to fill the ScrumMaster role. So they pick someone from the team or elsewhere in the company to take it on. And often that someone does not have a mastery of Scrum. Mastery comes from extensive experience, which you probably don’t have if you were chosen by your higher-ups simply because someone from the team had to be picked.

If you find yourself in this position, then get help. Study, read, join a local meetup, get a consulting firm in. There are classic Scrum pitfalls and pain points in a transition that you need to be ready to handle and probably aren’t yet.

The Non-Coach

I'm always surprised when I hear Agile coaches say that the ScrumMaster role is not a coaching role. I understand what they mean: in an enterprise Agile transition, you need more mastery in Scrum and Agile than you had at just the team level. Whilst that is true, it is not true to say that a ScrumMaster should not be a Scrum coach to their team and to the organizational elements that their team interacts with. As an Agile coach, you shouldn't need to spend your time coaching ScrumMasters and their teams on Scrum; that is the ScrumMaster's job.

Behaviours I'd expect from a Master ScrumMaster

Work your way out of the job

A Scrum team is, by definition, self-organizing. Your goal is to get them to a place of total self-reliance. Act as if you were trying to train the team in everything you know, so that they can do all of the activities required to be effective at Scrum without you. They should get to the point of not needing you at every stand-up and be able to self-facilitate retrospectives, for instance.

Be a thought leader

Can you display your mastery? Why not write a blog or talk at a conference? At least attend a conference or two and a local Scrum group to mingle with peers. Keep learning. If you can’t be a thought leader, at least attempt to be a thought follower.

Be humble

Humility is important for the servant part of the position. Being a master shouldn't make you an ass.

Know your trade and the tools of your trade

The Scrum Guide defines Scrum. If you find yourself uncertain or in a disagreement about Scrum with someone, this simple document is the first place to go. As a "master", you should know its contents. There are a few other things you should know as a master: the history of Scrum, why it's called Scrum, and how it compares to other Agile delivery processes. Being thus informed, you’ll understand Scrum better and be well on the way to being a better ScrumMaster.

Have a strong synergy with the Product Owner

The relationship between the ScrumMaster and the Product Owner needs to be good for the team to be effective, especially if you have a newbie Product Owner. You are not always going to be able to work effectively with everyone. If a ScrumMaster can't get on with their Product Owner, I would suggest switching one or the other out until you do find a synergy. Don't take it too personally if you don't have chemistry with this counterpart on your team; you can't expect to like or be liked by everyone.

Tuesday, July 1, 2014

Extreme Programming - The Road Not Taken


With acknowledgements to, and inspired by, Robert Frost
Two [agile] roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way,
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.


Wednesday, June 18, 2014

Developing Software Without a [QA] Safety Net

Software Development without Testers and QA

I was listening to an interview with Nik Wallenda, a tightrope walker, given when he was about to cross the Grand Canyon. When asked about not having a safety net or tether, he replied something along these lines: "I have never used a safety net. If you have a safety net, then you will not treat the task with the seriousness that it requires. When you know a fall means injury or death, you make sure you don’t fall. That is how I was taught. And my father, and his father.” He comes from seven generations of tightrope walkers who were all taught to never use safety nets. (The quote is not verbatim but rather my recollection of his words.)

I thought of the parallel to software development practices that rely on the safety net of a QA department or process, versus an extreme programming approach with no dedicated QA "resources". When you don't have a QA safety net between you and production, you take quality a lot more seriously.

I consider myself lucky that my agile software training was similar to the tightrope walker's. We had no safety net: no QA department or testers. That is not to say that we did not care about quality; we did. It was drummed into us as the highest priority in everything we did. “How do you test it? How do you test it? How do you test it?” was one of our mantras.

During my time with this high performance team, we were deploying to live weekly with near zero bugs in production.

Now please don’t take this as disrespect for what QA people do. I have grown to appreciate the skills of software QA professionals. I find that they usually understand the product better than anyone else, ask really good questions about stories and are great at identifying edge cases. But for all of that, I am sorry, and a little scared, to say in public that I would recommend getting rid of the QA department and training your developers in extreme programming instead. Remove the safety net, train the devs to care about quality and own their product, and empower them to build quality-ensuring processes into their development practice. When developers truly own and are responsible for quality (as opposed to the throw-it-over-the-wall mentality of having a tester), you will see the bug count drop to near zero.

What I notice in companies that have the QA “throw it over the wall” mentality to quality is that devs do not take ownership for their bugs. Here is a quote I heard from a dev in such an environment - “The QA team had this feature for weeks and they only find this bug now!? What is wrong with them? It’s their fault we have to hold up this release to fix something they should have found weeks ago.”

A better model than QA outside the team is moving them inside the team, where QA work alongside dev. This matches the Scrum paradigm of a cross-functional team, especially when QA and devs start to cross-train each other and learn to do each other's work. But it is still not the optimum, in my opinion.

So what does it look like to not have QA or testers? How it is supposed to work is that the development team gets a good gut feeling for what needs testing and how best to test it. They follow the pattern described in the agile testing pyramid, which assures that the correct and appropriate amount of automated testing is happening: unit tests (best created by the TDD practice), integration tests and functional tests. This test code is a first-class citizen and is treated with the same reverence as the application code itself. That is to say, we mercilessly refactor these tests and look for better ways of optimizing them and improving their readability. These tests, along with the code itself, are the documentation of the product. They are a living functional specification!
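To make the pyramid's layers concrete, here is a minimal sketch using plain asserts. The shopping-cart functions are hypothetical stand-ins, not from any real system; the point is the ratio: many fast unit tests at the base, a few end-to-end checks at the top.

```python
def add_item(cart, item, price):
    """Pure logic: the bulk of test coverage lives at this unit level."""
    cart[item] = cart.get(item, 0) + price
    return cart

def cart_total(cart):
    return sum(cart.values())

# Unit test (many of these; fast, no I/O) -- ideally written first, via TDD.
def test_total_sums_prices():
    cart = add_item({}, "book", 10)
    cart = add_item(cart, "pen", 2)
    assert cart_total(cart) == 12

# Functional/end-to-end test (few of these): exercise the whole flow.
def test_purchase_happy_path():
    cart = add_item({}, "book", 10)
    assert cart_total(cart) > 0  # an order can be placed

test_total_sums_prices()
test_purchase_happy_path()
```

In a real codebase the unit layer would exercise your domain objects directly, while the top layer drives the application through its public interface.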

The journey to a high performing XP (eXtreme Programming) team is not quick however and is best trodden with a guide who has been there before you. This person is the XP coach – a role defined in eXtreme Programming. If you are ready to go down this path and begin your journey, I highly recommend you find a coach who has trodden this path to help make this transition as smooth, quick and efficient as possible.

I am excited to see that SAFe includes “Code Quality” in the framework. This is a vast improvement over Scrum, which just recommends it as a best practice, almost as an afterthought (and which I have found rarely gets the attention it requires). I'm not saying that I like or dislike SAFe at this point (the jury is still out, as it is still early days), just that I am glad to see they have taken the extreme programming engineering practices seriously.

Putting my flak jacket on and bracing for the backlash on this blog.

Since the time of writing, I have come across the following pages:
The Inquirer - Facebook does not have a QA team
Facebook’s No Testing Department Approach
3 Ways to Be More Agile With Software Shipping Decisions (Read section on Etsy and continuous deployment)
Yahoo’s Engineers Move to Coding Without a Net



Tuesday, March 11, 2014

Functional Testing (without Acceptance Testing) in Agile Development

Functional testing tools can be used in various ways. Today I outline the ways I like, and dislike, to use functional testing tools in an agile development environment.

Topics covered

  • Agile Testing Pyramid
  • Misuses of functional testing tools
  • The best uses of functional testing tools
  • Functional testing and TDD
  • Well written functional tests
  • Functional Testing Smells
  • Differences between Acceptance Testing and Functional Testing
  • Summary
  • Further reading

Let’s get the dislikes out of the way first.

Misuses of functional testing tools

Acceptance Testing

Unfortunately, the term acceptance testing is broadly misunderstood; in most instances where I hear it used, what is meant would be better called “functional testing”. I’m not going to go too deeply into what acceptance testing is and isn’t (there is a short section toward the end of this blog that discusses the topic further). I will just say here that I don’t like acceptance testing and I think there are better uses for functional testing tools.

Replacing Manual Test Cases

Another common mistake I see with functional testing is the misunderstanding that there is a one-to-one relationship between manual test cases and automated functional tests. Yes, you can use functional testing tools in this way, but I would discourage it. (You can in fact remove manual testing completely by following the agile testing pyramid.) It is certainly not a one-to-one replacement, and trying to treat it as one results in an unwieldy and unsustainable suite of tests.

The best uses of functional testing tools

Happy Path / End to end / Sanity check / Smoke test

Running smoke/sanity tests at the end of every build to check that your application does all the basic functionality it is designed to do makes a lot of sense. Here is an e-commerce example: can I select a product from the first page of the web app and go through each page to purchase the product, affirming that the mock or fake services have been called and the product order placed? If your app does not do the simplest variants of the main functions it is designed for, then you have a serious problem. That is why it is a good idea to run sanity checks / smoke tests on each build. Functional testing tools are a great way to run these “happy path” end-to-end scenarios automatically.
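The e-commerce happy path above can be sketched as follows. This is a hedged illustration, not a real storefront: `FakeStore` is a hypothetical in-memory fake standing in for the application and its downstream services, and the assertions mirror the checks described in the text (fake service called, order placed).

```python
class FakeStore:
    """Hypothetical in-memory fake of the storefront and its services."""
    def __init__(self):
        self.orders = []
        self.payment_called = False

    def list_products(self):
        return ["widget", "gadget"]

    def purchase(self, product):
        self.payment_called = True       # the faked payment service is invoked
        self.orders.append(product)
        return "ORDER-1"

def smoke_test():
    store = FakeStore()
    product = store.list_products()[0]   # pick a product from the first page
    order_id = store.purchase(product)   # walk the flow through to checkout
    assert store.payment_called          # the mock/fake service was called
    assert order_id.startswith("ORDER")  # the product order was placed
    return True
```

In practice a functional testing tool would drive the real UI through the same happy path; the shape of the assertions stays the same.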

Integration testing

This is a similar process to end-to-end testing, but limited to testing partially through your application. For example, I need to ensure that the log file is being written with the correct information and that the file is in the correct location. I might run one (or a handful) of functional tests to verify this functionality (and for the rest use mocks in unit tests). Generally, you are in the integration testing realm when your test is touching databases, the network or the file system.
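The log-file example might look like the sketch below, which really does touch the file system (hence "integration"). The log format and location are illustrative assumptions.

```python
import logging
import os
import tempfile

def test_log_file_written():
    # Use a temp directory so the test does not depend on machine state.
    log_dir = tempfile.mkdtemp()
    log_path = os.path.join(log_dir, "app.log")

    logger = logging.getLogger("app-integration-sketch")
    handler = logging.FileHandler(log_path)
    handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("order placed id=42")
    handler.flush()

    # The file is in the correct location...
    assert os.path.exists(log_path)
    # ...and contains the correct information.
    with open(log_path) as f:
        assert "INFO order placed id=42" in f.read()
    return log_path
```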

Environment testing

If your build environment is finicky, you can start your build by asserting that the environment is correct. Some examples: the correct version of Java, the version of Maven, the version of the OS, whether you can connect to the dev database, whether the database is in a known state, etc. These types of tests convey information about the system; they can save a whole lot of pain in debugging environmental issues and keep a team informed of when and if they need to change or update their environment. They are also a great example of using tests to replace documentation.
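A couple of those checks could be sketched like this. The required versions are made-up examples; the `check_tool_version` helper is a hypothetical name, and you would add similar checks for your database connection and state.

```python
import subprocess
import sys

def check_python(min_version=(3, 8)):
    """Fail the build early if the interpreter is too old."""
    assert sys.version_info >= min_version, "Python version too old for this build"

def check_tool_version(cmd, expected):
    """Run a tool's version command and assert the expected string appears.
    e.g. check_tool_version(["java", "-version"], "1.8")"""
    out = subprocess.run(cmd, capture_output=True, text=True)
    combined = out.stdout + out.stderr  # java prints its version to stderr
    assert expected in combined, f"{cmd[0]}: expected version {expected!r}"
```

A failing check names exactly what is wrong with the machine, which is the "tests as documentation" point: the script is a living statement of what the environment must look like.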

Build testing

Your build scripts might be doing some pretty complicated things. I like to ensure that no one can accidentally break the build by using post build artifact tests. Are all the pieces in the artifact and in the right places? Has the end artifact been minified / encrypted / obfuscated as expected? Is there even a build artifact? Does the size look correct/odd? Is the version numbering correct? Has it been auto deployed into the dev / test environment? Does it start? Is the artifact named correctly? Etc.

Build scripts are code. Treat them as a first class citizen and you will save yourself a bunch of time in debugging faulty builds. That includes testing them (and refactoring)!
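A post-build artifact check along the lines described above might look like this sketch. The artifact name pattern, minimum size, and `.jar` extension are illustrative assumptions; a real script would add checks for deployment and startup.

```python
import os
import re

def verify_artifact(path, min_bytes=1024,
                    name_pattern=r"app-\d+\.\d+\.\d+\.jar"):
    """Run after the build: fail fast if the artifact looks wrong."""
    # Is there even a build artifact?
    assert os.path.exists(path), "no build artifact produced"
    # Does the size look correct/odd?
    assert os.path.getsize(path) >= min_bytes, "artifact suspiciously small"
    # Is the artifact named correctly, with a version number?
    assert re.fullmatch(name_pattern, os.path.basename(path)), \
        "artifact name/version does not match convention"
    return True
```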

Things you think will break (high risk features)

Consider: there is an edge case that you had to write some pretty strange code to cater for. The likelihood or frequency of the issue occurring is low, but the cost of it not being handled correctly could be very high. Try as you might, you worry that the unit tests and coding clarity might still not be understood by an unsuspecting developer who may stumble across the code in the future and refactor or change it incorrectly. An answer is to add additional unit test coverage and cover the edge case with a well-constructed functional test. Good test code documents your system (as does well-written source code itself). Make your tests (and code) as readable as possible and be sure to express clear intent in your tests. That way, when someone else is refactoring or changing the system and your test fails, they will know why that weird code was there and put it back to the state required to pass the test(s). Functional test coverage can be used this way to ensure that the edge case remains accounted for and documented.

Things that keep on breaking (fragile features)

Follow the same strategy as high risk features.

Things that are impossible (or at best, extremely difficult) to unit test

A good agile development team will avoid technologies and frameworks that make testing difficult. However, there are times that this is unavoidable and functional style testing is all that is available to you.

Bug fixing

One of the agile rules I was taught was: before you fix a bug, write a failing test. Sometimes that test makes more sense as a functional or integration test than a unit test. It might be such a strange edge case that it is best captured in a functional test (as well as unit tests).
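The rule can be sketched with a trivial example. The discount function and its bug are hypothetical; the point is the order of work: the test capturing the bug report is written first, fails against the buggy code, and then guards the fix forever after.

```python
def apply_discount(price, percent):
    # The (hypothetical) buggy version mishandled the 100% discount case.
    # This is the fixed version; the test below was written, and failed, first.
    return round(price * (100 - percent) / 100, 2)

def test_full_discount_is_free():
    # Written from the bug report, before touching the production code.
    assert apply_discount(19.99, 100) == 0.0

def test_partial_discount():
    # Regression coverage for the normal path while we're here.
    assert apply_discount(100.0, 25) == 75.0

test_full_discount_is_free()
test_partial_discount()
```

Whether this lives as a unit test or a functional test depends on where the bug manifests; either way, the failing-test-first discipline is the same.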

Legacy systems

When it comes to working on a legacy system (one that has very low to no test coverage) the more tests the better. Functional testing makes a lot of sense along with as many unit tests as you can manage.


Functional Testing is Not TDD

Test Driven Development/Design (TDD) is a practice associated with unit testing. You could write a functional test before coding and in some ways follow the same paradigm, but this is not TDD! It is commonly referred to as ATDD. I'd rather not go down that rabbit hole here, but I want to make it clear that the use of functional testing tools is NOT Test Driven Design / Test Driven Development (TDD)! Learn TDD, then functional testing, and you will have a better idea of when and how to write functional tests.

Attributes of Well Written Functional Tests

  • Express intent
  • First class citizen – keep your tests refactored, in source control and run often
  • Grouped into sensible suites

Functional Testing Smells

Intermittent failures / false positives

Not uncommon in functional testing unfortunately. I overheard from a dev team recently - “Oh yeah. If the frank tests fail just try run them again. Sometimes they fail the first time.” Doh! Functional testing smell. (Frank is a functional testing tool for iOS.)

Tests taking too long

Functional tests typically run slower than unit tests, and the cumulative time it takes to run them can become considerable if you are not careful or don't take action to keep a ten minute build. E.g. if you need to test a timeout, make the timeout period injectable and inject a short period.
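The injectable-timeout trick looks like this in sketch form. The `Poller` class and its `fetch` hook are hypothetical; the design point is that the production default (say, 30 seconds) is a constructor argument, so the test can inject a tiny value and exercise the timeout path in milliseconds.

```python
import time

class Poller:
    def __init__(self, fetch, timeout=30.0, interval=0.01):
        self.fetch = fetch          # function returning a result, or None yet
        self.timeout = timeout      # injectable: production uses the default
        self.interval = interval

    def wait_for_result(self):
        deadline = time.monotonic() + self.timeout
        while time.monotonic() < deadline:
            result = self.fetch()
            if result is not None:
                return result
            time.sleep(self.interval)
        raise TimeoutError("no result before timeout")

def test_times_out_quickly():
    # Inject a 50 ms timeout so the failure path doesn't stall the build.
    poller = Poller(fetch=lambda: None, timeout=0.05)
    try:
        poller.wait_for_result()
        assert False, "expected TimeoutError"
    except TimeoutError:
        pass

test_times_out_quickly()
```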

Another practice I have seen is to pull slow running tests out into their own test suite. If the dev machine build time is exceeding ten minutes (including running the functional tests) then you could remove this suite from your local build and have it run on the CI server instead. (Requires a healthy CI process.)

Test suites too large

Unless you keep an eye on your functional tests and refactor, refresh and change them as needed, your suite will begin to grow out of hand and you will likely have duplication. You are creating technical debt if you do not maintain this suite the same way you should be maintaining your code and unit tests i.e. as a first class citizen and refactoring mercilessly. Refactoring functional tests is not as straightforward as unit tests but you need to be disciplined about this or the suite will become so large that it will end up getting thrown out.

How does Acceptance Testing differ from Functional Testing?

My definition of acceptance testing is writing a suite of automated tests that aim to verify that a story (or suite of stories) is implemented correctly i.e. functions in the way the writer intended it. More often than not these tests are functional/integration/end to end in nature rather than unit tests because that is the way that we describe the acceptance – in terms of functionality. The tests do not have to be written prior to any code but if you did, then you are in the realm of ATDD (Acceptance Test Driven Development).

Having practiced acceptance testing in earnest for a period of time, I found I didn't like the practice and find it unnecessary if you are following the agile testing pyramid. I dislike it for many reasons, including that acceptance testing encourages teams to invert the testing pyramid and create waste in the form of tests that provide little to no value over the broader life of the product's development.

Summary

Functional testing done right is another tool for ensuring quality in your code and product. Use it wisely, follow the agile testing pyramid paradigm and you have a great tool in your agile development tool box.

Further reading

This article describes a great approach to using functional testing tools when working on legacy systems. He calls his process ATDD, and I guess it would be if the tests match the acceptance criteria for a chosen story. Regardless of semantics, I like his approach for legacy system testing. http://www.infoq.com/articles/atdd-from-the-trenches

The Ten Minute Build - one of Extreme Programming's primary practices. Running functional tests needs to fit inside this ten minute window - James Shore's description

Is Acceptance Testing a dead horse - an article written by myself on the practice of Acceptance Testing and the inversion of the testing pyramid

Merciless Refactoring - another Extreme Programming practice. Do it!

The great divide of Acceptance Testing in the agile community - http://www.infoq.com/news/2010/04/dont-automate-acceptance-tests

Agile Test Pyramid - description (and minor rant on its inversion) by Martin Fowler