Friday, November 15, 2013

Developing First Class Software with the Developers that Google Reject

Or – why you don't need to hire heroes

Before I start, please understand that I am not trying to belittle anyone here, or imply that they are a second- or third-rate developer. I personally could not make it through the Google interview process, so I include myself in the category of “developers that Google reject”. (Though I'm pretty sure I could have gotten through the interviews fresh out of college, when all the math and computer science algorithms were fresh in my head. With that in mind, I find it no surprise that 98% of the company are under 30!)

The Software Giants have hired all the top-shelf developers

“We can only hire middle to low-end developers here at our company. Google, Microsoft, and Amazon pay more than we can, leaving the market with only second and third-rate developers. Even if we had the money, there aren't any first-rate developers around, because these software giants have sucked them up from the development pool.” This is a rough quote from someone working at a software firm in the Pacific Northwest. His perception was that over his years at the firm, the quality of developers and of development had been steadily falling. The firm used to be able to attract high-level talent in the Pacific Northwest, but now finds itself unable to match the salaries and packages of its very wealthy software-giant neighbors. I assured him that all was not lost, and that in fact the firm is in a better position than he thinks! Read on.

Teams can do things that geniuses can’t

The collective capability of a team is larger than that of a genius. Agile development is based on self-organized and self-managed teams. The synergy of a team’s collective mind and skills will give you results as good as, if not better than, the genius approach, along with some other added benefits. These include:
  • You don't have the “win the lottery” or “hit by a bus” scenario to worry about
  • Knowledge is spread among the team, so vacation time doesn’t affect your project plan
  • It is cheaper (as you are able to hire juniors and the middle-shelf developers)
  • You’re less likely to have to deal with large egos (that you often find with elite devs)
  • Working with a team is fun
  • Recruitment is easier (as you have a larger pool to pick from)
  • Teams come up with more options when problem solving, as there are many points of view

Creating a team

Are you sold? If so, you will need to do more than just lump a group of devs under one project and call them a team. That is a group, not a team. The real magic happens when this group of devs has had time to gel and go through the forming, storming, and norming phases to become a team. To foster this, I recommend co-locating the team in a collaborative (open space) environment, preferably near the customer they will be working for. Such a team will come up with creative solutions above and beyond what a single genius developer could. OK, now we have the start of a team. Next, we need a process that ensures the team can achieve the same results (complex algorithms) as a genius.

A Software process that will get you to the same point as the genius

Enter Extreme Programming (XP). The practices of simple design, TDD, and merciless refactoring will return results to match what a genius would produce. These practices are a repeatable and reliable way of producing code of high quality and complexity.

The genius interview process usually entails complex questions or scenarios that require a very clever algorithm to solve, and the test is to see if you can come up with the algorithms and approaches the interviewers have in mind. Now, if you were to create a set of acceptance criteria and tests that would determine whether the result met your requirements, then I can pretty much guarantee you that I can solve the problem, particularly once I understand what you are trying to achieve, so I can iterate from simple through complex and get feedback throughout. And that, my friends, is XP in a nutshell.
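
As a sketch of how this plays out in practice (using an invented, interview-style example, not any particular company's question): express the acceptance criteria as executable JUnit tests, start with a deliberately naive implementation, and refactor mercilessly toward the clever algorithm while the tests stay green.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class LongestPalindromeTest {

        // A deliberately naive first implementation (O(n^3)), inlined so the
        // example is self-contained. Merciless refactoring would evolve this
        // toward a faster algorithm while the tests below keep passing.
        static String longestPalindrome(String s) {
            String best = "";
            for (int i = 0; i < s.length(); i++) {
                for (int j = i + 1; j <= s.length(); j++) {
                    String candidate = s.substring(i, j);
                    if (isPalindrome(candidate) && candidate.length() > best.length()) {
                        best = candidate;
                    }
                }
            }
            return best;
        }

        static boolean isPalindrome(String s) {
            return new StringBuilder(s).reverse().toString().equals(s);
        }

        // The acceptance criteria, captured as executable tests.
        @Test
        public void singleCharacterIsItsOwnPalindrome() {
            assertEquals("a", longestPalindrome("a"));
        }

        @Test
        public void findsTheLongestPalindromicSubstring() {
            assertEquals("anana", longestPalindrome("bananas"));
        }
    }

The team iterates: simple solution first, feedback from the tests throughout, and complexity only when a criterion demands it.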

The Extra Perks

There are more perks that come with XP and with an Agile team approach:
  • Individuals can come and go from the team throughout the process, and progress carries on regardless (though you want to minimize team changes – team persistence, pair programming, collective code ownership)
  • The code is easy to read, maintain, and quick to debug (collective code ownership, coding standards)
  • You will not be accruing technical debt, which would ultimately result in a legacy system (merciless refactoring, sustainable pace, automated testing)
  • Minimal to no documentation is required (code as documentation, tests as documentation, refactoring tests)
  • New and junior developers have a very short lead time to reach full productivity when joining an XP team (simple design, pair programming)

Summary

A persistent, cross-functional team practicing XP will get you high-quality results and code more quickly and cheaply than relying on top-shelf developers alone.

Cross Functional team examples

Getting a paralyzed rat to walk
The Mayo Clinic team approach
The Philosophical Breakfast Club


Update - Sep 2015

I stumbled across a similar article on Forbes - We Don't Need The Best People, We Need The Best Teams

Thursday, October 17, 2013

Now, Then, When (A Retrospective Format)

I had an inspiration for a new retrospective format, which I have now run, as have a couple of colleagues. (So it's already a few iterations in.)

On a whiteboard or wall space, create three columns with the three arrow images (now, then, when) in the title position.
You start with the first column on the left. Ask the room - "what do you know now that you didn’t know at the start of the sprint?" - and the room populates this column with sticky notes. Then group and discuss.

Now move to the middle column. Ask - "if you could go back in time to the first day of the sprint, knowing what you know now, what advice would you give yourself?" Again the team populate this column with sticky notes. Group and discuss.

The third column is - "given the information we have just collected, and anything else you can think of, what do you want to take forward into the next sprint?" At this point the team can move items over from the previous columns and/or add new sticky notes. Group and discuss.

Finally we dot vote on the items in the right column and turn them into action items.

Sunday, September 8, 2013

Common Misconceptions and anti-patterns of the Sprint Review Meeting

Feedback please!

“This is an informal meeting, not a status meeting, and the presentation of the increment is intended to elicit feedback and foster collaboration.” - Ken Schwaber and Jeff Sutherland from the Scrum Guide

The why of the meeting is to elicit feedback and foster collaboration. This meeting is an inspect-and-adapt point in the project. When I hear coaches’ and scrum masters’ theories on what should and shouldn’t happen in this meeting, I like to link them back to the why of the meeting.

Here are some of the common anti-patterns in my opinion :-

Anti-pattern :- The review meeting is where the Product Owner gets to see the stories demoed for the first time and signs off on them

Well, you could do that, but I don’t like it. Why? Because the delivery team should really have a chance to find out that they did not meet “done” before this meeting. Leaving sign-off this late in the sprint means there is no time to implement any PO-identified delta, and the team may fail delivery of that story.

I much prefer the team to do a demo to the product owner in sprint as soon as they feel they have met done. This follows the pattern of early feedback that we are trying to foster in agile. If the PO finds some issues or has minor changes (that will not affect the sprint delivery) then we have a chance to implement them before the end of the sprint. This pattern keeps the PO involved in the delivery of the increment and fosters collaboration.

Another reason I don’t like it is that you run the risk of embarrassing the PO in front of the stakeholders if there are issues with the demo. Having the PO see a demo before the Review gives the developers a safe environment to do a dry run and make sure they know how to demo the story. The PO might even describe how he/she wants it demoed differently, or only in part, in the upcoming Review meeting.

Anti-pattern :- Only demo stories that are done

The scrum guide says that in this meeting “the development team demonstrates the work that has been done“. “The Product Owner explains what Product Backlog items have been ‘Done’ and what has not been ‘Done’”. So - do you show work that has not been “done”? Well, I think the answer to that is up to the PO. If he/she sees value in the work that was done, and can see that there is enough done to get feedback on, then why not go ahead and show it? Sure, you will mention at that juncture that it is not as complete as hoped, and point out what made it “not done”. Remember, the why of the meeting is to get feedback, so if there is enough to get feedback on, show it!

Here is an example. I had a team that deployed their software to an embedded system. Part of “done” was deploying the software to a test workbench. Come review day, the one test harness the team had died. The only way to demo was in an emulator. Is that the right thing to do? Of course it is! You explain to the stakeholders why it is not in the workbench and that you will have to resort to showing the progress of the increment in an emulator in order to continue with the meeting. You want feedback, and this will get you feedback, so yes, that is the correct thing to do.

Anti-pattern :- The demo is for the team to see what work has been done

Whilst this might be true, I don’t think it is a primary purpose of the meeting, nor one of its goals. The primary purpose is to elicit feedback from the stakeholders and/or customer(s). A highly collaborative team should know what is going on with the rest of the team during the sprint in any case, particularly if they are co-located, and even more so if they are pair programming. (Two practices I cannot recommend highly enough.) I much prefer the entire team to watch the sign-off demo to the PO mid-sprint. What this might look like: a pair might call a huddle and invite the PO over to view the work they feel to be “done”. The whole team gathers around, watches the pair explain the finished work to the PO, and is present to hear any feedback or needed changes, which might get added to the sprint backlog.

Anti-pattern :- All stories must be demoed

I feel the demo should be slick, exciting, and engaging to the stakeholders. Not every story needs to be demoed to a room of stakeholders and customers. There may be stories run in a sprint that, whilst necessary to the forward momentum of the product, are hard to show value for to stakeholders. In these cases I feel a demo can do more harm than good. Sure, you should mention that the work was done, but that is plenty for these stories. An example: you have to change the format of the log output so that it can be consumed by the new data mining platform that another project has implemented in the company. Can you see how this feature is important, but in a product that is going to be a user-facing application, it is an overhead story and not an end-user feature story?

Under these circumstances, I would recommend not wasting time and energy demoing these types of stories. You are looking for feedback, remember? Show only the stories that will elicit feedback. Don’t waste people’s time. If you can get through the meeting in under the allotted time box, then great!

Anti-pattern :- the whole team must attend the review meeting always

What!?!?! I hear you say. Now, ideally, yes, the whole team should participate. But if the room you have available for the review is too small to accommodate the whole team and all the stakeholders involved, then I would recommend bringing only as many of the delivery team as the room will accommodate. At a minimum, you (as PO) should bring the members needed to run the demo if you are unable to do it yourself.

Anti-pattern :- the overall project progress is not discussed

From the scrum guide - “The Product Owner discusses the Product Backlog as it stands. He or she projects likely completion dates based on progress to date.” Too many “demo meetings” I have attended are just that - demos and nothing else. The meeting is a Sprint Review, of which the demo is just one part. The important aspect of discussing the remaining backlog is all too often missed! My personal preference is to bring in a Product Burn-Up Chart. It’s simple to maintain and contains the smallest (and most useful) amount of information needed to portray progress. Oh - and make it big! Agile likes big, visible, and transparent.


Start with the why. When you understand the why, it will help you make the best decisions around any of the agile practices. I hope these scenarios have helped you get a better grasp of the why behind the sprint review meeting.

Wednesday, August 28, 2013

The Definition of Ready - a Scrum Smell?

Definition of Ready
I’m going on the assumption that you already know what this term/tool means in the context of agile development, so I won’t start by re-explaining it. Have a look here if you are still unsure.


Scrum refers to a “Definition of Done”, and moving a team onto scrum involves creating this artifact. Creating a “Definition of Ready” is not part of scrum. It is a tool that an agile coach needs to keep in their toolbox, but I would recommend not pulling this one out too quickly, and certainly not as standard procedure.


The reason that “Definition of Ready” is not a scrum artifact is that you shouldn't need it. If scrum were understood completely, then “being ready” would be implicit and would not need to be made explicit. When I have to pull this tool out of my agile bag of tricks, it is always a sign of a dysfunction in communication between the product owner and the delivery team. The Definition of Ready is a band-aid fix to put in place so that progress can continue while I begin working on resolving the root cause of the dysfunction.


This dysfunction can surface when a product owner and/or delivery team are new to agile and one or both sides are not familiar with working with agile requirements, causing friction. Coaching the product owner in what a healthy backlog looks like, and in what is required of their role to keep a team moving, is one side of the coaching. Coaching the delivery team to work with vague requirements is the other side. Sometimes the issue is that a previously waterfall team is unable to make the conceptual leap of working without extensive documentation - requirements, architectural, and specification documents.


Depending on the root cause, items that might go on my coaching backlog include :-
  • Playing to win
  • Story writing
  • Story splitting
  • Story map (as a backlog management and prioritization tool)
  • Incremental design and architecture
  • Delivering business value as a team
  • Working with a product owner
  • Product backlog management
  • Backlog grooming
  • Backlog transparency
  • Delivering working software
  • Small releases
  • Agile requirements
  • Emergent design
  • Tests and code as documentation
  • Retrospectives to address communication issues

Once the issues have been resolved, you can throw the definition of ready out. Remember, in agile we are trying to make our processes as lightweight as possible; removing a redundant artifact keeps things lean and is good practice. Make it a celebration event with the team too. Something like “Hey - we don’t need this anymore. We are on the road to high performing!”



Thursday, May 9, 2013

Acceptance Test Driven Development - are we flogging a dead horse?

Or - Functional Testing Practices I do and Don’t Like and Why

Functional Testing Practices I don’t like

I’m going to start with what I don’t like, as it will create more context for what I do like. And it is nicer to end on a happy note. But first of all, let’s get the definitions out of the way.

What exactly is ATDD?

The first thing I don’t like about ATDD is the sheer amount of confusion in the industry about what the term means. Follow these links for examples of what I mean about the confusion around the definition:
  1. ATDD vs. BDD vs. Specification by Example vs ....
  2. The Sportscar Metaphor: TDD, ATDD, and BDD Explained
  3. ATDD versus BDD and the proper use of a framework
In order to move past this hurdle and on with my argument, here is a definition of the term, and of some terms related to it, so we have a common understanding from this point.
Acceptance tests :- otherwise known as customer tests, and previously known as functional tests, are tests that match/replace the acceptance criteria of a story. The concept has its roots in Extreme Programming (XP). See http://c2.com/cgi/wiki?FunctionalTest. The idea is that the customer will accept the story when all the acceptance tests pass. Or, put another way, a passing suite of acceptance tests constitutes part of the Definition of Done for a story and/or iteration/sprint.
Because acceptance criteria are typically expressed at the level of user-facing functionality, Automated Acceptance Tests (AAT) are often written using GUI-driving test frameworks, e.g. Selenium. These tools are sometimes referred to as Acceptance Testing Tools/Frameworks.
The agile community have long wanted acceptance tests to be written by the customer (hence the newer name of customer tests) and ideally before work starts on the story. Because the acceptance tests are written before the code, this led to the term Acceptance Test Driven Development (ATDD) or Automated Acceptance Test Driven Development (AATDD) because it somewhat follows the pattern of test before code as practiced by Test Driven Development (TDD). I will talk more on TDD later. If you have implemented Scrum as your agile practice read Product Owner in place of customer. You will find me switching between the two phrases in this article.
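
For concreteness, here is a minimal sketch of what such a GUI-driving automated acceptance test might look like using Selenium WebDriver with JUnit. The URL and element ids are invented for illustration, and the selenium-java and chromedriver dependencies are assumed to be available.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class CheckoutAcceptanceTest {

        // Maps directly to a story's acceptance criterion:
        // "a customer who checks out sees an order confirmation".
        @Test
        public void customerSeesConfirmationAfterCheckout() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("https://shop.example.test/");          // hypothetical URL
                driver.findElement(By.id("add-to-cart")).click();  // hypothetical element ids
                driver.findElement(By.id("checkout")).click();
                assertTrue(driver.getPageSource().contains("Order confirmed"));
            } finally {
                driver.quit(); // always release the browser
            }
        }
    }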

Product owners didn’t buy into the vision

The reality of where the above vision of acceptance testing has ended up is that most product owners are not able to, or not interested in, writing such tests. Particularly if it requires using a scripting or programming language.
When the agile community witnessed this reluctance, they thought that if they could create functional testing tools that allowed tests to be written in applications product owners are comfortable with, like Excel or Word, then the practice would become more palatable and widely accepted. This resulted in tools such as FitNesse. I’m going to step out onto the ledge and say that, for the most part, these tools did not work in the manner that was hoped for. Even using this medium to write tests, product owners were still reluctant to produce testing artifacts, and found the output of FitNesse confusing and of little value to them.
So the community went back to the drawing board and valiantly tried another approach. Enter Behaviour Driven Development (BDD). The BDD approach was to create a framework where the tests are written in plain English and saved as plain text documents. The thinking was: if we make it simpler, product owners and customers will want to write tests. They hoped that if the customers could do this, then the clever devs and their BDD frameworks would do all the other magic to turn these plain text documents into automated acceptance tests.
I am yet to see a product owner get excited about this breakthrough, throw their arms into the air declaring “Hallelujah! Finally I am in control of the quality of this project”, and thank the agile community for developing the tool they have been waiting for all their life. In short, I feel agilists are guilty of not understanding the product owner’s needs and of telling them how they should be doing their job. This is akin to software companies with the view - “If only the customers would get just how awesome this product is, then they would use it the way that we designed it - the way they are supposed to use it. The users need all these awesome features we have created for them, but they just don’t understand the concept. The user is the problem and we need to educate them.”

Product Owners aren’t coders, they’re business people

Product Owners have their role because they understand requirements, customers, business drivers, marketing, sales, etc. They are often business analysts and spend much of their time talking with customers, stakeholders, sales, marketing, and usability experts, visiting sites/customers, and of course making themselves available to the team to answer questions, provide scope, and resolve any uncertainty that may arise in a story during an iteration. So thinking of acceptance criteria in a certain format, or in a programming language, is more often than not incompatible with their skill set and interests. This is why, in my experience, they are reluctant to write acceptance tests. They seem happy to create acceptance criteria when asked, but are reluctant to do anything more than that.

Acceptance tests, and more especially BDD tests, are hard to refactor

Acceptance tests tend to just grow in a corner of the codebase and are never refactored. The acceptance test regression suite gets added to each iteration, and no thought goes into questions of duplication and quality. E.g. do we need to refactor these tests? Are they all still relevant?
Test code is a first-class citizen and needs refactoring. Acceptance tests often don’t lend themselves to being refactored, and I have yet to see a way to refactor plain text BDD tests.

Acceptance and BDD test suites are slow and this only gets worse over time

An acceptance test suite (sometimes called a regression suite) keeps getting bigger and bigger, and slower and slower. One day you work on a story and find that you break a hundred, or hundreds, of these tests with one small change and have to go back and edit each of the failing tests. Or, what more often happens, the entire suite is thrown out and the ATDD/BDD exercise is declared a failure.
When the running of an acceptance test regression suite (accumulated acceptance tests) starts taking an extended period of time, you lose the benefit of short feedback cycles. The likelihood of a dev team taking action on a failed build reduces in proportion to the length of time the build takes; i.e. the longer the build, the less likely it is to be maintained and acted upon when it fails. Without refactoring, any test suite will suffer from accretion and entropy and eventually be abandoned, particularly if the suite takes longer and longer to run.

Acceptance and BDD test suites are fragile and costly to maintain
Acceptance tests are known for suffering from false-positive test failures. Development teams are unlikely to take any action when the build fails once they are tired of expending vast amounts of time and energy on fixing fragility - that is, ascertaining whether the failure was a bug in the code, a bug in the test, a change in the environment, or just the suite framework being flaky. Fixing bugs that are intermittent and hard to reproduce is notoriously difficult and time consuming. When this happens repeatedly, the solution is often to comment out the test, remove the test, or, if it happens too often, abandon the entire suite.

ATDD focuses on the wrong part of the testing pyramid

Don’t get me wrong, I like integration, system, and end-to-end tests. I’m in favour of testing the entire pyramid (see the testing pyramid diagram). I’m not suggesting throwing the baby out with the bathwater when I criticize ATDD and BDD. Getting functional tests to run quickly, avoid fragility, and stay appropriate to the product is a skill of an advanced development team. Too often new teams leap on ATDD as a best practice, and the testing pyramid is turned upside down as their primary focus moves to the top of the pyramid, where ATDD lives. In the pyramid diagram you will notice that unit testing and Test Driven Development (TDD) should be the primary focus of the team and the foundation block of all testing. When a team is proficient at TDD with unit tests, they are more likely to create the appropriate level of testing at the higher levels of the pyramid. ATDD should not be undertaken by beginner (or SHU-level) development teams, in my opinion.

ATDD is too often mistaken as TDD (Test Driven Development)

“We do TDD here” is a comment I hear too often from teams that are not doing any unit testing, but have focused their testing efforts on functional testing only and believe this is what TDD is. (As an aside, even if you are writing unit tests, that doesn’t mean you are doing TDD. Unless you are writing the unit tests at the same time as the code, you are not doing TDD.)
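
For contrast, here is the TDD rhythm in miniature, as a sketch using an invented FizzBuzz-style example (the production code is inlined here only so the snippet compiles on its own):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class FizzBuzzTest {

        // Green step: just enough production code to satisfy the tests below.
        // In real TDD this method does not exist yet when the first test is
        // written; the test fails first (red), and this code is added in response.
        static String fizzBuzz(int n) {
            if (n % 15 == 0) return "FizzBuzz";
            if (n % 3 == 0) return "Fizz";
            if (n % 5 == 0) return "Buzz";
            return String.valueOf(n);
        }

        // Red step: each test is written before the code it exercises.
        @Test
        public void multiplesOfThreeAreFizz() {
            assertEquals("Fizz", fizzBuzz(9));
        }

        @Test
        public void multiplesOfThreeAndFiveAreFizzBuzz() {
            assertEquals("FizzBuzz", fizzBuzz(15));
        }
    }

Writing the tests after the code gives you a regression suite, which is valuable, but it is not TDD: the tests never drove the design.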

ATDD can encourage big design up front instead of emergent design

When everything else in agile follows the philosophies of just-in-time (JIT) and simple design, many of the assertions you come up with in an acceptance test before implementing the code turn out to be based on assumptions about functionality that prove wrong once the code and design have emerged. This then requires you to re-write the acceptance tests during the iteration. Loose or vague requirements (as favoured by agile) mean that the way you implement a story may be very different to the way you thought it would be implemented before you started. The ability to evolve the solution is embedded in the nature of agile processes and is indeed one of its strengths. Because of this, I’m going to claim that many acceptance tests go against agile architecture and emergent design.
This mostly happens when your acceptance tests are driving a GUI. Now, I am not opposed to designing a GUI up front; indeed, it is helpful for the developers to have a design or mock to work to when implementing the story. But it is often only when you start implementing a design (coding) that you find all the flaws in it, and all the edge cases the designer did not think of. The result is that the design and/or functionality changes, or emerges (part of emergent design), during the iteration, which is why I feel it would have been a waste of time and effort to have written tests against the GUI before the GUI was implemented.

ATDD is time consuming and not lean

The practice of ATDD can consume time in these areas :-
  1. Creating fixtures to drive BDD tests
  2. Re-writing the tests after emergent design has changed the original design
  3. Maintaining a fragile suite that suffers from false positives
  4. The tests themselves are slow to run
This is not an insignificant amount of time, and it is definitely not lean in nature.

BDD context switching

I prefer acceptance testing tools that sit on top of the same technology you are using for unit testing, e.g. JWebUnit, which runs on JUnit. You can edit and run the tests all in the same IDE.
BDD testing involves switching between IDEs and/or language technologies, which feels unnecessarily disjointed to me.
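
Here is a sketch of what I mean, with the method names recalled from the JWebUnit API and the app URL and form field names invented, so treat it as illustrative: the test is plain JUnit, so it runs in the same IDE, build, and debugger as the rest of your unit tests.

    import net.sourceforge.jwebunit.junit.WebTester;
    import org.junit.Before;
    import org.junit.Test;

    public class LoginFunctionalTest {

        private WebTester tester;

        @Before
        public void setUp() {
            tester = new WebTester();
            tester.setBaseUrl("http://localhost:8080/myapp"); // hypothetical app under test
        }

        @Test
        public void userCanLogIn() {
            tester.beginAt("/login");
            tester.setTextField("username", "alice");  // hypothetical form field names
            tester.setTextField("password", "secret");
            tester.submit();
            tester.assertTextPresent("Welcome, alice");
        }
    }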

When do you call the horse dead?

The industry has had long enough to try the concept of acceptance testing out on the community of product owners, and the uptake has proven to be poor. I would surmise a general failure.
So how much longer are we going to flog this dead horse? Compare this to the uptake and acceptance of TDD as a practice. The TDD proof is in the pudding: TDD has stuck and works. ATDD did not, and is a dead horse that the agile community at large seems to keep wanting to revive by changing its shape and creating new frameworks and approaches instead of ditching it.

Testing Practices I do like

Writing good, clean and readable tests

Your tests are first-class citizens and form part of the documentation (or specification) of your application. Learning to write good tests, wherever they sit in the pyramid, is an art in itself, and one that must be learnt by every agile developer and practiced by every agile team. Bob Martin has a great chapter on unit testing in Clean Code: A Handbook of Agile Software Craftsmanship. The pattern of keeping test code clean applies to all types of automated tests.

Refactoring tests

All code suffers from rot and needs to be kept fresh, including test code. Learn to refactor your tests. I highly recommend this book on this topic :- xUnit Test Patterns: Refactoring Test Code by Gerard Meszaros.
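
As a small taste of the kind of refactoring Meszaros catalogues, here is a sketch of his Creation Method pattern (the Invoice class is hypothetical, inlined so the example stands alone): duplicated multi-step setup is pulled out of the individual tests into one intention-revealing helper.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InvoiceTest {

        // Hypothetical class under test, inlined for the example.
        static class Invoice {
            private int total;
            void addLine(int amount) { total += amount; }
            int getTotal() { return total; }
        }

        // Creation Method: setup that used to be copy-pasted into every test
        // now lives in one place, so a change to how invoices are built is
        // made once instead of in dozens of tests.
        private Invoice invoiceWithTwoLines() {
            Invoice invoice = new Invoice();
            invoice.addLine(40);
            invoice.addLine(60);
            return invoice;
        }

        @Test
        public void totalIsTheSumOfLineItems() {
            assertEquals(100, invoiceWithTwoLines().getTotal());
        }
    }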

Use the right tool for the right job. What Specification frameworks are good for

If your business has areas of functionality or business logic that have to meet compliance or legal requirements and be documented, then this can be a good use of a BDD or other specification testing tool. You can meet the compliance requirement around documentation and build a valuable test artifact at the same time. Win/win!

BDD test protocol

Given, when, then - is a nice way to think about your tests. It has become very popular among unit test frameworks too. I like it. It is similar to the Arrange, Act, Assert pattern, but it builds the structure into the test protocol and thus improves the readability of tests.
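
In a unit test this can be as simple as three comments marking the sections; here is a sketch with a hypothetical Account class inlined to keep it self-contained:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AccountTest {

        // Hypothetical class under test, inlined for the example.
        static class Account {
            private int balance;
            Account(int openingBalance) { balance = openingBalance; }
            void withdraw(int amount) { balance -= amount; }
            int getBalance() { return balance; }
        }

        @Test
        public void withdrawingReducesTheBalance() {
            // given: an account with an opening balance
            Account account = new Account(100);

            // when: the behaviour under test is exercised
            account.withdraw(30);

            // then: the outcome is asserted
            assertEquals(70, account.getBalance());
        }
    }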

Following the testing pyramid - Agile Testing

High performance teams I have worked with know how to use end-to-end and integration test frameworks effectively to ensure they are writing quality code and are not breaking functionality as they go. They know when to write these tests and what to test. They know how to refactor these tests to keep them fresh and relevant, and to avoid fragility. They know how to avoid duplication in these tests and how, and when, to implement configuration management. They take ownership of writing and maintaining these tests (as they do for the quality of the entire product in general). They know how and when to break these tests into suites to optimize build performance, for example splitting out a smoke test suite that runs on every local build from a more complete regression suite that runs on a CI server after each check-in. They know how to create suites of tests and how to schedule and chain builds, e.g. every night we shall run stress/performance testing and/or benchmarking. And not to be forgotten, the base of the testing pyramid is solid! That is to say, they have embraced TDD and have a high level of unit test code coverage.
This is what I call “agile testing”. I have asked non-coding agile consultant colleagues to use this term when talking about best practices for agile teams, because it refers to the entire pyramid, rather than falling back on the (somewhat rhetorical) terms ATDD and TDD. The rule is: if you can’t write them, don’t talk about them; use the term “Agile Testing” instead.
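
One way to do the suite splitting mentioned above is JUnit 4's Categories feature; the class names here are invented, and each piece would live in its own file.

    // File: SmokeTest.java - a marker interface used to tag fast, critical-path tests.
    public interface SmokeTest {}

    // File: StartupTest.java - tag individual tests with the category.
    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    public class StartupTest {
        @Category(SmokeTest.class)
        @Test
        public void applicationStarts() {
            // a fast, critical-path check goes here
        }
    }

    // File: SmokeSuite.java - runs only the smoke-tagged tests on every local
    // build; the full regression suite runs on the CI server after each check-in.
    import org.junit.experimental.categories.Categories;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    @RunWith(Categories.class)
    @Categories.IncludeCategory(SmokeTest.class)
    @SuiteClasses({ StartupTest.class })
    public class SmokeSuite {}

The smoke suite stays fast enough to run on every local build, while the CI server runs everything.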

References and other supporting articles


Flipping the Automated Testing Triangle: the Upshot - Patrick Welsh

TDD: Where Did It All Go Wrong? - Ian Cooper

The Problems With Acceptance Testing - James Shore

A Case Against Cucumber - Kevin Liddle

AT FAIL - Uncle Bob Martin

http://www.jimmycuadra.com/posts/please-don-t-use-cucumber

https://www.thoughtworks.com/p2magazine/issue12/bdd-dont/

Wednesday, April 3, 2013

The Problem With Velocity


There is a smell in agile around velocity (and estimation) that has not been resolved yet. So what is wrong with velocity? Below are the myriad questions I get asked whenever I teach velocity, which indicate to me that the concept is not particularly simple and that there is a smell (to use a developer phrase).

  • How do I account for capacity changes from sprint to sprint?
  • What do I do with capacity when the team is particularly cross functional in nature and has developers and non-developers on it e.g. testers, Business Analysts, UI, tech writers?
  • When the backlog is being supported by multiple teams – do I have them aligned around the same sizing and if so how?
  • Do I measure personal velocity (i.e. by team member)?
  • Do UI points differ to Testing points which differ to Tech Writing points etc.?
  • Do I track actual velocity and estimated?
  • How do I track/account for stories that spill over with regards to velocity and what size the story ends up being?
  • Do I get credit for part of the story that was complete (when I don’t hit commitment)?
  • Do you burn up points and then stop work on the story when you have delivered that many points of value?
  • Do I put points on spikes and/or research tasks?
  • Do I put points on bugs?
  • Do I put points on non-dev activities that are in the sprint but required to get to completion?
  • Are points just complexity or what if there is little complexity and a lot of repetition or time?
  • If we change team members do we need to re-calibrate the points?
  • If we take velocity as an average of the sprints, how do we account for different capacities in the sprints?
  • Is velocity a rolling average or the average from day one forward?
  • If rolling, how many sprints should I use?
  • Do I need to reset the velocity when the team changes?
  • Can we have an exchange rate from our points to another team's points?
  • Points aren't real so why should I use them?
  • If we do extra work in the sprint that wasn't planned for do we get credit for it?
  • Should the goal be to increase velocity each sprint?
  • If multiple teams are working off the same backlog, do they have to be using the same point values?


I have answers for all these questions, but I am not going to go into them here. My point is that the concept of velocity is not as simple as it seems on the surface, and in fact it has some serious flaws.
So what is the solution?


If velocity is not a problem for your team because you have found something that works, then yay! Stay with it. I have seen it work well, and not so well. It seemed to work best in teams made up solely of developers who were pair programming (i.e. XP teams).


Switching to kanban will resolve many of the issues, but only if it fits your release process. Unless you are making use of inspect-and-adapt at the end of an iteration (which is what iterative development gives you), you may as well ditch iterative development and go with a flow system like kanban. Or, if you release to a heartbeat cycle and whatever is done by the next release point gets in, then kanban might be a better fit too. Kanban has a different approach to throughput than velocity, one that does not raise all the questions above. Hence the recommendation, but only if the process fits.


Otherwise - watch this space or please let me know if you have any ideas or have seen something that works to resolve this aspect of iterative development.




Tuesday, January 8, 2013

Trust the Team

Micro manager fiddling with the team

<music src="Pink Floyd - The Wall">
We don't need no constant tracking.
We don't need no micro managing.
Hey - Business - Leave those devs alone!
All in all it's just another, task off the wall.
</music>

What does a delivery team do? Delivers! They make a commitment and then deliver on it. If they are not going to make the commitment, then they promise to tell the Product Owner the minute they think the sprint is at risk. So, given that this is the way a committed delivery team works, why do management (and sometimes Product Owners) feel they need to track the day-to-day progress of a team through the sprint? The answer is that management are still in traditional project management tracking mode, where they feel their job is to track task completion and hours remaining. If this is true for your company/project/product - STOP!!!! You are disempowering the team and, in effect, telling them that you do not trust them.

I personally don't see the need for exposing (or even using) a burn-down chart in a committed team that delivers consistently, and I have worked in many such teams. The burn-down chart is an (optional) tool used by a team to help them know if/when they are behind on a sprint or likely not to succeed. If your team finds it useful - yay! Go ahead and use it. But if you have a method in place that gives you the same result without this tool - yay! Use that.

So, given that the business does NOT need to track the daily progress of sprints (because the delivery teams are meeting their commitments), what tracking is required? The product owner and their backlog are the only point of tracking for a project. For enterprise or portfolio tracking (i.e. multiple backlogs), an electronic tool might make sense and help you. If so, use one. If not, don't.

If a delivery team find an electronic tool useful, then let them use a tool. If they do not, then do not enforce one on them because "you need to track their work". That is not an agile mindset, and you are in fact expressing distrust of your teams.

I've said it before and I'll say it again. I personally dislike electronic tools within teams and much prefer big visible charts that you can glean a thousand pieces of information off at a glance. Keep it simple stupid! (And start trusting your teams.)