A Scrum Master Checklist

While there is no single prescription or silver bullet that ensures repeatable success for every feature, every team, and every organization, the following Scrum Master checklists may help you start thinking about the things that will ultimately move the ball in the right direction.

The Really Short Checklist

Start with these three lines of inquiry:
  1. Is the team delivering working, tested features every 4-6 weeks (or less)? Are your clients happy? Are they remaining loyal and buying more of what you offer?
  2. Are the team's processes continuously improving? Are team members happy, and do they have ownership? Are they smiling more?
  3. Is the team delivering what the business needs most? Are the stakeholders happy? Are they relaxed and trusting the team rather than micromanaging and controlling it?
If you can say "yes" to all of the above, there is no need to read any further. Instead, please let me buy you a nice dinner so you can share how you got to Shangri-La. (Seriously, contact me.)

Still Reading, eh?

To help support and sustain self-organizing teams, the Agile community has identified the following core elements of Scrum: the roles, artifacts, and activities that help teams stay on the path to high performance:

Are all of these in place and working smoothly? Newly formed teams often need a firm hand from the Scrum Master to get established and to stay on track. (See Shu Ha Ri.)

Then, as teams mature and become self-organizing, new ideas emerge – "what if…" or "how can we…" – and they are worth exploring as you move beyond the core.

Beyond the Core: “GASP”

There has emerged a great deal of common ground in the community in regard to “other things” beyond “Core Scrum.” These are ideas that emerge from inspecting and adapting. And many of these things have been shared far and wide by the Scrum community as “Generally Accepted Scrum Practices” or GASPs.

Mike Cohn's definition:

A Generally Accepted Scrum Practice (GASP) is an activity performed by many, but not necessarily all, Scrum teams. A team that does not perform the practice can still be considered to be doing Scrum. So, something like “work in short, time boxed iterations no longer than a calendar month” is not a GASP. It is more of a Scrum rule. To be considered a GASP, the practice must be something “generally accepted” as a good idea. As a working definition of that, I’ve been using the idea that “every Scrum team should be aware of the practice but some teams may justifiably choose not to perform that practice, often choosing to do something similar instead.”

The Slightly Longer Scrum Master Checklist

Here are some of the non-core practices I’ve found useful with the teams I coach:

  1. Is the Definition of Ready top of mind during grooming and honored at planning?
  2. Is the backlog groomed with clear classes of work and (adjustable) priorities?
  3. Are near-term priority cards relatively story-pointed?
  4. Are the team's plannable hours known?
  5. Is there an agreed-upon "adjustment" to the plannable-hours bucket (e.g., 50% or the "Magic of Pi")?
  6. Is a target allocation of plannable hours for tangible and intangible work agreed upon with the Product Owner?
  7. Are Sprint Goals clearly articulated and visible at all times?
  8. Is business value clear and prioritized?
  9. Is the Definition of Done known by all, and periodically reviewed and updated?
  10. Are external risks known and managed (e.g., SMEs, dependencies)?
  11. Sprint 0 – heresy to you purists, I know… (covers intake, discovery, and deploy-first)
  12. A close-out sprint – sharpen the saw, scratch an itch, clean the attic, pay off some debt
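Items 4-6 above boil down to a simple capacity calculation: raw hours, scaled by whatever adjustment the team has agreed to. A minimal sketch in Python; the function name, default 50% focus factor, and example numbers are illustrative assumptions, not part of the checklist:

```python
def plannable_hours(team_members: int, sprint_days: int,
                    hours_per_day: float = 8.0,
                    focus_factor: float = 0.5) -> float:
    """Raw sprint capacity scaled by the team's agreed 'adjustment'.

    focus_factor=0.5 models the 50% example from the checklist:
    half of each day goes to meetings, interruptions, and support.
    """
    return team_members * sprint_days * hours_per_day * focus_factor

# Example: 5 people, a 10-day sprint, 50% adjustment -> 200 plannable hours.
print(plannable_hours(5, 10))  # → 200.0
```

The point is not the arithmetic but that the adjustment is explicit and agreed with the Product Owner, rather than silently assumed.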

The Really Long Scrum Master Checklist

Adapted from CollabNet: Michael James / Bob Schatz

How is our product owner doing?

  • Is the business value of the releases, features, and user stories clearly communicated?
  • Is the Product Backlog prioritized according to the Product Owner's latest thinking?
  • Are requirements from all stakeholders captured in the Product Backlog? Remember: the backlog is emergent.
  • Is the Product Backlog well groomed? (To maintain a manageable number of items, keep things more granular toward the top, with general epics at the bottom. It is counterproductive to over-analyze too far past the top of the Product Backlog; requirements often change in the ongoing conversation between the developing product and the stakeholders/customers.)
  • Could any requirements (especially those near the top of the Product Backlog) be better expressed as thinner user stories?
  • Do the stories at the top of the Backlog meet a Definition of Ready?
  • Is your Product Owner aware of existing technical debt?
  • Is your Product Owner aware of escaped defects?
  • Have you helped your Product Owner organize backlog items into appropriate versions, epics or priority groups?
  • Does everyone know whether the release plan still matches reality (based on velocity and burndown rates)?
  • Did your Product Owner adjust the release plan after the last Sprint Review Meeting?
  • Do you need a release sprint, and if so, is one planned?
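The "does the release plan still match reality" question above is, at its core, simple arithmetic: remaining story points divided by average velocity. A minimal sketch, with all numbers and names illustrative rather than taken from the checklist:

```python
import math

def sprints_remaining(backlog_points: int, recent_velocities: list[int]) -> int:
    """Project sprints left by dividing remaining points by average velocity."""
    if not recent_velocities:
        raise ValueError("need at least one completed sprint to project")
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    return math.ceil(backlog_points / avg_velocity)

# Example: 120 points remain; the last three sprints delivered 18, 22, 20.
# Average velocity is 20, so the projection is ceil(120 / 20) = 6 sprints.
print(sprints_remaining(120, [18, 22, 20]))  # → 6
```

If the projection and the published release plan disagree, that is exactly the signal for the Product Owner to adjust the plan after the next Sprint Review.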

How is our team doing?

  • Does the team see, understand, and agree that there is business value in the near-term Product Backlog items?
  • Does the team estimate and plan the iteration collaboratively, effectively and efficiently?
  • Does the team have clear iteration goals in support of a potential shippable increment?
  • Is your team in a state of flow? Some characteristics of this state:
    • Clear goals (expectations and rules are discernible and goals are attainable, aligning appropriately with one's skill set and abilities).
    • Concentration and focus: a high degree of concentration on a limited field of attention.
    • A loss of the feeling of self-consciousness; the merging of action and awareness.
    • Direct and immediate feedback (successes and failures in the course of the activity are apparent, so that behavior can be adjusted as needed).
    • Balance between ability level and challenge (the activity is neither too easy nor too difficult).
    • A sense of personal control over the situation or activity.
    • The activity is intrinsically rewarding, so there is an effortlessness of action.
  • Do team members seem to respect each other, like each other, goof off together, and celebrate each other’s success?
  • Do team members hold each other accountable to high standards, and challenge each other to grow? (Say, mean, do)
  • Are team members self-organized, do they respect each other, help each other to complete iteration goals, manage interdependencies and stay in sync with each other?
  • Are there issues/opportunities the team isn’t discussing because they’re too uncomfortable?
  • Have you tried a variety of formats and locations for Sprint Retrospective Meetings?
  • Has the team kept focus on Sprint goals? Perhaps you should conduct a mid-Sprint checkup to re-review the acceptance criteria of the Product Backlog Items committed for this Sprint.
  • Is the Sprint taskboard up to date, and does it reflect what the team is actually doing? Beware the "dark matter" of undisclosed tasks and tasks bigger than one day's work. Tasks not related to Sprint commitments are impediments to those commitments.
  • Does your team have the right mix of skills to build a potentially shippable product increment?
  • Are the team's self-management artifacts (taskboard, Sprint Burndown Chart, impediments list, etc.) visible to the team and convenient for the team to use?
  • Are these artifacts adequately protected from meddlers? Excess scrutiny of daily activity by people outside the team may impede team internal transparency and self management.
  • Do team members volunteer for tasks?
  • Has the need for technical debt repayment been made explicit in the backlog items, gradually making the code a more pleasant place to work?
  • Are team members leaving their job titles at the door and being collectively responsible for all aspects of agreed work (testing, user documentation, etc.)?
  • Is the team taking time to Sharpen the Saw? (See Scrum Games…)
  • Is the team "healthy"? (See Agile Health Check models.)

How are our engineering practices doing?

  • Does your system in development have a "push to test" button allowing anyone (same team or different team) to conveniently detect when they've caused a regression failure (broken previously-working functionality)?
  • Do you have an appropriate balance of automated end-to-end system tests (a.k.a. “functional tests”) and automated unit tests?
  • Is the team writing both system tests and unit tests in the same language as the system they’re developing? Collaboration is not enhanced by proprietary scripting languages or capture playback tools that only a subset of the team knows how to maintain.
  • Has your team discovered the useful gray area between system tests and unit tests?
  • Does a continuous integration server automatically sound an alarm when someone causes a regression failure? Can this feedback loop be reduced to hours or minutes? (“Daily builds are for wimps.” — Kent Beck)
  • Do all tests roll up into the continuous integration server result?
  • Have team members discovered the joy of continuous design and constant refactoring, as an alternative to Big Up Front Design? Refactoring has a strict definition: changing internal structure without changing external behavior. Refactoring should occur several times per hour, whenever there is duplicate code, complex conditional logic (visible by excess indenting or long methods), poorly named identifiers, excessive coupling between objects, etc. Refactoring with confidence is only possible with automated test coverage. Neglecting refactoring makes it hard to change the product in the future, especially since it’s hard to find good developers willing to work on bad code.
  • Does your definition of “done” for each Product Backlog Item include full automated test coverage and refactoring? Learning Test Driven Development (TDD) increases the probability of achieving this.
  • Are team members pair programming? Pair programming may dramatically increase code maintainability and reduce bug rates. It challenges people's boundaries and sometimes seems to take longer (if we measure by lines of code rather than shippable functionality). Lead by example by initiating paired workdays with team members. Some of them will start to prefer working this way.
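The refactoring discipline described above, changing internal structure without changing external behavior under the protection of automated tests, can be illustrated with a tiny before/after sketch. The shipping-cost example and all names here are hypothetical, not from the checklist:

```python
# Before: complex conditional logic, a classic refactoring trigger.
def shipping_cost_v1(weight_kg: float) -> float:
    if weight_kg <= 1:
        return 5.0
    elif weight_kg <= 5:
        return 9.0
    else:
        return 20.0

# After: the same external behavior, expressed as a lookup table of
# (weight limit, cost) bands, checked in order.
BANDS = [(1, 5.0), (5, 9.0), (float("inf"), 20.0)]

def shipping_cost_v2(weight_kg: float) -> float:
    return next(cost for limit, cost in BANDS if weight_kg <= limit)

# The automated test guards the refactoring: it passes against both
# versions, which is what lets us change structure with confidence.
for weight, expected in [(0.5, 5.0), (3, 9.0), (5, 9.0), (10, 20.0)]:
    assert shipping_cost_v1(weight) == expected
    assert shipping_cost_v2(weight) == expected
```

In a real codebase these checks would live in the continuous-integration suite, so the alarm sounds within minutes if a "refactoring" accidentally changes behavior.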

How is our organization doing?

  • Is the appropriate amount of inter-team communication happening?
  • Are teams independently able to produce working features, even spanning architectural boundaries?
  • Are teams meeting with other teams and Birds-of-a-Feather groups to work the organizational impediments list?
  • When appropriate, are organizational impediments posted on the wall in a very visible place? (See Snake on the Board.) Can their cost be quantified in dollars, lost time to market, lost quality, or lost customer opportunities?
  • Are organizational career paths compatible with the collective goals of our teams?
  • Has our organization been recognized by the trade press or other independent sources as one of the best places to work, or a leader in our industry?
  • Are people contributing to a learning organization?

Last but certainly not least, how are you doing?

The Road to Continuous Improvement

See: Project Retrospectives and Retrospective Exercises – A Toolbox

And keep in mind: Shu ha ri, the virtuous circle.
