On our Agile / Scrum projects, we found we kept running into the same issues with our sprint retrospectives:
- We overlooked many of the important events in the sprint
- Not everyone was engaged
- They were taking too long
Having carried out a retro on the retro, we now have a new approach using sprint planning poker cards. We go through the list of questions (see below) and score how we feel about each one using the cards. The scoring is as follows:
- 1 – Mad
- 3 – Sad
- 5 – Ad(equate)
- 8 – Glad
- 13 – Rad(ical)
- ? – Don’t know or not applicable
The planning cards remain hidden until the whole team is ready to reveal. We then question the outlying scorers and discuss and re-score until we reach a consensus value. Scores of 5 are ignored, but for the others we agree what went wrong or right, why, and what action to take. The lower the score, the longer you should spend on it, working out what needs adding to the start-doing, stop-doing and keep-doing lists. The high numbers are also worth discussing to identify learnings for future sprints and projects. The list of questions is:
- Understanding of what’s going on, planned and blocked (Communication)
- Operating as a single team; pulling in the same direction (Teamwork)
- The assumptions the sprint was based on were accurate (Information accuracy)
- Did we know what the sprint success criteria looked like (Definition of Done)
- Access to data, environments, accounts and permissions etc. (Technical preparation)
- Architecture fit for purpose based upon the requirements (Design)
- Right quantity and quality of meetings to make decisions (Meetings)
- Are we happy with what was in the sprint and what was delivered (Delivery of sprint)
- Right amount of quality testing (Testing)
- How impressive was the demo of the sprint deliverables (Playback)
- Avoided or mitigated blockers (Blockers)
- Risks, Issues, Decisions & Changes captured and actioned (RAID maintenance)
- Is the sponsor providing energy, decisions, direction, clarification and backing (Sponsorship)
- Considering everything, better or worse than last sprint (Overall project trend)
- Have we missed anything that needs covering (AOB)
The approach makes everyone think about each question and come up with a score they can justify. This engages the whole team and targets every aspect of the sprint. We find calling out the outliers is key, as it quickly surfaces ways-of-working issues. The retro takes about 30–40 minutes.
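If you run the reveal step with scores captured digitally, the "question the outlying scorers" part can be automated. Below is a minimal sketch (not part of our actual tooling, and all names are illustrative): it treats the most common card value as the emerging consensus and flags whoever sits furthest from it, excluding `?` votes.

```python
from statistics import multimode

# Valid planning poker card values from the scale above; '?' is excluded.
CARD_VALUES = {1, 3, 5, 8, 13}

def find_outliers(votes):
    """Return the names whose revealed scores sit furthest from the
    team's most common score, so they can explain their reasoning."""
    numeric = {name: v for name, v in votes.items() if v in CARD_VALUES}
    if not numeric:
        return []
    # The mode is a reasonable stand-in for the emerging consensus.
    centre = multimode(numeric.values())[0]
    spread = {name: abs(v - centre) for name, v in numeric.items()}
    max_gap = max(spread.values())
    if max_gap == 0:
        return []  # everyone agrees already
    return sorted(name for name, gap in spread.items() if gap == max_gap)

votes = {"Ana": 3, "Ben": 5, "Cal": 13, "Dee": 5, "Eli": "?"}
print(find_outliers(votes))  # -> ['Cal']: furthest from the mode of 5
```

In a real session the reveal and discussion happen face to face, so treat this purely as an illustration of the mechanic: find the gap, then talk about it.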
I’d recommend adding the actions to your risk log, so that they get followed up in subsequent sprints.
We have also expanded the questions to cover a project implementation review / retro. Get in touch if you want more details of what they are.