On our Agile / Scrum projects, we found we were suffering from the same issues coming out of Sprint retrospectives:
- We overlooked many of the important events in the sprint
- Not everyone was engaged
- They were taking too long
Having carried out a retro on the retro, we now have a new approach using planning poker cards. We go through a list of questions (see below) and each score how we feel about each one, using the cards. The scoring is as follows:
- 1 – Mad
- 3 – Sad
- 5 – Ad(equate)
- 8 – Glad
- 13 – Rad(ical)
- ? – Don’t know or not applicable
The cards remain hidden until the whole team is ready to reveal. We then question the outlying scorers and discuss / re-score until we reach a consensus value. Scores of 5 are ignored; for the others we agree what went wrong or right, why, and the action. The lower the score, the longer you should spend on it, to work out what needs adding to the start-doing, stop-doing and keep-doing lists. The high numbers can also be discussed to identify potential learnings for future sprints. The list of questions is:
- Understanding of what’s going on, planned and blocked (Communication)
- Operating as a single team; pulling in the same direction (Teamwork)
- The assumptions the sprint was based on were accurate (Information accuracy)
- Did we know what the sprint success criteria looked like (Definition of Done)
- Access to data, environments, accounts and permissions etc. (Technical preparation)
- Architecture fit for purpose based upon the requirements (Design)
- Right quantity and quality of meetings to make decisions (Meetings)
- Are we happy with what was in the sprint and what was delivered (Delivery of sprint)
- Right amount of quality testing (Testing)
- How impressive was the demo of the sprint deliverables (Playback)
- Avoided or mitigated blockers (Blockers)
- Risks, Issues, Decisions & Change tracked and actioned (RAID maintenance)
- Is the sponsor providing energy, decisions, direction, clarification and backing (Sponsorship)
- Considering everything, better or worse than last sprint (Overall project trend)
- Have we missed anything that needs covering (AOB)
The approach makes everyone think about each question and come up with a score they can justify. This engages everyone and covers all aspects of the sprint. We find calling out the outliers is key, as it can quickly surface ways-of-working issues. The retro takes about 30-40 minutes.
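For facilitators who like to automate the mechanics, the reveal-and-discuss round above can be sketched in a few lines of Python. This is a minimal illustration, not part of our actual process: the helper names (`outliers`, `discussion_order`) and the rule of treating the score furthest from the median as the outlier are assumptions layered on top of the rules described in this post (hidden reveal, question the outliers, skip 5s, spend longest on the lowest scores).

```python
from statistics import median

# Card values from the post: 1 Mad, 3 Sad, 5 Ad(equate), 8 Glad, 13 Rad(ical).
# "?" means don't know / not applicable and is excluded from the maths.
CARDS = {1: "Mad", 3: "Sad", 5: "Ad(equate)", 8: "Glad", 13: "Rad(ical)"}

def outliers(scores):
    """Return the team members whose revealed score sits furthest from the
    median, so they can be asked to explain their reasoning first.
    (Using distance-from-median is an assumption for this sketch.)"""
    numeric = {name: s for name, s in scores.items() if s != "?"}
    if len(numeric) < 2:
        return []
    mid = median(numeric.values())
    spread = max(abs(s - mid) for s in numeric.values())
    if spread == 0:
        return []  # everyone agrees: consensus already reached
    return sorted(name for name, s in numeric.items() if abs(s - mid) == spread)

def discussion_order(consensus):
    """Order the agreed question scores for discussion: 5s are skipped,
    and the lowest scores (which deserve the longest discussion) come first."""
    return sorted(((q, s) for q, s in consensus.items() if s != 5),
                  key=lambda qs: qs[1])

# One reveal round for a single question:
round_scores = {"Ana": 3, "Ben": 3, "Cal": 13, "Dee": "?"}
print(outliers(round_scores))  # ['Cal'] is furthest from the median of 3

# After consensus across questions, prioritise the discussion:
agreed = {"Communication": 3, "Teamwork": 8, "Testing": 5}
print(discussion_order(agreed))  # [('Communication', 3), ('Teamwork', 8)]
```

The same logic works just as well on a whiteboard; the point is simply that "question the furthest-out scorer, skip the 5s, start with the lowest score" is a mechanical rule anyone can apply.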
I’d recommend adding the actions into your risk log, so that they get followed up in the subsequent sprints.
We have also expanded the questions to cover a project implementation review / retro. Get in touch if you want more details of what they are.