On our Agile / Scrum projects, we found ourselves running into the same issues in every Sprint retrospective:
- We overlooked many of the important events in the sprint
- Not everyone was engaged
- They were taking too long
Having carried out a retro on the retro, we now have a new approach using planning poker cards. We go through a list of questions (see below) and use the cards to score how we feel about each one. The scoring is as follows:
- 1 – Mad
- 3 – Sad
- 5 – Ad(equate)
- 8 – Glad
- 13 – Rad(ical)
- ? – Don’t know or not applicable
The planning cards stay hidden until the whole team is ready to reveal. We then question the outlying scorers and discuss / re-score until we reach a consensus value. Scores of 5 are skipped, but for the others we agree what went wrong or right, why, and the action to take. The lower the score, the longer you should spend on it, working out what needs adding to the start-doing, stop-doing and keep-doing lists. The high scores are also worth discussing, to identify learnings for future sprints and projects. The list of questions is:
- Understanding of what’s going on, planned and blocked (Communication)
- Operating as a single team; pulling in the same direction (Teamwork)
- The assumptions the sprint was based on were accurate (Information accuracy)
- Did we know what the sprint success criteria looked like (Definition of Done)
- Access to data, environments, accounts and permissions etc. (Technical preparation)
- Architecture fit for purpose based upon the requirements (Design)
- Right quantity and quality of meetings to make decisions (Meetings)
- Are we happy with what was in the sprint and what was delivered (Delivery of sprint)
- Right amount of quality testing (Testing)
- How impressive was the demo of the sprint deliverables (Playback)
- Avoided or mitigated blockers (Blockers)
- Risks, Issues, Decisions & Change trapped and actioned (RAID maintenance)
- Is the sponsor providing energy, decisions, direction, clarification and backing (Sponsorship)
- Considering everything, better or worse than last sprint (Overall project trend)
- Have we missed anything that needs covering (AOB)
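The reveal-and-discuss round described above can be sketched in code. This is a minimal, hypothetical Python sketch (the function and variable names are my own, not part of any tool we use): each person's revealed score is compared against the group's median, the furthest scorers are flagged as the outliers to question, and the round repeats until everyone lands on the same value.

```python
# Valid planning poker scores: Mad, Sad, Ad(equate), Glad, Rad(ical).
# A "?" (don't know / not applicable) vote is represented as None.
VALID_SCORES = {1, 3, 5, 8, 13}


def find_outliers(scores):
    """Return the names whose revealed score sits furthest from the
    group's median. '?' votes (None) are excluded from the comparison."""
    revealed = {name: s for name, s in scores.items() if s is not None}
    if len(set(revealed.values())) <= 1:
        return []  # everyone agrees (or at most one person scored)
    values = sorted(revealed.values())
    median = values[len(values) // 2]
    max_dist = max(abs(s - median) for s in revealed.values())
    return [name for name, s in revealed.items()
            if abs(s - median) == max_dist]


def is_consensus(scores):
    """True once every revealed score is the same value."""
    revealed = [s for s in scores.values() if s is not None]
    return len(set(revealed)) <= 1
```

For example, with scores `{"ann": 5, "bob": 5, "cat": 13}`, `find_outliers` flags `cat` as the scorer to question first; once re-scoring brings everyone to the same number, `is_consensus` returns `True` and the team moves on to agreeing the action.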
The approach makes everyone think about each of the questions and come up with a score they can justify. This engages the whole team and covers every aspect of the sprint. We find that calling out the outliers is key, as it can quickly surface ways-of-working issues. The retro takes about 30-40 minutes.
I’d recommend adding the actions to your risk log, so that they get followed up in subsequent sprints.
We have also expanded the questions to cover a project implementation review / retro. Get in touch if you want more details of what they are.