Our team here at Botsplash recently surpassed a major milestone: completing our 100th sprint! As the company has grown from its humble beginnings as a three-person team into a full-fledged tech start-up, one common thread throughout this journey has been our passion for iterative and scalable technology.
To say we have learned a few lessons along the way would be a huge understatement. Here we outline some of those lessons and the agile best practices that have allowed us to grow and scale at a practical pace without compromising customer satisfaction.
Maintain Consistent Sprint Length
One big challenge any team faces when following agile methodology principles is scope creep, which is why we always prefer to reduce or change the scope before altering the sprint schedule.
One of the Agile Manifesto principles is to “deliver working software frequently.” How your team defines ‘frequently’ is relative and should be decided by the team itself based on factors such as team and company size. Sprint planning and task definition are inherently heuristic; although scheduling algorithms exist for formulating timeboxes, there is no one correct answer for sprint duration. Expect some experimentation during your first few releases before you find a good rhythm.
We have found success in delivering software on a two week cadence, deploying each new release every Friday before business hours. There will of course be unforeseen circumstances which will create delays. What is more important is that clear, well-defined contingency plans are set in place when the inevitable setback does occur. Ninety-nine percent of all unforeseen circumstances will trigger one of the following fallback plans:
- If the sprint must be extended due to a small setback, extend it by one day. This will require weekend deployment and monitoring.
- If unforeseen events (resource constraint, technical debt, etc.) require greater consideration, delay deployment by one week. This will allow for the proper testing and validation procedures to remain uncompromised.
We have chosen to operate within this paradigm because we are a product and feature-centric team first and foremost. This is something we do not compromise on. As the old saying goes, it is better to deliver 70% of 100% working features than 100% of 70% working features.
Maintain A Strict Schedule During Sprints
Throughout each sprint we maintain a consistent schedule of discovery, development, QA, and review meetings, anchored by weekly milestones such as the following:
- Monday - Team Code Review 1
- Tuesday - Code freeze and begin UAT
- Wednesday - Team Code Review 2
- Thursday - UAT completion, Deployment Go/No-Go meeting
- Friday - Morning deployment, monitoring the rest of the day
Code Review, Code Review, Code Review!
Team leads continuously perform code reviews and follow up with other developers and QA engineers. While each developer performs code reviews individually on a daily basis, it is also important to hold group code review sessions. These consist of two one-hour meetings in which all changes on the upcoming release branch are reviewed.
During these sessions, it is crucial to create the following:
- Before code review: A list containing major features or items in the current release. If time is ever a constraint during code review, this will allow us to prioritize which code changes to review.
- After code review: A list of any action items generated during the meeting. We will then use this to update and/or create new Jira tickets.
Branching Best Practices
We follow the usual git branching best practices, with a slight deviation. The consensus for git branching is to maintain development, main, feature, release, and hotfix branches as needed. Instead of a development branch, we maintain two release branches: one for development and staging, and one for production. If a feature becomes too large or complex, we give it its own branch named after the associated Jira ticket. Each production release branch is protected with admin-only permissions, which makes release and build management easier.
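As a hypothetical sketch of this layout (the branch and ticket names below are illustrative, not our actual ones), the two-release-branch flow might look like this in plain git:

```shell
# Illustrative two-release-branch flow; names are made up for this sketch.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"

# Two long-lived release branches replace the usual develop branch:
git branch release/staging        # development + staging builds
git branch release/production     # protected, admin-only merges in practice

# A large feature gets its own branch, named after the Jira ticket:
git checkout -q -b feature/BSP-123-csv-export release/staging
git commit -q --allow-empty -m "BSP-123: add CSV export"

# Merge into staging once reviewed; production is updated only at release time.
git checkout -q release/staging
git merge -q --no-ff -m "Merge BSP-123" feature/BSP-123-csv-export

# Tag the build with its release branch for traceability:
git tag "build-release-staging-001"
```

Keeping production merges admin-only means the production branch history only ever moves at a planned release, which is what makes the Friday deployment cadence predictable.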
Each build is automated and tagged with its release branch (in our Jenkins build server, for example), which provides greater traceability and clarity when managing each environment. Two related practices help us control risk in each environment:
- We proudly deliver a reliable and highly tailored experience for our customers. One way we have achieved this is by using feature flags. The use of feature flags not only allows us to onboard any new customer at a pace they feel comfortable with, but it also allows us to have more control over the user experience (UX). In conjunction with various monitoring tools such as Papertrail and DataDog, we can minimize the inherent risk that any new feature will introduce.
- Any backend change which touches the database or data access object (DAO) needs to be validated for backwards compatibility. These validations can be run via scripted unit tests.
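To make the feature-flag idea above concrete — the types and names below are hypothetical, not Botsplash's actual implementation — a per-customer gate can be as simple as a set lookup:

```typescript
// Minimal per-customer feature-flag sketch; "csvExport" and "newInbox"
// are made-up flag names for illustration only.
type FeatureFlag = "csvExport" | "newInbox";

interface Customer {
  id: string;
  enabledFlags: Set<FeatureFlag>;
}

// Gate a code path on a flag so each customer is onboarded to a new
// feature only when they are ready for it.
function isEnabled(customer: Customer, flag: FeatureFlag): boolean {
  return customer.enabledFlags.has(flag);
}

const earlyAdopter: Customer = {
  id: "acme",
  enabledFlags: new Set<FeatureFlag>(["csvExport"]),
};

console.log(isEnabled(earlyAdopter, "csvExport")); // true
console.log(isEnabled(earlyAdopter, "newInbox")); // false
```

Because the flag check is a single call site, a feature that misbehaves in monitoring can be switched off per customer without a redeploy.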
Clean, Iterative Programming
Tasks should be broken down into their smallest functional implementations. This lets us manage risk and stay agile based on team and customer feedback. Commit frequently, and keep every commit clean and clear by following uniform formatting rules. Always prefer the change that touches the fewest places and triggers the least testing. We lean on each other all the time, so it’s important to set yourself up for success!
Bi-Weekly Design and Architecture Meetings
Shared code ownership is the main objective of these meetings. Having the entire team on the same page during the initial design phase has a positive impact on the rest of the sprint. For example, because a solution has been discussed and agreed upon, knowledge and responsibility are shared between team members. Later, this makes code reviews more efficient.
Collaboration is very important, whether it is pair programming or seeking advice on a library or package that you would like to implement. Given that our team is international, these meetings are vital collaboration incubators.
Other Lessons Learned
- Focus on delivering value to the customer.
- Implement an MVP, allow yourself to learn lessons and grow along the way, and take every opportunity to enhance capabilities.
- Create both major and minor goals for each sprint and quarter.
- Maintain a parallel focus on security and technical debt during sprints.
- Schedule tech stack upgrades for the week after sprint deployment, and test every upgrade for three weeks.
- It is better to reduce or change scope than to move the sprint schedule.
- Split tasks up to make testing easier and keep scope under control during sprints.
We could not have completed our 100th Sprint without your help and support! Our commitment is to provide the best possible digital communication experience for our partners and clients. We will continue to grow and improve, and we look forward to sharing the journey with you along the way.