Tuesday, November 13, 2018

Saving agility from scrum: Accelerate book summary

Recently, I came across a post in our internal developer channel about the shortcomings of Scrum that a team was facing in their project. The Scrum and agile movement started in order to empower developers to stay in sync with business needs, but it has been reduced to a management control mechanism and appropriated by the existing project model: the project manager was reincarnated as scrum master, daily standups became status meetings, and the sprint retrospective became a one-way communication from scrum master to developers.

Then I got a recommendation to read a book on software delivery named “Accelerate: Building and Scaling High Performing Technology Organizations” by Nicole Forsgren, Jez Humble, and Gene Kim.

This book presents empirical evidence against these distorted agile practices and proposes a solution.


The authors write about distortions in Scrum:

“Velocity is designed to be used as a capacity planning tool; for example, it can be used to extrapolate how long it will take the team to complete all the work that has been planned and estimated. However, some managers have also used it as a way to measure team productivity, or even to compare teams.”

The authors mention two major problems with such productivity proxies:

1. “velocity is a relative and team-dependent measure, not an absolute one. Teams usually have significantly different contexts which render their velocities incommensurable. Second, when velocity is used as a productivity measure, teams inevitably work to game their velocity. They inflate their estimates and focus on completing as many stories as possible at the expense of collaboration with other teams. Not only does this destroy the utility of velocity for its intended purpose, it also inhibits collaboration between teams.”

2. “many organizations measure utilization as a proxy for productivity. The problem with this method is that high utilization is only good up to a point. Once utilization gets above a certain level, there is no spare capacity to absorb unplanned work, changes to the plan, or improvement work. This results in longer lead times to complete work. Queue theory in math tells us that as utilization approaches 100%, lead times approach infinity—in other words, once you get to very high levels of utilization, it takes teams exponentially longer to get anything done. Since lead time—a measure of how fast work can be completed—is a productivity metric that doesn’t suffer from the drawbacks of the other metrics we’ve seen, it’s essential that we manage utilization to balance it against lead time in an economically optimal way.”
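The queueing effect the authors cite can be made concrete with the M/M/1 waiting-time formula, where average lead time scales with 1 / (1 − utilization). This sketch is my own illustration, not from the book:

```python
# Illustration of the queue-theory point: in a simple M/M/1 model,
# average lead time = service_time / (1 - utilization), so lead time
# grows without bound as utilization approaches 100%.

def lead_time(service_time: float, utilization: float) -> float:
    """Average time a work item spends in the system (waiting + service)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:.0%}: lead time = {lead_time(1.0, u):.0f}x service time")
# At 50% utilization an item takes 2x its service time; at 99% it takes 100x.
```

Doubling utilization from 50% to near 100% does not double lead time, it multiplies it fifty-fold, which is why the book argues for deliberately keeping spare capacity.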

Key goals of measuring software delivery performance
  1. Should focus on a global outcome to ensure teams aren’t pitted against each other. 
  2. Should focus on outcomes not output: it shouldn’t reward people for putting in large amounts of busywork that doesn’t actually help achieve organizational goals.
Measures of delivery performance that meet these criteria:

  1. Delivery lead time: the time it takes to go from a customer making a request to the request being satisfied. It is the sum of the time it takes to design and validate a product or feature and the time to deliver the feature to customers. 
  2. Deployment frequency: a proxy for batch size, since it is easy to measure. A release will typically consist of multiple version control commits, unless the organization has achieved single-piece flow where each commit can be released to production (a practice known as continuous deployment). 
  3. Time to restore service: how long it generally takes to restore service for the primary application or service when a service incident (e.g., unplanned outage, service impairment) occurs. 
  4. Change fail rate: the percentage of changes to production (including, for example, software releases and infrastructure configuration changes) that fail. 
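Three of these four measures can be derived from a simple deployment log. The sketch below uses made-up records and field names of my own choosing (delivery lead time is omitted since it additionally needs request/commit timestamps):

```python
from datetime import datetime

# Hypothetical deployment log; records and field names are illustrative.
deploys = [
    {"at": datetime(2018, 11, 1), "failed": False, "restore_minutes": 0},
    {"at": datetime(2018, 11, 3), "failed": True,  "restore_minutes": 45},
    {"at": datetime(2018, 11, 5), "failed": False, "restore_minutes": 0},
    {"at": datetime(2018, 11, 8), "failed": False, "restore_minutes": 0},
]

# Deployment frequency: deploys per week over the observed window.
window_days = (deploys[-1]["at"] - deploys[0]["at"]).days or 1
deploy_frequency = len(deploys) / (window_days / 7)

# Change fail rate: share of production changes that failed.
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Time to restore service: mean minutes to recover, over failed deploys only.
failures = [d for d in deploys if d["failed"]]
time_to_restore = sum(d["restore_minutes"] for d in failures) / len(failures)

print(f"deploys per week: {deploy_frequency:.1f}")
print(f"change fail rate: {change_fail_rate:.0%}")
print(f"restore minutes:  {time_to_restore:.0f}")
```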

The solution: key capabilities

XP prescribes a number of technical practices such as test-driven development and continuous integration. Continuous Delivery also emphasizes the importance of these technical practices (combined with comprehensive configuration management) as an enabler of more frequent, higher-quality, and lower-risk software releases.

Continuous delivery capabilities
1. Use version control for all production artifacts. Version control is the use of a version control system, such as Git or Subversion, for all production artifacts, including application code, application configurations, system configurations, and scripts for automating build and configuration of the environment.

2. Automate your deployment process. Deployment automation is the degree to which deployments are fully automated and do not require manual intervention.

3. Implement continuous integration. Continuous integration (CI) is the first step towards continuous delivery. This is a development practice where code is regularly checked in, and each check-in triggers a set of quick tests to discover serious regressions, which developers fix immediately. The CI process creates canonical builds and packages that are ultimately deployed and released.

4. Use trunk-based development methods. Trunk-based development has been shown to be a predictor of high performance in software development and delivery. It is characterized by fewer than three active branches in a code repository; branches and forks having very short lifetimes (e.g., less than a day) before being merged into master; and application teams rarely or never having “code lock” periods when no one can check in code or do pull requests due to merging conflicts, code freezes, or stabilization phases.

5. Implement test automation. Test automation is a practice where software tests are run automatically (not manually) continuously throughout the development process. Effective test suites are reliable—that is, tests find real failures and only pass releasable code. Note that developers should be primarily responsible for creation and maintenance of automated test suites.

6. Support test data management. Test data requires careful maintenance, and test data management is becoming an increasingly important part of automated testing. Effective practices include having adequate data to run your test suite, the ability to acquire necessary data on demand, the ability to condition your test data in your pipeline, and the data not limiting the amount of tests you can run. However, teams should minimize, whenever possible, the amount of test data needed to run automated tests.

7. Shift left on security. Integrating security into the design and testing phases of the software development process is key to driving IT performance. This includes conducting security reviews of applications, including the infosec team in the design and demo process for applications, using pre-approved security libraries and packages, and testing security features as a part of the automated testing suite.


8. Implement continuous delivery (CD). CD is a development practice where software is in a deployable state throughout its lifecycle, and the team prioritizes keeping the software in a deployable state over working on new features. Fast feedback on the quality and deployability of the system is available to all team members, and when they get reports that the system isn’t deployable, fixes are made quickly. Finally, the system can be deployed to production or end users at any time, on demand.

Architecture capabilities

9. Use a loosely coupled architecture. This affects the extent to which a team can test and deploy their applications on demand, without requiring orchestration with other services. Having a loosely coupled architecture allows your teams to work independently, without relying on other teams for support and services, which in turn enables them to work quickly and deliver value to the organization.

10. Architect for empowered teams. Teams that can choose which tools to use do better at continuous delivery and, in turn, drive better software development and delivery performance. No one knows better than practitioners what they need to be effective.

Product and process capabilities

11. Gather and implement customer feedback. Whether organizations actively and regularly seek customer feedback and incorporate this feedback into the design of their products is important to software delivery performance.

12. Make the flow of work visible through the value stream. Teams should have a good understanding of and visibility into the flow of work from the business all the way through to customers, including the status of products and features.

13. Work in small batches. Teams should slice work into small pieces that can be completed in a week or less. The key is to have work decomposed into small features that allow for rapid development, instead of developing complex features on branches and releasing them infrequently. This idea can be applied at the feature and the product level. Working in small batches enables short lead times and faster feedback loops.

14. Foster and enable team experimentation. Team experimentation is the ability of developers to try out new ideas and create and update specifications during the development process, without requiring approval from outside of the team, which allows them to innovate quickly and create value. This is particularly impactful when combined with working in small batches, incorporating customer feedback, and making the flow of work visible.

Lean management and monitoring capabilities

15. Have a lightweight change approval process. A lightweight change approval process based on peer review (pair programming or intrateam code review) produces superior IT performance compared to using external change approval boards (CABs).

16. Monitor across application and infrastructure to inform business decisions. Use data from application and infrastructure monitoring tools to take action and make business decisions. This goes beyond paging people when things go wrong.

17. Check system health proactively. Monitor system health, using threshold and rate-of-change warnings, to enable teams to preemptively detect and mitigate problems.

18. Improve processes and manage work with work-in-process (WIP) limits. The use of work-in-process limits to manage the flow of work is well known in the Lean community. When used effectively, this drives process improvement, increases throughput, and makes constraints visible in the system.
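The mechanism behind WIP limits follows from Little's law: average lead time = average WIP / average throughput, so capping WIP is a direct lever on lead time. A small sketch with made-up numbers, not taken from the book:

```python
# Little's law: average lead time = average WIP / average throughput.
# Lowering the WIP limit shortens lead time at the same throughput,
# which is why WIP limits improve flow. Numbers are illustrative.

def avg_lead_time_days(wip_items: float, throughput_per_day: float) -> float:
    return wip_items / throughput_per_day

# Same team throughput (2 items/day), three different WIP limits:
for wip in (20, 10, 5):
    days = avg_lead_time_days(wip, 2)
    print(f"WIP limit {wip:2d} -> {days:.1f} days average lead time")
# Cutting WIP from 20 to 5 cuts average lead time from 10 days to 2.5.
```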

19. Visualize work to monitor quality and communicate throughout the team. Visual displays, such as dashboards or internal websites, used to monitor quality and work in process have been shown to contribute to software delivery performance.

Cultural capabilities

20. Support a generative culture. Generative culture is predictive of IT performance, organizational performance, and decreased burnout. Hallmarks of a generative culture include good information flow, high cooperation and trust, bridging between teams, and conscious inquiry.

21. Encourage and support learning. Is learning, in your culture, considered essential for continued progress? Is learning thought of as a cost or an investment? This is a measure of an organization’s learning culture.

22. Support and facilitate collaboration among teams. This reflects how well teams, which have traditionally been siloed, interact in development, operations, and information security.

23. Provide resources and tools that make work meaningful. This particular measure of job satisfaction is about doing work that is challenging and meaningful, and being empowered to exercise your skills and judgment. It is also about being given the tools and resources needed to do your job well.

24. Support or embody transformational leadership. Transformational leadership supports and amplifies the technical and process work that is so essential in DevOps. It comprises five factors: vision, intellectual stimulation, inspirational communication, supportive leadership, and personal recognition.
