Direct Playback

Subscribe: Apple Podcast
Check out the podcast on Google Play Music

Listen on Spotify!

SPaMCAST 534 features our interview with Al Shalloway. Al returns to the SPaMCAST after far too long. This week we discuss the trials and tribulations of scaling agile and his passion for getting knowledge transfer right! I hope you have as good a time listening to this interview as I had creating it.

Bio

Al Shalloway is the founder and CEO of Net Objectives. With 45 years of experience, Al is an industry thought leader in Lean, Kanban, product portfolio management, Scrum and agile design. He helps companies transition to Lean and Agile methods enterprise-wide and teaches courses in these areas. Al is a former SAFe Program Consultant Trainer. He has developed training and coaching methods for Lean-Agile that have helped Net Objectives’ clients achieve long-term, sustainable productivity gains. He is a popular speaker at prestigious conferences worldwide.

Website:  https://www.netobjectives.com/

Email:  alshall@netobjectives.com

LinkedIn: https://www.linkedin.com/in/alshalloway/

Re-Read Saturday News
This week we continue our re-read of The Tipping Point by Malcolm Gladwell. Chapter Three is a reminder of why this book continues to be important and useful. The density of ideas in this chapter is amazing. Stop borrowing your best friend’s copy and buy one for yourself!

Current entry:

Week 4 – The Stickiness Factor – https://bit.ly/2GuSJ96 (more…)

Listen Now
Subscribe: Apple Podcast
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 522 features the return of Jeff Anderson. Jeff joins us to discuss scaling agile and getting to a minimum viable product. Many teams and organizations struggle with the concepts of scaling and getting to an MVP; Jeff provides advice for not going crazy!

Jeff’s Bio

Jeff is the President of Agile by Design.  Over the last decade, Jeff has played a leadership role on a large number of enterprise-scale agile transformations, providing program management, operating model design and change-management services. Jeff frequently blogs about and presents on lean and agile adoption, and is the author of The Lean Change Method, which guides organizations through the application of lean startup techniques. His mission in life: to help knowledge workers be awesome at what they do.

LinkedIn:  https://www.linkedin.com/in/thomasjeffreyandersontwin/

Website:  http://agilebydesign.com/

Twitter: @thomasjeffrey


Re-Read Saturday News
The Software Process and Measurement Cast and Blog crew is still on the road this week. We will publish our thoughts on Chapter 7 next week. Please jump into the re-read of Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou (published by Alfred A. Knopf, 2018). Buy a copy and read along!

Previous Entries:
Week 1 – Approach and Introduction – https://bit.ly/2J1pY2t

Week 2 — A Purposeful Life and Gluebot – https://bit.ly/2RZANGh

Week 3 — Apple Envy, Goodbye East Paly and Childhood Neighbors – https://bit.ly/2zbOTeO

Week 4 — A Reflection – https://bit.ly/2RA6AfT

Week 5 — Sunny – https://bit.ly/2AZ5tRq

Next SPaMCAST
SPaMCAST 523 features our essay on Story Points.  Story points are a tool to help teams manage their flow of work.  Unfortunately, story points aren’t always used properly and can create more problems than they solve.

We will also hear from Jon Quigley who brings his Alpha and Omega of Product Development to the cast.

Heads up!

A Scrum of Scrums (SoS) is a mechanism to coordinate a group of teams so that they act as a team of teams. The SoS is a powerful tool. As with any powerful tool, if you use it wrong, problems will ensue. Six problematic implementations, called anti-patterns, are fairly common. We’ll discuss three in part 1 and finish the rest in part 2. (more…)

When the goal is complicated architecture, everyone needs to coordinate.

Much of the work to coordinate and synchronize goals happens during planning.  As Mike Cohn described with the metaphor of the Agile planning onion, Agile planning is not a one-time event nor are planning activities confined to the beginning of an increment or a sprint. However, delivering work using Agile is not just a big ball of planning. Goal coordination and synchronization activities need to happen outside of planning activities.  Several non-planning Agile techniques are useful for ensuring coordination.  The degree of usefulness is a function of size, complexity and the Agile maturity of those involved. The techniques include:

Test-First Development (TFD) in all of its forms (including Test-Driven Development, Behavior-Driven Development, and Acceptance Test-Driven Development) begins by establishing how the developers will prove the work they are planning to deliver. Expressing how the solution will be proved before writing the first line of code anchors the functionality being delivered to the effort’s goals. All of the test-first techniques can be applied to a project of any size; however, they require teams that have access to the correct tools and have at least a moderate Agile maturity.
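As a minimal sketch of the test-first flow (the pricing rule and function name below are hypothetical illustrations, not something from the post), the tests are written before the production code and anchor it to the goal:

```python
# Hypothetical test-first example: the tests below were written first (and
# failed), then just enough production code was added to make them pass.
import unittest


def apply_discount(order_total: float) -> float:
    """Production code written only after the tests existed.
    Hypothetical rule: orders of $100 or more get a 10% discount."""
    return order_total * 0.9 if order_total >= 100 else order_total


class TestApplyDiscount(unittest.TestCase):
    # Written first, these tests express how the work will be proved.
    def test_orders_of_100_or_more_get_ten_percent_off(self):
        self.assertAlmostEqual(apply_discount(100.00), 90.00)

    def test_smaller_orders_are_not_discounted(self):
        self.assertAlmostEqual(apply_discount(99.99), 99.99)


if __name__ == "__main__":
    unittest.main()
```

The same pattern scales up: BDD and ATDD simply express the “proof” in business-facing language before any code is written.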

Definition of Done provides a team or teams with a set of criteria, drawn from an overarching definition of done, that they can use to plan and bound their work. A definition of done that includes integration activities or a check against the increment’s goal is an effective means of keeping goals synchronized. The definition of done is applicable to all efforts regardless of size, and it becomes even more powerful as complexity increases.

Continuous Builds are a practice in which the application or product is built (or compiled) every time code is checked into the code repository. The build is immediately followed by some form of testing to make sure the “build” still works. Continuously building the software ensures that no single team or developer goes too far off track, because the build and its tests act as an arbiter of whether the product still works. This technique is applicable to all efforts (Agile or not, big or small); however, I have noticed that the use of continuous builds requires some experience and maturity with Agile.
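As a rough sketch of the idea (the project layout, install step and test directory below are assumptions, not a specific tool the post recommends), a continuous-build job simply rebuilds the product and reruns the tests on every check-in:

```python
# ci_build.py -- minimal continuous-build sketch: rebuild the product and run
# the test suite every time code is checked in. The editable install step and
# the tests/ directory are assumed for illustration.
import subprocess
import sys


def build_and_test() -> int:
    """Return 0 if the build and tests pass, otherwise the failing exit code."""
    steps = [
        [sys.executable, "-m", "pip", "install", "-e", "."],  # "build" the package
        [sys.executable, "-m", "pytest", "-q", "tests"],      # prove the build still works
    ]
    for step in steps:
        result = subprocess.run(step)
        if result.returncode != 0:
            return result.returncode  # stop at the first broken step
    return 0


if __name__ == "__main__":
    sys.exit(build_and_test())
```

Wired into a CI server, a failing run immediately tells every team that the shared product no longer works.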

Scrum of Scrums (SoS) is a mechanism that brings all of the Scrum Masters involved in an effort together to coordinate a group of teams so that they act as a team of teams. The SoS provides a platform for coordinating and synchronizing goals by ensuring teams are aware of what other teams are doing and whether they have had to make adjustments to the goals. An SoS is useful for coordinating efforts of all sizes; however, as efforts scale past two or three teams, other coordination techniques are needed in addition to the SoS.

Demonstrations, also known as demos, are Agile’s mechanism to share what the team has accomplished. Scaled Agile efforts often have demos at the team level at the end of every sprint, an integrated demo (all teams), and then a larger demo before a release. Demonstrations provide the ultimate proof of what has been built, allowing stakeholders to determine whether the effort’s goals have been met. Demos are useful for every Agile effort. Larger efforts will do demos both at the team level and then as a consolidated demo for the overall product.

Dynamic Testing (execution of the code), by definition, generates results that are compared against some expected result (even in exploratory testing). Those expected results represent an instantiation of the goals and objectives of the overall effort. Testing is important, but without the structure of test-first development it is a very weak tool for coordinating and synchronizing goals. Do not use this technique alone, regardless of the size of the effort.

Techniques for synchronizing special types of goals and objectives such as process improvement or technical goals are: 

Retrospectives are a platform for teams and teams of teams to examine their performance and to make changes to improve their delivery of value.  When an effort or organization has productivity, quality, and/or efficiency goals, retrospectives (using techniques such as the 6 Thinking Hats) are highly effective.  The retrospective provides a platform to share the objectives and then to synchronize on the steps needed to meet those goals and objectives.

Common Architectures and Standards are typically an instantiation of the technical goals and objectives of the organization. Efforts of all sizes can use a set of standards or a published architecture to effectively coordinate activity. An example of using an emergent architecture to provide guidance can be seen in the SAFe concept of the architectural runway. The runway is “built” just ahead of the need of the teams generating the functionality that will leverage that architecture.

Effectively coordinating and synchronizing goals is a requirement for any effort that is going to deliver value efficiently. Agile efforts often use many of these techniques in combination. Each technique interlocks and overlaps with the others, creating an environment that supports teams’ ability to self-organize and self-manage. The number of techniques needed, and how strenuously they need to be pursued, is a function of how many teams are involved, their Agile maturity and the complexity of the work. Conceptually, an effort with two collocated teams and a simple business problem to solve will need less goal coordination than an effort with many teams spread across the globe. The one absolute when it comes to goals and teams: coordination is always required.

Where are we going?

As we noted earlier in this theme, coordinating and synchronizing goals across multiple teams is not a new problem. Solving the problem at scale requires integrating specific steps into how work is being delivered. Coordination and synchronization steps work best when they are part of activities that are naturally designed to consolidate and disseminate information, while avoiding construction-related steps. Identifying the steps to coordinate and synchronize goals is a first step in scaling Agile. The size and complexity of the effort will contribute to the selection process. For example, delivering functionality from two collocated teams will need less overt goal coordination than an effort with twenty teams spread across the globe. Because adding coordination and synchronization steps into the workflow may feel like overhead to individual teams, care should be taken. However, well-executed coordination will provide a significant payback both to the teams and to the overall effort. The Agile planning mechanisms that are excellent tools for coordinating and synchronizing goals include: (more…)

Coordinate to achieve your shared goals.

Effectively scaling any methodology is a problem of establishing, coordinating and controlling the goals of each team working together to deliver a project or product. Clearly, scaling any type of work is not straightforward. At the very least, every new person and team increases the number of possible communication channels. The formula, n(n – 1)/2, is non-linear; a project with 2 teams has one channel while a project with 10 teams has 45. Many Agile techniques were developed (or, at least, evolved) at the team level and don’t address goal coordination. Scaling Agile requires taking a different tack than just adopting classic plan-driven methods and frameworks and putting them on top of team-level Agile. (more…)
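The growth is easy to verify; a few lines of Python reproduce the figures above:

```python
# Possible communication channels among n people or teams: n(n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2


for teams in (2, 3, 5, 10, 20):
    print(f"{teams} teams -> {channels(teams)} channels")
# 2 -> 1, 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190
```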

Just because it is done doesn’t mean it adds value.

In a recent exchange after the 16th installment of the Re-Read Saturday of The Mythical Man-Month, Anteneh Berhane called me to ask one of the hardest questions I had ever been asked:

Why doesn’t the definition of done include value?

At its core, the question asks how we can consider work “done” if no business value is delivered, even when the definition of done is satisfied, including demonstrably meeting the acceptance criteria. Is software really done if the business need has changed, so that what was delivered is technically correct but misses the mark in today’s business environment? This is a scenario I was all too familiar with during the 1980s and 1990s at the height of large waterfall projects but see less of in today’s Agile development environment. Anteneh Berhane’s question reminds us that the problem is still possible. We discussed five potential problems that disrupt the linkage between done and value. Here are the first two, and today we will discuss the final three. (more…)

You don’t need to define the roles if everybody knows who should be in the driver’s seat.

There are four basic topics that Agile charters address. Each category addresses different concepts that are important to help a team, or a team of teams in a scaled effort, act in a coordinated manner. The four categories are:

  1. Envisioning Success
  2. Behavior
  3. Timing
  4. Constraints

There are any number of ways to address the concepts in each of the categories, and teams and organizations often use multiple approaches to address a specific concept. For example, some charters include both a release plan and a set of milestones; both sections provide guidance on when things will happen during a project. Summarized below are the most common components used to address the concepts of behavior and timing. Each item includes a quick definition and a recommendation on whether the component should be used for a scaled Agile charter or a team charter. Yes means the component should typically be used and no means don’t. (more…)

Scaling up, up, up!

Agile User Acceptance Testing (AUAT) at the team level focuses on proving that the functionality developed to solve a specific user story meets the user’s needs. Typically, stories are part of a larger “whole,” and to truly prove that a business problem has been solved, acceptance testing needs to be performed as stories are assembled into features and features into applications/systems.

Individual teams accept user stories into sprints if they are using time boxes, as in Scrum. Stories should follow the guidelines found in the INVEST mnemonic coined by Bill Wake to generate a kernel of functionality that can be delivered. Because user stories are very granular, they often do not satisfy the overall business needs of the stakeholders. Product owners and other stakeholders generally want features. During backlog grooming, features are broken down from epics into stories, which are then developed and assembled to satisfy the business need. A typical feature requires multiple stories (a one-to-many relationship). Two basic scenarios can be used to highlight the need to scale from story-level AUAT to feature- and system-level acceptance testing.

Scenario One: Each Story Can Stand Alone

The simplest scenario would be the situation in which a feature is just the sum of the individual stories. This means that each independent story can be assembled and that no further acceptance testing is required. In this scenario, meeting the story-level acceptance criteria would satisfy the feature-level acceptance criteria and the system-level acceptance criteria. At best, this scenario is rare.

Scenario Two: Features Represent More Than The Sum of Parts

Features often represent more than the sum of the individual stories. Even relatively simple scenarios can be more than the sum of their parts. For example, consider a feature for maintaining a customer on an application. Stories would include adding a customer, modifying a customer, deleting a customer and inquiring on a customer. The acceptance criteria for the feature would more than likely require that the functionality in each story work smoothly together or meet a performance standard, all of which requires running an acceptance test at the feature level. Non-functional requirements are often reflected in overarching acceptance criteria captured at the feature or system level. These overarching criteria require performing AUAT at the feature and system level.
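A feature-level acceptance test for the customer-maintenance example might look like the sketch below; the CustomerRepository class and its methods are hypothetical stand-ins for the real application, used only to show the stories being exercised together rather than in isolation:

```python
# Feature-level AUAT sketch: the add, modify, delete and inquire stories for
# the hypothetical customer-maintenance feature must work smoothly together.
import unittest


class CustomerRepository:
    """Hypothetical stand-in for the application under test."""

    def __init__(self):
        self._customers = {}
        self._next_id = 1

    def add(self, name: str) -> int:
        customer_id = self._next_id
        self._next_id += 1
        self._customers[customer_id] = {"name": name}
        return customer_id

    def modify(self, customer_id: int, name: str) -> None:
        self._customers[customer_id]["name"] = name

    def delete(self, customer_id: int) -> None:
        del self._customers[customer_id]

    def inquire(self, customer_id: int):
        return self._customers.get(customer_id)


class CustomerMaintenanceFeatureTest(unittest.TestCase):
    def test_stories_work_together(self):
        repo = CustomerRepository()
        customer_id = repo.add("Acme Co.")                    # story: add
        repo.modify(customer_id, "Acme Corporation")          # story: modify
        self.assertEqual(repo.inquire(customer_id)["name"],   # story: inquire
                         "Acme Corporation")
        repo.delete(customer_id)                              # story: delete
        self.assertIsNone(repo.inquire(customer_id))


if __name__ == "__main__":
    unittest.main()
```

Each story’s own acceptance tests would still run; the feature-level test adds the proof that the stories satisfy the overarching criteria.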

The discussion of executing a feature- or system-level acceptance test often generates hot debate. The debate is less about the need to get acceptance and generate feedback at the feature or system level and more about when this type of test should be done. Deciding on “when” is often a reflection of whether the organization and teams have adopted a few critical Agile techniques.

  1. Integrated code base – All teams should be building and committing to a single code base.
  2. Continuous builds (or at least daily) – The single code base should be re-built as code is committed (or at least daily) and validated.
  3. Team synchronization – All teams working together toward a common goal (SAFe calls this an Agile release train) should begin and end their sprints at the same time.

A solution I have used for teams that meet these criteria is to coordinate the feature acceptance test through the Scrum of Scrums as the second-to-last official activity of each synchronized sprint (prior to the retrospective(s)). The feature AUAT requires team and stakeholder participation so that everyone can agree on whether the criteria are met. All of these activities assume that acceptance criteria were developed for each feature as it was added to the backlog and that overall system acceptance criteria were crafted in the team charter at the beginning of the overall effort. This ensures that delivery of functionality can move forward to release (if planned) without delays.

Where organizations have not addressed the three criteria, the response is often to implement a “hardening” sprint (also known as development plus one, test after, or just a testing sprint) so that the system can be assembled and tested as a whole. Problems found after stories are accepted generally require reopening stories and re-planning. Also, if work has gone forward and is being built on potentially bad code, significant rework can be required. My strong advice is to spend the time and money needed to implement the three criteria, thereby removing the need for hardening sprints.

Scaling AUAT to features that require more than a single story, team or sprint to complete is not as simple as looking at each story’s acceptance criteria. Features and the overall system will have their own acceptance criteria. Scaling is facilitated by addressing the technical aspects of Agile and synchronizing activities; however, these are only prerequisites to building layers of AUAT into the product development cycle.

Note – We have left a number of hanging issues, such as who should be involved in AUAT and whether a truly independent story requires higher levels of AUAT. We will address these in the future. Are there other aspects of AUAT that you believe we should address on this blog?

Like weather prediction, no methodology or framework is 100% perfect.

Agile was born as a synthesis of many minds and many frameworks. That synthesis yielded a single philosophy; however, since each person and framework brought a perspective to the table, that philosophy has been implemented as a wide variety of methodologies and frameworks. No methodology or framework is 100% perfect for every piece of work. Perfect is binary: when the answer is that a specific framework isn’t a perfect fit, organizations need to either find another approach or tailor the approach to make it fit. Scaling is a form of tailoring. Process frameworks have always recognized the need to tailor the process to meet the needs of the organizations and teams doing the work. For example, even the venerable CMMI, often lambasted for generating heavy or static processes, actually includes practices for tailoring across the model and the processes created to implement the model. In Agile, scaling often requires tailoring the processes used at the team level to support larger efforts. While all sorts of factors can drive the need to scale or tailor Agile frameworks, the size of the work is typically the single most critical driver. Other attributes that combine with size and affect how scaling is approached include complexity, risk and organizational politics.

Size of an effort is influenced by the number of functional, non-functional and technical requirements required to solve a specific problem. How related or integrated those requirements are will affect how the work is organized, and therefore the number of teams required to deliver it. A large, highly related effort will require coordination, which will require added processes such as Scrum of Scrums meetings or other forms of program management.

Complexity is made up of a huge number of attributes that include (but are not limited to) how difficult the technical problem will be to solve, whether the team has ever used the technology before, how many disciplines will be involved in solving the problem, and size. Complexity directly relates to how difficult the work will be to deliver. All things being equal, the more complex the work, the more effort will be required. A larger effort or more disciplines needed to deliver the project typically means more teams and more coordination. More effort, more disciplines or more people leads to a need for techniques to scale team-level Agile.

Risk is a reflection of uncertainty. Any event that could have a negative (or positive) impact that the team or organization can’t predict is a risk. A risk that could have a significant impact on the work or the organization as a whole needs to be monitored and managed. Risk management techniques are commonly added when scaling Agile frameworks.

Organizational politics might be considered a specialized type of complexity. Typically, organizational politics generates a higher need for oversight and reporting than team-level Agile techniques typically provide. The added organizational requirements generate the need for additional steps, processes and reviews, which requires scaling basic team-level Agile.

Not all projects are exactly the same, which is why we need to scale team-level Agile. Team-level Agile smooths out some of the required process differences through techniques such as time boxes and user story grooming. However, despite these techniques, work efforts (projects, programs, releases or products) are not all the same. Size, complexity, risk and organizational politics generate a need to add steps and processes on top of team-level Agile in order to meet the needs of larger, more complex or riskier work.