Agile Myths and Ideologies Meet the Scaled Agile Framework

I spent a thoroughly stimulating evening last week at a meeting of the Omaha Agile Development group. I have recommended their meetings before; that recommendation stands. They’re doing valuable and important work serving the Omaha business community and its members.

CSG International, with operations here in Omaha, decided some years ago to increase their agility. Last week, author Dean Leffingwell consulted with them in Omaha, intending to help them identify good next steps and to sharpen the focus of their end goal. (Experience seems to show that such assistance from highly qualified advisors is an indicator of success in, and of the speed of, such transitions. But you may not find that advice entirely unbiased when it comes from a consultant! (grin)) As long as Dean was in town …

Omaha Agile Development learned of Dean’s upcoming visit and invited him to speak to us. CSG opened its new facility near Dodge and 180th to the group (two big, beautiful buildings, judging by what I saw). Something like 70 people attended. I am thankful to all involved. Well done.

I knew of Dean from having read his book Scaling Software Agility: Best Practices for Large Enterprises (ISBN 0-321-45819-2) on recommendation of a member of Omaha Agile Development. I, too, recommend the book. The earliest agile books concentrated their advice on small teams; this book is among those extending agile advice to “scale” (to collections of teams and potentially to large groups of contributors).

I used the title of Dean’s presentation for the title of this post: Agile Myths and Ideologies Meet the Scaled Agile Framework. Below, I share some of what I took from his presentation, along with (a little) background. This post is undoubtedly not a complete record of what he said.

Dean observed that at the time of Royce’s waterfall paper, waterfall was better than the alternatives then available, and he supports the industry’s decision at the time to use waterfall more. Had we known then what we know now about waterfall, agile, and technology development, we might well have moved differently. We didn’t know that material; our work with waterfall and new technologies has educated us well. Agile is good guidance for change now. If we knew now what we’ll be saying about agile in 20 or 50 years, we might move differently now. Dean says he doesn’t know what we’ll be saying, so he advocates moving to agile. Our experience with agile will inform future changes.

This paragraph is background from me: I have observed some bias among agile writers to give voice almost exclusively to well-developed aspects of agile. (I exclude from consideration statements of “religion war” fervor we sometimes hear among agilists, like “<Their method> is only for people who can’t figure out <my method, the right method>.”) A blog post aptly titled Is It Blasphemous To Criticize Agile is well worded, frames the issue well, and points to other valuable reading. On that subject …

Dean gives voice to both well- and under-developed aspects of agile. My opinion: he does it respectfully and in pursuit of finding a better way for all of us. His balance is refreshing.

He talked about his writing. These days, he says, most of his writing time goes to constant updates of his web site. It’s a rich set of material. It must be a key basis of his consulting practice. It’s free (the web site is free; probably not the consulting (snicker)). He calls the process he advocates the “Scaled Agile Framework (SAFe)” (no surprise that it’s part of the title of his presentation!).

He built his presentation around seven myths. I wrote them down as:

  • Agile is all you need.
  • <Scrum> is all you need. [Substitute your favorite method for “Scrum”.]
  • XP is too extreme.
  • Everything is a user story.
  • Architecture emerges.
  • Governance and PMO are bad words.
  • Leadership, and the problem with “Chicken and Pig”.

Among the points he made during his discussion of the first three myths: A key to “being agile” is exercising the judgment to assess which practices are right for one’s environment and then using them. No one agile method is assuredly “right” or “wrong” for that environment; each agile method has great suggestions he recommends every environment consider carefully. Scrum has no software engineering component. That’s neither “good” nor “bad”; it’s the way they built it. A software engineering environment using Scrum may need some software engineering advice (perhaps from XP, for instance). Some environments need all the practices of XP; the right people to decide that issue for each environment are the people in that environment.

He referred to non-functional requirements as exceptions to the statement “Everything is a user story”. He characterized non-functional requirements much as IEEE does: “all the -ilities” (maintainability, modifiability, scalability, reliability, security, responsiveness, … there are many more that can be on this list; not all end with “ility”). He indicates agile processes cannot enable software teams to magically ignore these factors; he notes that many agile books don’t mention non-functional requirements. That probably derives from the early agile literature aiming at small teams, where very informal approaches to these concerns can be effective. He indicates that as we scale agile, those very informal approaches cannot work. By the same token, it is not possible to write one user story that “completes” any one of these (“security”, for instance). Nor is it likely to be possible to do all the work for any one of these within a single iteration. He recommends that teams document non-functional requirements as constraints on the product backlog and review each applicable constraint during each product demonstration (that is, each iteration). That periodic review will help keep the non-functional issues alive for everyone and increase the chance of success.
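That recommendation can be sketched in code. The following is a minimal illustration of the idea only — every class, field, and constraint name here is hypothetical, not from Dean’s materials: ordinary user stories are finished once and removed, while non-functional requirements persist as constraints on the backlog and are re-checked at every iteration’s demonstration.

```python
# Minimal sketch (hypothetical names): user stories complete and leave the
# backlog; non-functional requirements stay on it as standing constraints.

class Backlog:
    def __init__(self):
        self.stories = []        # ordinary user stories: done once, then removed
        self.constraints = []    # NFRs: never "done", re-checked every iteration

    def add_story(self, title):
        self.stories.append(title)

    def add_constraint(self, name, check):
        """check is a callable returning True while the system satisfies the NFR."""
        self.constraints.append((name, check))

    def demo_review(self, system):
        """Run at each product demonstration: report every applicable constraint."""
        return {name: check(system) for name, check in self.constraints}

# Usage: latency and security stay on the backlog as standing constraints,
# reviewed every iteration rather than "completed" by one story.
backlog = Backlog()
backlog.add_story("As a subscriber, I can view my invoice")
backlog.add_constraint("p95 latency < 200 ms", lambda s: s["p95_ms"] < 200)
backlog.add_constraint("no critical vulns", lambda s: s["critical_vulns"] == 0)

results = backlog.demo_review({"p95_ms": 150, "critical_vulns": 0})
```

The point of the design is the asymmetry: a story leaves the backlog when done, but a constraint never does — it is re-evaluated at every demo, which is exactly the periodic review he recommends.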

For large teams, he doubts it is best to assert that “Architecture Emerges”. Maybe it works for small teams, or for small numbers of small teams with rich inter-team communication. For him, with larger numbers of teams, the chance of success for the enterprise is too low to depend on the collection of teams to create the architecture. They’ll each work largely from local motivations; the enterprise needs a wider view. At scale, he advocates a structure to create an enterprise architecture. It could be a domain architect (or team) writing for all teams. It could be a group with representation from all teams. Other options are possible.

He showed “Eight Principles of Agile Architecture”; they weren’t on the screen long enough for me to capture them. If I remember right, they were the same eight as he discusses just before the halfway point (judging by my vertical scroll bar) in his discussion System Architect Abstract.

He doesn’t feel “Governance and PMO are bad words”. He defined IT governance informally as the things IT executives do to assure IT is fully consistent with, and supportive of, corporate strategies and objectives. He defined PMOs informally as groups of people who understand lots of organizational context and who advise and motivate others to adopt the processes most likely to contribute to organizational success. He sees no question that IT needs IT governance as he defined it; he observed that in some organizations, PMOs have done much of the work of making agile values mainstream.

He doesn’t find the “Chicken and the Pig” analogy useful. As we scale agile practices, we need committed support and contribution from all levels of the chain of command; they all have roles. We don’t help anything by telling anyone they’re “not committed” or “not as committed as I am”. We have no adversaries.

Smaller points he made informally:

  • Don Reinertsen’s book The Principles of Product Development Flow (ISBN 978-1935401001) is a great read. It’s difficult to read because it is so profound, but it’s short, too. [Well … he said it’s around 180 pages; some online descriptions indicate it is 300 pages.]
  • Lean has important lessons for us. Your customer is whoever (or whatever) consumes your work; don’t inconvenience them! Lean is a great structure for management to use in working with agile teams.
  • Kaizen: there’s no direct translation. A good translation is, “We can do better.” That’s always true.
  • His least favorite iteration length is 3 weeks. Both 2 weeks and 4 weeks are preferable iteration lengths.
  • Q: Is there a maximum size of a program? A: Dunbar’s number is probably a guide. This is the number of people one person can keep track of in their professional environment. It’s something like 80 to 150. Beyond that, it’s too hard to maintain cohesion in the program.
  • Part of the reason he writes so much is that it helps him understand better.

I hope my taking the time to write this out helps anyone who did not attend get some of the value. Writing it helped me understand better! (grin)

Standard. Methodology.

Professions establish vocabularies. After achieving consensus about definitions, professionals can communicate more precisely and quickly. I offer here some candidates for our professional consensus on terms for software development projects. What would you change?

Let’s agree with guidance from the American National Standards Institute (ANSI®), quoting a joint standard of the International Organization for Standardization (ISO®) and the International Electrotechnical Commission (IEC®), that a standard is, “A definition or format that has been approved by a recognized standards organization or is accepted as a de facto standard by the industry. (As defined in ISO/IEC Guide 2:2004)”.

I suggest we consider a methodology to be the specific set of training, development processes, tools, templates, and controls a particular software development team uses to structure its work. A methodology includes reference (as needed) to specific tools, communication channels, approval meetings (and lists of attendees appropriate for each), document templates, development languages, and responsibility assignments. Harold Kerzner used the phrase “forms, guidelines, templates, and checklists” during his May 2007 Omaha presentation advocating effort to detect and document best practices.

There is power in the statement that by this definition, a methodology is a company standard, with corporate leadership taking the role of “recognized standards organization”. Following the lead of the ANSI guidance about standards, I focus here on industry standards.

That definition doesn’t specify that a methodology is documented. (This is the “you cannot not have a methodology” argument. You might not have a repeatable or consistent methodology…) As team sizes and project sizes grow, my impression is that few experienced professionals on either side of the waterfall/agile discussion would recommend no documentation of the methodology.

Many of the agility models (the Agile Manifesto®, in fact) advise caution about excessive documentation. I agree. I have seen methodologies that seemed to have everything “plus the kitchen sink” in them. They were hard to understand and so difficult to use that teams did only those parts that were well enforced. After some point of growth, it makes sense to require that for everything added, something must come out of the methodology. Simple is better. Focus is great!

Examples:

  • The Project Management Institute (PMI®) publishes the PMBOK® Guide as a standard. At one point in its history, many project managers considered it a standard because they accepted PMI as “a recognized standards organization” (from the definition above). I hope that’s still true today (grin). Many people outside PMI probably consider it more powerful today that ANSI listed the same document as a U.S. standard (ANSI/PMI 99-001-2008®) and that IEEE® listed the same document among its standards (IEEE Std 1490-2011®).
  • A company’s methodology is the collection of policies requiring particular practices: perhaps requirements to use particular templates for documents; perhaps a requirement to use a particular inter-project, corporate-level shared resource for change management, version control, or lessons learned (or other); perhaps, in a waterfall environment, a requirement for a phase-end tollgate or phase review (or any of many similar terms); perhaps, in a Scrum environment, a requirement to use a Daily Standup and a Product Backlog.

Non-examples:

  • No company’s methodology is an industry standard, if only because it depends on tools, databases, training, and past shared experience not available outside the company.
  • No standard is a methodology. (Ever read a job ad stipulating that successful applicants will “use PMI methodology”? At one level of precision, that sentence means little!)
  • “Agile” and “agility” are not methodologies. (Recently I heard Susan Courtney (of Nebraska Blue; Apr 2012 AIM Institute Technology Leader of the Year) forcefully advocate agility and suggest (about software development life cycles, SDLCs) that companies don’t understand agility if they want to “implement a new SDLC called ‘Agile’”. I agree with her; a company needs to change more than the SDLC to increase agility. Agility depends on a wider and deeper partnership than that, and provides more benefits than just an SDLC change.)

Between the concepts of standard and methodology is a set of guidance generally not as accepted as a standard or not as specific as a methodology; some call this group models or methods or frameworks or processes. There is considerable variety in the terms used (example). Let’s use model here, though it’s not a perfect label for this group.

Models help us select methodologies. Though the following is a (very!) partial list, examples include:

  • The original paper about waterfall. In my experience, it’s very common to hear reference to “the waterfall model”. (Interestingly, Royce didn’t use the word waterfall, though his central diagram clearly resembles a waterfall. This paper makes a strong point about the need for adequate documentation in a software development effort.)
  • Mary and Tom Poppendieck’s 2003 book, Lean Software Development, An Agile Toolkit (ISBN 0-321-15078-3). (This is a valuable contribution to software development thought and arguably had the most pivotal early impact in associating the concepts of Lean manufacturing with software development.)
  • The PMBOK Guide. (Though also a standard, it provides much of the detail typical of this middle group and allows considerable latitude in implementing company-specific details. Far more than not, agility models are consistent with the PMBOK Guide. The Fifth Edition of the PMBOK Guide is due out in 2012/2013; I’m betting agility models will have lots of impact on the new edition.)
  • The Scrum Alliance® summary of Scrum. (By the same token, I acknowledge the power of the statement, “That’s a standard! The Scrum Alliance is the ‘recognized standards organization’.”)

How best can we make use of these concepts? Here’s one vote for our teams (maybe our companies; in the ideal world, teams have a role) using guidance from standards and models to define everything we need in our methodology (and not a bit more!), applying the specificity that distinguishes a methodology from a standard.

If We Have a Problem, We’ll Find It. Early is Better. Still.

Let’s consider here a project with a significant problem we haven’t found. If the project’s product is to be a chair, maybe we have cut a leg too short. If the project’s product is software, maybe a function we designed with two parameters actually needs three (or reports answers in units different from those a consuming function assumes, maybe miles per hour instead of meters per second). This discussion is not about, say, a misspelled word in a quick and informal email message.

Do we get to choose between finding that problem or not? Well … (he he) … since our discussion is about significant problems, we’ll find it. And as one of the many corollaries to Murphy’s Law says, “If you have a problem, you’ll find that problem at the worst possible time.” (There’s something irresistibly pessimistic about that Law and all its Corollaries!)

You’ll find that problem just after you’ve spent lots of effort turning the short chair leg on a lathe, sanding it, and applying just the right finish (all of which was wasted effort, of course, though you didn’t know that yet). It’ll happen just after you or your tester tell your executive that your code is 99 percent through testing, that the results look good, that you expect to complete ahead of schedule, and that the profits for this product are likely to start early. Or your customer will find the problem. (That would be embarrassing!) You’ll find that problem. It’ll hurt.

Let’s do something about it. And let’s focus now on creating software. Barry Boehm published once or twice (tongue-in-cheek) in a long and continuing software engineering career. In 1981, he wrote Software Engineering Economics, which contains the famous Constructive Cost Model (COCOMO) used for estimating the effort and duration of software development efforts. It also contains (page 40) a graph showing the increasing cost to fix a problem introduced in requirements. The chart makes the point that if the problem costs roughly $15 to fix during requirements, it costs something like $50 during design, $100 during construction, $200 during development test, $500 during acceptance test, and $1,800 during operation. Many other sources of similar age make similar arguments and cite other cost studies. Their numbers vary, but let’s agree: cost to fix grows rapidly with time.
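Those figures make the growth easy to quantify. A quick sketch, using only the dollar amounts quoted above (the phase labels are mine, paraphrasing the chart), computes the cost multiplier for each phase relative to a requirements-phase fix:

```python
# Boehm's 1981 figures (page 40, as quoted above): approximate cost to fix a
# defect introduced during requirements, by the phase in which it is found.
cost_by_phase = {
    "requirements":       15,
    "design":             50,
    "construction":      100,
    "development test":  200,
    "acceptance test":   500,
    "operation":        1800,
}

baseline = cost_by_phase["requirements"]
multiplier = {phase: cost / baseline for phase, cost in cost_by_phase.items()}
# A defect that slips all the way to operation costs about 120x what the same
# fix would have cost during requirements.
```

However the exact numbers vary across studies, the shape is the point: the multiplier grows by roughly an order of magnitude or more from the first phase to the last.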

One message from those studies is: projects need a strong sense of urgency about finding problems we must fix because of the rapid increases in cost.

Another message is that a key question in managing a project is, “How much is the optimal spending now (in dollars, hours, process, whatever) to detect more problems earlier and get the greatest benefit from the current lower price of fixes?”

Sound good? Well … it certainly sounded good enough to form an orthodoxy in our professions. (Perhaps, anyway. It felt like that!)

From the current perspective, is there a problem?

Well … from today’s perspective, many of us would feel the data collection seems to presume use of the waterfall development model. Fewer projects use a waterfall model today.

And … many in our industry now have experience working projects without spending lots of time in review of past work. Many of us feel that the right answer to the question above is spending no time specifically looking for past problems.

And we achieve great results.

And our stakeholders love the new relationships with us as service providers.

(Well … not every time. New project models aren’t “the silver bullet”. And there are lots of other reasons projects don’t do as well as we want; many still need work.)

I refer, of course, to the development models advising us to strive for market agility (Scrum, Lean, Kanban, and Extreme Programming are commonly cited examples). I intend to write more about these in future posts. For now, I’ll say: much of the market has already moved in one (or more) of these directions.

And what about the advice derived from the statistics Dr. Boehm cited? I’ll say projects need the same sense of urgency about finding errors; we’ll find problems differently than we most commonly thought about then. Projects using today’s agile models probably expect to discover those errors during their frequent interaction with customers (“early feedback”). And we expect to fix problems during the first iteration in which the customer wants us to fix them. And that advice sounds great … Why would anyone oppose?

And what about Dr. Boehm’s career after writing the book I mentioned? Well, in 1988, he published an often-cited description (info) of his “Spiral Model” of iterative software development. Both his works cited here influenced thought leaders who contributed to other models later collectively called “agile”. He is now an AIAA Fellow, an ACM Fellow, an IEEE Fellow, an INCOSE Fellow, and a member of the National Academy of Engineering.

He is one thought giant on whose shoulders we stand today. May we all learn …

//Later edit: Fixed a broken link.