Agile Myths and Ideologies Meet the Scaled Agile Framework

I spent a thoroughly stimulating evening last week at a meeting of the Omaha Agile Development group. I have recommended their meetings before; that recommendation stands. They’re doing valuable and important work serving the Omaha business community and its members.

CSG International, with operations here in Omaha, decided some years ago to increase their agility. Last week, author Dean Leffingwell consulted with them in Omaha, intending to help them identify good next steps and to focus their end goal more sharply. (Experience seems to show that such assistance from highly qualified advisors is an indicator of success in, and of the speed of, such transitions. But you may not find that advice entirely unbiased when it comes from a consultant! (grin)) As long as Dean was in town …

Omaha Agile Development learned of Dean’s upcoming visit and invited him to speak to us. CSG opened its new facility near Dodge and 180th (two big, beautiful buildings, judging by what I saw). Something like 70 people attended. I am thankful to all involved. Well done.

I knew of Dean from having read his book Scaling Software Agility: Best Practices for Large Enterprises (ISBN 0-321-45819-2) on the recommendation of a member of Omaha Agile Development. I, too, recommend the book. The earliest agile books concentrated their advice on small teams; this book is among those extending agile advice to “scale” (to collections of teams and potentially to large groups of contributors).

I used the title of Dean’s presentation for the title of this post: Agile Myths and Ideologies Meet the Scaled Agile Framework. Below, I share some of what I took from his presentation, along with (a little) background. This post is undoubtedly not a complete record of what he said.

Dean observed that at the time of Royce’s waterfall paper, waterfall was better than the other alternatives available, and he supports the industry’s decision then to adopt waterfall more widely. Had we known then what we know now about waterfall, agile, and technology development, we might well have moved differently. We didn’t know that material; our work with waterfall and new technologies has educated us well. Agile is good guidance for change now. If we knew now what we’ll be saying about agile in 20 or 50 years, we might move differently now. Dean says he doesn’t know what we’ll be saying, so he advocates moving to agile. Our experience with agile will inform future changes.

This paragraph is background from me: I have observed some bias among agile writers toward giving voice almost exclusively to the well-developed aspects of agile. (I exclude from consideration statements of “religion war” fervor we sometimes hear among agilists, like “<Their method> is only for people who can’t figure out <my method, the right method>.”) A blog post appropriately titled Is It Blasphemous To Criticize Agile is well worded, shapes the issue well, and points to valuable further reading. On that subject …

Dean gives voice to both well- and under-developed aspects of agile. My opinion: he does it respectfully and in pursuit of finding a better way for all of us. His balance is refreshing.

He talked about his writing. These days, he says, most of his writing time goes to constant updates of his web site. It’s a rich set of material, and it must be a key basis of his consulting practice. It’s free (the web site is free; probably not the consulting (snicker)). He calls the process he advocates the “Scaled Agile Framework” (SAFe); no surprise that it’s part of the title of his presentation!

He built his presentation around seven myths. I wrote them down as:

  • Agile is all you need
  • <Scrum> is all you need [substitute your favorite method for “Scrum”]
  • XP is too extreme
  • Everything is a user story
  • Architecture emerges
  • Governance and PMO are bad words
  • Leadership and the problem of the Chicken and the Pig

Among the points he made during his discussion of the first three myths: A key to “being agile” is exercising the judgment to assess which practices are right for one’s environment and using them. No one agile method is assuredly “right” or “wrong” for that environment; each agile method has great suggestions he recommends every environment consider carefully. Scrum has no software engineering component. That’s neither “good” nor “bad”; it’s the way they built it. A software engineering environment using Scrum may need some software engineering advice (perhaps from XP, for instance). Some environments need all the practices of XP; the right people to decide that issue for each environment are the people in that environment.

He referred to non-functional requirements as exceptions to the statement “Everything is a user story”. He characterized non-functional requirements much as IEEE does: “all the -ilities” (maintainability, modifiability, scalability, reliability, security, responsiveness, … there are many more that could be on this list; not all end in “-ility”). He indicated that agile processes cannot enable software teams to magically ignore these factors, and he noted that many agile books don’t mention non-functional requirements. That probably derives from the early agile literature aiming at small teams, where very informal approaches to these concerns can be effective. As we scale agile, he indicated, those very informal approaches cannot work. By the same token, it is not possible to write one user story that “completes” any one of these (“security”, for instance), nor is it likely to be possible to do all the work for any one of them within a single iteration. He recommends that teams document non-functional requirements as constraints on the product backlog and review each applicable constraint during each product demonstration (that is, each iteration). That periodic review will keep the non-functional issues alive for everyone and increase the chance of success.
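To make that last recommendation concrete, here is a minimal sketch (mine, not Dean’s) of what “non-functional requirements as constraints on the backlog” could look like in Python; the constraint names, thresholds, and build measurements are all invented for illustration:

```python
# A sketch of non-functional requirements recorded as constraints on the
# backlog and re-checked at every iteration demo. The thresholds and the
# build measurements are hypothetical stand-ins.
from typing import Callable

constraints: dict[str, Callable[[dict], bool]] = {
    "responsiveness": lambda build: build["p95_ms"] <= 200,
    "reliability":    lambda build: build["crash_free_rate"] >= 0.999,
}

def demo_review(build: dict) -> None:
    # Reviewing each applicable constraint at each demonstration keeps
    # the "-ilities" alive for everyone.
    for name, check in constraints.items():
        print(f"{name}: {'pass' if check(build) else 'NEEDS WORK'}")

demo_review({"p95_ms": 180, "crash_free_rate": 0.995})  # reliability flagged
```

No single user story “completes” security or reliability; the checks simply never leave the room.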

For large teams, he doubts it is best to assert that “Architecture Emerges”. Maybe it works for small teams or small numbers of small teams with rich inter-team communication. For him and for larger numbers of teams, the chance of success for the enterprise is too low to depend on the collection of teams to create architecture. They’ll each work largely on local motivations; the enterprise needs a wider view. At scale, he advocates a structure to create an enterprise architecture. It could be a domain architect (or team) writing for all teams. It could be a group with representation from all teams. Other options are possible.

He showed “Eight Principles of Agile Architecture”; they weren’t on the screen long enough for me to capture them. If I remember right, they were the same eight he discusses just before the halfway point (judging by my vertical scroll bar) of his System Architect Abstract.

He doesn’t feel “Governance and PMO are bad words”. He defined IT governance informally as the things IT executives do to assure IT is fully consistent with, and supportive of, corporate strategies and objectives. He defined PMOs informally as groups of people who understand lots of organizational context and who advise and motivate others to adopt the processes most likely to contribute to organizational success. He sees no question that IT needs IT governance as he defined it, and he observed that in some organizations PMOs have done much of the work of making agile values mainstream.

He doesn’t find the “Chicken and the Pig” analogy useful. As we scale agile practices, we need committed support and contribution from all levels of the chain of command; they all have roles. We don’t help anything by telling anyone they’re “not committed” or “not as committed as I am”. We have no adversaries.

Smaller points he made informally:

  • Don Reinertsen’s book The Principles of Product Development Flow (ISBN 978-1935401001) is a great read. It’s difficult to read because it is so profound, but it’s short, too. [Well … he said it’s around 180 pages; some online descriptions indicate it is 300 pages.]
  • Lean has important lessons for us. Your customer is whoever (or whatever) consumes your work; don’t inconvenience them! Lean is a great structure for management to use in working with agile teams.
  • Kaizen: there’s no direct translation. A good translation is, “We can do better.” That’s always true.
  • His least favorite iteration length is 3 weeks. Both 2 weeks and 4 weeks are preferable iteration lengths.
  • Q: Is there a maximum size of a program? A: Dunbar’s number is probably a guide. This is the number of people one person can keep track of in their professional environment. It’s something like 80 to 150. Beyond that, it’s too hard to maintain cohesion in the program.
  • Part of the reason he writes so much is that it helps him understand better.

I hope my taking the time to write this out helps anyone who did not attend get some of the value. Writing it helped me understand better! (grin)

Feedback

In many areas of my past study, feedback is considered good; I consider it important to software engineering life-cycles. I’ll give a definition useful for us here, mention several examples of feedback in modern life, and apply those thoughts to software engineering life-cycles.

Feedback Defined

Let’s use this definition: feedback in a process flow occurs when a process influences a preceding process.

[Diagram: two boxes, “Build Product” and “Sell Product”, with one arrow pointing from “Build Product” to “Sell Product”.]
Figure 1. A process flow: “Build Product” is a predecessor to and affects “Sell Product”.

That begs for an example and a picture. At high aggregation, we might say a company that produces product has two processes: Build Product and Sell Product. As shown in Figure 1, the Build Product process is a predecessor of and influences the Sell Product process. (That makes sense: In many cases, producing product makes it easier to sell.) Figure 1 does not include feedback; Sell Product does not influence Build Product.

[Diagram: like Figure 1, with a second arrow added, from “Sell Product” back to “Build Product”.]
Figure 2. A process diagram like Figure 1, except that it adds feedback. Each process is a predecessor to and affects the other process.

Figure 2 adds feedback; Build Product and Sell Product are each a predecessor to the other; each both influences and is influenced by the other.

The feedback might support fiscal constraints; the company might stop production when warehouse inventory gets above a threshold.

The feedback might support customer satisfaction; it might report the results of complaints received, reports of satisfaction, or suggestions for future development.

The concept of a feedback loop is closely related. Figure 1 demonstrates a linear process; it has no loops (or cycles). Figure 2 is an example of a feedback loop.
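A tiny simulation can make the loop concrete. Here is a minimal Python sketch of Figure 2, using the inventory-threshold idea above; the cap, build rate, and demand are all invented numbers:

```python
# Figure 2 as code: Build Product feeds Sell Product, and information
# flows back to throttle building. All quantities are illustrative.
INVENTORY_CAP = 60  # hypothetical fiscal constraint from the warehouse

def simulate(weeks: int) -> None:
    inventory = 0
    for week in range(weeks):
        # Feedback: the downstream process influences its predecessor.
        built = 0 if inventory >= INVENTORY_CAP else 30
        sold = min(inventory + built, 20)  # flat demand of 20 per week
        inventory += built - sold
        print(f"week {week}: built {built}, sold {sold}, inventory {inventory}")

simulate(12)  # production pauses whenever the warehouse fills
```

Without the feedback (Figure 1), “built” would stay at 30 every week and the warehouse would grow without bound.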

Corporate Governance

I have heard it said that in parts of the 1900s, American manufacturing had a relatively easy time designing product that would sell well. A manufacturing organization in that position generates the greatest profit by producing as much product as possible at the lowest possible cost. If variation adds cost without adding profit, they cut variation. Wikipedia describes a 1918 environment in which over half of all U.S. cars were Ford Model T’s. The Model T’s black paint dried fastest, which reduced cost on the assembly line. Henry Ford wrote in his autobiography about that time, “Any customer can have a car painted any color that he wants so long as it is black”.

Feedback works. Wikipedia goes on to indicate, “By 1926, flagging sales of the Model T finally convinced Henry to make a new model.” The flagging sales were feedback; by 1927 the company produced the Ford Model A.

Steering Response

A three-year-old recently reminded me in several ways about feedback. She was driving a light, slow electric car; I was jogging beside her, interested in assuring her safety. (I’ve since wondered whether I was thinking at all when I put her behind that wheel! There are so many warnings I could have heeded at the time! But I did not …) I didn’t want to be “that” adult in her life, constantly controlling her vehicle; my preferred behavior was to let her do all the steering. (Note to self: you silly man!) I soon realized that she didn’t yet understand the concept of steering this car. She understands steering; she drives her tricycle. On the car, though she held the steering wheel correctly, she didn’t exert enough force on it to change the car’s direction. (In her defense, even truckers might not consider this steering system responsive!) Her lack of steering inputs was feedback; it led me to take the car to a deserted section of parking lot. There she had a great time going either in straight lines or in repeating circles, depending on how I had most recently set the steering wheel. All was well.

I mentioned this was a three-year-old. After perhaps three minutes of this, she wanted to go to the playground sometimes within her field of view. Her wish to go to the playground was feedback reminding me of three-year-olds’ tendency toward short attention spans.

I set the steering wheel to a position I hoped would get her vehicle to the playground. The vehicle turned well past the direction I wanted. The excessive turn was feedback reminding me that driving a vehicle is an active control process. We cannot set the controls once and be done with it; we must repeatedly check results and adjust the controls again. We get close to one side of the lane, turn away from that side, get close to the lane center, and turn the opposite direction to stay close to the center; we repeat that process all the way to our destination. That process depends on receiving feedback that our vehicle is “close” to an edge or center of the lane. That three-year-old either wasn’t receiving the feedback or wasn’t acting on it.

(By the way, controlling an airplane involves double the inputs. We must start the roll, thus changing the angle the wings make with the horizon; that’s bank angle. Then we must stop the roll and hold the desired bank angle to turn. Just before we get to our desired direction, we must start rolling toward level. When the wings are level, we must stop the roll. Yup. That’s four control inputs compared to a car’s two.)
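Since this blog is about software, here is the steering lesson as a toy closed-loop controller in Python: set the control once and you drive past the target; re-read the error every step (that’s feedback) and you converge. The gain and the vehicle response are invented for illustration:

```python
# Lane keeping as a feedback loop. "position" is meters from lane center.
def drive(use_feedback: bool, steps: int = 8) -> None:
    position = 1.0
    steering = -0.5 * position  # initial control setting
    for step in range(steps):
        if use_feedback:
            steering = -0.5 * position  # check results, adjust the control
        position += steering  # simplistic vehicle response
        print(f"step {step}: position {position:+.2f}")

drive(use_feedback=False)  # one fixed input: passes center and keeps going
drive(use_feedback=True)   # repeated corrections: homes in on the center
```

The open-loop run behaves rather like the three-year-old’s car: fine in a straight line or a fixed circle, hopeless at arriving anywhere in particular.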

Feedback in a Software Development Process

The original August 1970 paper on waterfall software development, by Winston Royce, mentions the importance of feedback. His Figure 3 shows the classic waterfall diagram with feedback flows pointing “uphill” from each phase to the prior phase; the caption is, “Hopefully, the iterative interaction between the various phases is confined to successive steps”. His Figure 4 and the accompanying text recognize the likelihood of detecting problems in testing that invalidate the program design and, in turn, invalidate the software requirements; the caption is, “Unfortunately, for the process illustrated, the design iterations are never confined to the successive steps”. This paper is convincing evidence that we’ve known since the inception of waterfall processes that feedback is useful. And that feedback can be expensive.

The most interesting part of this discussion is deciding how this observation drives today’s software development.

The Agile models answer unanimously: Do a very small part of the project, show it to the customer, get feedback “early and often”, and repeat. Feedback is an important component of the Agile models.

For those of us not using Agile, a useful answer is to divide the project into a series of shorter efforts, each of which completes one development cycle. Doing more development cycles may make some methodology processes or products excessive; we might find we can streamline those excesses to everyone’s advantage. Agile perhaps advises a reasonable minimum cycle time: one to four weeks. If your organization gets to that point, it has made much of the change needed to adopt Agile fully.

This post reviews feedback; feedback is good. We use it in many activities of our lives; in some parts of our lives, we use it without noticing. It is a valuable part of all software development. May we all have and use lots of feedback in our software development!

Iterative Deployment or Big-Bang?

Let’s say your CEO asks for input: “Should an upcoming software development project use an iterative development process?” (Let’s take the question as assuming an alternative of deploying the software product all at once.)

Iterative development means that the software development team delivers the product functions in a series of small efforts adding up to the whole, rather than as a single effort. It can mean (but doesn’t have to) that the end customer sees several deployments.

The Agile answer is clear; the pre-Agile answer may surprise some Agilists. Let’s do pre-Agile first.

Capers Jones released a highly respected book on software cost estimating in 1998; he released a second edition, Estimating Software Costs: Bringing Realism to Estimating (ISBN 978-0-07-148300-1), in 2007. His book (p. 479) says that systems and commercial software requirements tend to change at an average rate of “about 2 percent per month from the end of initial requirements until start of testing.” It further says to expect “12 percent for Agile projects”. Nothing in my experience contradicts this book.

Maybe that rate of change results from those specifying requirements learning more about the environment in which they’ll use the tool. Maybe it’s the rate of change in the business. Maybe it measures how much humans tend to change their minds. Whatever it is, it seems to happen in lots of projects (maybe some of yours!).

Two percent a month doesn’t sound like much, right? Even using processes that ask the user to commit to no further requirements change after establishing a requirements baseline, the development organization ought to be able to be that flexible, right?

How long is your pre-Agile project? Let’s say it is a year, with three months at the start for requirements and two months at the end for testing and deployment. Dr. Jones’ book is telling us that, on average, 14 percent of requirements will change in the seven months between the end of requirements and the start of testing, and that for an 18-month project (13 months between), 26 percent of requirements will change. By any chance, do you feel like shortening the project yet? Maybe it makes sense to do a small project with only part of the product function; maybe a six-month project or even shorter?
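Here is that arithmetic as a small Python sketch, assuming Dr. Jones’ linear 2-percent-per-month rate and this post’s project shape (three months of requirements up front, two months of testing and deployment at the end):

```python
# Requirements churn under Jones' average rate (linear, not compounded).
MONTHLY_CHANGE = 0.02  # ~2 percent/month, end of requirements to start of testing

def churn(project_months: int, req_months: int = 3, test_months: int = 2) -> float:
    exposed_months = project_months - req_months - test_months
    return exposed_months * MONTHLY_CHANGE

for length in (12, 18, 6):
    print(f"{length}-month project: {churn(length):.0%} of requirements change")
```

The 12- and 18-month cases reproduce the 14 and 26 percent above; the 6-month case shows why shorter looks so attractive.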

That argument has been around for quite a while. Companies that scheduled projects for long durations didn’t hear it or didn’t find it persuasive.

If the CEO asks me that question, I’d suggest starting with a short project. And since nothing in that logic depended on the current process in use (Agile or otherwise), I’d almost always suggest that. (And it would be easy to say I’d “always” suggest that; it’s just that “always is a very strong word”!) That’s my pre-Agile answer.

The Agile answer: Oh! It’s like they saw this coming!

The Agile answer is to start with a “project” the length of one iteration (commonly, two to four weeks). Agile teams ask their users to make no change to user stories in the iteration once the iteration starts. (Geez! That sounds a lot like the pre-Agile request to let the programmers work on unchanging requirements. And … it makes sense in both places, though it sounds easier to implement in Agile!) Holding user stories unchanged for two to four weeks (and then not all user stories; only those in the iteration) seems lots easier to ask of our customers. At the end of the iteration, Agile teams look forward to showing products to their users and to getting feedback “early and often”.

So … what’s the difference in responding to that CEO question now that Agile is growing into so many of our work processes? Maybe in the pre-Agile days, we wouldn’t have felt comfortable proposing so short a project, though we would have suggested something shorter than “all at once”. Maybe Agile is, in this respect, the direction we were heading anyway!

(By the way, Chapter 7 of Dr. Jones’ book covers “Manual Estimating Methods Derived from Agile Projects and New Environments”. He presents data on many processes, including Scrum and Extreme Programming. It’s good reading.)

Anticipation and Adaptation in Design

A familiar consideration among my clients choosing between agile software development processes and more traditional processes is a natural tension, present in all our software projects, between anticipation and adaptation in design. Jim Highsmith discusses the issue in his 2010 book, Agile Project Management: Creating Innovative Products, Second Edition, for example at pages 218 to 219. (There’s lots of value in this book, by the way. This is but one small example.)

Designers anticipate frequently. Perhaps based on long experience with a product or based on team discussions about future direction of a product, they create a design for the current cycle of work that includes all capability planned for this cycle and also includes some allowance for potential future direction.

And … designers adapt frequently. Perhaps because of change in the business environment, they create a design for the current cycle of work that supports a previously unacknowledged direction for the product. They’re “turning on a dime” to meet a new and current customer need.

When product teams guess future directions well, anticipation proves valuable. When the current cycle of work completes and the anticipated feature need is clear, the next design has less change associated solely with that next cycle of work. If there’s time, the design produced here may anticipate again, this time betting on needs for a third cycle of product development. Some of our peers suggest that a design strategy that includes at least some component of anticipation is most appropriate for this case because it lengthens the time the team has to consider and even test the design changes.

When product teams guess future direction poorly, adaptation proves valuable. Once work on the new feature starts, no prior design attempts for this feature constrain the team; designers have the maximum flexibility in design. Some of our peers (particularly those fond of agile) suggest that a design strategy based heavily on adaptation is most appropriate for this case. At worst, anticipatory design can make the next design cycle more difficult because the anticipated direction may run counter to the needed direction in some way. In that case, designers and developers (“Oh! The expense!”) may spend time maintaining the previously predicted need whether or not it still has potential. And that costs them time and design simplicity.

If you accept all the above, the choice between design based on anticipation or adaptation might be simple if we have high confidence either that we can predict future needs well (tending us toward anticipation) or that we cannot predict well (tending us toward adaptation). Perhaps many of us have less than high confidence either way; that makes the decision more difficult.

One of the often-heard suggestions among those recommending agile is that any time we don’t have high confidence in our predictions of future direction, experience leads them to believe best results come from tending toward adaptation. (Well … okay. Some of them propose abandoning anticipation entirely.) Teams that intentionally train themselves to be highly capable of responding promptly are nimble (“agile”, even; pun intended) about moving the product as needed. They deliver more value earlier to their customers, on average.

Another often-heard suggestion among those recommending agile is that because of short project cycles, each successive design can represent a small step in a product direction, giving all parts of the product team the greatest chance to consider the impact of change, and to provide and absorb early feedback. The best design “emerges” from a series of small efforts and from frequent feedback. If there’s any disadvantage to creating a series of small design changes, the advantages of feedback provided “early and often” clearly offset the disadvantages.

What does your experience make you tend to believe? What are the most persuasive points on each side here? All comments very welcome.

Zero Finished or Done; No In-Between

You’re working a project (it could be “waterfall” or “agile”). The project involves drilling a series of holes through the long axis of something rectangular (maybe a piece of aluminum or glass). This piece is critical to your project, so everybody and their boss wants to know progress.

The “agile” folks could observe that this is a great opportunity for an “information radiator”: a poster everybody and their boss can see just by walking by, informing them of that particular progress. It radiates and broadcasts the information; no one misses it. That’s a good suggestion for waterfall, too.

Our reporter goes to the shop and watches preparation of the raw material. Maybe it’s cut to size and smoothed. It meets exit criteria for a requirement, “Prepare rectangle.” or for a user story, “As a sailboat operator, I want a sturdy rectangular piece with twenty holes drilled in a line so that I can use a pin to effectively change the length of the rope attached to this rectangle.” (People experienced in sailboats are probably rolling their eyes and saying, “This guy doesn’t know anything about boating!” True, true. No denying it. Hang in, please, for the real point. It’s coming! And it’s not about boating.)

The rectangle met exit criteria (which we ought to have in both waterfall and agile; often we have them, but they’re not formally written). We can continue the project knowing that task or user story is complete. Next … the drilling.

The project has decided that the drilling will occur under a task, “Drill holes in the rectangle.” or under a user story, “As a sailboat operator, I want a series of holes in the previously prepared rectangle so that I can use a pin to effectively change the length of the rope attached to this rectangle.” (Both examples could be improved with more specifics, perhaps with the number of holes. True, but please accept this for the point I’m making!)

The shop drills the first hole. All’s well. Our reporter needs to file a progress report.

Key question: If the design calls for 20 holes, does our reporter inform everyone that the task or user story is 5 percent complete?

For decades, some in projects have argued against that report. They might argue that the only criterion for reporting progress is fully meeting exit criteria: you’re either “done” or you don’t take credit for the work. They might argue that if the piece breaks along the line of holes we’re drilling, we were never any percent complete; we just didn’t know it yet. They might argue that our teams function better when they know they must complete all the defined work to get credit; we don’t want to operate with the ambiguity of taking credit when not really done.

One very funny cynic (I wish I could give proper credit) observed that, “The second 90% of the project is the tougher part!”

Of course, in all the decades people argued for reporting only completions, they received feedback that they weren’t much in tune with management realities. There has to be feedback on progress to management, investors, and others. There is often lots of pressure to bend on this perceived minor point.

Many agilists have weighed in on the side of reporting only completed exit criteria. They suggest that the Product Owner has sole responsibility for determining “done” and that Product Owners use only exit criteria for their determinations. Often they suggest that user stories get their marks for “done” primarily (or only) during the “Product Review” ritual (there are lots of equivalent names in use) just before the end of the iteration. They suggest that if a team demonstrates that work is “done all except for …”, a Product Owner serves the team best by not allowing credit for that work during the current iteration.

Some would argue that the best response in the waterfall world is to break the drilling task into a series of smaller tasks, each of which has appropriate exit criteria on which we can report. And … the size of the work then described? It’s very similar to the size of work that works best for user stories in the agile world! (Not a surprise if we think about it!)

And what’s my position? (Not that it really matters …) Given the opportunity, I advise executives and investors to ask for complete or not; nothing in between. With any luck, they’ll also ask the project teams to drive analysis down to small units of work (as agile does with user stories). I seldom get that opportunity; there is usually more important change to advocate in my clients’ processes, and I choose those suggestions carefully. When I let the percent-complete reports stand, the project can report steadily increasing progress each period; I know there’s measurement error in the reports, but no one has ever asked about it. Teams file the reports and do what it takes to make real progress. It’s practical. And when the project completes, delivering the functions for which we set expectations at the cost and time for which we set expectations, no one cares that the progress reports had measurement error along the way.
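For the curious, here is done-or-not reporting as a minimal Python sketch: a task contributes to progress only when all its exit criteria pass. The tasks and criteria are our drilling example, simplified:

```python
# Binary progress: no partial credit until every exit criterion is met.
tasks = {
    "Prepare rectangle":        [True, True],           # cut to size, smoothed
    "Drill holes in rectangle": [True] + [False] * 19,  # 1 of 20 holes drilled
}

def progress(task_criteria: dict[str, list[bool]]) -> float:
    done = sum(all(checks) for checks in task_criteria.values())
    return done / len(task_criteria)

print(f"{progress(tasks):.0%} complete")  # 50%: the drilling earns no credit yet
```

Report 50 percent, not 52.5; the first hole buys nothing until the twentieth is drilled.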

And … this question matters: What do you think?

(An entirely different subject: I don’t have a visit counter on this blog; there’s probably a way, but I haven’t looked for it. As a result, I have no indication anyone is reading. Ever. So … following the technique of another blogger I saw … if you’re alive and reading this, please be so kind as to comment here or send me an email at gflemings@midwestprojmgmt.com! I mean … if no one’s reading, at some point, I’ll quit investing the time!)

A Model of a Project

Some people used to say, “Agile is so different and so much better than the past that we’re best to un-learn all we learned before Agile.” (I haven’t heard this sentiment for a while. Good riddance!) On the other hand, when we pursue agility right, we:

  • give the customer the option to implement working product earlier and to benefit from that product longer.
  • strengthen interdependency relationships between service organizations and the business organizations we serve.
  • add valuable new tools and results to our profession’s toolkit.
  • reduce the time we spend on code our customers won’t use.
  • create code with fewer defects and lower life-cycle costs.

(All that is good!)

But how can we compare “agile” models to “waterfall” models? The desire to meaningfully compare got me to this model of a project.

The Model

The model is simple, really. The core of it is a 5-by-5 grid representing 25 units of work necessary to deliver the product of the project. The top row represents the five pieces of “requirements” work, one piece of work for “Function 1” (part of scope) and one piece for each of four other functions. Succeeding rows represent other common work (“design”, “development”, “testing”, and “deployment”, successively) each row including each of the Functions 1 through 5. The left column represents the five pieces of work necessary for “Function 1” (requirements, design, …); succeeding columns represent one Function each, including each of the five work types.

Figure 1: A Model of a Project

Pretty simple, eh?! (As promised!)

As a broad generality, this project is complete when a team does these 25 tasks. (In the specific, some of the agile people would observe that they don’t do these tasks. I’ll get to that level of specifics in future blogs. There’s value in the thought; this generality is reasonable to support the point here.)

The Waterfall Version of Our Project

If a project team doing this project guides itself by the “waterfall model”, the project looks like this.

Figure 2: A “Waterfall” Team’s View of our Project

They seek to do all requirements work early in the project (sometimes before doing any design work). They might call that time of the project the “Requirements Phase”. After that comes the “Design Phase”, etc.

They might claim this is “the right” way to do the project because any incomplete work in one phase might affect work in the next phase, and projects are hard enough without the avoidable challenges. (They might say, “We might well select a different design if we know about an additional requirement. We must know all requirements before we design.”)

The agilists might observe that

  • some of these waterfall projects they’ve served felt too documentation-centric and too slow.
  • for decades, studies have shown that something like 70 percent of the functions in the software we produce are “never used” or “seldom used”.
  • for decades, studies have shown that shorter projects are more likely to be successful.
  • it is increasingly accepted that it is impossible to know all requirements in advance.

There has to be a better way!

The “Agile” Version of Our Project

If a project team doing this project guides itself by the “agile model”, the project looks like this.

Figure 3: An “Agile” Team’s View of Our Project

They seek to identify some sliver of function (“Function 1”) that they can work meaningfully through, all the way to demonstrating working code to the customer. In the first “iteration”, they do all aspects of that assigned function (“Function 1”). Optimally (but not necessarily), the code is ready at the end of the iteration to deploy if the user wants it deployed. Each iteration results in a demonstration to the customer; later iterations add more functions.

One view of the “agile” team is that they do many short projects: one per iteration. Each iteration might be as short as one week. Within an iteration, “user stories” define iteration scope and those in the iteration must not change during the iteration; product owners are generally welcome to change user stories not assigned to the iteration. And agile teams generally rejoice that they can be so responsive and flexible as to accept the changes in those stories. The parallel waterfall statements are that requirements define project scope, that many waterfall methods seek to freeze all project requirements before design starts, and that many waterfall teams seek to “control” “scope creep”.

The agilists tell us their communication with the customer (“early and often”) significantly increases the customer’s emotional investment in the product and also increases the customer’s sense of ownership of that product. And, if the demonstration at the end of any iteration is for a function the customer will “never use” or “seldom use”, the team knows it far earlier than in the waterfall model. (All that is good!)

Wrap-Up

Of course, all other things being equal, after completing ten “blocks” of work, the waterfall team will have finished “design” (the second row), while the “agile” team will have completed a second demonstration of working software for the customer (“Function 2”; the second column) and had a second chance to get valuable feedback.
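If a sketch helps, here is the model in a few lines of Python: the same 25 units of work, visited row by row (waterfall) or column by column (agile). The labels are just the grid’s row and column names:

```python
# The 5-by-5 project model: identical work, different visiting order.
PHASES = ["requirements", "design", "development", "testing", "deployment"]
FUNCTIONS = [f"Function {n}" for n in range(1, 6)]

waterfall_order = [(p, f) for p in PHASES for f in FUNCTIONS]  # row by row
agile_order     = [(p, f) for f in FUNCTIONS for p in PHASES]  # column by column

print(waterfall_order[:10])  # all requirements, then all design
print(agile_order[:10])      # all of Function 1, then all of Function 2
```

Ten blocks in, the first list has finished two rows and demonstrated nothing; the second has demonstrated two working functions.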

With this simple model, everyone can understand one major difference in pursuing agility: agile models and the waterfall model do substantially the same work, but in a different order. I hope you’ll keep coming back as I explore more detail!

Standard. Methodology.

Professions establish vocabularies. After achieving consensus about definitions, professionals can communicate more precisely and quickly. I offer some candidates to our professional consensus here on terms for software development projects. What would you change here?

Let’s agree with guidance from the American National Standards Institute (ANSI®), quoting a joint standard of the International Organization for Standardization (ISO®) and the International Electrotechnical Commission (IEC®), that a standard is, “A definition or format that has been approved by a recognized standards organization or is accepted as a de facto standard by the industry. (As defined in ISO/IEC Guide 2:2004)”.

I suggest we consider a methodology to be the specific set of training, development processes, tools, templates, and controls a particular software development team uses to structure its work. A methodology includes reference (as needed) to specific tools, communication channels, approval meetings (and lists of attendees appropriate for each), document templates, development languages, and responsibility assignments. Harold Kerzner used the phrase “forms, guidelines, templates, and checklists” during his May, 2007 Omaha presentation advocating effort to detect and document best practices.

There is power in the statement that by this definition, a methodology is a company standard, with corporate leadership taking the role of “recognized standards organization”. Following the lead of the ANSI guidance about standards, I focus here on industry standards.

That definition doesn’t specify that a methodology is documented. (This is the “you cannot not have a methodology” argument. You might not have a repeatable or consistent methodology…) As team sizes and project sizes grow, my impression is that few experienced professionals on either side of the waterfall/agile discussion would recommend no documentation of the methodology.

Many of the agility models (the Agile Manifesto®, in fact) advise caution about excessive documentation. I agree. I have seen methodologies that seemed to have everything “plus the kitchen sink” in them. They were hard to understand and so difficult to use that teams did only those parts that were well enforced. After some point of growth, it makes sense to require that for everything added, something must come out of the methodology. Simple is better. Focus is great!

Examples:

  • The Project Management Institute (PMI®) publishes the PMBOK® Guide as a standard. At one point in its history, many project managers considered it a standard because they accepted PMI as “a recognized standards organization” (from the definition above). I hope that’s still true today (grin). Many people outside PMI probably find it more persuasive today that ANSI listed the same document as a U.S. standard (ANSI/PMI 99-001-2008®) and that IEEE® listed it among its standards (IEEE Std 1490-2011®).
  • A company’s methodology is its collection of policies requiring particular practices (perhaps requirements to use particular templates for documents; perhaps a requirement to use a particular inter-project, corporate-level shared resource for change management, version control, or lessons learned (or other); perhaps, in a waterfall environment, a requirement for a phase-end tollgate or phase review (or any of many similar terms); perhaps, in a Scrum environment, a requirement to use a Daily Standup and a Product Backlog).

Non-examples:

  • No company’s methodology is an industry standard, if only because it depends on tools, databases, training, and past shared experience not available outside the company.
  • No standard is a methodology. (Ever read a job ad stipulating that successful applicants will “use PMI methodology”? At one level of precision, that sentence means little!)
  • “Agile” and “agility” are not methodologies. (Recently I heard Susan Courtney (of Nebraska Blue; Apr 2012 AIM Institute Technology Leader of the Year) forcefully and with conviction advocate agility and suggest (about software development life cycles, SDLC) that companies don’t understand agility if they want to “implement a new SDLC called ‘Agile’”. I agree with her; a company needs to change more than the SDLC to increase agility. Agility depends on a wider and deeper partnership than that, and it provides more benefits than just an SDLC change.)

Between the concepts of standard and methodology is a set of guidance generally not as accepted as a standard or not as specific as a methodology; some call this group models or methods or frameworks or processes. There is considerable variety in the terms used (example). Let’s use model here, though it’s not a perfect label for this group.

Models help us select methodologies. Though the following is a (very!) partial list, examples include:

  • The original paper about waterfall. In my experience, it’s very common to hear reference to “the waterfall model”. (Interestingly, Royce didn’t use the word waterfall, though his central diagram clearly resembles a waterfall. This paper makes a strong point about the need for adequate documentation in a software development effort.)
  • Mary and Tom Poppendieck’s 2003 book, Lean Software Development, An Agile Toolkit (ISBN 0-321-15078-3). (This is a valuable contribution to software development thought and arguably had the most pivotal early impact in associating the concepts of Lean manufacturing with software development.)
  • The PMBOK Guide. (Though also a standard, it provides much of the detail typical of this middle group and allows considerable latitude in implementing company-specific details. Far more than not, agility models are consistent with the PMBOK Guide. The Fifth Edition of the PMBOK Guide is due out in 2012/2013; I’m betting agility models will have lots of impact on the new edition.)
  • The Scrum Alliance® summary of Scrum. (By the same token, I acknowledge the power of the statement, “That’s a standard! The Scrum Alliance is the ‘recognized standards organization’.”)

How best can we make use of these concepts? Here’s one vote for our teams (maybe our companies; in the ideal world, teams have a role) using guidance from standards and models to define everything we need in our methodology (and not a bit more!), and then applying the specificity that distinguishes a methodology from a standard.

If We Have a Problem, We’ll Find It. Early is Better. Still.

Let’s consider here a project with a significant problem we haven’t found. If the project’s product is to be a chair, maybe we have cut a leg too short. If the project’s product is software, maybe a function we designed with two parameters actually needs three (or reports answers in units different than a consuming function assumes it reports, maybe miles per hour instead of meters per second). This discussion is not about a misspelled word in a quick and informal email message, for example.

Do we get to choose between finding that problem or not? Well … (he he) … since our discussion is about significant problems, we’ll find it. And as one of the many corollaries to Murphy’s Law says, “If you have a problem, you’ll find that problem at the worst possible time.” (There’s something irresistibly pessimistic about that Law and all its Corollaries!)

You’ll find that problem just after you’ve spent lots of effort turning the short chair leg on a lathe, sanding it, and applying just the right finish (all of which was a waste of effort, of course, but you didn’t know that yet). It’ll happen just after you or your tester tell your executive that your code is 99 percent through testing, that the results look good, that you expect to complete ahead of schedule, and that the profits for this product are likely to start early. Or your customer will find the problem. (That would be embarrassing!) You’ll find that problem. It’ll hurt.

Let’s do something about it. And let’s focus now on creating software. Barry Boehm published once or twice (tongue-in-cheek) in a long and continuing software engineering career. In 1981, he wrote Software Engineering Economics, which contains the famous Constructive Cost Model (COCOMO), used for estimating the effort and duration of software development efforts. It also contains (page 40) a graph showing the increasing cost to fix a problem introduced in requirements. The chart makes the point that if the problem costs roughly $15 to fix during requirements, it costs something like $50 in design, $100 during construction, $200 during development test, $500 during acceptance test, and $1,800 during operation. Many other sources of similar age make similar arguments and cite other cost studies. Their numbers vary, but let’s agree: cost to fix grows rapidly with time.
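Restating those page-40 figures in a few lines of Python makes the growth easy to see (the dollar amounts are the approximate ones just cited):

```python
# Boehm's cost-to-fix growth for a problem introduced in requirements.
cost_to_fix = {
    "requirements":      15,
    "design":            50,
    "construction":     100,
    "development test": 200,
    "acceptance test":  500,
    "operation":       1800,
}

baseline = cost_to_fix["requirements"]
for phase, dollars in cost_to_fix.items():
    print(f"{phase:>16}: ${dollars:>4}  ({dollars / baseline:.0f}x requirements)")
```

A fix that escapes all the way to operation costs two orders of magnitude more than the same fix made during requirements.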

One message from those studies is: projects need a strong sense of urgency about finding problems we must fix because of the rapid increases in cost.

Another message is that a key question in managing a project is, “How much is the optimal spending now (in dollars, hours, process, whatever) to detect more problems earlier and get the greatest benefit from the current lower price of fixes?”

Sound good? Well … it certainly sounded good enough to form an orthodoxy in our professions. (Perhaps, anyway. It felt like that!)

From the current perspective, is there a problem?

Well … from today’s perspective, many of us would feel the data collection seems to presume use of the waterfall development model. Fewer projects use a waterfall model today.

And … many in our industry now have experience working projects without spending lots of time in review of past work. Many of us feel that the right answer to the question above is spending no time specifically looking for past problems.

And we achieve great results.

And our stakeholders love the new relationships with us as service providers.

(Well … not every time. New project models aren’t “the silver bullet”. And there are lots of other reasons projects don’t do as well as we want; many still need work.)

I refer, of course, to the development models advising us to strive for market agility (Scrum, Lean, Kanban, and Extreme Programming are commonly-cited examples). I intend to write more about these in future posts. For now, I’ll say: Much of the market has already moved one (or more) of these directions.

And what about the advice derived from the statistics Dr. Boehm cited? I’ll say projects need the same sense of urgency about finding errors; we’ll just find problems differently than we most commonly did then. Projects using today’s agile models probably expect to discover those errors during their frequent interaction with customers (“early feedback”). And we expect to fix problems during the first iteration in which the customer wants them fixed. That advice sounds great … why would anyone oppose it?

And what about Dr. Boehm’s career after writing the book I mentioned? Well, in 1988, he published an often-cited description (info) of his “Spiral Model” of iterative software development. Both his works cited here influenced thought leaders who contributed to other models later collectively called “agile”. He is now an AIAA Fellow, an ACM Fellow, an IEEE Fellow, an INCOSE Fellow, and a member of the National Academy of Engineering.

He is one thought giant on whose shoulders we stand today. May we all learn …

//Later edit: Fixed a broken link.