In many areas I’ve studied, feedback is considered good; I consider it essential to software engineering life-cycles. In this post, I’ll give a definition useful for us here, mention several examples of feedback in modern life, and apply those thoughts to software engineering life-cycles.

Feedback Defined

Let’s use this definition: feedback in a process flow occurs when a process influences a preceding process.

a process flow with two processes and one arrow; the arrow points from "Build Product" to "Sell Product",
Figure 1. A process flow: “Build Product” is a predecessor to and affects “Sell Product”.

That calls for an example and a picture. At a high level of aggregation, we might say a company that produces a product has two processes: Build Product and Sell Product. As shown in Figure 1, the Build Product process is a predecessor of and influences the Sell Product process. (That makes sense: in many cases, producing product makes it easier to sell.) Figure 1 does not include feedback; Sell Product does not influence Build Product.

a diagram similar to Figure 1, but adding a second arrow, this one from "Sell Product" to "Build Product" in addition to the one from "Build Product" to "Sell Product"
Figure 2. A process diagram like Figure 1, except that it adds feedback. Each process is a predecessor to and affects the other process.

Figure 2 adds feedback: Sell Product and Build Product are each a predecessor to the other; each both influences and is influenced by the other.

The feedback might support fiscal constraints; the company might stop production when warehouse inventory gets above a threshold.

The feedback might support customer satisfaction; it might report the results of complaints received, reports of satisfaction, or suggestions for future development.

The concept of a feedback loop is closely related. Figure 1 demonstrates a linear process; it has no loops (or cycles). Figure 2 is an example of a feedback loop.
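The inventory-threshold example above can be sketched as a tiny simulation. This is a hypothetical illustration only; the process names come from Figures 1 and 2, and all numbers are invented:

```python
# Hypothetical sketch of the Build Product / Sell Product loop of Figure 2,
# with feedback implemented as an inventory threshold. All numbers invented.

INVENTORY_CAP = 100  # feedback signal: stop building above this level

def build_product(inventory):
    """Build a batch only while the feedback (inventory level) allows it."""
    if inventory < INVENTORY_CAP:
        return inventory + 10  # build a batch
    return inventory           # feedback says: pause production

def sell_product(inventory, demand):
    """Selling reduces inventory; the new level feeds back into building."""
    return inventory - min(inventory, demand)

inventory = 0
for month in range(12):
    inventory = build_product(inventory)    # Build Product
    inventory = sell_product(inventory, 7)  # Sell Product
    # the resulting level flows back into next month's build decision
```

Without the `if inventory < INVENTORY_CAP` check, this would be the linear flow of Figure 1: Sell Product would never influence Build Product.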

Corporate Governance

I have heard it said that in parts of the 1900s, American manufacturing had a relatively easy time designing product that would sell well. A manufacturing organization in that position generates the greatest profit by producing as much product as possible at the lowest possible cost. If variation results in extra cost and doesn’t result in extra profit, they cut variation. Wikipedia describes a 1918 environment in which over half of all U.S. cars were Ford Model T’s. The Model T’s black paint dried fastest, which reduced cost on the assembly line. Henry Ford wrote in his autobiography about that time, “Any customer can have a car painted any color that he wants so long as it is black”.

Feedback works. Wikipedia goes on to indicate, “By 1926, flagging sales of the Model T finally convinced Henry to make a new model.” The flagging sales were feedback; by 1927 the company produced the Ford Model A.

Steering Response

A three-year-old recently reminded me in several ways about feedback. She was driving a light, slow electric car; I was jogging beside her to assure her safety. (I’ve since wondered whether I was thinking at all when I put her behind that wheel! There were so many warnings I could have heeded at the time! But I did not …) I didn’t want to be “that” adult in her life, constantly controlling her vehicle; my preferred behavior was to let her do all the steering. (Note to self: you silly man!) I soon realized that she didn’t yet understand the concept of steering this car. She understands steering; she drives her tricycle. On the car, though she held the steering wheel correctly, she didn’t exert enough force on it to change the car’s direction. (In her defense, even truckers might not consider this steering system responsive!) Her lack of steering inputs was feedback that led me to take the car to a deserted section of parking lot. We went; she had a great time going either in straight lines or in repeating circles, depending on how I had most recently set the steering wheel. All was well.

I mentioned this was a three-year-old. After perhaps three minutes of this, she wanted to go to the playground, which was sometimes within her field of view. Her wish to go to the playground was feedback reminding me about three-year-olds’ tendency toward short attention spans.

I set the steering wheel to a position I hoped would get her vehicle to the playground. The vehicle turned well past the direction I wanted. The excessive turn was feedback reminding me that driving a vehicle is an active control process. We cannot set the controls once and be done with it; we must repeatedly check results and adjust the controls again. We get close to one side of the lane, turn away from that side, get close to the lane center, and turn the opposite direction to stay close to the center; we repeat that process all the way to our destination. That process depends on receiving feedback that our vehicle is “close” to an edge or center of the lane. That three-year-old either wasn’t receiving the feedback or wasn’t acting on it.
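That check-and-adjust cycle is the essence of a feedback loop, and it is simple enough to sketch. The function name and all numbers below are mine, purely for illustration:

```python
# Hypothetical sketch of driving as an active control process: observe the
# error (distance from lane center), feed a fraction of it back into the
# steering, repeat. All names and numbers are invented for illustration.

def drive(position, target=0.0, gain=0.5, steps=20):
    """Repeatedly observe the error and correct; return positions over time."""
    path = [position]
    for _ in range(steps):
        error = target - position  # feedback: how far off-center are we?
        position += gain * error   # steering input proportional to the error
        path.append(position)
    return path

path = drive(position=2.0)  # start two units off-center
# each step shrinks the error; "set the wheel once" never corrects at all
```

Setting the wheel once and leaving it, as my three-year-old did, corresponds to skipping the loop body entirely: the error is never measured, so it is never reduced.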

(By the way, controlling an airplane involves double the inputs. We must start the roll, thus changing the angle the wings make with the horizon; that’s bank angle. Then we must stop the roll and hold the desired bank angle to turn. Just before we get to our desired direction, we must start rolling toward level. When the wings are level, we must stop the roll. Yup. That’s four control inputs compared to a car’s two.)

Feedback in a Software Development Process

Winston Royce’s original August 1970 paper on waterfall software development mentions the importance of feedback. His Figure 3 shows the classic waterfall diagram with feedback flows pointing “uphill” from each phase to the prior phase; the caption is, “Hopefully, the iterative interaction between the various phases is confined to successive steps”. His Figure 4 and the accompanying text recognize the likelihood of detecting problems in testing that invalidate program design and, in turn, invalidate software requirements; the caption is, “Unfortunately, for the process illustrated, the design iterations are never confined to the successive steps”. This paper is convincing evidence that we’ve known since the inception of waterfall processes that feedback is useful. And that feedback can be expensive.

The most interesting part of this discussion is deciding how this observation drives today’s software development.

The Agile models answer unanimously: Do a very small part of the project, show it to the customer, get feedback “early and often”, and repeat. Feedback is an important component of the Agile models.

For those of us not using Agile, a useful answer is to divide the project into a series of shorter efforts, each of which completes one development cycle. Doing more development cycles may make some methodology processes or products feel excessive; we might find we can streamline those excesses to everyone’s advantage. Agile suggests a reasonable minimum cycle time of one to four weeks; if your organization gets to that point, it has done much of the change needed to adopt all of Agile.

This post reviews feedback; feedback is good. We use it in many activities of our lives; in some parts of our lives, we use it without noticing. It is a valuable part of all software development. May we all have and use lots of feedback in our software development!

Iterative Deployment or Big-Bang?

Let’s say your CEO asks for input: “Should an upcoming software development project use an iterative development process?” (Let’s take the question as assuming an alternative of deploying the software product all at once.)

Iterative development means that the software development team delivers the product functions in a series of small efforts adding up to the whole rather than as a single effort. It can mean (but doesn’t have to) that the end customer sees several deployments.

The Agile answer is clear; the pre-Agile answer may surprise some Agilists. Let’s do pre-Agile first.

Capers Jones released a highly respected book on software cost estimating in 1998; he released a second edition as Estimating Software Costs: Bringing Realism to Estimating (ISBN 978-0-07-148300-1) in 2007. His book (p 479) says that systems and commercial software requirements tend to change at an average rate of “about 2 percent per month from the end of initial requirements until start of testing.” It further says to expect “12 percent for Agile projects”. Nothing in my experience contradicts this book.

Maybe that rate of change results from those specifying requirements learning more about the environment in which they’ll use the tool. Maybe that’s the rate of change in business. Maybe it measures how much humans tend to change their minds. Whatever it is, it seems to happen in lots of projects (maybe some of yours!).

Two percent a month doesn’t sound like much, right? Even using processes that ask the user to commit to no further requirements change after establishing a requirements baseline, the development organization ought to be able to be that flexible, right?

How long is your pre-Agile project? Let’s say it is a year, with three months at the start for requirements and two months at the end for testing and deployment. Dr. Jones’ book is telling us that on average, 14 percent of requirements will change in the seven months between the end of requirements and start of testing. And that for an 18-month project, 26 percent of requirements will change. By any chance, do you feel like shortening the project yet? Maybe it makes sense to do a small project with only part of the product function; maybe a six month project or even shorter?
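The arithmetic above is simple enough to sketch. The helper below is hypothetical, but the 2-percent monthly rate is the figure Jones reports:

```python
# Hypothetical helper restating Capers Jones's figure: requirements change
# about 2 percent per month between the end of initial requirements and the
# start of testing.

def requirements_change(project_months, req_months, test_months, rate_pct=2.0):
    """Percent of requirements changed during the exposed middle months."""
    exposed_months = project_months - req_months - test_months
    return exposed_months * rate_pct

# A 12-month project with 3 months of requirements and 2 of test/deploy
# exposes 7 months; an 18-month project exposes 13 months.
print(requirements_change(12, 3, 2))  # 14.0 percent
print(requirements_change(18, 3, 2))  # 26.0 percent
```

Shortening the project shrinks only the exposed middle, which is exactly why a six-month project looks so much more stable on paper.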

That argument has been around for quite a while. Companies that scheduled projects for long durations didn’t hear it or didn’t find it persuasive.

If the CEO asks me that question, I’d suggest starting with a short project. And since nothing in that logic depended on the current process in use (Agile or otherwise), I’d almost always suggest that. (And it would be easy to say I’d “always” suggest that; it’s just that “always is a very strong word”!) That’s my pre-Agile answer.

The Agile answer: Oh! It’s like they saw this coming!

The Agile answer is to start with a “project” the length of one iteration (commonly, two to four weeks). Agile teams ask their users to make no change to user stories in the iteration once the iteration starts. (Geez! That sounds a lot like the pre-Agile request to let the programmers work on unchanging requirements. And … it makes sense in both places, though it sounds easier to implement in Agile!) Holding user stories unchanged for two to four weeks (and then not all user stories; only those in the iteration) seems lots easier to ask of our customers. At the end of the iteration, Agile teams look forward to showing products to their users and to getting feedback “early and often”.

So … what’s the difference in responding to that CEO question now that Agile is growing into so many of our work processes? Maybe in the pre-Agile days, we wouldn’t have felt comfortable proposing so short a project, though we would have suggested something shorter than “all at once”. Maybe Agile is, in this respect, the direction we were heading anyway!

(By the way, Chapter 7 of Dr. Jones’ book covers “Manual Estimating Methods Derived from Agile Projects and New Environments”. He presents data on many processes, including Scrum and Extreme Programming. It’s good reading.)

Anticipation and Adaptation in Design

A familiar criterion among my clients for choosing between agile software development processes and more traditional processes is a natural tension in all our software projects between anticipation and adaptation in design. Jim Highsmith discusses the issue in his 2010 book, Agile Project Management: Creating Innovative Products, Second Edition, for example at pages 218 to 219. (There’s lots of value in this book, by the way. This is but one small example.)

Designers anticipate frequently. Perhaps based on long experience with a product or based on team discussions about future direction of a product, they create a design for the current cycle of work that includes all capability planned for this cycle and also includes some allowance for potential future direction.

And … designers adapt frequently. Perhaps because of change in the business environment, they create a design for the current cycle of work that supports a previously unacknowledged direction for the product. They’re “turning on a dime” to meet a new and current customer need.

When product teams guess future directions well, anticipation proves valuable. When the current cycle of work completes and the anticipated feature need is clear, the next design has less change associated solely with that next cycle of work. If there’s time, the design produced here may anticipate again, this time betting on needs for a third cycle of product development. Some of our peers suggest that a design strategy that includes at least some component of anticipation is most appropriate for this case because it lengthens the time the team has to consider and even test the design changes.

When product teams guess future direction poorly, adaptation proves valuable. Once work on the new feature starts, there are no prior design attempts for this feature to work around; designers have the maximum flexibility in design. Some of our peers (particularly those fond of agile) suggest that a design strategy based heavily on adaptation is most appropriate for this case. At worst, anticipatory design can make the next design cycle more difficult because the anticipated direction may be counter to the needed direction in some way. In this case, designers and developers (“Oh! The expense!”) may spend time maintaining the previously predicted need whether or not it still has potential. And that costs them time and design simplicity.

If you accept all the above, the choice between design based on anticipation or adaptation might be simple if we have high confidence either that we can predict future needs well (tending to our selecting anticipation) or high confidence that we cannot predict well (tending to our selecting adaptation). Perhaps many of us have less than high confidence either way; that makes the decision more difficult.

One of the often-heard suggestions among those recommending agile is that any time we don’t have high confidence in our predictions of future direction, experience leads them to believe best results come from tending toward adaptation. (Well … okay. Some of them propose abandoning anticipation entirely.) Teams that intentionally train themselves to be highly capable of responding promptly are nimble (“agile” even!; pun intended) about moving the product as needed. They deliver more value earlier to their customers, on average.

Another often-heard suggestion among those recommending agile is that because of short project cycles, each successive design can represent a small step in a product direction, giving all parts of the product team the greatest chance to consider impact of change, and to provide and absorb early feedback. The best design “emerges” from a series of small efforts and from frequent feedback. If there’s any disadvantage to creating a series of small design changes, the advantages of feedback provided “early and often” clearly offset the disadvantages.

What does your experience make you tend to believe? What are the most persuasive points on each side here? All comments very welcome.

A Model of a Project

Some people used to say, “Agile is so different and so much better than the past that we’re best to un-learn all we learned before Agile.” (I haven’t heard this sentiment for a while. Good riddance!) On the other hand, when we pursue agility right, we:

  • give the customer the option to implement working product earlier and to benefit from that product longer.
  • strengthen interdependency relationships between service organizations and the business organizations we serve.
  • add valuable new tools and results to our profession’s toolkit.
  • lower the time we spend on code our customers won’t use.
  • create code with fewer defects and lower life-cycle costs.

(All that is good!)

But how can we compare “agile” models to “waterfall” models? The desire to meaningfully compare got me to this model of a project.

The Model

The model is simple, really. The core of it is a 5-by-5 grid representing 25 units of work necessary to deliver the product of the project. The top row represents the five pieces of “requirements” work, one piece of work for “Function 1” (part of scope) and one piece for each of four other functions. Succeeding rows represent other common work (“design”, “development”, “testing”, and “deployment”, successively) each row including each of the Functions 1 through 5. The left column represents the five pieces of work necessary for “Function 1” (requirements, design, …); succeeding columns represent one Function each, including each of the five work types.

Figure 1: A Model of a Project

Pretty simple, eh?! (As promised!)

As a broad generality, this project is complete when a team does these 25 tasks. (In the specific, some of the agile people would observe that they don’t do these tasks. I’ll get to that level of specifics in future blogs. There’s value in the thought; this generality is reasonable to support the point here.)

The Waterfall Version of Our Project

If a project team doing this project guides itself by the “waterfall model”, the project looks like this.

Figure 2: A “Waterfall” Team’s View of our Project

They seek to do all requirements work early in the project (sometimes before doing any design work). They might call that time of the project the “Requirements Phase”. After that comes the “Design Phase”, etc.

They might claim this is “the right” way to do the project because any incomplete work in one phase might affect work in the next phase, and projects are hard enough without the avoidable challenges. (They might say, “We might well select a different design if we know about an additional requirement. We must know all requirements before we design.”)

The agilists might observe that

  • some of these waterfall projects they’ve served felt too documentation-centric and too slow.
  • for decades, studies have shown that something like 70 percent of functions in software we produced are “never used” or “seldom used”.
  • for decades, studies have shown that shorter projects are more likely to be successful.
  • it is increasingly accepted that it is impossible to know all requirements in advance.

There has to be a better way!

The “Agile” Version of Our Project

If a project team doing this project guides itself by the “agile model”, the project looks like this.

Figure 3: An “Agile” Team’s View of Our Project

They seek to identify some sliver of function (“Function 1”) that they can work meaningfully through, all the way to demonstrating working code to the customer. In the first “iteration”, they do all aspects of the assigned function (“Function 1”). Optimally (but not necessarily), the code is ready at the end of the iteration to deploy if the user wants it deployed. Each iteration results in a demonstration to the customer; later iterations add more functions.

One view of the “agile” team is that they do many short projects: one per iteration. Each iteration might be as short as one week. Within an iteration, “user stories” define iteration scope and those in the iteration must not change during the iteration; product owners are generally welcome to change user stories not assigned to the iteration. And agile teams generally rejoice that they can be so responsive and flexible as to accept the changes in those stories. The parallel waterfall statements are that requirements define project scope, that many waterfall methods seek to freeze all project requirements before design starts, and that many waterfall teams seek to “control” “scope creep”.

The agilists tell us their communication with the customer (“early and often”) significantly increases the customer’s emotional investment in the product and also increases the customer’s sense of ownership of that product. And, if the demonstration at the end of any iteration is for a function the customer will “never use” or “seldom use”, the team knows it far earlier than in the waterfall model. (All that is good!)


Of course, all other things equal, after completing ten “blocks” of work, the waterfall team will finish “design” (the second row). The “agile” team completes a second demonstration of working software for the customer (“Function 2”; the second column) and they have a second chance to get valuable feedback.

With this simple model, everyone can understand one major difference in pursuing agility: the agile and waterfall models do substantially the same work, but do it in a different order. I hope you’ll keep coming back as I explore more detail!
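The “same work, different order” point can be sketched directly from the 5-by-5 grid. The code below is a hypothetical illustration using the row and column names from the model:

```python
# Hypothetical sketch of the 5-by-5 model: 25 units of work, traversed in a
# different order by each process model. Rows are work types, columns are
# functions, as in Figures 1 through 3.

WORK_TYPES = ["requirements", "design", "development", "testing", "deployment"]
FUNCTIONS = [f"Function {n}" for n in range(1, 6)]

def waterfall_order():
    """Row by row: all requirements for every function, then all design, ..."""
    return [(work, fn) for work in WORK_TYPES for fn in FUNCTIONS]

def agile_order():
    """Column by column: every work type for Function 1, then Function 2, ..."""
    return [(work, fn) for fn in FUNCTIONS for work in WORK_TYPES]

# The same 25 tasks either way; only the order differs.
assert sorted(waterfall_order()) == sorted(agile_order())
# After five tasks, the agile team has taken Function 1 through deployment;
# the waterfall team has finished requirements for all five functions.
```

Waterfall is a row-major traversal of the grid; agile is column-major. The feedback opportunity after each column is the difference the whole post has been building toward.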

Community Discussion: What’s a Good Direction for Change on This Blog?

So … as of Sep 2012, I’m new to this blogging thing, other than reading some now and then. It’s kinda like having a newspaper or a magazine column in that among my writings, I write what comes to mind and wait to see whether it builds a readership. It’s unlike those more familiar outlets in that I’m the editor and publisher, too. So … I don’t have a trusted colleague with ready advice and shared goals. (Well … my readers … you, of course …)

So … and this goes throughout the life of this blog: How can this blog better serve you? Prove more interesting? Are the posts “too long”? Are they “too [something else]”?

All feedback welcome … always …

If We Have a Problem, We’ll Find It. Early is Better. Still.

Let’s consider here a project with a significant problem we haven’t found. If the project’s product is to be a chair, maybe we have cut a leg too short. If the project’s product is software, maybe a function we designed with two parameters actually needs three (or reports answers in units different than a consuming function assumes it reports, maybe miles per hour instead of meters per second). This discussion is not about a misspelled word in a quick and informal email message, for example.

Do we get to choose between finding that problem or not? Well … (he he) … since our discussion is about significant problems, we’ll find it. And as one of the many corollaries to Murphy’s Law says, “If you have a problem, you’ll find that problem at the worst possible time.” (There’s something irresistibly pessimistic about that Law and all its Corollaries!)

You’ll find that problem just after you’ve spent lots of effort with the short chair leg in a lathe, getting sanded, and getting just the right finish (all of which was a waste of effort, of course, but you didn’t know that yet). It’ll happen just after you or your tester tell your executive that your code is 99 percent through testing, that the results look good, that you expect to complete ahead of schedule, and that the profits for this product are likely to start early. Or your customer will find the problem. (That would be embarrassing!) You’ll find that problem. It’ll hurt.

Let’s do something about it. And let’s focus now on creating software. Barry Boehm published once or twice (tongue-in-cheek) in a long and continuing software engineering career. In 1981, he wrote Software Engineering Economics, which contains the famous Constructive Cost Model (COCOMO) used for estimating the cost and duration of software development efforts. It also contains (page 40) a graph showing the increasing cost to fix a problem introduced in requirements. The chart makes the point that if the problem costs roughly $15 to fix during requirements, it costs something like $50 in design, $100 during construction, $200 during development test, $500 during acceptance test, and $1,800 during operation. Many other sources of similar age make similar arguments and cite other cost studies. Their numbers vary, but let’s agree: cost to fix grows rapidly with time.
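Expressed as multipliers, those figures make the growth vivid. The snippet below simply restates the numbers cited above relative to the requirements-phase cost:

```python
# The cost-to-fix figures cited from Boehm (page 40), restated as multipliers
# relative to the cost of a fix made during requirements.

COST_TO_FIX = {  # dollars, as cited in the text
    "requirements": 15,
    "design": 50,
    "construction": 100,
    "development test": 200,
    "acceptance test": 500,
    "operation": 1800,
}

for phase, cost in COST_TO_FIX.items():
    multiplier = cost / COST_TO_FIX["requirements"]
    print(f"{phase:18s} ~{multiplier:6.1f}x the requirements-phase cost")
```

A defect that survives to operation costs on the order of a hundred times what it would have cost to fix where it was introduced.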

One message from those studies is: projects need a strong sense of urgency about finding problems we must fix because of the rapid increases in cost.

Another message is that a key question in managing a project is, “How much is the optimal spending now (in dollars, hours, process, whatever) to detect more problems earlier and get the greatest benefit from the current lower price of fixes?”

Sound good? Well … it certainly sounded good enough to form an orthodoxy in our professions. (Perhaps, anyway. It felt like that!)

From the current perspective, is there a problem?

Well … from today’s perspective, many of us would feel the data collection seems to presume use of the waterfall development model. Fewer projects use a waterfall model today.

And … many in our industry now have experience working projects without spending lots of time in review of past work. Many of us feel that the right answer to the question above is spending no time specifically looking for past problems.

And we achieve great results.

And our stakeholders love the new relationships with us as service providers.

(Well … not every time. New project models aren’t “the silver bullet”. And there are lots of other reasons projects don’t do as well as we want; many still need work.)

I refer, of course, to the development models advising us to strive for market agility (Scrum, Lean, Kanban, and Extreme Programming are commonly-cited examples). I intend to write more about these in future posts. For now, I’ll say: Much of the market has already moved one (or more) of these directions.

And what about the advice derived from the statistics Dr. Boehm cited? I’ll say projects need the same sense of urgency about finding errors; we’ll find problems differently than we most commonly thought about then. Projects using today’s agile models probably expect to discover those errors during their frequent interaction with customers (“early feedback”). And we expect to fix problems during the first iteration in which the customer wants us to fix them. And that advice sounds great … Why would anyone oppose?

And what about Dr. Boehm’s career after writing the book I mentioned? Well, in 1988, he published an often-cited description of his “Spiral Model” of iterative software development. Both his works cited here influenced thought leaders who contributed to other models later collectively called “agile”. He is now an AIAA Fellow, an ACM Fellow, an IEEE Fellow, an INCOSE Fellow, and a member of the National Academy of Engineering.

He is one thought giant on whose shoulders we stand today. May we all learn …

//Later edit: Fixed a broken link.

They Don’t Know What They Want Until You Give Them Something Else

Steve Grandfield is a Blue Cross Blue Shield of Nebraska executive. (You might know the company as Nebraska Blue.) I heard him speak to Omaha Agile Development last week. Oh, my! How times have changed! Among other things, he said something very close to the title above, speaking about business customers.

I need to digress. (Sorry!) I want to send well-earned kudos to Steve, his company, and his company’s employees for the open house last week. It was an impressive display of talent, organizational change, and community interest. And as long as I’m throwing out kudos, Client Resources earned them for sponsoring and supporting the event. And Omaha Agile Development earned them for another in their continuing series of interesting and helpful presentations. (Can they possibly manage to do another good one next quarter? Join them to find out!) Well done, all around! And … back to the subject …

In prior days (“yesteryear”, in Garry-speak), the title statement above could well have been an introduction to some boo-hoo about how tough it is to serve customers. “We toiling slaves in (fill in the blank with your favorite service organization; Steve talked last week about information technology, IT) have it so rough!”, the diatribe starts. “We’ll build anything our customers want. Wouldn’t it be nice if just once, they’d tell us what they want?! Well … they do … but they usually tell us late in the project when we’re panicked about meeting deadlines. And they tell us we have it all wrong. And they’re usually yelling at the time. Oh! Woe is us.”

Well … of course that boo-hoo-er spoke truth. They accurately described the environment from a partisan IT perspective (you picked up on the embedded “us vs. them” attitude, right?). They reflected IT frustration. Of course, there was similar frustration for the business folks. Many of them longed for the days when IT would be more affordable. (Funny thing, IT costs seem to be coming down faster than lots of people planned for … maybe that helps explain the timing of this subject. But I digress …) Many business folks saw IT as simultaneously skilled and inflexible; they sometimes wondered whether IT was worth the effort.

Steve Grandfield’s message wasn’t the above; that would have been so … well … yesteryear. (Yes. It was fun expressing it that way!) His point was that none of us must feel trapped in the condition the boo-hoo-er describes. He points to market agility as a viable alternative goal. (Scrum, Lean, and Kanban are examples of advice to achieve agility. There are others.) He suggests the condition described as a prime motivation for changing.

Steve says that because the user cannot tell us what they want, we’ll do well to change our ways. I see him as suggesting pursuit of one of the common values among agile frameworks: get feedback early and often. That is, build a little working code. Then, inform the business user how the demonstration code is similar to the product you think they want (and how it’s different), and ask the business user what they think. Many business users, having described their ideal product, see this demonstration and can suddenly better describe the ideal. (That would be the subject title, right?!) The agile frameworks advocate using that condition to the service provider’s advantage. They argue that service providers would rather hear from business customers early than late that, “That’s good and I’d love it if you could also make it do …” And they argue that the earlier service providers hear that feedback, the more time they have to adjust. The agile frameworks give service providers a better chance to deliver (really) the product that delights. “Delight” is not just an empty ideal any more.

I don’t recall that executives of yesteryear called for these processes. More and more of the market is adopting these ideas.