Zero Finished or Done; No In-Between

You’re working a project (it could be “waterfall” or “agile”). The project involves drilling a series of holes through the long axis of something rectangular (maybe a piece of aluminum or glass). This piece is critical to your project, so everybody and their boss wants to know progress.

The “agile” folks could observe that this is a great opportunity for an “information radiator”: a poster everybody and their boss can see just by walking by, informing them of that particular progress. It radiates the information; no one misses it. That’s a good suggestion for waterfall, too.

Our reporter goes to the shop and watches preparation of the raw material. Maybe it’s cut to size and smoothed. It meets exit criteria for a requirement, “Prepare rectangle.” or for a user story, “As a sailboat operator, I want a sturdy rectangular piece with twenty holes drilled in a line so that I can use a pin to effectively change the length of the rope attached to this rectangle.” (People experienced in sailboats are probably rolling their eyes and saying, “This guy doesn’t know anything about boating!” True, true. No denying it. Hang in, please, for the real point. It’s coming! And it’s not about boating.)

The rectangle met exit criteria (which we ought to have in both waterfall and agile; often we have them, but they’re not formally written). We can continue the project knowing that task or user story is complete. Next … the drilling.

The project has decided that the drilling will occur under a task, “Drill holes in the rectangle.” or under a user story, “As a sailboat operator, I want a series of holes in the previously prepared rectangle so that I can use a pin to effectively change the length of the rope attached to this rectangle.” (Both examples could be improved with more specifics, perhaps with the number of holes. True, but please accept this for the point I’m making!)

The shop drills the first hole. All’s well. Our reporter needs to file a progress report.

Key question: If the design calls for 20 holes, does our reporter inform everyone that the task or user story is 5 percent complete?

For decades, some in projects have argued against that report. They might argue that the only criterion for reporting progress is fully meeting exit criteria: you’re either “done” or you don’t take credit for the work. They might argue that if the piece breaks along the line of the holes we’re drilling, we were never any percent complete; we just didn’t know it yet. They might argue that our teams function better when they know they must complete all the defined work to get credit; we don’t want to operate with the ambiguity of taking credit when not really done.

One very funny cynic (I wish I could give proper credit) observed that “the second 90% of the project is the tougher part!”

Of course, in all the decades people argued for reporting only completions, they received feedback that they weren’t much in tune with management realities. There has to be feedback on progress to management, investors, and others. There is often lots of pressure to bend on this perceived minor point.

Many agilists have weighed in on the side of reporting only completed exit criteria. They suggest that the Product Owner has sole responsibility for determining “done” and that Product Owners use only exit criteria for their determinations. Often they suggest that user stories get their marks for “done” primarily (or only) during the “Product Review” ritual (there are lots of equivalent names in use) just before the end of the iteration. They suggest that if a team demonstrates work that is “done all except for …”, a Product Owner serves the team best by not allowing credit for that work during the current iteration.

Some would argue that the best response in the waterfall world is to break the drilling task into a series of smaller tasks, each of which has appropriate exit criteria on which we can report. And … the size of the work then described? It’s very similar to the size of work that works best for user stories in the agile world! (Not a surprise if we think about it!)
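The difference between the two approaches can be sketched in a few lines of Python. (The task names and counts here are illustrative, not from any real project; the point is only that done-or-not reporting over small tasks gives granular progress without abandoning exit criteria.)

```python
# Illustrative sketch of "done-only" progress reporting.
# Task names and numbers are hypothetical.

def done_only_progress(tasks):
    """Report progress as the fraction of tasks meeting their exit criteria."""
    done = sum(1 for t in tasks if t["done"])
    return done / len(tasks)

# One big task: zero credit until all 20 holes are drilled.
coarse = [{"name": "Drill holes in the rectangle", "done": False}]

# The same work split into 20 small tasks, each with its own exit criteria;
# one hole drilled so far.
fine = [{"name": f"Drill hole {i}", "done": i <= 1} for i in range(1, 21)]

print(done_only_progress(coarse))  # 0.0  -- done or not; nothing in between
print(done_only_progress(fine))    # 0.05 -- same rule, finer-grained credit
```

With the coarse breakdown, the honest done-only report stays at zero until the very end; with twenty small tasks, the same done-only rule yields the 5 percent figure, and every percent reported reflects work that fully met exit criteria.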

And what’s my position? (Not that it really matters …) Given the opportunity, I advise executives and investors to ask for complete or not; nothing in between. With any luck, they’ll also ask the project teams to drive analysis down to small units of work (as agile does with user stories). I seldom get that opportunity; there is usually more important change to advocate in my clients’ processes, and I choose those suggestions carefully. Having made that choice, the project can report steadily increasing progress each period; I know there’s measurement error in the report, but no one has ever asked to know that. Teams file the reports and do what it takes to make real progress. It’s practical. And when the project completes with the functions, cost, and schedule around which we created expectations, no one cares that the progress reports had measurement error along the way.

And … this question matters: What do you think?

(An entirely different subject: I don’t have a visit counter on this blog; there’s probably a way, but I haven’t looked for it. As a result, I have no indication anyone is reading. Ever. So … following the technique of another blogger I saw … if you’re alive and reading this, please be so kind as to comment here or send me an email at! I mean … if no one’s reading, at some point, I’ll quit investing the time!)

Good Renewal Experience

Last week (Thu and Fri), I spent some time at the UNO IT Academy. I had a great experience. They’re a good source of professional renewal.

I attended two sessions from this version of the Academy.

Dr. George Royce (who I’ve known about for years because of his high profile position at Mutual of Omaha) taught “Lean IT – Save Money, Reduce Time to Deliver, Focus on Your Business”. His slides stepped through a number of transitions he and Mutual of Omaha have taken to complete more work with existing resources, resulting in great value to that organization. During his four hours, he asked his attendees to engage in two workshop exercises that supported their learning quite well. There was material about Lean (one of the “Agile” models), but his intent was to cover a great deal more information than that. If you need to streamline business processes in your organization and if you need to provide IT support for business processes your organization is streamlining, you might well find that a later presentation of this material will help you along. Dr. Royce did a nice job responding to the particular needs of the group I sat with. Good session.

Dr. Abhishek Parakh and Matthew Battey taught “Security Challenges and Opportunities in Cloud Computing”. I considered this an interesting session because I understand both the potential cost savings cloud computing can give us (the shared-resources argument) and the low tolerance many organizations have for the vulnerabilities of storing one’s data in a shared environment. Going in, I expected that “someday” our industry would find solutions to the shared-storage problem. Walking out, I perceive we are much closer to solving that problem than I was aware. The hands-on portion of this session involved firing up a virtual server at Amazon Web Services and setting up Porticor, an encryption appliance, to protect all data on the AWS server. Porticor claims to “encrypt the entire data layer in minutes”; I’ll vouch for the quick-setup part. If you’re an “educate myself” kind of person, I’m betting all the information to do what we did is on the AWS and Porticor web sites; it’d be a good exercise. Use the option for the smallest server on AWS (which is large enough for a SharePoint instance) and the experience should be free to you. Dr. Parakh clearly has lots to offer the data-security community. Matt Battey helped us fire up an instance of SharePoint on our AWS server and use SharePoint protections to further protect data there. I deeply respect a consulting skill I see in Matt: he talked about highly technical subjects at a layman’s level and never came close to making the attendees feel he was “dumbing down” his discussion. He is clearly a technical expert well beyond the level at which he spoke; he expertly spoke to his audience. Well done by both presenters.

This version of the Academy was in October; they apparently run them three times a year. In 2012, their sessions were in April, August, and October. So, the next one might be in April; dunno. If you want to join their mailing list, you might consider contact with Dr. Deepak Khazanchi at UNO College of IS&T. Or ask me to send you the invitation I received for this session; I presume it’ll have all the information you’ll need for that purpose.

A Model of a Project

Some people used to say, “Agile is so different and so much better than the past that we’re best to un-learn all we learned before Agile.” (I haven’t heard this sentiment for a while. Good riddance!) On the other hand, when we pursue agility right, we:

  • give the customer the option to implement working product earlier and to benefit from that product longer.
  • strengthen interdependency relationships between service organizations and the business organizations we serve.
  • add valuable new tools and results to our profession’s toolkit.
  • lower the time we spend on code our customers won’t use.
  • create code with fewer defects and lower life-cycle costs.

(All that is good!)

But how can we compare “agile” models to “waterfall” models? The desire to meaningfully compare got me to this model of a project.

The Model

The model is simple, really. The core of it is a 5-by-5 grid representing 25 units of work necessary to deliver the product of the project. The top row represents the five pieces of “requirements” work, one piece of work for “Function 1” (part of scope) and one piece for each of four other functions. Succeeding rows represent other common work (“design”, “development”, “testing”, and “deployment”, successively) each row including each of the Functions 1 through 5. The left column represents the five pieces of work necessary for “Function 1” (requirements, design, …); succeeding columns represent one Function each, including each of the five work types.

Figure 1: A Model of a Project

Pretty simple, eh?! (As promised!)

As a broad generality, this project is complete when a team does these 25 tasks. (To be specific, some agile practitioners would observe that they don’t do these tasks. I’ll get to that level of specifics in future blogs. There’s value in the thought; this generality is reasonable to support the point here.)

The Waterfall Version of Our Project

If a project team doing this project guides itself by the “waterfall model”, the project looks like this.

Figure 2: A “Waterfall” Team’s View of our Project

They seek to do all requirements work early in the project (sometimes before doing any design work). They might call that time of the project the “Requirements Phase”. After that comes the “Design Phase”, etc.

They might claim this is “the right” way to do the project because any incomplete work in one phase might affect work in the next phase, and projects are hard enough without the avoidable challenges. (They might say, “We might well select a different design if we know about an additional requirement. We must know all requirements before we design.”)

The agilists might observe that

  • some of these waterfall projects they’ve served felt too documentation-centric and too slow.
  • for decades, studies have shown that something like 70 percent of functions in software we produced are “never used” or “seldom used”.
  • for decades, studies have shown that shorter projects are more likely to be successful.
  • it is increasingly accepted that it is impossible to know all requirements in advance.

There has to be a better way!

The “Agile” Version of Our Project

If a project team doing this project guides itself by the “agile model”, the project looks like this.

Figure 3: An “Agile” Team’s View of Our Project

They seek to identify some sliver of function (“Function 1”) that they can work meaningfully through, all the way to demonstrating working code to the customer. In the first “iteration”, they do all aspects of the assigned function (“Function 1”). Optimally (but not necessarily), the code is ready at the end of the iteration to deploy if the user wants it deployed. Each iteration results in a demonstration to the customer; later iterations add more functions.

One view of the “agile” team is that they do many short projects: one per iteration. Each iteration might be as short as one week. Within an iteration, “user stories” define iteration scope and those in the iteration must not change during the iteration; product owners are generally welcome to change user stories not assigned to the iteration. And agile teams generally rejoice that they can be so responsive and flexible as to accept the changes in those stories. The parallel waterfall statements are that requirements define project scope, that many waterfall methods seek to freeze all project requirements before design starts, and that many waterfall teams seek to “control” “scope creep”.

The agilists tell us their communication with the customer (“early and often”) significantly increases the customer’s emotional investment in the product and also increases the customer’s sense of ownership of that product. And, if the demonstration at the end of any iteration is for a function the customer will “never use” or “seldom use”, the team knows it far earlier than in the waterfall model. (All that is good!)


Of course, all other things equal, after completing ten “blocks” of work, the waterfall team will finish “design” (the second row). The “agile” team completes a second demonstration of working software for the customer (“Function 2”; the second column) and they have a second chance to get valuable feedback.

With this simple model, everyone can understand one major difference in pursuing agility: agile models and waterfall models do substantially the same work, but do it in a different order. I hope you’ll keep coming back as I explore more detail!
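That ordering difference is easy to see if we sketch the 5-by-5 grid in code. (Python here; the labels mirror the model described above, and the sketch is purely illustrative.)

```python
# The 5-by-5 model: rows are work types, columns are functions.
# Waterfall walks the grid row by row; agile walks it column by column.

WORK_TYPES = ["requirements", "design", "development", "testing", "deployment"]
FUNCTIONS = [f"Function {n}" for n in range(1, 6)]

# Same 25 units of work, two traversal orders.
waterfall_order = [(w, f) for w in WORK_TYPES for f in FUNCTIONS]  # row-major
agile_order = [(w, f) for f in FUNCTIONS for w in WORK_TYPES]      # column-major

# After ten blocks of work, all other things equal:
print(waterfall_order[:10])  # all requirements and all design; no working code yet
print(agile_order[:10])      # Functions 1 and 2 complete, demonstrable to the customer
```

Both lists contain exactly the same 25 units of work; only the order differs, which is the whole point of the model.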

Standard. Methodology.

Professions establish vocabularies. After achieving consensus about definitions, professionals can communicate more precisely and quickly. I offer some candidates to our professional consensus here on terms for software development projects. What would you change here?

Let’s agree with guidance from the American National Standards Institute (ANSI®), quoting a joint standard of the International Organization for Standardization (ISO®) and the International Electrotechnical Commission (IEC®), that a standard is, “A definition or format that has been approved by a recognized standards organization or is accepted as a de facto standard by the industry. (As defined in ISO/IEC Guide 2:2004)”.

I suggest we consider a methodology to be the specific set of training, development processes, tools, templates, and controls a particular software development team uses to structure its work. A methodology includes reference (as needed) to specific tools, communication channels, approval meetings (and lists of attendees appropriate for each), document templates, development languages, and responsibility assignments. Harold Kerzner used the phrase “forms, guidelines, templates, and checklists” during his May, 2007 Omaha presentation advocating effort to detect and document best practices.

There is power in the statement that by this definition, a methodology is a company standard, with corporate leadership taking the role of “recognized standards organization”. Following the lead of the ANSI guidance about standards, I focus here on industry standards.

That definition doesn’t specify that a methodology is documented. (This is the “you cannot not have a methodology” argument. You might not have a repeatable or consistent methodology…) As team sizes and project sizes grow, my impression is that few experienced professionals on either side of the waterfall/agile discussion would recommend no documentation of the methodology.

Many of the agility models (the Agile Manifesto®, in fact) advise caution about excessive documentation. I agree. I have seen methodologies that seemed to have everything “plus the kitchen sink” in them. They were hard to understand and so difficult to use that teams did only those parts that were well enforced. After some point of growth, it makes sense to require that for everything added, something must come out of the methodology. Simple is better. Focus is great!


  • The Project Management Institute (PMI®) publishes the PMBOK® Guide as a standard. At one point in its history, many project managers considered it a standard because they accepted PMI as “a recognized standards organization” (from the definition above). I hope that’s still true today (grin). Many people outside PMI probably consider it more powerful today that ANSI listed the same document as a U.S. standard (ANSI/PMI 99-001-2008®) and that IEEE® listed the same document among its standards (IEEE Std 1490-2011®).
  • A company’s methodology is the collection of policies requiring particular practices (perhaps requirements to use particular templates for documents; perhaps requirement to use a particular inter-project corporate-level shared resource for change management or version control or lessons learned (or other); perhaps in a waterfall environment, requirement for a phase-end tollgate or phase review (or any of many similar terms); perhaps in a Scrum environment, a requirement to use a Daily Standup and a Product Backlog).


  • No company’s methodology is an industry standard, if only because it depends on tools, databases, training, and past shared experience not available outside the company.
  • No standard is a methodology. (Ever read a job ad stipulating that successful applicants will “use PMI methodology”? At one level of precision, that sentence means little!)
  • “Agile” and “agility” are not methodologies. (Recently I heard Susan Courtney (of Nebraska Blue; Apr 2012 AIM Institute Technology Leader of the Year) forcefully and with conviction advocate agility and suggest (about software development life cycles, SDLCs) that companies don’t understand agility if they want to “implement a new SDLC called ‘Agile’”. I agree with her; a company needs to change more than the SDLC to increase agility. Agility depends on a wider and deeper partnership than that. And it provides more benefits than just an SDLC change.)

Between the concepts of standard and methodology is a set of guidance generally not as accepted as a standard or not as specific as a methodology; some call this group models or methods or frameworks or processes. There is considerable variety in the terms used (example). Let’s use model here, though it’s not a perfect label for this group.

Models help us select methodologies. Though the following is a (very!) partial list, examples include:

  • The original paper about waterfall. In my experience, it’s very common to hear reference to “the waterfall model”. (Interestingly, Royce didn’t use the word waterfall, though his central diagram clearly resembles a waterfall. This paper makes a strong point about the need for adequate documentation in a software development effort.)
  • Mary and Tom Poppendieck’s 2003 book, Lean Software Development, An Agile Toolkit (ISBN 0-321-15078-3). (This is a valuable contribution to software development thought and arguably had the most pivotal early impact in associating the concepts of Lean manufacturing with software development.)
  • The PMBOK Guide. (Though also a standard, it provides much of the detail typical of this middle group and allows considerable latitude in implementing company-specific details. Far more than not, agility models are consistent with the PMBOK Guide. The Fifth Edition of the PMBOK Guide is due out in 2012/2013; I’m betting agility models will have lots of impact on the new edition.)
  • The Scrum Alliance® summary of Scrum. (By the same token, I acknowledge the power of the statement, “That’s a standard! The Scrum Alliance is the ‘recognized standards organization’.”)

How best can we make use of these concepts? Here’s one vote for our teams (maybe our companies; in the ideal world, teams have a role) using guidance from standards and models to define everything we need in our methodology (and not a bit more!), applying the specificity that distinguishes a methodology from a standard.

Community Discussion: What’s a Good Direction for Change on This Blog?

So … as of Sep 2012, I’m new to this blogging thing, other than reading some now and then. It’s kinda like having a newspaper or a magazine column in that among my writings, I write what comes to mind and wait to see whether it builds a readership. It’s unlike those more familiar outlets in that I’m the editor and publisher, too. So … I don’t have a trusted colleague with ready advice and shared goals. (Well … my readers … you, of course …)

So … and this goes throughout the life of this blog: How can this blog better serve you? Prove more interesting? Are the posts “too long”? Are they “too [something else]”?

All feedback welcome … always …

If We Have a Problem, We’ll Find It. Early is Better. Still.

Let’s consider here a project with a significant problem we haven’t found. If the project’s product is to be a chair, maybe we have cut a leg too short. If the project’s product is software, maybe a function we designed with two parameters actually needs three (or reports answers in units different than a consuming function assumes it reports, maybe miles per hour instead of meters per second). This discussion is not about a misspelled word in a quick and informal email message, for example.

Do we get to choose between finding that problem or not? Well … (he he) … since our discussion is about significant problems, we’ll find it. And as one of the many corollaries to Murphy’s Law says, “If you have a problem, you’ll find that problem at the worst possible time.” (There’s something irresistibly pessimistic about that Law and all its Corollaries!)

You’ll find that problem just after you’ve spent lots of effort turning the short chair leg on a lathe, sanding it, and applying just the right finish (all of which was a waste of effort, of course, but you didn’t know that yet). It’ll happen just after you or your tester tell your executive that your code is 99 percent through testing, that the results look good, that you expect to complete ahead of schedule, and that the profits for this product are likely to start early. Or your customer will find the problem. (That would be embarrassing!) You’ll find that problem. It’ll hurt.

Let’s do something about it. And let’s focus now on creating software. Barry Boehm published once or twice (tongue-in-cheek) in a long and continuing software engineering career. In 1981, he wrote Software Engineering Economics, which contains the famous Constructive Cost Model (COCOMO), used for estimating the effort and duration of software development efforts. It also contains (page 40) a graph showing the increasing cost to fix a problem introduced during requirements. The chart makes the point that if the problem costs roughly $15 to fix during requirements, it costs something like $50 during design, $100 during construction, $200 during development test, $500 during acceptance test, and $1,800 during operation. Many other sources of similar age make similar arguments and cite other cost studies. Their numbers vary, but let’s agree: cost to fix grows rapidly with time.
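Expressed relative to the requirements phase, those figures work out like this. (A quick Python sketch using the dollar values as quoted; other studies report different absolute numbers, but the shape of the growth is the same.)

```python
# Cost-to-fix figures as quoted above, by the phase in which the defect is found.
cost_to_fix = {
    "requirements": 15,
    "design": 50,
    "construction": 100,
    "development test": 200,
    "acceptance test": 500,
    "operation": 1800,
}

# Express each phase's cost as a multiple of the requirements-phase cost.
base = cost_to_fix["requirements"]
for phase, cost in cost_to_fix.items():
    print(f"{phase}: {cost / base:.0f}x")

# A defect that slips from requirements all the way to operation
# costs roughly 120x as much to fix.
```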

One message from those studies is: projects need a strong sense of urgency about finding problems we must fix because of the rapid increases in cost.

Another message is that a key question in managing a project is, “How much is the optimal spending now (in dollars, hours, process, whatever) to detect more problems earlier and get the greatest benefit from the current lower price of fixes?”

Sound good? Well … it certainly sounded good enough to form an orthodoxy in our professions. (Perhaps, anyway. It felt like that!)

From the current perspective, is there a problem?

Well … from today’s perspective, many of us would feel the data collection seems to presume use of the waterfall development model. Fewer projects use a waterfall model today.

And … many in our industry now have experience working projects without spending lots of time in review of past work. Many of us feel that the right answer to the question above is spending no time specifically looking for past problems.

And we achieve great results.

And our stakeholders love the new relationships with us as service providers.

(Well … not every time. New project models aren’t “the silver bullet”. And there are lots of other reasons projects don’t do as well as we want; many still need work.)

I refer, of course, to the development models advising us to strive for market agility (Scrum, Lean, Kanban, and Extreme Programming are commonly-cited examples). I intend to write more about these in future posts. For now, I’ll say: Much of the market has already moved one (or more) of these directions.

And what about the advice derived from the statistics Dr. Boehm cited? I’ll say projects need the same sense of urgency about finding errors; we’ll find problems differently than we most commonly thought about then. Projects using today’s agile models probably expect to discover those errors during their frequent interaction with customers (“early feedback”). And we expect to fix problems during the first iteration in which the customer wants us to fix them. And that advice sounds great … Why would anyone oppose?

And what about Dr. Boehm’s career after writing the book I mentioned? Well, in 1988, he published an often-cited description (info) of his “Spiral Model” of iterative software development. Both his works cited here influenced thought leaders who contributed to other models later collectively called “agile”. He is now an AIAA Fellow, an ACM Fellow, an IEEE Fellow, an INCOSE Fellow, and a member of the National Academy of Engineering.

He is one thought giant on whose shoulders we stand today. May we all learn …

//Later edit: Fixed a broken link.

Who’s Your Competition?

Bo Pelini is the head football coach at Nebraska. That first hyperlink is a bio that starts with, “It is about the process.” The phrase expresses a mindset for Bo, judging by how often the public has heard that message from him. Sportscasters say he tells the players, “They are their own barometers.” Or, “They’re their own standard.” Or not to worry about the other team; if Nebraska players execute the plays right, the game results will take care of themselves. (That last one demonstrates considerable confidence in his coaching staff, eh?! He’s a leader to them, too!)

Is your mindset non-competitive? Do you compete with peers? Or with an idea?

My experience suggests non-competitive mindsets are the least common. We commonly teach aspiring members of our workforce that we and our organizations achieve more if we and they set goals and track progress toward achievement. The PMBOK® Guide devotes one of the largest of its five “Process Groups” to “Monitoring and Controlling”. At the start of each Sprint (or Iteration) in Scrum, teams set goals for the Sprint in a Sprint Backlog. At the end of each Sprint, teams focus for a time on what went well (so we’ll be more likely to do it again) and what didn’t go well (so we’ll be more likely to try something else). Maybe goals and competition with them are intrinsic parts of the human condition.

Some problems in our organizations relate to competition. On a sports team, two players at the same position “compete” for playing time and for the approval of team, coaches, and fans. There are parallels in the workplace. It’s too common that people react with jealousy, sabotage, and other behaviors we’d rather not see in our organizations. (Or … in ourselves!)

At the same time, some benefits to our organizations relate to goal-setting. It is one way people notice when there’s a better way to complete a process; they consider (and sometimes choose) to improve that process. It is one way people widen the breadth of their thinking and base decisions on more information. (They avoid “rearranging the deck chairs on the Titanic”, or what some mathematicians call “suboptimization” or “local optimization”.) Goal-setting is one way to introduce a feedback loop into our decision processes. And each point this paragraph makes about people applies also to our organizations.

Is Bo Pelini competitive? It’s just my opinion, but I’ll say the veins sticking out on his neck after something doesn’t go Nebraska’s way are powerful evidence! I’ll say … he’s competitive.

I conclude Bo Pelini is teaching Nebraska football players to compete against an idea: the “perfect” game in which each player executes every play as assigned and adapts as intended. Because Bo establishes that emphasis, players at each playing position focus on each other as teammates rather than as competitors; they work hard to develop the skills they contribute to the team. The players learn to think simultaneously at a position level (“Who am I supposed to block on this play?”) and at a team level (“This play is supposed to achieve [I don’t know–a running back running the ball through a particular gap; whatever]; if the player I’m to block is out of the play for any reason, I’ll find another way to contribute to the larger goal.”)

I suspect Bo chooses this coaching mindset in part because it helps his players focus on cooperation with teammates and skills they contribute to the team. He believes teams work better when team members cooperate.

I suspect Bo feels the team is stronger if all players focus on doing all they can to help the team win. And avoid focus on comparing themselves to teammates. He seems to want them to feel that if they focus on being the best player they can be, the playing time will take care of itself.

I see parallels for those of us serving less visible teams than Nebraska football. My examples come, of course, from software engineering projects.

  • A clear and accepted team goal (say, completing the software tool a customer needs to implement their planned new business process) is valuable in helping team members make supporting decisions and independently adapt to unexpected conditions. It encourages initiative and free thinking. (Some in the U.S. military use the terms “Command Intent” or “Commander’s Intent”.)
  • Teamwork may be achievable, but it is more difficult if a team “knows” that two teammates want a promotion (or raise, bonus, or recognition award) only one can get. Maybe those two people sabotage each other. Maybe those two people are exemplars of good behavior, but teammates suspect otherwise.

NCAA Division I athletics (and software engineering projects) are tough enough without optional obstacles! I like Bo’s coaching mindset.

Hmmm … college athletics are supposed to be education, right?! Maybe they’re working beyond the teams! Thanks, Bo!