
The Responsibility for Professional Skepticism, Part 1 of 3
Professional Skepticism, present or absent, has made the difference between the success or failure of countless projects, products, meetings, events…efforts of all forms.
- The antithesis of Professional Skepticism is, too frequently, “Political Complicity.”
In the Plan On a Page (POP) Approach, “Professional Skepticism” is one of the most significant responsibilities of the Effort’s Leadership, and essential to assuring every effort’s success. So strong is the need for Professional Skepticism that it is designed into POP’s basic system in ways specifically engineered to avoid, or at least make visible, Political Complicity.
Four aspects of Professional Skepticism, and its converse, Political Complicity, merit every professional’s special attention:
- Estimates,
- “Best Practices,”
- Extent of Shared Understanding, and
- Degree of Commitment.
1. Estimates
“Why do the actuals vary from the estimates?” is, when critically examined, one of the most oft-repeated yet lowest-value questions in many efforts (a question that is, in a light-hearted yet quite serious way, “banned” in the POP Approach).
Another way to express this all-too-common question is, “Why does the fact vary from the fiction?”
All “actuals” are (if accurately recorded) “facts.”
Which means…
All Estimates Are…
It is helpful to recognize and acknowledge that all estimates are—to varying degrees—wrong. Estimates are “fiction.”
Understanding that estimates are wrong (probably doubly so when Political Complicity is a key factor) leads to insights regarding any effort. For example, far more useful questions than the one above include: “Why is the estimate invalid? What’s missing? What’s different? What assumptions were invalid?”
Then it’s easier to address the actionable question: “How can we adapt the effort to the facts?”
- Let me hasten to say, this question—though eminently rational—can be politically uncomfortable.
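The shift from “why do the facts vary from the fiction?” to “which assumptions were invalid, and how do we adapt?” can be sketched in a few lines of code. This is only an illustration, not part of the POP Approach itself; the task names, day counts, and the 25% variance threshold are all hypothetical.

```python
# Illustrative only: hypothetical tasks, each pairing an estimate with its
# recorded actual and the assumption the estimate rested on.
tasks = [
    {"task": "data migration", "estimate_days": 10, "actual_days": 18,
     "assumption": "legacy data is clean"},
    {"task": "user training", "estimate_days": 5, "actual_days": 6,
     "assumption": "one session per team suffices"},
]

# Instead of asking why the "fact" varies from the "fiction", flag the
# assumption behind any estimate that proved badly wrong (here, an
# arbitrary threshold of more than 25% over the estimate).
flagged = [
    t for t in tasks
    if (t["actual_days"] - t["estimate_days"]) > 0.25 * t["estimate_days"]
]

for t in flagged:
    print(f"Revisit assumption for {t['task']!r}: {t['assumption']}")
```

The point of the sketch: the output is not a variance report to explain away, but a list of assumptions to re-examine, which leads directly to the actionable question of adapting the effort to the facts.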
2. “Best Practices…” Rarely Are
Assertions to use so-called “Best Practices” merit heightened skepticism. Skepticism not for what is expressed…skepticism regarding what is not expressed.
Best Practices tend (by omission or commission) to leave out the key contextual details (and other dependencies) required to make the Practice relevant and effective.
An Example
One (of many) illustrations. Not too long ago, I advised a government agency client regarding a critical, high-visibility IT project. The project was based on use of Agile Best Practices (e.g., SCRUM). Approximately a year earlier—after a competitive, fixed-price bidding process—the organization hired a consulting firm (with the lowest ‘conforming’ bid) to construct the desired system.
When I met with the client, the schedule was ~60% exhausted. The budget was ~70% consumed. The affected business units were in near-revolt because of the amount of time their SMEs (Subject Matter Experts) were required to spend each week with the contractors…yet the project deliverables were, well, (surprise!) desultory, at best.
The POP Approach emphasizes avoiding preventable problems. This client’s problems were absolutely avoidable…yet failure was assured before the effort started…before contracting, before bidding. Failure was assured from the effort’s first moment…because a “Best Practice” was selected without regard for its context & dependencies.
Why? While Agile and SCRUM are well-proven practices, they are based on constant adaptation (e.g., “grooming” of business priorities) and a high degree of user (e.g., SME) involvement (e.g., clarification & elaboration of ‘User Stories,’ as well as deliverable evaluation and acceptance).
Which means…the flexible nature required for successful Agile/SCRUM efforts is in direct conflict with the inflexible nature of a “fixed-price” contract.
Is this a case where the “Best Practice” Agile/SCRUM failed? Or a case where a [so-called] Best Practice was chosen without fully understanding its context and dependencies…the key elements that lead to a “Best Practice’s” success?
You Decide
Have you ever seen a failed effort where unfounded (probably Politically Complicit) estimates were set before understanding expectations or implications?
Or, where a [so-called] “Best Practice” was adopted/stipulated without skepticism? Without understanding the implications of the Practice’s context & dependencies?
If so, how did those efforts work out?
In the Next Post
In the next post…the Responsibility for Professional Skepticism, Part 2 of 3…what is the [real] extent of shared understanding, and what is the [actual] degree of commitment for the effort?
Get the Book
Get the book and learn more about developing and applying Professional Skepticism and avoiding preventable problems.