The Errors and Hazards of Technical Debt

By Capers Jones, June 20, 2012 at 12:01 pm

Introduction to Technical Debt

The topic of technical debt, or the downstream costs of careless development, is one of the fastest-growing software measurements.  However, as most widely calculated, technical debt is alarmingly incomplete.  Pre-release quality costs are usually omitted from technical debt calculations.  Even worse, the very high costs of projects that are cancelled and never delivered show up as zero technical debt.

The cost of finding and fixing bugs is historically the largest single expense element of software.  Bug repairs start with requirements and continue through development.  After release, bug repairs and related customer support costs continue until the last user signs off.

Over the 25-year life expectancy of a large software system in the 10,000 function point size range, almost 50 cents out of every dollar will go to finding and fixing bugs.  Unfortunately, technical debt covers only a small fraction of the true cost of poor quality.

Technical Debt as a Software Quality Metric

The concept of technical debt is the newest of software quality metrics, first described by Ward Cunningham in a 1992 paper.  From that point on the concept went viral, and it is now one of the most common quality metrics in the United States and indeed the world.

The essential idea of technical debt is that mistakes and errors made during development, if they escape into the real world when the software is released, will accumulate downstream costs to rectify.

As a metaphor or general concept, the idea of technical debt is attractive and appealing.  For one thing, it makes software quality appear to take on some of the accumulated wisdom of financial operations, although the true financial understanding of the software industry is shockingly naive.

A major problem with technical debt is that it ignores pre-release defect repairs, which are the major cost driver of almost all software applications.  This omission alone is a serious deficiency.

Second, supporting released software is not the same as developing it.  You need customer support personnel who can handle questions and bug reports.  You also need maintenance programmers standing by to fix bugs when they are reported.

This means that even software with zero defects and very happy customers will accumulate post-release maintenance costs that are not accounted for by technical debt.  Let us assume you release a commercial software application of 1,000 function points or 50,000 lines of Java code.

Prior to release you have trained two customer support personnel who are under contract, and you have one maintenance programmer on your staff assigned to the new application.  Thus even with zero defects you will have post-release costs of perhaps $15,000 per month.

After several months you can reassign the maintenance programmer and cut back to one customer support person, but the fact remains that even zero-defect software has post-release costs.
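As a rough sketch, that fixed carrying cost can be worked out directly.  The staff counts come from the example above; the loaded cost per person is an illustrative assumption chosen only to match the $15,000-per-month figure, not a number from any actual project.

```python
# Fixed monthly carrying cost of a zero-defect release.
# Staff counts are from the example; the per-person loaded cost
# is an illustrative assumption, not a measured figure.
support_staff = 2                # trained customer support personnel under contract
maintenance_programmers = 1      # programmer standing by for bug reports
monthly_cost_per_person = 5_000  # assumed fully loaded monthly cost per person

monthly_carrying_cost = (support_staff + maintenance_programmers) * monthly_cost_per_person
print(monthly_carrying_cost)     # 15000 -- accrues even if no defect is ever reported
```

The point of the sketch is that this cost is a function of staffing, not of defects, which is why it never appears in a technical debt calculation.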

The third and most serious flaw with technical debt concerns the roughly 50% failure rate of large systems in the range of 10,000 function points, or 500,000 Java statements, in size.  If an application of this size is cancelled and never released, its technical debt will of course be zero.  But a company could lose $25,000,000 on a project terminated due to poor quality!

Yet another omission from the calculations for technical debt is the cost of litigation and punitive damages that might occur if disgruntled clients sue a vendor for poor quality.

Here is an example from an actual case.  The stockholders of a major software company sued company management, claiming that the poor quality of released software was lowering the stock price.  Technical debt does not include legal fees and litigation costs.

Clearly the defects themselves would accumulate technical debt, but awards and punitive damages based on litigation are not included in technical debt calculations.  In some cases litigation costs, fines, and awards to the plaintiff might be high enough to bankrupt a software company.

This kind of situation is not included in the normal calculations for technical debt, but it should be.  In other words, if technical debt is going to become as serious a concept as financial debt, then it needs to encompass every form of debt, not just post-release code changes.  Technical debt needs to include the high costs of cancelled projects and the even higher costs of losing major litigation over poor quality.

To illustrate that technical debt is only a partial measure of quality costs, table 1 compares technical debt with cost of quality (COQ).  As can be seen, technical debt only encompasses about 13% of the total costs of eliminating defects.

Note also that while technical debt is shown in table 1 as $86,141, right above this figure are the much higher costs of $428,625 for pre-release quality and defect repairs.  These pre-release costs are often excluded from technical debt!

Just below technical debt are costs of $138,833 for the fixed overhead of having support and maintenance people available.  These overhead costs accrue whether maintenance and support personnel are dealing with customer calls, fixing bugs, or just waiting for something to happen.  Even zero-defect software with zero technical debt will still carry overhead costs.  These overhead costs are not included in technical debt, but they are included in cost of quality (COQ).

Table 1: Technical debt compared to cost of quality (COQ)

Defect Potentials
  Code defect potential                      1,904
  Requirements & design defect potential     1,869
  Total defect potential                     3,773
  Per function point                          3.77
  Per KLOC                                   70.75

Defect Prevention        Efficiency   Remainder      Costs
  JAD                         23%        2,924      $37,154
  QFD                          0%        2,924           $0
  Prototype                   20%        2,340      $14,941
  Models                       0%        2,339           $0
  Subtotal                    38%        2,339      $52,095

Pre-Test Removal         Efficiency   Remainder      Costs
  Desk check                  25%        1,755      $19,764
  Static analysis             55%          790      $20,391
  Inspections                  0%          790           $0
  Subtotal                    66%          790      $40,155

Test Removal             Efficiency   Remainder      Costs
  Unit                        30%          553      $35,249
  Function                    33%          370      $57,717
  Regression                  12%          326      $52,794
  Component                   30%          228      $65,744
  Performance                 10%          205      $32,569
  System                      34%          135      $69,523
  Acceptance                  15%          115      $22,808
  Subtotal                    85%          115     $336,405

Delivered Defects
  Defects delivered                          115
  High severity                               22
  Security flaws                              10
  High severity %                         18.94%
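The 13% figure cited earlier, and the defect removal efficiency implied by the table, can both be checked with a short sketch.  All the dollar amounts below are the ones quoted in the text ($428,625 pre-release, $86,141 technical debt, $138,833 overhead); nothing is independently derived.

```python
# Cost of quality (COQ) components quoted in the article, in dollars
pre_release_repairs = 428_625   # defect prevention, pre-test, and test removal
technical_debt = 86_141         # post-release defect repairs
fixed_overhead = 138_833        # standby maintenance and support staff

total_coq = pre_release_repairs + technical_debt + fixed_overhead
print(f"{technical_debt / total_coq:.0%}")  # 13% -- technical debt's share of COQ

# Cumulative defect removal efficiency (DRE) implied by the table:
# fraction of the total defect potential removed before release
defect_potential = 3_773
delivered_defects = 115
print(f"{1 - delivered_defects / defect_potential:.0%}")  # 97% removed before release
```

The arithmetic makes the article's point concrete: technical debt, as commonly measured, is the smallest of the three COQ components.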


Even worse, if a software application is cancelled before release due to poor quality, it will have zero technical debt but a huge cost of quality.

An “average” project of 10,000 function points will cost about $20,000,000 to develop and about $5,000,000 to maintain for 5 years. About $3,000,000 of the maintenance costs will be technical debt. But cancelled projects of the same size are usually late and over budget at the point of termination, so they might cost $26,000,000 that is totally wasted as a result of poor quality. Yet technical debt would be zero, since the application was never released.
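A minimal sketch of that comparison, using the article's round numbers (both scenarios are hypothetical 10,000-function-point projects):

```python
# Delivered project: technical debt captures only part of the quality cost
delivered_dev_cost = 20_000_000
delivered_maint_cost = 5_000_000     # 5 years of maintenance
delivered_tech_debt = 3_000_000      # portion of maintenance counted as technical debt

# Cancelled project: total waste, yet zero technical debt by convention
cancelled_cost = 26_000_000          # late and over budget at termination
cancelled_tech_debt = 0              # never released, so nothing is "owed"

print(delivered_tech_debt / (delivered_dev_cost + delivered_maint_cost))  # 0.12
print(cancelled_cost - cancelled_tech_debt)  # 26000000 invisible to technical debt
```

The larger loss is the one the metric cannot see, which is the core of the article's argument.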

The bottom line is that the use of technical debt is an embarrassing revelation that the software industry understands neither basic economics nor quality economics. Cost of quality (COQ) is a better tool for studying quality economics than technical debt.

Summary & Conclusions

Technical debt has become a very popular topic in software quality circles. However, as commonly calculated, technical debt covers only about 13% of the true costs of poor quality. To evolve from a novelty into an effective metric, technical debt needs to encompass all quality costs, not just post-release code changes. In particular, technical debt needs to recognize the high costs of projects terminated prior to release because of poor quality.

For more information on how this data was calculated, visit

Capers Jones



  1. I can’t agree with what you wrote. First of all, PRE-RELEASE bugs are most certainly included in the definition of technical debt.

    Second, even going by your numbers, $3,000,000 of technical debt out of the $5,000,000 of maintenance is 60%, not 13%. Likewise, $86,141 out of $138,833 is 62%. If you accept my position that bugs found after the team says they’re “done” still count toward technical debt, these numbers go up.

    Third, technical debt “charges” an interest rate paid every time you touch the code – this includes maintenance. This means that a portion of the cost of post-production change requests are also part of your technical debt.

    The percentage goes further up.

    In short, I’d say that technical debt is insufficient only for your insufficient definition of technical debt. What you call Cost of Quality is just a newer buzzword to describe the same thing. Tomato, tomahto.

    Your conclusions are right only insofar as they describe what technical debt should be. Thing is, it already is just that.

     — Reply
    • Ashaf,

      Thanks for your comments.

      Every one of my clients measures technical debt in a different way – like the blind men and the elephant.

      Cost of quality (COQ) is not newer than technical debt. It was first published in 1956 and is a standard quality metric for many companies. It also has a substantial literature.

      In fact, the way you define technical debt is pretty much the same way cost of quality defines it.

      My own quality measures start with requirements defects and include design defects, code defects, documentation defects, bad fixes or secondary defects, and also defects in test cases which sometimes outnumber bugs in the code.

      In addition I collect data on duplicate defects (sometimes more than 10,000 reports of the same bug come in), and also invalid defects, which are not actually caused by the software itself but still accrue costs for processing.

      I also measure things like test case design, test case construction, test case execution, defect logging and routing, defect repairs, defect repair testing, and defect repair integration.

      For pre-release inspections and static analysis I measure preparation, execution, defect repairs, defect repair testing, and defect repair integration.

      I have data on around 13,000 projects.

      Among the companies I work with post-release defects are far more commonly classed as technical debt than pre-release defects.

      Until there is an international standard that defines technical debt, everyone will use it differently. The way my blog discussed it is very common. The way your comments define it seems rare, but sensible.

      Capers Jones

       — Reply
  2. “The concept of technical debt is the newest of software quality metrics, having first been described by Ward Cunningham in a 1992 paper.”

    Really? There hasn’t been anything new in quality metrics since 1992?

     — Reply
    • Bob,

      Other metrics used to normalize quality include cost of quality from 1956; lines of code from the late 1950s; function points from 1975 inside IBM and 1978 outside of IBM; defect removal efficiency (DRE) from IBM circa 1973; six-sigma from Motorola; and more recently story points, use case points, RICE objects, and a dozen or so function point variations.

      I regard defect removal efficiency as the most important quality metric because it has the largest impact on both delivered defects and customer satisfaction.

      The U.S. average for defect removal efficiency is only about 85%. The best in class quality groups are around 99%.

      There are also subjective measures such as customer satisfaction and semi-subjective measures such as the 5 levels of the capability maturity model integrated (CMMI) by the Software Engineering Institute.

      For subsets of quality there are a dozen or so forms of test case coverage and there is also measurement of cyclomatic and essential complexity.

      The older Halstead measures are not used much today.

      Capers Jones

       — Reply
  3. Technical Debt is NOT a software quality metric. TD is a concept that encapsulates the many types of deficiencies that occur during software development. Trying to calculate a dollar amount for TD requires more effort than it’s worth. Put the effort into building better software instead.

     — Reply
    • Vin,

      Technical debt is an analogue of financial debt. If you don’t use dollars, what does the word “debt” mean?

      The idea is that early mistakes are costly later on, and they can be quantified in dollars.

      Second, why do you think that high quality software costs more than poor quality software? Data from thousands of projects prove that high quality software is cheaper.

      Poor quality software usually runs late and starts losing money once testing begins: there are so many bugs that the test interval stretches to two or three times longer than planned.

      That is also one of the reasons that projects are cancelled: they have so many bugs and are so late the ROI turns negative.

      Capers Jones

       — Reply
  4. Unlike the previous retorts, I fully agree. I have yet to work for a company that understood the cost of a defect. Another area that might be missing from your piece (you do cover pre-release defect removal) is a decreased ability to respond to customer needs, or the difficulty of onboarding new team members. And perhaps to a lesser extent, lost capacity due to diminished performance (or the hardware cost to compensate). Great article, thanks.

     — Reply
  5. Curtis,

    Thanks for the kind words.

    Here are the costs for four different systems all of about 10,000 function points.

    High quality with > 97% defect removal efficiency:


    Average quality with > 90% defect removal efficiency:


    Poor quality with < 85% defect removal efficiency:


    Canceled project with unknown defect removal efficiency:


    I'm curious to see how the group would calculate technical debt for the canceled project which was late and over budget when terminated, but never delivered.

    Another question: what would the group use as the starting point for calculating technical debt? The average project, the best project, or some hypothetical project with zero defects?

    Best Regards,
    Capers Jones

     — Reply
  6. Technical Debt is a metaphor developed by Ward Cunningham to help us think about this problem. According to the metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. Thanks for sharing

     — Reply
    • As a metaphor technical debt is interesting and useful.

      However as calculated by some of my clients, it is not complete.

      What is the technical debt of a project that is canceled due to poor quality and never released?

      What about consequential damages? One of my clients had to restate prior year earnings and lost millions of dollars due to a bug in a financial package. What about damages to customers from poor quality?

      Technical debt has too narrow a focus and does not actually address many financial losses from poor quality.

      Capers Jones

       — Reply