Management performance – Do the right thing well

The Management Performance Matrix is an excellent tool for keeping your organization focused on doing the right thing and doing it well

The Performance Matrix is a simple but valuable tool, introduced by Thomas J. DeLong in a 2011 Harvard Business Review article. The horizontal dimension of the matrix (see the graphic above) measures the “rightness” of what you are doing: are you doing the “right” thing or the “wrong” thing? The vertical dimension measures how well you are doing it.

This performance matrix can be applied to nearly everything that is going on in an organization. I’ve found it useful to step back from the swirl from time to time and think about what my team is doing in this context. Software architecture, software development process, security, people management, vendor management, and product management are some of the areas where this can be applied.

Beware of the Danger Zone

The top left quadrant is the “danger zone” because everything seems to be going well and you may not realize you are in trouble. You are executing to a plan, implementing decisions that have been made, and your team is productive and happy. But you might be on the wrong path. It’s not uncommon to have started out doing the right thing, only for something in the environment to change and put you on the wrong path. It is important to recognize this as soon as possible and get moving over to the right.

One example is choosing to use a language or framework that is the latest “shiny object.” Often it doesn’t live up to its hype and support for it eventually fades away. It might have seemed like a good idea at the time, but didn’t turn out to be a good bet. At some point, you may need to move away from it.

Doing the right thing

When you come to the conclusion that you are doing the wrong thing, it is often difficult to jump directly to the top right (doing the right thing well). More often, you start off doing the right thing not so well (or “poorly”) and then work to get better at it. One example is switching your organization to a new software development process it is not familiar with. You can expect it to take some time for the team to become proficient and move you from the bottom of the matrix to the top, where you want to be. Note also that there isn’t really a binary “poorly / well” or “right / wrong”; each axis is a continuum from “worse” to “better.”

Avoid complacency

Arriving at the promised land in the top right quadrant (doing the right thing well) doesn’t mean you are done. There is danger in complacency. The environment can change. There may be better solutions emerging. Periodic management performance matrix checkups in key areas can be highly beneficial to avoid the complacency trap.

Software complexity – Is it worth measuring?

Software is among mankind’s most complex creations. How do we measure software complexity and is there value in doing so?

When I think of the most complex structures humanity has created, the arts (especially music) and … software come to mind. From an economic perspective, we consider software complexity a bad thing: the more complex the software, the more time and expense it takes to build and maintain. But sometimes software needs to be complex in order to solve complex problems. There can also be an aesthetic beauty to software that only developers seem equipped to appreciate. I recall once looking at code that controlled a spacecraft and being blown away not only by its complexity but also by how very well written it was.

Music is different: its complexity seems to affect people in different ways. The high level of complexity in a Mozart symphony seems to contribute to its immortality; on the other hand, simple popular tunes enjoy a wide fanbase.

Complexity of Music


In his 1933 book Aesthetic Measure [1], the preeminent American mathematician George David Birkhoff proposed a mathematical theory of aesthetics. In the course of writing the book, he spent a year studying the art, music, and poetry of various cultures around the world. He developed a formula that measures the aesthetic quality of an art object (e.g., a work of music) as the ratio between its order and its complexity. Since then, researchers have built on his work to develop other ways of analyzing the complexity of music. Mandelbrot’s protégé Richard Voss, together with John Clarke, applied fractals to the mathematical analysis of music [2]. April Pease and her colleagues extended this work by searching for the presence of crucial musical events based on an analysis of volume, and using this as a measure of complexity [3]. I find it interesting that in music it is the performance whose complexity is measured, not the static sheet music (or its electronic equivalent). Music played by a computer reading sheet music has been found to be less complex than a performance by accomplished musicians!
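
For concreteness, Birkhoff’s measure is usually summarized as the simple ratio below, where M is the aesthetic measure, O the order (from properties such as symmetry and repetition), and C the complexity; this is the commonly cited summary form, not a quotation from the book:

```latex
% Birkhoff's aesthetic measure: the ratio of order to complexity
M = \frac{O}{C}
```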

Software Complexity

The software profession has struggled for decades with how to measure software complexity. Thomas McCabe came up with the idea of using cyclomatic complexity to measure the number of logical paths through code [4]. But this has been shown to be no better than simply counting source lines of code (SLOC). Two methods currently in use are a set of six metrics proposed by Shyam R. Chidamber and C.F. Kemerer specifically designed for object-oriented code [5], and a different set of six metrics proposed by Maurice H. Halstead [6].
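
To make the idea concrete, here is a minimal sketch (my own illustration, not McCabe’s original control-flow-graph formulation) that approximates cyclomatic complexity by counting branch points with Python’s standard ast module:

```python
import ast

# Node types that introduce a decision point. This approximates McCabe's
# count of linearly independent paths as 1 + (number of branch points).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Approximate the cyclomatic complexity of a Python snippet."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

example = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(example))  # 3: one straight path plus two branches
```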

Comparison of Software Complexity Metrics

Chidamber and Kemerer Metrics

Note that these metrics are defined per class, so you would sum them over all classes in the program.

  • WMC – weighted methods per class: the sum of the complexities of the class’s methods. Chidamber and Kemerer used a complexity of 1 for each method, so in practice this is just the number of methods in the class.
  • CBO – coupling between object classes: the number of other classes to which this class is coupled (using or being used).
  • RFC – response for a class: the number of methods called by each class method, summed together.
  • NOC – number of children: the number of classes that inherit from this class or from a descendant of it.
  • DIT – depth of inheritance tree: the maximum depth of the inheritance tree for this class.
  • LCOM – lack of cohesion of methods: measures the intersection of the attributes used in common by the class’s methods.

Halstead Metrics

  • Program vocabulary: n = n1 + n2, where n1 is the number of distinct operators and n2 is the number of distinct operands.
  • Program length: N = N1 + N2, where N1 is the total number of operators and N2 is the total number of operands.
  • Estimated program length: N̂ = n1 log2 n1 + n2 log2 n2.
  • Volume: V = N × log2 n.
  • Difficulty: D = (n1 / 2) × (N2 / n2).
  • Effort: E = D × V.
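
To make the Halstead definitions concrete, here is a rough sketch that computes them for Python source using the standard tokenize module; the operator/operand split below is deliberately oversimplified, and a real tool would draw that line more carefully:

```python
import io
import math
import tokenize

# A crude operator/operand split: punctuation tokens and a few keywords
# count as operators; names, numbers, and strings count as operands.
KEYWORD_OPERATORS = {"def", "return", "if", "elif", "else", "for", "while"}

def halstead_metrics(source: str) -> dict:
    """Compute basic Halstead metrics for Python source (a rough sketch)."""
    operators, operands = [], []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP or tok.string in KEYWORD_OPERATORS:
            operators.append(tok.string)
        elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
            operands.append(tok.string)
    n1, n2 = len(set(operators)), len(set(operands))  # distinct counts
    N1, N2 = len(operators), len(operands)            # total counts
    n, N = n1 + n2, N1 + N2
    volume = N * math.log2(n)                 # V = N x log2(n)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) x (N2/n2)
    return {
        "vocabulary": n,
        "length": N,
        "estimated_length": n1 * math.log2(n1) + n2 * math.log2(n2),
        "volume": volume,
        "difficulty": difficulty,
        "effort": difficulty * volume,        # E = D x V
    }

print(halstead_metrics("def double(x):\n    return x + x\n"))
```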

Measuring Software Complexity


So why should we care about measuring software complexity? Here are some claims, many of them made by companies selling complexity-measurement products or by consultants who will help you figure out how to use them.

Better estimates of software maintenance effort

I suppose that if you had enough empirical data to somehow relate complexity to maintenance cost, this might be useful. There certainly have been a lot of studies on this. The problem is that other significant factors also affect software maintenance costs, such as:

  • the number and type of new or changed user requirements concerning functional enhancements
  • the amount of adaptation that needs to be done to support a changing environment (e.g., Database, Operating System)
  • the amount of preventative maintenance that needs to be done to improve reliability or prevent future problems

Monitoring complexity to keep it low, saving cost and reducing risk

So, do we have the Software Development Manager tell developers that their LCOM or DIT is too high and they need to fix it? Really? I suppose complexity could be used as an indicator to focus code reviews (i.e., spend more time reviewing code with higher complexity), but I don’t see much value in doing this, especially if your team is already doing effective code reviews with good coverage.

Using complexity as a criterion for deciding to refactor or rewrite software

The suggestion here is that a software development manager would monitor complexity across the codebase, and when a module rises above a certain threshold, a decision to refactor or rewrite would be considered. My experience is that the development team already knows which sections of code are the best candidates for refactoring, based on the effort required to maintain them. I’d trust that measure much more than a complexity metric. Another consideration is how often the complex code actually needs to be touched. I’ve worked in organizations where we had a large, complex legacy codebase that we didn’t touch at all; we just wrapped it with a façade or adapter, as sketched below.
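
As a sketch of that wrapping approach (all names here are hypothetical, purely to illustrate the shape of the technique): confine knowledge of the legacy interface to one small façade, and have new code depend only on the façade:

```python
# Hypothetical stand-in for a tangled legacy function we chose not to
# refactor; it expects orders encoded as "T:region:amount" (taxed) or
# "N:region:amount" (untaxed) strings.
def _legacy_order_total(order_codes):
    total = 0.0
    for code in order_codes:
        kind = code.split(":", 1)[0]
        amount = float(code.rsplit(":", 1)[-1])
        total += amount * (1.2 if kind == "T" else 1.0)
    return total

class OrderFacade:
    """Narrow, stable interface; new code depends only on this class."""

    def order_total(self, amounts: list[float], taxed: bool = False) -> float:
        # Knowledge of the legacy string format lives in exactly one place.
        prefix = "T" if taxed else "N"
        codes = [f"{prefix}:0:{a}" for a in amounts]
        return _legacy_order_total(codes)

print(OrderFacade().order_total([10.0, 5.0], taxed=True))  # 18.0
```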

My view on software complexity is that there is little if any value in measuring it outside of academia. There is a hospital sketch in Monty Python’s The Meaning of Life in which doctors call for the operating room to be filled with the most expensive equipment in order to impress the administrators should they drop in for a visit. John Cleese specifically asks for staff to bring in “the machine that goes ‘ping’.” A software complexity dashboard would seem to have equivalent utility.

[1] George D. Birkhoff, Aesthetic Measure, Harvard University Press, 1933.
[2] R.F. Voss and J. Clarke, “1/f Noise in Music and Speech,” Nature 258 (1975).
[3] April Pease, Korosh Mahmoodi, and Bruce J. West, “Complexity Measures of Music,” Chaos, Solitons and Fractals 108 (2018), 82–86.
[4] Thomas J. McCabe, “A Complexity Measure,” IEEE Transactions on Software Engineering SE-2(4), December 1976, 308–320.
[5] Shyam R. Chidamber and Chris F. Kemerer, “A Metrics Suite for Object Oriented Design,” IEEE Transactions on Software Engineering 20(6), June 1994, 476–493.
[6] Maurice H. Halstead, Elements of Software Science, Elsevier North-Holland, 1977, ISBN 0-444-00205-7.

Featured image of Barcelona Cathedral. Copyright © 2019 Steve Kowalski.

Software evolution – Software is never done … it is abandoned!

“Software is never done … it is abandoned.” Software evolution is something to be understood (Lehman’s Laws) and embraced

I’m not sure who should get credit for the aphorism “Software is never done … it is abandoned,” but it seems a corollary to Lehman’s laws of software evolution. Meir “Manny” Lehman worked in IBM’s research division from 1964 to 1972. His studies of the software development lifecycle provided the foundation for his early recognition of the software evolution phenomenon. After IBM, he became Professor and Head of the Computing Department at Imperial College London and later Professor at Middlesex University. I’ll discuss the three of his eight laws that resonate most with me, and their implications.

Functional content must grow

The functional content of a software system must be continually increased to maintain user satisfaction over its lifetime

This is a good thing!  People like using your software!  They will find ways to use it that you hadn’t thought of. They will have wonderful ideas on how it can be more efficient and more comprehensive.  But if you don’t keep releasing new features and enhancements to keep up with their requests, they may become dissatisfied and move on to something else. 

Complexity must be managed

As a software system evolves, its complexity increases unless work is done to maintain or reduce it

This is in reference to increasing software entropy.  As new functionality is added, the software will eventually become more complex and more disorganized as it departs from its original design.  At some point, it may well be time for a redesign.  That in no way means that the original design was a failure, just that the system has evolved, which is a good thing!  This concept of software entropy is orthogonal to technical debt. Taking on technical debt may lower complexity when easier short-term solutions are selected over better longer-term solutions with higher complexity and longer implementation times.

Quality may appear to be declining

The quality of a software system will appear to be declining unless it is rigorously maintained and adapted to operational environment changes

The environment that our software operates in is likely to be ever-changing. There will be new platforms, new operating systems, new devices, new frameworks, new protocols, new databases, new APIs, and new resource constraints (as well as old constraints being lifted). “Adapt or perish, now as ever, is nature’s inexorable imperative.” – H.G. Wells

Software evolution is not the enemy; it is the consequence of a successful system.