Archivi categoria: software engineering

The Chimera of Software Quality

by Les Hatton, in IEEE Computer, August 2007.

“Nobody knows how to produce a fault-free program. Nobody even knows how to prove a program fault-free, even supposing one were magically provided. I teach my students that in their whole careers they are unlikely ever to produce a fault-free program, and if they did, they would never know it, they could never prove it, and they could not systematically repeat it. It provides a usefully humble starting point.

[…] I’ve analysed enough failed systems in my time to know that there are two classic symptoms of a system on its way to the fairies. First, no independent audit is allowed and second, talking heads tell you everything is fine when the ultimate users tell you the opposite.

[…] The Linux kernel is now arguably the most reliable complex software application the human race has yet produced, with a mean time between failures reported in tens and, in some cases, hundreds of years. Poetically, the development environment of Linux, which leverages the contributions of thousands of Web volunteers who give their spare time for the public good, breaks just about every rule which software process experts hold dear.”

Parnas on abstractions

Communications of the ACM, June 2007, p.7

“Use the Simplest Model, But Not Too Simple

Jeff Kramer’s view, expressed in his article “Is Abstraction the Key to Computing?” (Apr. 2007), that abstraction is indeed a key concept in computing, especially in software design, is correct but far from new. It’s a lesson I learned from the late E.W. Dijkstra 40 years ago and underlies every software development method proposed since then. Dijkstra said many useful things. Among them is the most useful definition of “abstraction” I know: “An abstraction is one thing that represents several real things equally well.” This positive definition is more useful than the more typical ones Kramer quoted that emphasize the elimination of information. Dijkstra’s clarifies what must remain.

Dijkstra’s definition allows us to distinguish between an abstraction and a lie. When a model makes assumptions that are not true of a real object (such as infinite memory), these assumptions are often defended by saying “It is an abstraction.” Using Dijkstra’s definition, such models are not abstractions. Rather than represent several things equally well, they represent nothing at all. Because they embody unrealistic assumptions, one cannot trust the conclusions that might be drawn from them.

Models that are not abstractions in Dijkstra’s sense may provide insight or understanding but can also mislead. Programs based on them may not work, and theories based on them may yield results not relevant in the real world.

Dijkstra’s work showed that two distinct skills are related to abstractions:

  • Being able to work with a given abstraction; and
  • Being able to develop a useful abstraction.

Mathematics courses teach us how to work with abstractions but not usually how to develop appropriate ones. Many researchers I know can analyze formal models, deriving properties and proving theorems, but do not seem to notice (or care) when a model is based on an impractical design or makes assumptions that are not true in reality. Both skills are important, but teaching the second is much more difficult and is the essence of design.

Many computer science courses fail to teach students how to develop abstractions because they use models that are not abstractions but lies. Students must be taught the implications of an idea often attributed to Albert Einstein: “Everything should be as simple as possible but not simpler.” Finding the simplest model that is not a lie is the key to better software design.”

David Lorge Parnas
Limerick, Ireland
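Dijkstra's definition, as Parnas recounts it, can be made concrete with a small sketch (mine, not from the letter): one interface that represents several concrete implementations equally well. Client code written against the abstraction alone works unchanged with any of the "real things" it represents — which is exactly what a model built on false assumptions cannot offer.

```python
# Illustrative sketch of Dijkstra's definition: an abstraction is
# "one thing that represents several real things equally well".
# Here the abstraction is an informal stack interface (push/pop);
# the "real things" are two different concrete implementations.
from collections import deque


class ListStack:
    """One real thing: a stack backed by a Python list."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()


class DequeStack:
    """Another real thing: a stack backed by a deque."""
    def __init__(self):
        self._items = deque()

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()


def reverse(sequence, stack):
    """Client code written purely against the abstraction:
    push every element, then pop them all back out."""
    for x in sequence:
        stack.push(x)
    return [stack.pop() for _ in sequence]


# The abstraction represents both implementations equally well --
# the client cannot tell them apart.
assert reverse([1, 2, 3], ListStack()) == [3, 2, 1]
assert reverse([1, 2, 3], DequeStack()) == [3, 2, 1]
```

A model assuming, say, unbounded capacity with no possibility of failure would not be an abstraction in this sense: it would represent neither implementation faithfully, only an idealization that exists nowhere.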

Theoretical Reflections on Agile Development Methodologies

Sridhar Nerur and VenuGopal Balijepally, “Theoretical Reflections on Agile Development Methodologies”, Communications of the ACM, March 2007. With an excellent list of references.

“The progression of thought in software development parallels the maturation of design ideas in architecture and strategic management. The traditional mechanistic worldview is today being challenged by a newer agile perspective that accords primacy to uniqueness, ambiguity, complexity, and change, as opposed to prediction, verifiability, and control. The goal of optimization is being replaced by flexibility and responsiveness.

The tenets of agile methods depart from the traditional orthodoxy of software development. This shift in philosophy is not unusual, as similar patterns of intellectual evolution have emerged in other disciplines. A look at architecture and strategic management reveals that the progression of ideas in them is remarkably similar to conceptual pattern shifts in software design.”

Glass – Standish Chaos Report

Communications of the ACM, October 2006

Robert L. Glass The Standish Report: Does It Really Describe a Software Crisis?

Most academic papers and guru reports cite the same source for their crisis concern: a study published by the Standish Group more than a decade ago. That study reported huge failure rates, 70% or more, and minuscule success rates, and it condemned software practice by the very title of its published version, The Chaos Report.
So the Standish Chaos Report could be considered fundamental to most claims of crisis.

Several researchers, interested in pursuing the origins of this key data, have contacted Standish and asked for a description of their research process, a summary of their latest findings, and in general a scholarly discussion of the validity of the findings. They raise those issues because most research studies conducted by academic and industry researchers arrive at data largely inconsistent with the Standish findings.

Let me say that again. Objective research study findings do not, in general, support those Standish conclusions.

Standish, please tell us whether the data we have all been quoting for more than a decade really means what some have been saying it means. It is too important a topic to have such a high degree of uncertainty associated with it.