Today I read a succinct and instructive article by Professor Robert L. Glass, published in Communications of the ACM, Volume 51, Number 6 (2008). Professor Glass is a widely respected expert in the field of software engineering, and his prose is always eloquent and a pleasure to read. The article is Software Design and the Monkey’s Brain, and it attempts to capture the nature of software design. By the way, if you enjoy the article, you may also like one of Professor Glass’s books, Software Creativity 2.0, in which he expands on the role of creativity in software engineering and computer programming in general. Essentially, Software Design and the Monkey’s Brain deals with two intertwined observations:
- Software Design is a sophisticated trial and error (iterative) activity.
- This iterative process occurs mostly inside the mind (at the speed of thought).
In the following, I’ll present my own reflections on this topic. Regarding the first observation, I think that trial and error (I’ve also seen the expression “trial by error”) is the underlying problem-solving approach of every software engineering methodology, like it or not. Alas, there is no algorithmic, perfectly formalized framework for creating software. In his classic book Object-Oriented Analysis and Design, Grady Booch says:
The amateur software engineer is always in search of magic, some sensational method or tool whose application promises to render software development trivial. It is the mark of the professional software engineer to know that no such panacea exists.
I totally agree. Nevertheless, some people dislike this reality. A few (theorist) teachers of mine refused to call Software Engineering “engineering” at all. These people cannot live without “magic”. Indeed, there are significant conceptual differences between software practitioners and some (stubborn) computer scientists regarding the nature of Software Engineering. These scientists are not very fond of the trial and error approach. In his article, Professor Glass describes past investigations which found that designing software is an iterative, trial and error process. He also reflects on the differences in professional perceptions:
This may not have been a terribly acceptable discovery to computer scientists who presumably had hoped for a more algorithmic or prescriptive approach to design, but to software designers in the trenches of practice, it rang a clear and credible bell.
I like to think of software construction as a synthesis process. Specifically, there are two general factors in tension: human factors and artificial factors. The former are mostly informal, the latter mostly formal. From this conflict, software emerges. Remember that a synthesis resolves the conflict between the parts by reconciling their commonalities in order to form something new. It’s the task of the software designer to reconcile the best of both worlds, evaluating the trade-offs between human and artificial factors.
As a problem-solving activity, software construction is solution-oriented: the ultimate goal of software is to provide a solution to some specific problem. That solution is evaluated by means of a model of the solution domain. But before arriving at such a solution domain model, we have to form the problem domain model, which captures the aspects of reality that are relevant to the problem. Later, designers look for a solution, as noted, by trial and error. Additionally, the resources available to the designer, including knowledge, are limited. More often than not, empiricism and experience lead the search for a solution. This has an important consequence: software construction is a non-optimal process; we rarely arrive at the best solution (and what is the best solution, anyway?).
Knowledge acquisition is another interesting process. During the entire development cycle, designers work with incomplete knowledge. Gradually, they learn the concepts pertinent to the problem domain model. And while we are building the problem domain model, it often happens that the client’s perspective of the problem changes, and we have to adjust to the new requirements. Interestingly enough, knowledge acquisition is a nonlinear process. Sometimes a new piece of information invalidates all our designs, and we must be prepared to start over.
According to Professor Glass’ article, the software design activity comprises the following stages (bold text in the list below is taken literally from Professor Glass’ article):
- Develop a complete understanding of the problem to be solved. This comprises formulation and analysis of the problem, which should, in fact, complete the problem domain model.
- Build a mental model of a proposed solution to the problem. This corresponds to the search for a solution. Thus, the final product should be the solution domain model.
- Mentally execute the model on a sample input to see if it does indeed solve the problem. Clearly, we are validating our solution domain model.
- If the output of the execution is incorrect (as will often be the case in the early stages of design), expand the model to correct the deficiencies, then execute again. We detected an error in the trial; therefore, we alter our model so that it conforms to the problem specification.
- Once the output of the execution of the model is correct, choose a different sample input and repeat the process. Keep validating the solution domain model.
- Eventually, the expectation is that a strongly enhanced mental model will be able to solve all of the sample inputs considered. Thereafter, we can specify the solution.
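The stages above can be caricatured in code. Below is a toy sketch (entirely my own, not from Glass’s article): the “mental model” is just a dictionary of remembered cases and the “problem” a target function, hypothetical stand-ins for activities that really happen in the designer’s head.

```python
# Toy model of Glass's design loop. Every name here is a hypothetical
# stand-in: the real "execution" happens in the designer's mind.

def solves(model, target, sample):
    """Stage 3: mentally execute the model on a sample input."""
    return model.get(sample) == target(sample)

def expand(model, target, sample):
    """Stage 4: expand the model to correct the deficiency."""
    fixed = dict(model)
    fixed[sample] = target(sample)
    return fixed

def design(target, sample_inputs):
    """Stages 2-6: iterate until the model handles all the samples."""
    model = {}                       # stage 2: an initial proposed solution
    for sample in sample_inputs:     # stage 5: choose a different sample
        while not solves(model, target, sample):
            model = expand(model, target, sample)  # stage 4: fix and retry
    return model                     # stage 6: a strongly enhanced model

# A toy "problem": produce the square of each input.
print(design(lambda x: x * x, [1, 2, 3]))  # {1: 1, 2: 4, 3: 9}
```

The point of the sketch is the shape of the process, not the code: an inner correction loop nested inside an outer validation loop, exactly the trial and error structure Glass describes.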
Software Design Considerations
During all these trial and error iterations, software designers have to keep plenty of factors in mind in order to resolve the tension between human and artificial factors. Software design offers plenty of opportunities, mostly opportunities for being wrong. The following are some of the factors that may force us to readjust our hypotheses in light of new facts:
- Heterogeneity – Software runs on heterogeneous resources. Software design must take into account the variety of, and differences in, hardware, networks and other supporting software systems. Some of those “external” systems may be highly mutable in both specification and implementation.
- Design Envelope – A design envelope is the framework within which the solution is defined. Essentially, this framework comprises all the design decisions and trade-offs. As such, it also establishes the bounds of the solution. A good design should do its best to anticipate change. Nevertheless, foreseeing every possible modification to the system’s specification is impossible. For instance, many specification errors can only be discovered when users start working with the system. There may be several other causes for modifications, such as technology changes, market dynamics, performance or usability problems, implementation constraints, and so forth. Sometimes we are lucky and can introduce changes to our design, with relatively few problems, within the current design envelope. More often than not, however, modifying an existing design is quite an arduous task, which might require a significant or total redefinition of our design framework. A clearer explanation of the notion of a design envelope can be found in Facts 19 and 44 of Facts and Fallacies of Software Engineering, also by Professor Glass.
- Information Hiding – Building software implies describing a system at different levels of abstraction. By using abstraction, we hide unnecessary internal details. Regarding such information hiding, David L. Parnas, in his classic paper “On the Criteria to Be Used in Decomposing Systems into Modules”, suggests that modularity, understood as an assignment of functional responsibilities to a system’s modules or components, gives a software design greater flexibility and comprehensibility, and helps lower costs. Each module should hide some design decision. Nevertheless, information hiding introduces an interesting paradox, which also involves the design envelope. When designing, we decompose the system into modules, assigning each some responsibility. To handle complexity, we want small software pieces, and ideally, modifying any one of them should have minimal impact on other parts of the system. That is, we strive to arrive at a clean design framework. But, at the same time, that well-structured framework may itself become an obstacle to accepting future changes!
- Security – Primarily, a software system manages, and is comprised of, information. That information may be of high value to the software’s users. Therefore, software design must consider security risks. I recall that security in software systems has three parts: integrity (preventing data corruption), confidentiality (preventing access by unauthorized users) and availability (preventing access to the system from being maliciously disrupted). However, it’s impossible to think in advance of every possible hostile act against the system.
- Failure handling – Good software design always considers potential failure scenarios. That’s easier said than done. Frequently, even detecting the failure may be a hard task. That’s another reason why design should favor software construction based on loosely coupled components: in theory, it should be easier to isolate and identify the part at fault. Now, if a failure occurs, what should the system do? Mask the failure? Inform the user about the failure and ask her for directions? Try to recover from the failure automatically? Nice questions, even prettier core dumps.
- Usability – Pretty systems are prettier. However, “pretty” depends on subjective appreciation and, ultimately, on the characteristics of the target market. Personally, I think the most usable systems are those that just behave as expected. Albeit a simple observation, it means a lot for designers. For instance, usability means that a system’s resources should be accessed through consistent operations and procedures. Here’s an informal metric that I love to use to assess design quality: when, during the software cycle, we have to expand our design to comprise new functionality and the consistency of our operations is suddenly lost, we can be sure that something is very wrong.
- Reusability – Clean, modular software design almost automatically leads to software reuse. And software reuse offers plenty of benefits: quality improvements, cost and effort reductions, and a very important (yet sometimes neglected) opportunity: rapid prototyping. Building on reusable components, we can quickly develop prototypes and receive early feedback from our clients. Further, prototypes may help uncover previously unnoticed requirements.
- Knowledge Sharing – Software is a multicolor creature. Besides being a product, a good software design is also an invaluable learning tool for new and future designers. Good software design is immortal. We should be fully aware of that fact. Further, knowledge sharing also encompasses communication among project members. Typically, designing software systems means dealing with a lot of complexity, too much information to be handled by any one person. Development requires the collaboration of people from different technical backgrounds and knowledge domains. Therefore, proper handling of communication paths is vital to the success of the project. Yet we should proceed carefully, as there is another confirmed observation regarding communication: although staff size always affects development time, the larger the staff, the more resources must be assigned to handling the communication paths needed to distribute information. And remember that an uncontrolled communication path is also an opportunity for problematic misunderstandings.
- Maintainability – Software is rarely immutable. Sooner or later it has to be modified for correcting errors, adapting to new hardware or software interfaces, improving performance, or extending its functionality. A good design should allow for lower maintenance costs.
- Marketability – I’d love not to think about markets. Nevertheless, markets are the typical reason why software is built. Even the design of non-profit software is ultimately influenced by market trends. Therefore, most design decisions should be studied in light of the market’s nature and expected behavior.
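To make the information-hiding point concrete, here is a minimal, hypothetical sketch (my own example, not one from Parnas’s paper): clients of the module see only push and pop, while the decision of how the data is stored stays hidden and can change without touching client code.

```python
# Hypothetical illustration of Parnas-style information hiding (my own
# example, not from the paper). Clients depend only on the interface.

class Stack:
    """The public interface: push and pop are all clients may rely on."""

    def __init__(self):
        self._items = []  # hidden design decision: backed by a Python list

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Client code never mentions the representation, so swapping the list for,
# say, linked nodes would not break it.
s = Stack()
s.push("first")
s.push("second")
print(s.pop())  # second
```

The paradox noted above shows up even here: the cleaner the interface, the harder it becomes to add functionality that does not fit it, say, indexed access into the middle of the stack.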
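The failure-handling questions raised above (mask? inform? recover?) can be sketched as tiny helpers; the names and the simple retry policy here are my own illustration, not a prescribed pattern.

```python
# Hypothetical helpers for the failure-handling strategies named above;
# the names and the retry policy are my own illustration.

def call_with_masking(operation, default):
    """Mask the failure: fall back to a safe default and carry on."""
    try:
        return operation()
    except Exception:
        return default

def call_with_retry(operation, attempts=3):
    """Try to recover automatically: retry before finally giving up."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # recovery failed: surface the failure to the caller

print(call_with_masking(lambda: 1 / 0, default=0))  # 0
```

Which strategy is right depends on the design envelope: masking is appropriate when a default is harmless, retrying when failures are transient, and surfacing the error when only the user can decide what to do.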
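The observation about communication paths can be quantified: among n staff members there are n(n−1)/2 potential pairwise channels (the point Fred Brooks famously makes in The Mythical Man-Month), so channels grow quadratically while staff grows only linearly. A quick computation:

```python
def communication_paths(staff):
    """Potential pairwise communication channels among `staff` people."""
    return staff * (staff - 1) // 2

for n in (3, 5, 10, 20):
    print(n, communication_paths(n))
# Staff doubles from 10 to 20, but the channels grow from 45 to 190.
```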
What about the monkey?
In Software Design and the Monkey’s Brain, the “Monkey” refers to news from January 2008 about a group of scientists in the U.S. and Japan who successfully used a monkey’s brain activity to control a robot, transmitting the signals over the Internet. The most interesting achievement of that research is twofold: first, the ability to separate mental activity from its natural actuators; second, the possibility of transmitting the (monkey’s) mental signals to remote robotic articulations. In the future, this kind of research is expected to yield beautiful results: it could become possible to restore mobility to paralyzed people by enabling them to use their thoughts to control auxiliary movement structures (artificial exoskeletons).
It’s obvious that thoughts propel every human activity, naturally including software design. Thinking allows us to model the reality of the problem and that of the solution. Moreover, the iterative trial and error process required for creating software occurs inside the mind, at the speed of human thought. And the mind is much faster than any “external” tools and procedures we may use for software design. That’s a BIG problem for software design: first we have to “extract” all the ideas, rationale and models from our heads, and then translate them into several common visual and written languages. That takes a lot of time and effort.
What if we had a device like the one used with the monkey? The device would capture our brain signals, converting them automatically into models and code. Wonderful. Rationale management (recording all our design assumptions and context) would also be magically taken care of. This scheme would indeed be very helpful, and likely awaits us somewhere down the (long?) road. Nevertheless, I see an interesting problem: peopleware. Most systems are too complex to be understood by just one designer. Communication is mandatory. How would we coordinate all those mental signals? Would such signals travel from mind to mind? I think that sending a signal from a mind to an exoskeleton is a very different thing from sending a signal from mind to mind.
But it will be quite interesting to see how Software Engineering adapts to these new “mental” schemes.