Metaphysical Developer

The Failure of Code Reuse

Posted in Software Development by Daniel Ribeiro on September 30, 2009

Using pre-made solutions as parts of the solution to a bigger problem is a common motif in many areas of human knowledge. At first glance, it makes perfect sense not to waste valuable time remaking something that has already been done. Embodying this ideal, a lot has been said about doing the same with software, which is commonly referred to as code reuse.

There are three canonical types of code reuse, as listed by Erich Gamma and Ralph Johnson: software libraries, design patterns and frameworks. There are other kinds of code reuse as well, such as software platforms (similar to both a framework and a collection of libraries), services, or even using a complete application and building on top of it. Each of these types of code reuse differs in its learning curve, in its flexibility (in the sense of how many different new applications, libraries, platforms or frameworks can be built from it) and in the amount of functionality/design embedded in it.
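To make the first and last of these concrete, here is a minimal sketch in Java (the Framework, Handler and onRequest names are hypothetical, made up for illustration): with a library, your code stays in control and calls into the reused code; with a framework, control is inverted and the reused code calls into yours.

import java.util.Arrays;
import java.util.List;

public class ReuseSketch {

    // Library-style reuse: the application is in control and calls the library.
    static void useLibrary() {
        List<String> names = Arrays.asList("Ada", "Grace", "Alan");
        for (String name : names) {
            System.out.println(name);
        }
    }

    // Framework-style reuse: the framework is in control and calls back into
    // application code (inversion of control).
    interface Handler {
        String onRequest(String path);
    }

    static class Framework {
        void run(Handler handler) {
            // A real framework would dispatch real events in a loop here.
            System.out.println(handler.onRequest("/hello"));
        }
    }

    public static void main(String[] args) {
        useLibrary();
        new Framework().run(new Handler() {
            public String onRequest(String path) {
                return "Hello from " + path;
            }
        });
    }
}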

But things are not as simple as snapping Lego bricks together. When looking for a reusable piece of software, how do you pick the most suitable one? Even if there is only one piece of a given kind, you still have to figure out whether it is suitable, or how much work will be needed to make it suitable (which is the most common case).

This is because the piece can be too complex, poorly documented, missing some of the desired properties, or not robust enough; it can lack bindings for the language/platform you are using, not have a permissive enough license, make trade-offs that are not what you would expect (such as choosing CP instead of CA on CAP), not be supported by a big company, and so on. And this gets a lot more troublesome when you have many pieces of software to choose from. Since evaluating all of this takes time, a more agile approach is to make a trade-off: decide how much time to spend evaluating such pieces in order to reduce the risk of making an inappropriate decision (always weighted by how bad making one can be, somewhat similar to a SWOT analysis). Even then, in your core domain, reusing big pieces of code is hardly advisable.

However, not all software is reusable. At least not easily reusable. The reasons lie in the difficulty of achieving a reusable design (such as the difficulties stated by Uncle Bob Martin in his SOLID series, or by Neil Bartlett in his Component Models Tutorial) and in the fact that making software easy for others to use requires extra support (such as documentation, tutorials and/or screencasts). Not to mention that most businesses are not about launching frameworks or application libraries, and therefore are not interested in this work (which can be good or bad, depending on the cost of making the software reusable and the money made/saved by doing it). It follows that a lot of software is not reusable. All of this could suggest that making reusable bits of software requires a Big Design Up Front. However, agile practitioners suggest making code reusable only when it is actually needed (a last responsible moment approach). This has worked well in practice: the Ruby on Rails framework was extracted from other projects at 37signals, and Apache Hadoop was extracted from another project, Apache Nutch.

Code reuse is not a simple matter, and should not be taken lightly. It involves decisions that can be more relevant from an economic/business perspective than from a purely technical one. In the end, the real failure of reuse is the failure to realize this.

Quality of Code

Posted in Software Development by Daniel Ribeiro on April 20, 2009

Just the other day I was skimming through a list of topics I had drafted to post here, and realized that several of them shared a common, yet simple, motif: improving the quality of code. This can sound quite sensible and simple to a lot of people, yet it’s an idea as powerful as it is controversial. And when I speak of quality of code, it is important to stress that this is quality for humans, as a machine doesn’t really care about it. Setting up this theme, Martin Fowler’s statement feels quite appropriate:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.
-Martin Fowler et al, Refactoring: Improving the Design of Existing Code, 1999

This statement is almost a Quality of Code Manifesto. More precisely, it is about readability, but the two concepts are deeply intertwined. Readability is essential to writing code with good internal quality (quality as measured by the programmers). Kent Beck, in Extreme Programming Explained, also mentions how high external quality (quality as measured by the users) can’t be maintained on top of low internal quality:

Temporarily sacrificing internal quality to reduce time to market in hopes that external quality won’t suffer too much is a tempting short-term play. And you can often get away with making a mess for a matter of weeks or months. Eventually, though, internal quality problems will catch up with you and make your software prohibitively expensive to maintain, or unable to reach a competitive level of external quality.

Martin Fowler explains these interactions in an article called Technical Debt. Summing it up: code with poor quality is hard to maintain and hard to change, and therefore costs a lot more in the long run than code with good internal quality. Not to mention that it lacks transparency, and that can end really badly even outside of software.
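As a small illustration of what this readability buys (a made-up example, not one taken from Fowler’s book), the two methods below compute exactly the same thing, but only the second tells the reader what it is doing:

class PricingExample {

    // Before: the computation is correct, but the intent is opaque.
    static double calc(double a, int d) {
        return a - a * (d / 100.0);
    }

    // After: the same computation, renamed so the code explains itself.
    static double priceAfterDiscount(double basePrice, int discountPercent) {
        double discount = basePrice * (discountPercent / 100.0);
        return basePrice - discount;
    }

    public static void main(String[] args) {
        System.out.println(calc(100.0, 10));               // 90.0
        System.out.println(priceAfterDiscount(100.0, 10)); // 90.0
    }
}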

Donald Knuth also made his statement about quality of code in his work on Literate Programming, which had an impact on Kent Beck’s work.

This is such an old issue that I felt a bit insulted when I read the following (from this post): “Java (the programming language) is Turing complete so no new language feature can enable us to solve a problem with Java that would be unsolvable without the language feature.”

All of this sounded absurd to me, since languages are tools for expressing code. If being Turing complete were enough, C would be all anybody needed, despite being a terrible language for expressing intention, and therefore for writing readable code. However, the author continues: “Therefore any new language feature is at best a convenience to those reading and writing the code. Such features of convenience are not to be lightly dismissed, as they allow developers to increase the signal to noise ratio of our source code.”
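To stay with the author’s own language: a classic example of such a convenience is Java 5’s enhanced for loop. The two methods below are equivalent (a minimal sketch, with made-up names), but the second drops the index bookkeeping that is pure noise:

import java.util.List;

class SignalToNoise {

    // Before Java 5: the index machinery says nothing about the intent.
    static void printAllOld(List<String> items) {
        for (int i = 0; i < items.size(); i++) {
            System.out.println(items.get(i));
        }
    }

    // With the enhanced for loop, only the intent remains.
    static void printAll(List<String> items) {
        for (String item : items) {
            System.out.println(item);
        }
    }
}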

Therefore, in order to create good code, several practices, tools and concepts were created to help or warn us: design patterns, DSLs, peer review, object orientation, functional programming, architectural patterns, agile methodologies, refactoring, pair programming, encapsulation, inversion of control, automated tests, code generation, anti-patterns, continuous integration, logic languages, fluent code (sketched below)…
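Taking the last item as an example, here is a minimal sketch of fluent code (the Order class and its add method are hypothetical): each method returns this, so calls chain into something that reads almost like a sentence.

class Order {
    private final StringBuilder items = new StringBuilder();

    // Returning 'this' is what makes the interface fluent.
    Order add(String item) {
        items.append(item).append(' ');
        return this;
    }

    public static void main(String[] args) {
        // The chained calls read almost like natural language:
        Order order = new Order().add("cheese").add("pepperoni");
        System.out.println(order.items.toString().trim()); // cheese pepperoni
    }
}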

And several of the upcoming posts here will explore exactly these topics, encompassing this blog’s metaphysics.
