Metaphysical Developer

The Issue with Static Typing

Posted in Languages by Daniel Ribeiro on June 30, 2010

“Shouldn’t we use Scala?” is a recurring question my peers ask me. I think it is fair, since I have advocated in the past that Scala has many strong points compared to Java. Furthermore, this question is usually asked in contrast to dynamic languages, usually Ruby or Python.

Since I don’t want to discuss that common confusion of strong with static typing, I’ll try to be very clear about what I’m talking about. By static typing I mean that the types of all data must be known at compile time. By dynamic I mean that the types may not be known until runtime. Of course, this ignores languages that have mixed typing, such as Groovy, Objective-C and C# 4.0 (which includes the dynamic keyword), but for argument’s sake these can be set aside: the issues of static typing apply to them whenever you are using it, and do not whenever you aren’t. Common examples of dynamically typed languages are Smalltalk, Clojure, Erlang, Ruby, Python, Lisp, Lua and JavaScript; common statically typed ones are Java, Haskell, Scala, C# and C++.

The benefits usually claimed for statically typed languages are that they are faster than dynamic languages, that their types provide documentation for methods and functions, that static analysis tools can be more comprehensive and yield better results, and that automated refactoring is a lot easier to accomplish and can also give better results. All of these reflect the current state of implementation and tooling of such languages more than any intrinsic property of the type system. Steve Yegge has argued this about the speed claim, and one can argue that dynamic-language tooling could use type information from runtime sources, such as unit tests, to yield similar results.

But some hidden complexities come into play when using static typing, which may outweigh any possible benefits coming from it:

  • The biggest one is type coupling. For instance, in dynamic languages, renaming an interface is trivial, as most interfaces are not even declared, just documented (such as “this must implement less than”). Structural typing can ease this, and so can type classes. However, even languages that support these can have problems with other refactorings, such as adding methods to an interface. Paul Graham comments on this a little (in the essay Hackers and Painters):

Everyone by now presumably knows about the danger of premature optimization. I think we should be just as worried about premature design– deciding too early what a program should do.

The right tools can help us avoid this danger. A good programming language should, like oil paint, make it easy to change your mind. Dynamic typing is a win here because you don’t have to commit to specific data representations up front. But the key to flexibility, I think, is to make the language very abstract.

  • Some DSLs can’t be built. A common example is the XML DSL. This is because a purely statically typed language cannot have a type that accepts any single method call, with any number of arguments of any type, returning any possible value. In fact, if you do have such a type, you have C#’s dynamic type. Dynamic languages usually support this through a mechanism called method lookup alteration and interception, such as Smalltalk’s doesNotUnderstand:, Ruby’s method_missing and Python’s __getattr__ method.
  • Natural complexity. By this I mean that every statically typed language is a proper superset of a dynamic version of itself. This is easy to see from the theory behind types, as the untyped lambda calculus is just the typed lambda calculus with a single type (the brave can find more about this in Physics, Topology, Logic and Computation: A Rosetta Stone).
  • False sense of safety. The types do not guarantee that implementations maintain the invariants of a type (such as those described by Barbara Liskov). The common example is the comparator interface in Java: just because a class implements the interface, it doesn’t mean that its compare method is transitive, as required by the documentation. James Iry recently commented on languages with more complex type systems (he actually said a lot more about type systems in general) that contain theorem provers in their compilers, such as Agda and Epigram, which can solve this issue. However, such languages have other limitations, such as not being Turing complete. This essentially means that testing practices are still as necessary for statically typed languages as for dynamic ones (even if you have theorem provers, you cannot be sure without some form of acceptance tests that the problem you are solving is actually the one the customer had in mind).
  • Type variance. This is actually two problems: if your language does not support it, you have to fall back to type casting, which violates type safety and static typing in general. If your language supports it, you have to know one more concept, and when to apply it. It may not always be clear when a type should be covariant or contravariant. This is further complicated by the fact that types with side effects (such as mutable objects) have special rules about this (more can be found under Variance of Mutable Types, from Programming Scala). Moreover, this concept is pervasive: functions are naturally covariant in their return value and contravariant in their parameters. Scala’s two-argument function documentation shows this explicitly.
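The type-coupling point above can be sketched in Python. The “interface” below is only documented, never declared, so there is nothing to rename or recompile against; the function and class names are made up for illustration:

```python
# Duck typing sketch: the "interface" is only documented, never declared.
def smallest(items):
    """Return the minimum; each item "must implement less than"."""
    result = items[0]
    for item in items[1:]:
        if item < result:      # relies on __lt__, no interface is named
            result = item
    return result

class Version:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor

    def __lt__(self, other):   # satisfies the documented contract
        return (self.major, self.minor) < (other.major, other.minor)

v = smallest([Version(2, 0), Version(1, 9)])
# v is Version(1, 9): since no interface type was ever declared,
# there is nothing to rename when the contract evolves.
```

Any class that defines `__lt__` works with `smallest`, which is exactly the coupling the bullet describes static interfaces adding.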
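The XML DSL bullet above can be made concrete with a minimal Python sketch: every attribute access is intercepted by `__getattr__` and turned into a tag-building function (the `XmlBuilder` name and its rendering scheme are hypothetical, not from a real library):

```python
# Minimal XML DSL via method lookup interception.
class XmlBuilder:
    def __getattr__(self, tag):
        # Any attribute name becomes a function that renders that tag.
        def build(content="", **attrs):
            attr_str = "".join(' {0}="{1}"'.format(k, v) for k, v in attrs.items())
            return "<{0}{1}>{2}</{0}>".format(tag, attr_str, content)
        return build

xml = XmlBuilder()
html = xml.p(xml.b("hello"), cls="intro")
# html == '<p cls="intro"><b>hello</b></p>'
```

No `p` or `b` method exists anywhere; a static type for `XmlBuilder` would have to accept any method name with any arguments, which is exactly C#’s dynamic type.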

The last two issues are much diminished for languages without subtyping, but those have deeper problems: in order to relinquish subtyping, you also have to give up any possibility of code reuse coming from ad-hoc polymorphism.
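The special variance rules for mutable types mentioned above can be seen concretely even without type annotations. A Python sketch (the Animal/Dog/Cat names are made up for illustration) of why a mutable list of Dogs cannot safely be treated as a list of Animals:

```python
class Animal:
    pass

class Dog(Animal):
    def bark(self):
        return "woof"

class Cat(Animal):
    pass

def add_a_cat(animals):
    # Statically, this parameter would be typed as a mutable List[Animal].
    animals.append(Cat())

dogs = [Dog()]
# If List[Dog] were a subtype of List[Animal] (covariance), this call
# would type-check...
add_a_cat(dogs)
# ...but now the "list of dogs" contains a Cat, and calling bark() on
# every element would fail. Hence mutable containers must be invariant.
```

Immutable, read-only containers escape this: if nothing can be appended, covariance is safe, which is why the rules differ for types with side effects.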

It might seem that some points were left out, but those are usually problems of a particular language, which are commonly and mistakenly considered to be problems of static typing in general:

  • Lack of metaprogramming support. Of course, C++ has templates, so most people know this is false for statically typed languages in general. However, a variant of this issue is the claimed lack of type-safe metaprogramming. This is also not true: Haskell, for instance, has a type-safe macro system. Note that you cannot have type-safe runtime metaprogramming in general. For instance, even though some languages allow you to create interfaces that do not exist at compile time, the only way to invoke methods on these is through non-type-safe means (such as reflection).
  • Verbosity. Scala is the canonical counter-example (Scala can be as terse as Clojure in some ways). The more type inference you have, the fewer type annotations you have to write. This doesn’t necessarily make code easier to read, but IDEs can help here. On the other hand, it will never be harder to read than a dynamic language.

Going back to the question that originated this discussion: static types can come in handy, and they do have better tools these days. But they also bring complexities well beyond having to type a few extra characters, and these should not be taken lightly when deciding whether to express code in a statically typed language.


4 Responses


  1. Wayne Conrad said, on July 20, 2010 at 2:51 pm

    Re correctness: A static type system gets the compiler to do some testing for you. Unfortunately, as type checking can only catch a small fraction of errors (or every Java program would be correct as written) you need tests to catch the other errors. Here’s the thing: I think that once you have tests to catch those errors that type checking cannot catch, you find that those tests necessarily would catch type errors as well. Therefore, once you have good test coverage, the type checking system becomes redundant (for correctness, at least). Once you don’t actually need it, you start to notice just how much it gets in your way: How many of the patterns in GoF are just ways to keep a colicky, statically typed compiler from spitting up perfectly good code?

  2. Matthieu Sozeau said, on August 16, 2010 at 2:57 pm

    A few counter-arguments:
    * Type coupling: but the nice thing about working with declared interfaces is that you can use the compiler to tell you where you should change your client code when the interface changes. Less debugging, especially no need for runtime debugging. Of course working with abstract interfaces is always a good idea, no matter the setting.
    * Some DSLs can’t be built/Natural complexity.
    To build such dynamic DSLs as the XML one, you indeed have to use a dynamic type of some sort and work in this language (kind of a DSL in a DSL: the DSL inside the dynamic language inside the static language). But this dynamic-language DSL could be just as easy to use as a proper dynamic language, so your argument reduces to the fact that dynamic programming inside a static language is not necessarily well supported. But it’s unclear to me that it is more complex to have a statically typed language in the first place. There is a type checker built into the evaluator of a dynamic language, after all.
    Also, if you have an expressive enough type-system, you can define this DSL as a library. Metaprogramming is just programming in Epigram or Agda.
    * Need for testing. I wholeheartedly agree that there is some form of testing/review needed to assert that specifications/types actually reflect what your program should do correctly. The nice thing is, theorem provers can give you total correctness guarantees that you’ll never attain with testing alone.
    * About subtyping and variance. I’m not sure I understand. Haskell has no subtyping, no variance issue but ad-hoc polymorphism through type classes.

    • Daniel Ribeiro said, on August 16, 2010 at 3:30 pm

      @Matthieu Sozeau
      * On the Type coupling: So can dynamic languages, given runtime info. With techniques like TDD, most people notice that the type checker gives barely any sense of extra safety (Martin Fowler commented on this 5 years ago: )

      * Yes, if you have optional dynamic typing, this does not apply (I did mention those at the beginning of the post). Nor does it if you use external DSLs. The point of dynamic languages with method lookup alteration and interception is that you don’t need to. You can, but you have easier options. And having easier DSLs means having more DSLs, which means you are more prone to use higher-level languages, being more expressive, having higher code quality, being more productive, and so on.

      * Agreed on the good point about total correctness (when it can be had, of course; Agda and Epigram can’t, for instance, write a compiler for themselves, let alone prove it correct).

      * About Haskell subtyping: it has fewer problems, as Haskell is purely functional (no side effects, except through type-checked monads). But if you can’t say that a List of Lists of Integers is a List of Lists of Numbers (given that Integer is a subtype of Number), you have an expressiveness issue. This is because any language with types actually defines at least a conceptual subtyping: if a type A can accept all functions that type B accepts, then A is a subtype of B.

      Look at the structural typing link I posted for more information. In the example given there, we could say that Dog is a subtype of Callable, and Callable is a subtype of an empty interface (which is the mother of all types). Wikipedia goes over such abstractions very thoroughly.

  3. roger said, on February 23, 2011 at 9:00 pm

    The biggest argument has gotta be the speed…for whatever reason jruby just isn’t as fast as its compiled cousins…
