Contributing to Chromium

I always wanted to work on a web browser. Probably because I’m both a front-end focused web developer and a low-level loving C++ programmer, and web browsers are among the few types of software that require such a combination.

Moreover, I recently felt the urge to lay hands on large, established code bases. I believe that working with existing code will take me further in my quest of becoming a better programmer than starting yet another small project. So I decided to contribute to Chromium, the open source project behind Google Chrome.

Why Chromium?

Well, mostly because it’s my favourite browser, and because it is in my opinion the one that pushes the web the most right now. I also think that a few more non-Googlers working on it would be good for the project. Not that I distrust Google on this, but I believe that an open source project, especially such an important one, needs contributors from all over the industry. A too dominant player can, even with good intentions, harm a project in the long run: remember what happened after Oracle bought Sun.

Another reason why I picked Chromium was that I thought it was a small, modern, lightweight piece of software, unlike Firefox, whose code base appears to be an ancient monster. Guess what? Chromium’s code base is a monster too, at least in terms of size. Including dependencies, we’re talking about several gigabytes of source code here.

The code

Nonetheless, the code quality looks good to me. Google’s (partially weird) code style is followed consistently and the code I’ve seen so far looked clean and polished. I don’t understand the complete architecture yet, but what I’ve seen and read seems promising. I have only two problems with the code:

Monolithic build

I found it difficult to work on just one area, in my case the UI. You have to statically link all of Chromium (which takes about 10 minutes on my, admittedly slow, netbook) in order to see even a small change. I would prefer it if there was some way to fiddle with design and layout with a faster turnaround. Some people on the #chromium IRC channel seem to be unhappy with this as well.

Redundant UI code

Considerable parts of the UI code are effectively duplicated for each supported platform: Windows, Mac and Linux. The Windows code is written with WinAPI and WTL, the Mac code with Cocoa and Objective-C and the Linux code with GTK. I’m sure they have their reasons, one of which is very likely performance, but I still think there could be more code reuse among the platforms. I have seen essentially the same code on different platforms, written in a slightly different way, probably by a different person. As is the problem with code duplication like this, some bugs get fixed only in some versions of the code. The status bubble (the thing that appears if you move the mouse cursor over a link) is a good example of this: It slides perfectly on Windows, flickers a bit on Mac and disappears completely on Linux.


In order to find something to contribute, I had a look at the Chromium issues marked as GoodFirstBug and picked one that concerned the sliding of the status bubble (see above) on Linux. After identifying the problem and fooling around with the code, I noticed that this was a non-trivial GTK problem, the kind of thing I’d like to avoid in my first patch, not having used GTK before. So I picked an issue with the downloads tab instead, which was all in all about an hour of work, naturally not counting compilation time. To my delight, the downloads tab’s UI was written in HTML/JS, which made the UI work a no-brainer.

It was fun to work with the code, but the compile times were pretty annoying. My desktop box at home is quite fast, but I’ve rarely used it since I became a dad. So I worked on the train, which means on my netbook. I usually don’t have any problems programming on it – even Eclipse is usable – but Chromium development was tough. A trivial change meant waiting 10-15 minutes; a git pull meant compiling overnight. It was mind-numbing.

Was it worth it? Yeah. Sure, my issue wasn’t a big deal, but now I can tackle more difficult ones, maybe even become a committer if I stay motivated. That would allow me to shape the future of the web, which is a considerable level-up for a web developer. I’m not sure I can keep up my motivation while working on that slow netbook, though. Maybe I can use some of my 20% time at work for this; I guess Chromium development would be much more enjoyable on a powerful iMac.


My company being awesome, I was indeed able to hack on Chrome during 20% time. I implemented Kiosk mode for Mac. As expected, it was more enjoyable with the iMac. I have a lot of 20% projects going on, but I’m definitely planning to contribute more in the future.

Eclipse Color Theme

About half a year ago, I began to work on what has become my most popular open source project up until now: Eclipse Color Theme, a plugin that makes it possible to use colour themes in Eclipse. I thought this was a good time to talk a bit about the history and future of the project.

The black on white ages

It all started with me being fed up with Eclipse not supporting colour themes in any reasonable way. Since Eclipse was mandatory at my old job, I was forced to stare at it all day. I don’t like to stare at black on white text all day, so I had to find a way to use colour themes.

Before I continue, you need to know that Eclipse preferences are a mess. A complete and utter mess. Every plugin can store arbitrary key-value pairs of data, and there is no way to export or import preferences selectively; it’s all or nothing.

There are no central colour settings, so every plugin stores its own, in the format of its choice. In practice, this means that even if you change all the colours for the Java editor manually, you will have to do it again for the JavaScript editor. And the XML editor, and whatever other editor you want to use. This is highly inconvenient.

And because Eclipse preferences are such a mess, all you can do to share a colour theme you created is to export your preferences and have someone else import them. And since your preferences contain all your Eclipse settings, it will completely mess up the other person’s settings. Bah.

So I guess it’s safe to say that using colour themes in Eclipse without going insane was impossible. The only solution I saw was to create a plugin that would take care of changing each editor’s preferences according to a standardised colour theme format, without messing up any other settings.

I created a prototype that supported just the Java editor and a single hard coded colour theme to see if this would work. It did, so I added another colour theme and published version 0.1 of Eclipse Color Theme on the Eclipse Marketplace.

The colour themes revolution

I never thought that many people would be interested in having colour themes for Eclipse – probably just a small bunch of geeks like me, coming from Vim and Emacs. Searching the Internet for Eclipse colour themes revealed a handful of people sharing their exported preferences or asking about theme support, but they were really few. There was even an ancient ticket about colour theme support in Eclipse’s bug tracker, for which only about four people had voted in all that time. So I thought I’d create the plugin more or less for myself.

Turns out I was wrong: I received many emails of appreciation, even a couple of donations. So I was quite motivated to improve the plugin and add support for more editors – at first XML, HTML, JavaScript, CSS and C++. At that point, my own needs were met, but I kept adding things that were requested via email or GitHub.

At some point, Roger Dudler contacted me: he was planning to create a website where people could create colour themes for Eclipse, and asked whether I wanted to join forces. I did, so we worked on the plugin and website together (well, so far I’ve only done a few things on the website), and the site was born, allowing users to create their own colour themes with a WYSIWYG editor.

Did I tell you how wrong I was about nobody being interested in colour themes for Eclipse? I was. Within a few weeks, Eclipse Color Theme climbed into the top 4 on the Eclipse Marketplace, with thousands of installations and hundreds of themes on the site.

This huge demand created a constant flood of emails asking us to support new editors or reporting problems. We decided to make the plugin more modular and easier to extend, using Eclipse plugin features like extension points. Roger, who had some experience with Eclipse RCP development, did that conversion mostly by himself.


Maintaining the plugin is not too much work, so Roger and I are able to concentrate on other projects. Our own needs are long met, especially since I don’t use Eclipse on a daily basis anymore. (At my current job, everyone uses IntelliJ IDEA – whose theme support is only slightly better than Eclipse’s – and I’ve switched back to Emacs for C++ and JavaScript development.) Nonetheless, my ambition is to release Eclipse Color Theme 1.0, preferably this year, and there are still a few issues to be solved and improvements to be made.

One interesting topic for the future is Eclipse 4, which will introduce a new, themeable UI. If I understood it correctly, plugins can use either the old or the new UI technology, which means it will probably take a while until all important plugins make use of it. Maybe it will make sense to support Eclipse 4; we’ll wait and see.

Since Eclipse Color Theme is (in theory) modular, i.e. support for new editors can be added by other plugins, it would be nice to split it into multiple plugins, e.g. one for each Eclipse plugin package (JDT, WTP, CDT, PDT, …). Ideally, the developers of the package would also maintain the colour mappings, but I guess that’s wishful thinking.

If there is one thing I learned in this project, it’s how remarkably motivating it is to have lots of users and lots of feedback. Thanks to all of you writing emails, creating themes and donating money, you’re a great source of motivation.

Test-driven development in the shipyard

As regular readers might recall, I began to use test-driven development for my private projects a couple of months ago. At my new job, I finally get the chance to use it on larger codebases, and there is one particularly useful technique I learned so far: In the absence of a better name, I’ll call it the shipyard.



In TDD, before you write any code, you write a test case for it. The test doesn’t have to compile; you can go ahead and build your dream world of how classes, methods or functions should interact in order to achieve your goal. In the second step, you create stub implementations to make the test compile. The nice thing about this is that you get to design an API that cleanly communicates its purpose. However, you still have to think upfront about where to put the code and who should get access to it. If you’re, like me, trying to keep the scope of everything reasonably small, you’ll have to put some serious thought into this. So you’re stuck where it hurts most: writing your first test case.

This is where the shipyard comes into play: Instead of putting the code you are about to write in its final place, you deliberately keep all of it in the same file as your test case. That way you don’t have to think about whether it will become a utility or part of the code that needs it, and you don’t have to jump around in your codebase all the time during development. As soon as the code does everything you wanted it to, it can become part of the bigger picture, just like a completed ship leaves the shipyard to fulfil its purpose.



If you put the code into place upfront, not only do you have to put in some extra effort, you will also have to move it somewhere else if your initial choice was a poor one. Another benefit of the shipyard is that you can commit your progress to version control without exposing unfinished code to the rest of the application.

However, I see one potential problem with this approach: You could write nicely tested code that does exactly what you want, but when you are about to integrate it, you notice that it isn’t really needed, or not in that form. That hasn’t happened to me so far though.


It’s been a while since I last posted to this blog, mostly because I’ve been incredibly busy being a dad, moving to a new apartment and starting a new job (I’ll blog about that one).

Nevertheless, I recently found an excellent excuse to finally work on my basics, so I got myself a copy of Introduction to Algorithms (not the Knuth, but still extensive) and began to deepen my knowledge. I’m enjoying it so far, and there’s lots to learn since the topic wasn’t covered in such detail at my university.

Nearly everybody I told about my endeavour said I’ll never need that knowledge. I respectfully disagree. Sure, I probably won’t need to implement sorting or matrix multiplication algorithms in my day job, unless that day job involves low-level systems programming. But algorithms are everywhere, and studying them doesn’t mean learning all the existing ones by heart; it mostly means learning how to design efficient algorithms for any problem. The most useful skill, in my opinion, is transforming real-world problems into problems for which a proven efficient solution already exists: graph problems, for instance, are everywhere, visible only to the trained eye.
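To illustrate with a deliberately tiny, hypothetical example (all names made up): ordering build tasks by their dependencies looks like a build-tool chore, but to the trained eye it is a topological sort over a dependency graph.

```javascript
// deps maps each task to the tasks it depends on. A depth-first
// traversal that emits a task only after its dependencies is
// exactly a topological sort.
function topoSort(deps) {
    var sorted = [];
    var visited = {};
    function visit(task) {
        if (visited[task]) return;
        visited[task] = true;
        (deps[task] || []).forEach(visit);  // dependencies first
        sorted.push(task);
    }
    Object.keys(deps).forEach(visit);
    return sorted;
}

// "link" needs "compile", which needs "generate".
var order = topoSort({ link: ["compile"], compile: ["generate"], generate: [] });
console.log(order);  // → [ 'generate', 'compile', 'link' ]
```

Recognising the graph hiding in the problem is what studying algorithms buys you; the code itself is the easy part.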

Since all mathematical proofs and no code make Felix a dull boy, I decided to implement every single algorithm explained in the book on a little site of its own. I’m also using this as an opportunity to make my first pure HTML5 site and to try Node.js.

The one thing I’m not happy with is that I had to use in-place algorithms in order to visualise the sort process. For instance, my implementation of merge sort originally looked like this:

function merge(array1, array2) {
    var array = [];
    // Merge array1 and array2 into array
    return array;
}

function sort(array) {
    if (array.length == 1)
        return array;
    var middle = Math.floor(array.length / 2);
    return merge(sort(array.slice(0, middle)),
                 sort(array.slice(middle)));
}

I find this a lot more readable than what I have now. But since this is a recursive implementation, I wasn’t able to regularly send the whole array from the worker to the main script in order to display the process. The current algorithm modifies the array directly, not returning anything, which clutters the function calls with various indices and bloats the code. On the other hand, this is very close to the pseudocode in the book, and the performance is likely better. Still, let me know if you can think of a way to update the progress with a functional implementation.
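One idea that might work (just a sketch, not what the site currently runs): keep the recursive, functional structure, but thread a snapshot array and an offset through the recursion, so that each completed merge can patch its slice into the snapshot and report the whole array.

```javascript
function merge(a, b) {
    var out = [];
    while (a.length && b.length)
        out.push(a[0] <= b[0] ? a.shift() : b.shift());
    return out.concat(a, b);  // append whichever half is left over
}

function sort(array, onProgress, snapshot, offset) {
    snapshot = snapshot || array.slice();  // full-array view for reporting
    offset = offset || 0;                  // where this slice starts
    if (array.length <= 1)
        return array;
    var middle = Math.floor(array.length / 2);
    var merged = merge(
        sort(array.slice(0, middle), onProgress, snapshot, offset),
        sort(array.slice(middle), onProgress, snapshot, offset + middle));
    for (var i = 0; i < merged.length; i++)
        snapshot[offset + i] = merged[i];  // patch the sorted slice in
    if (onProgress)
        onProgress(snapshot.slice());      // e.g. postMessage from a worker
    return merged;
}

console.log(sort([5, 2, 9, 1]));  // → [ 1, 2, 5, 9 ]
```

From the caller’s point of view the recursion stays functional; the snapshot exists only so the worker always has something complete to post.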

Packaging binaries for Linux

I had the pleasure of preparing a binary (including dependencies) for Linux a few months ago. It’s been a while since I last had to do that, and I had forgotten how difficult it is. Here’s some advice:

Compile libraries yourself

This is probably the most important point. You can either link statically or link dynamically and ship your shared objects, the latter usually being a lot less problematic. If you ship the shared objects prepared by your distribution, you will introduce lots of unnecessary direct and transitive dependencies, forcing you to add tons of additional libraries. For instance, here’s an ldd of the core SDL library on Ubuntu 10.10:

[ldd output: roughly 30 shared libraries, pulseaudio and the X libraries among them]

That’s quite a lot of libraries, some of which you can’t even reasonably ship (I’m looking at you, pulseaudio). The solution? Build your required libraries yourself, and do it on the oldest and humblest distribution you want your software to work on. I’m building our current C++ game project and its dependencies on Debian Lenny, which makes it work on all reasonably current popular distributions. Furthermore, if you build the libraries yourself, you can usually configure them to compile only the features you need, reducing size and dependencies even further. This is an ldd of the core SDL library I built:

[ldd output: just a handful of core system libraries]

See the difference? Also make sure to ship a 32 bit version, preferably both 32 and 64 bit.

Check distribution compatibility

There are lots of Linux distributions out there, and you’ll probably want to support as many as possible. If you compile libraries yourself, you’re definitely on the right track, but you should also check for compatibility issues and see what you can do about them. You can either get dozens of distributions and check if it works, or you can use the excellent Linux Application Checker tool.

Reduce dependencies

Reducing your dependencies is good, but do you know what’s even better? Getting rid of them entirely. You really shouldn’t overdo this, but look at each of your dependencies and think about whether you really need it. Should you include a large library if you need just one function? Is there a more coherent alternative? Would it perhaps be faster or easier to just write the required functionality yourself?

Regular expressions

Earlier this month, I expressed my astonishment about the fact that the majority of software developers I’ve worked with in the last seven years doesn’t know the first thing about regular expressions:

fhd: It's amazing how many actual developers don't know regular expressions. It's like carpenters who don't know about hammers.

As you might have guessed, I regard regular expressions as a fundamental element of every programmer’s toolbox. However, I’m not good with metaphors, and I don’t know the first thing about carpentry, so the hammer metaphor missed the point. Thomas Ferris Nicolaisen found a better analogy:

tfnico: @fhd I would rather say it's like carpenters who don't know circle saws ;)

He’s right: Regular expressions are a specialised way to work with text, mostly relevant to programmers – not everyone who works with text in general.

Most of the other replies I got indicated that although the senders knew (or had once known) regular expressions, they rarely use them nowadays. I think that’s a shame, so I decided to share what I do with regular expressions on a daily basis; maybe you’ll find it useful. I do use them in code occasionally, but what I do all the time, whether in an editor or an IDE, is searching and replacing. If you’re not at all familiar with regular expressions, I suggest this reference to make sense of the remainder of this post.


I sometimes mention that I grew up with UNIX and that’s true. One of the first things I learned about programming was how to use the tools of the Linux command-line, like grep, which is a command that allows you to search the contents of one or more files with a regular expression.

I can’t come up with a convincing example because I mostly use regular expression searching in conjunction with replacing, rarely alone. But imagine you’re trying to find a specific string in JavaScript code, but forgot which string delimiter (' or ") you used. Here’s the grep command:

grep -R "[\"']Some string[\"']" /path/to/your/webapp

Naturally, you don’t have to grow a beard and become a CLI geek to harness the power of regular expression searching. Here’s how you do the exact same thing in Eclipse:

Regular expression search in Eclipse


As mentioned above, I use regular expressions mostly for searching and replacing, a very powerful technique that has saved me countless hours of mind-numbing, repetitive typing. Have you ever heard a co-worker make the same keyboard sounds many times in a row? Like moving the cursor to the next line, placing it at the beginning and pressing CTRL+V? I’m a lazy person, and I can’t stand repetitive typing tasks. Fortunately, you can avoid the majority of them with regular expressions.

Here’s an example of how regular expression search and replace speeds up refactoring. We had a whole lot of test cases that looked like this:

assertThat(RomanNumerals.convert(1), is("I"));
assertThat(RomanNumerals.convert(5), is("V"));
assertThat(RomanNumerals.convert(10), is("X"));

Too much duplication, so we created a method assertRomanNumeralEquals() to get rid of that:

private static void assertRomanNumeralEquals(String roman, int arab) {
    assertThat(RomanNumerals.convert(arab), is(roman));
}
Eclipse was able to extract the method for us, but it wasn’t able to make all the assertThat() invocations use the new method instead. So that’s where regular expression replacement comes in handy even in a sophisticated IDE. I replaced the following expression:

assertThat\(RomanNumerals\.convert\((.*)\), is\((".*")\)\);

With this:

assertRomanNumeralEquals(\2, \1);

This is how it looks in Eclipse (select the lines to which you want to apply this before opening the find/replace dialog):

Regular expression search and replace in Eclipse

The expression might look a bit intimidating if you’re not used to regular expressions, but you will be able to write down something like this in no time if you practice them.
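The same replacement also works programmatically. Here’s a sketch in JavaScript, where the capture groups are written as $1 and $2 instead of \1 and \2:

```javascript
var line = 'assertThat(RomanNumerals.convert(10), is("X"));';
var fixed = line.replace(
    /assertThat\(RomanNumerals\.convert\((.*)\), is\((".*")\)\);/,
    "assertRomanNumeralEquals($2, $1);");  // swap the captured arguments
console.log(fixed);  // → assertRomanNumeralEquals("X", 10);
```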

In case you’re wondering, this is also possible on the command-line, with the sed command.


Regular expressions are a powerful tool for processing and editing text, automated or interactively. If you use them habitually, you will have learned something for life, because every reasonable editor and IDE supports them. However, regular expressions are not standardised, so there are slight differences between Perl, Java etc. You might have noticed that there are also some minor differences between grep and Eclipse in the first example above. This is good for a moment of confusion every now and then, but it has never hurt my productivity notably.

Speaking of productivity: although regular expressions will probably not make you write code faster, they can significantly speed up refactoring, a task on which I find myself working most of the time. How much time do you spend actually writing new code? And how much time do you spend editing existing code? I think the ratio is at least 1:10 in my case. If you are able to refactor fast, you will refactor more often, which is likely to improve the design and maintainability of your code.

If you, however, decide to ignore regular expressions until you find a situation in which you really need them (which might never happen, as you can always find a workaround), you are entering a vicious circle: You are not very familiar with them, so when problems come up, they don’t come to mind and you don’t use them. And if you don’t use them regularly, you will never become familiar with them. Searching and replacing is an ideal way to break that circle, so I suggest you try it.

Fighting RSI (Part 3)

It’s been a while since I last experienced RSI-related pain as described in my Fighting RSI series of blog posts, which is probably why I focused on other things and never got round to writing part 3. But I’ve been asked to do so several times in the last few months, and I might as well wrap the whole issue up right now, so here it is. It’s not bad that I waited so long though, because by now I’ve been using my new input devices intensively for several months.

Kensington Orbit with Scroll Ring

In part 1, I investigated ergonomic mice, trackballs in particular, and ended up buying the Kensington Orbit with Scroll Ring.

It’s a good trackball, but I’m not so sure if it’s really better for my hand than a regular mouse. When using it for several hours in a row (mostly while playing games), my hand does get a little tired. But other than that, I’m very happy and don’t have much to add to part 1. I got pretty good at middle mouse button emulation (i.e. pressing the left and right buttons at the same time); it works every time now.

Kinesis Freestyle

Since I wasn’t convinced that the mouse was the sole cause of my pain, I looked into ergonomic keyboards in part 2 and decided that the Kinesis Freestyle was the best in my price range. I ordered one directly from Kinesis and soon got my hands on it.

A very nice keyboard. The material feels durable and expensive – the difference to cheap keyboards is vast. The keys have just the right resistance and make just the right noise (neither silent nor annoying), and it looks pretty slick. So far, so good – I didn’t expect a cheap piece of hardware for this kind of money.

One of the reasons I decided to buy the Kinesis was that it is a modular keyboard (I love modular stuff): you buy a base keyboard to which you can add accessories that let you position it in an ergonomic way.

I tried to be cheap and ordered just the base keyboard at first, which turned out to be a bad idea. The Freestyle without any accessories is not an ergonomic keyboard, although it’s still a pretty cool keyboard if that’s not what you’re looking for. I tried to figure out which accessory to buy by propping the keyboard up with books, but that didn’t really work. I eventually went with the most flexible and popular option, the VIP:

It features wrist rests, which are absolutely essential (trust me on this), and makes it possible to adjust the keyboard’s angle. There are only two settings, but that’s enough for me:

With the VIP accessory, the Freestyle is a wonderful ergonomic keyboard. You can easily adjust it to a comfortable typing position at any time. If a non-touch typist insists on using your computer, you can just move the parts together:

You can also move the parts far apart; I can’t think of any other keyboard that lets you do that. I’ve even seen one guy who ordered a Freestyle with an extra-long cord and mounted one part on each side of his ergonomic chair – quite impressive. You can really go nuts with this, e.g. place your mouse between the parts:

I usually have them close to each other, but when I’m typing with my baby son in my arms, I move them a bit further apart.

I decided to buy the US version of the keyboard because the German version was only available from German resellers, which sold it at ridiculously high prices. That worked out pretty well for me: Programming with the US layout makes much more sense (and is more fun), and thanks to the US international layout, I can still type special German letters effectively. It was a bit difficult at first to use a German layout on my laptop and at work and a US layout on my desktop computer, but by now I can switch layouts mentally without a hitch.

The Freestyle doesn’t have a num pad, which was actually one of the reasons I decided to buy it. The num pad consumes valuable space on the desk, forcing me to either not centre the keyboard in front of me or to reach unreasonably far for the mouse. As a touch typist, I hardly used the num pad anyway, so I was glad to get rid of it. The Freestyle does have an Fn key that makes a couple of other keys function as num pad keys, but I’ve never used that, except accidentally.

Speaking of which, there are a couple of special keys on the left:

I never use those, but I guess they just had some free space there, so I don’t mind. All of these are hardwired to key combinations, so they work on pretty much every OS. I thought that, as an Emacs user, I could make good use of the copy and cut keys (C-C and C-X, see?), but they are too difficult to reach from the normal typing position.

As you can probably tell by now, I’m pretty happy with the Freestyle, and it was definitely worth the money in my case, because the pain disappeared after a short while.

One last piece of advice: If you are going to buy such an expensive keyboard, make sure to invest the additional $10 to buy a cover for keeping it clean:


Please don’t mistake me for an RSI expert (in the unlikely event that I make that impression); I’m just a geek with pain, which is a dangerous combination. I’ll admit that I suspected my pain was mostly caused by the mouse, a cheap piece of hardware, yet I looked into the considerably more expensive ergonomic keyboards, mostly because I think they’re cool :). It turned out well for me, because the pain was indeed caused by the keyboard. I was lucky with my choice and the pain did go away. However, I guess the best thing to do with RSI pain is to see a physician, figure out what’s causing the pain, and then solve that problem.

A matter of style

I’ve had a few interesting discussions about coding style lately, which inspired me to blog about my views on the issue. I’ll start with an unattributed quote:

You can write Fortran in any language.

This roughly translates to: you can write crappy code in any language, no matter how sophisticated or en vogue it is. You shouldn’t, of course, but it leads to the conclusion that the programming language does not influence code quality as significantly as some people might think. Does this mean that once you’ve learned to write code, you should stick to your style and apply it to every language you use? A resolute no from me.


I personally think that every programmer should be fairly confident in a few languages, if possible of different paradigms. I’m currently trying to stay capable in C++, Java, JavaScript and Lisp (mostly Clojure right now). My reasons are:

  • You will have the right tool available for the problem at hand. Every language and platform has its strengths: System programming in Java is pretty pointless, just as web development is in C++. It is possible, but there are better tools for the job. If you know only one language, you will probably not even see that there is a superior alternative. If the only tool you have is a hammer, you tend to see every problem as a nail.
  • It’s good for you. Looking into different ways of thinking and learning new things will make (and keep) your brain flexible. Eric Raymond once said that “Lisp is worth learning for the profound enlightenment experience you will have when you finally get it; that experience will make you a better programmer for the rest of your days, even if you never actually use Lisp itself a lot.” I agree.
  • You probably won’t treat your language like a religion, because religions are usually mutually exclusive.

I’ve done considerable work with Python, PHP and others as well, but I’m willing to let my skills in those rust in order to get as much practice as possible in my languages of choice. It’s the paradigms that really matter in my opinion, though, not the languages. (Although drastically different syntax might help you stay open-minded.) I’m currently covering structured programming, object-oriented programming, functional programming and generic programming.


It’s probably a bit cheesy, but I’m going to compare programming to martial arts. (Sorry, couldn’t resist; I’ve been comparing everything to martial arts since I started reading The Book of Five Rings.) Just as in martial arts, there are numerous styles and forms of programming, with no ultimate consensus on which one is best. It often depends on the problem at hand. Another similarity is that martial arts (at least Asian ones) usually have a philosophy at their core, just like many programming languages do. C, for instance, is very concerned with the KISS principle and the UNIX philosophy.

If you take the philosophy and programming style from C to Java, your code will be a mess. Not because Java is a bad language, but because Java was not intended to be a language to write C in. It has different purposes. You can’t master Karate and then claim to do Aikido while punching people in the face. You will have to accept that Karate is just not appropriate in an Aikido dojo, so no matter how good you are at Karate, you’ll have to invest energy into learning Aikido.


Another cheesy analogy: How would you go about moving to a different country? I would try to learn the language and behave in a way that is considered to be polite by the locals. Of course, I could just keep talking in my language and do whatever I always do, even if it offends everyone around me. So why would anybody use goto extensively in C++? It’s possible, but it would offend almost every C++ programmer I have ever known.

I think it is appropriate for a journeyman (and if you’re learning a new language, you are one in at least one community) to learn from the respective masters and try not to be ignorant. Everyone is free to disagree (especially constructively), but I think you should at least give it a try – you will have an easier time being accepted by the community and hence an easier time learning. That’s at least my experience.


So how do I avoid the problems mentioned above? I adopt the style of notable authorities of the language. I write C like Kernighan & Ritchie, C++ like Stroustrup, the STL and Boost, Java according to the Sun conventions and so on. These people are certainly not infallible geniuses, but they are the ones who shape the core community around a language and have probably thought about it the most.

I don’t use camel case in C++, although it has fewer issues than underscore notation. When I wrote some C# recently (Mono, of course), I actually started method names with an upper-case letter, although I don’t like it. It’s all about consistency: can you honestly look at code that mixes different programming styles? I can’t.

This is not only about brace placement and naming: When working with Java, I write accessor methods, but I prefer structs with public member variables in C++. Partly because it’s more common, but also because there are some good arguments for accessor methods in Java, and few in C++. (That being said, I largely avoid public fields and accessor methods altogether because they are rather evil.)

I usually come to the conclusion that the respective language’s conventions make sense, so I’m not sure how I would treat one with bogus conventions. I probably wouldn’t use it.

All of this is only relevant for people in charge of code style (e.g. because they are working alone). When working on a team, it is in my opinion of the utmost importance to use consistent conventions, even if they’re weird.

Falling in love with Git

Ever since watching Linus Torvalds’ Git talk about two years ago, I’ve been excited about Git. However, it was only half a year ago that I first used it for one of my own projects, during an internet outage. I’ve been working with Git extensively for a while now, and I’m still impressed.

Here’s what I love about Git:

  • Distributed. I didn’t know about distributed version control before watching Torvalds’ talk, but it all made sense to me immediately – especially when thinking about open source projects. Clone a repository, work with it, create your own branches, collaborate on something with someone else and contribute back to the project by having them pull your changes or by creating a patch with the built-in command. Just brilliant.
  • Powerful. I’ve never seen a source code management system as powerful as Git before. You can create and merge branches easily (git checkout/branch), you can store your current changes, work on something else and restore them later (git stash), you can employ a Subversion-like workflow by pushing to one or more remote repositories (git remote/push/pull), you can use Git to work with CVS (git cvsimport), Subversion (git svn) and lots of other SCMs, you can create and apply patches (git format-patch/apply), you can do regression testing (git bisect) and more. Despite all its features, I find it remarkably easy to use, probably because I grew up with Unix, and Git is Unix philosophy at its finest.
  • Fast. It’s hard to believe how fast Git is. Cloning a whole repository, including the complete history of all branches, is a matter of seconds for my typical project size. Plus, since you’re working locally 99% of the time, you hardly ever have to wait for the network. When switching branches for the first time, I thought something had gone wrong because it took just one second.
  • Github. Github is definitely one of the great things about Git. Besides hosting your projects conveniently, you can follow other projects and developers to see what’s happening in a Twitter-like timeline and you can fork other projects at the click of a button. For instance: I just cloned a project from Github and made some local changes. Since I had only read access to that repository, I simply forked it on Github, added it as a remote repository in my local repository and pushed my changes to it. I then removed the original repository from the list of remotes and I could continue to work as if nothing had happened. And all that in about one minute.
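That fork workflow boils down to a handful of commands. Here is a sketch of it; the URLs and the remote name myfork are placeholders, not the actual repositories involved:

```shell
# Clone the project you only have read access to (URL is a placeholder).
git clone git://example.com/project.git
cd project

# ...make and commit local changes...

# After forking on Github, add your fork as a second remote and push to it.
git remote add myfork git@github.com:you/project.git
git push myfork master

# Drop the original read-only remote and keep working against your fork.
git remote rm origin
```

Since remotes are just named URLs in your local configuration, adding and removing them is cheap and doesn’t touch your history.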

You can see how amazed I am. However, I have little hope of introducing Git where I work. The Eclipse plugin EGit is simply not there yet, nor is Windows support, and considering how difficult it was to replace CVS with Subversion, it seems surreal to think about using Git there. At least I’m Git-only for all of my private projects by now.

But if you can choose your own tools, I suggest you give Git a try even if your team is using a different source code management system. If you’re a Subversion user (I’m not a Subversion hater like Torvalds; in fact, I still like it), I can point you to a very nice introductory series of screencasts demonstrating how you can collaborate with Subversion users.

Test-driven development

I’m probably horribly late to the party, but I’m currently exploring test-driven development after reading Kent Beck’s classic Test-Driven Development: By Example. Although most developers claim to know TDD, I have only met a few who knew what it was actually about – the majority seems to confuse it with unit testing.

Here’s the surprise: TDD is not a testing technique. It’s a development technique, and a good one at that. Here’s why I like it:

  • It forces you to carefully think about what you want to implement before you dive into the how, leading to simpler APIs.
  • It pushes you towards loose coupling and modularisation – virtues in object-oriented design.
  • It makes you focus your energy on solving actual problems, away from just-in-case code and bloated, prophetic designs.

This is how TDD works in a nutshell:

  1. Think about what you want to create and hence what you want to test. Make a TODO list of all the test cases you plan to write.
  2. Pick a test case and implement it. Use imaginary classes and methods – it doesn’t have to compile.
  3. Make the test case compile as fast as possible. Create empty classes, implement methods that return constants, just make it compile.
  4. Run the test case. It will most likely fail. Now make it pass as fast as possible, using constants and duplication is explicitly allowed.
  5. As soon as the test case passes, refactor the code. Remove constants and duplication, refactor your dreamt API to match reality, write Javadoc etc. Run all test cases often and make sure that they still pass once you’re done. If you find a problem that isn’t identified by the existing test cases, add it to the TODO list and ignore it for now.
  6. Pick the next test case to implement. If you are unsure whether the code will only work with the specific test case you just wrote, write a similar one with different data. Beck calls this triangulation and it is supposed to increase the confidence in your code. TDD is all about confidence.
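To make the cycle concrete, here is a minimal sketch of one red-green-refactor pass in Java, loosely modelled on the multi-currency Money example from Beck’s book. The class is illustrative, not from any real code base:

```java
// One TDD cycle in miniature, following the Money example from
// Test-Driven Development: By Example. (Illustrative sketch only.)
public class MoneyTddSketch {

    // Step 3: just enough code to compile. The first version of times()
    // returned a hard-coded new Dollar(10) to make the test pass fast
    // (step 4); step 5 refactored that constant into the real computation.
    static class Dollar {
        private final int amount;

        Dollar(int amount) {
            this.amount = amount;
        }

        Dollar times(int multiplier) {
            return new Dollar(amount * multiplier);
        }

        int amount() {
            return amount;
        }
    }

    public static void main(String[] args) {
        // Step 2: the test case, written before the implementation.
        Dollar five = new Dollar(5);
        if (five.times(2).amount() != 10) {
            throw new AssertionError("expected 10");
        }

        // Step 6: triangulation - a second test with different data rules
        // out an implementation that just returns the constant 10.
        if (five.times(3).amount() != 15) {
            throw new AssertionError("expected 15");
        }
        System.out.println("all tests pass");
    }
}
```

The point is the sequence, not the code: the hard-coded constant is a legitimate intermediate state, and triangulation is what forces you to replace it with the real logic.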

I’ve investigated TDD with two problems so far:

I first tackled an interview-style question about combinatorics. Although I was not a hundred percent sure what I was doing at all times, I was able to create a working solution astonishingly fast. Whenever I was in doubt (quite often), I would write another test case and fix the code until all test cases passed again. Now I know why it’s called test-driven; it does indeed drive development, e.g. by breaking down complicated problems into smaller, solvable ones. I never had to stop and agonise – if I couldn’t make a test case work, I threw it away and wrote one that made a smaller step towards my goal.

That really aroused my interest in TDD, but it was a neat little isolated problem, something for which TDD is known to work very well. How about a more realistic problem?

I recently started to port a Tetris clone I wrote in Java to GWT. There were performance problems because I drew the whole grid again after each change instead of just reacting to the changes, so I decided to rewrite the game logic. I figured that this would be just the right problem for my next TDD experiment, plus I was curious how it would fare in game development, so I began.

I just implemented the last test case from my list and I’m pretty happy with the results. However, it was more complicated than my first experiment, and there were many design issues. For instance: In the game (I assume you know Tetris), new pieces are placed at random horizontal positions. I could safely ignore this fact for a while, but as soon as I had to write test cases for moving pieces horizontally, things got complicated.

My test cases asked the pieces for their horizontal position and tested movement based on that information. I asserted that the moved piece did move left, unless it was on the left edge of the grid, in which case it shouldn’t move. I noticed that this test case would sometimes pass and sometimes not, based on the random horizontal position of the piece (e.g. if my code was broken and didn’t move the piece at all, the test case would only pass if the piece was positioned on the left edge of the grid).

I was not willing to tolerate a test case that would sometimes pass and sometimes not, because I wouldn’t be able to have confidence in the test cases anymore. Should I run all test cases a hundred times in a row after each change, just to make sure that each situation is tested? I had to get the randomness out of the game logic code. I did that by creating a class that would generate random numbers with methods like createRandomPiecePosition(). I then created an interface for this class and a mock implementation that returned fixed values which could be set by the test cases as required. This solved the problem, and it’s really nice in terms of modularisation.
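Here is a minimal Java sketch of that setup. Apart from createRandomPiecePosition(), which is mentioned above, all names (RandomSource, FixedRandomSource, PieceSpawner) are made up for illustration:

```java
import java.util.Random;

// Sketch: pulling randomness out of the game logic behind an interface,
// so test cases can substitute a deterministic implementation.
public class RandomSourceSketch {

    interface RandomSource {
        int createRandomPiecePosition(int gridWidth);
    }

    // Production implementation: real randomness.
    static class DefaultRandomSource implements RandomSource {
        private final Random random = new Random();

        public int createRandomPiecePosition(int gridWidth) {
            return random.nextInt(gridWidth);
        }
    }

    // Mock for the test cases: always returns whatever the test sets.
    static class FixedRandomSource implements RandomSource {
        private int position;

        void setPosition(int position) {
            this.position = position;
        }

        public int createRandomPiecePosition(int gridWidth) {
            return position;
        }
    }

    // The game logic only ever sees the interface.
    static class PieceSpawner {
        private final RandomSource randomSource;

        PieceSpawner(RandomSource randomSource) {
            this.randomSource = randomSource;
        }

        int spawnPosition(int gridWidth) {
            return randomSource.createRandomPiecePosition(gridWidth);
        }
    }

    public static void main(String[] args) {
        FixedRandomSource fixed = new FixedRandomSource();
        PieceSpawner spawner = new PieceSpawner(fixed);

        fixed.setPosition(0); // deterministically test the left-edge case
        System.out.println(spawner.spawnPosition(10)); // prints 0

        fixed.setPosition(4); // and a piece somewhere in the middle
        System.out.println(spawner.spawnPosition(10)); // prints 4
    }
}
```

With the mock in place, the left-edge test always runs against a piece on the left edge, so it passes or fails for the same reason every time.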

I really enjoyed refactoring whenever I felt like it – rerunning my test cases gave me confidence that I didn’t break anything. I also noticed that I was surprisingly fast, even though I spent a lot more time on design issues than anticipated, because of the fast workflow: I could write some code and instantly see whether it works. When I worked on the Java version of the game, I had to create a specific situation (e.g. game over) by actually playing the game, which was annoyingly time consuming. Furthermore, the more unexpected bugs I fixed, the less confidence I had in the code, making me test more often. With the TDD workflow, I noticed an unexpected bug, wrote a test case to identify it and made it pass. I never had to think about the bug again, because I knew that there was a test case that would identify it should it occur again.

All in all, I’m really impressed by TDD, and I plan to use it for the game logic of my latest project.