80 columns

I tend to wrap most code I write at 80 columns. Many programmers I interact with (especially those contributing to my projects) don’t really see the point of that, so here’s why I do it:

1. It works well with UNIX tools like less and grep at standard terminal sizes. Sure, you can resize your terminal emulator, so I guess it’s not a very strong argument in this day and age.

2. It works well with pretty much every diff and review tool. That’s a better argument, because many of these tools have some kind of line limit, and 80 is certainly the lowest common denominator that works well everywhere.

3. Most lines are shorter than 80 columns anyway; this rule is mostly about how to deal with exceptionally long lines. Java code is a notable exception here: wrapping it at 80 columns is pretty hard. I’d still argue for a line length limit in Java code; 100 or 120 columns seem to be popular choices.

4. From my experience, short lines result in more readable code overall. Fewer nested expressions and more (reasonably named) temporary variables always appear to make things easier to follow.

5. You need less horizontal space for your editor, so you can show other windows (browser, terminal, other files, …) next to it.

I came up with only one downside of wrapping at 80 columns: you have to think about how to wrap long lines, or preferably refactor them into multiple shorter lines. That’s definitely some extra effort, but as I noted above, I strongly believe it aids the overall readability of the code, so I’d say it’s time well spent.
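
To make this concrete, here’s a small TypeScript sketch of the kind of refactoring I have in mind (the Order type and all the names are invented for this example):

    interface Order {
      paid: boolean;
      items: string[];
      customer: { name: string };
    }

    // Everything crammed into one expression, busting the limit:
    function summarize(orders: Order[]): string {
      return orders.filter((o) => o.paid && o.items.length > 0).map((o) => o.customer.name).join(", ");
    }

    // The same logic split into named steps: every line fits into
    // 80 columns, and the names document what is going on.
    function summarizeReadably(orders: Order[]): string {
      const paidOrders = orders.filter((o) => o.paid && o.items.length > 0);
      const customerNames = paidOrders.map((o) => o.customer.name);
      return customerNames.join(", ");
    }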

Why asm.js is a big deal for game developers

I’ve been following Emscripten for a while now. It was already pretty solid months ago, and it keeps getting better. Now that asm.js is beginning to show pretty impressive results and Chrome is apparently considering implementing optimisations for it, things are looking pretty good for native code in the browser these days.

So, why would anyone want native code in the browser? Legacy code, you might think, but that’s not really it. Games are the biggest reason.

C++ is the single most important language in (professional) game development, and before that it was C. Before that, various assembler dialects. See the pattern? Many game developers, especially indies, use engines that let them work in higher-level languages, but you still need to run those engines on everything that calls itself a gaming device, and of course they’re almost all written in C or C++. So there’s really no way around native code execution if you want games on your platform; that’s why every notable mobile platform has added it, sometimes reluctantly.

Why C++?

There are numerous reasons why most of the game industry is still using C++. One is that it’s pretty much the only language you can use on every single platform that runs games, from desktop to handheld consoles. Another is that, as opposed to desktop applications, games usually have the exact same user interface across platforms. So rewriting it in the platform’s native frameworks, as is a popular approach to mobile apps these days, does not make much sense. You usually want one code base that runs everywhere. C and C++ are the only reasonable options for that right now. And even though many atrocities have been committed with C++, you can actually write nice code in it – look to the likes of Stroustrup and Carmack for that.

Why not JavaScript?

But couldn’t people just use JavaScript and ship their game with an app that embeds WebKit? Hardly. Browsers and mobile browsers are becoming faster, but the bleeding edge is still barely able to deliver the kind of games that were running fine in Flash ten years ago. Yet current desktops and smartphones are capable of so much more than that. WebGL could arguably fix this, but even if mobile browsers weren’t so slow to implement it, and Microsoft weren’t refusing to adopt it, you’d still end up with significantly more overhead and stricter hardware requirements than if you just made a native game.

I’ve thought long and hard about whether I should make web-based games. I’ve been developing web applications in my day jobs for more than five years now, and JavaScript has been my main language all along, so it seemed like a no-brainer. But I want to use proven libraries like SDL and Box2D, I want to support older operating systems and hardware, and I don’t want to spend my time optimising every little algorithm I write (which is what I ended up doing for pretty much every browser game I’ve worked on so far). It’s C++ for me.

So, why is asm.js a big deal?

Because the browser is suddenly a feasible platform for game developers, most of whom previously ignored it. It’s not a huge investment anymore: you can still ship the same code base to desktop, mobile and consoles. You can even feasibly have a web-based demo and then deliver the full game as a native application, or make an MMORPG free to play with a limited browser client, and have people pay money for the gorgeous native client. We’ll probably see all the big third-party engines compile to asm.js in no time, Unreal Engine having already taken the lead. Browsers will either support WebGL or lose users; I’m betting my money on the former. This is very likely the dawn of the browser as a proper gaming platform.

Working remotely: the best thing I’ve ever done

It’s been six months since I switched jobs and began to work from home most of the time. It’s downright amazing, the best thing I’ve ever done. Seeing my kids grow up and being able to support my seriously overworked wife (our boys are 2 and 1, that’s quite the challenge) is great for all of us.

As a programmer, I’m in a very good position to work remotely: we tend to be quite good at communicating online (especially those of us with an open source background), we can improve the tools we use or build our own, and we’re in high demand right now – many of us can simply leave for greener pastures if our company doesn’t let us work from home. Even if you think your company is awesome and it’s reasonable of them to ask you to be in the office every day, it probably isn’t. What this really says is that your boss thinks you won’t work properly when you’re not being watched. Do you really want to work for someone like that? I sure don’t. (Sometimes remote work is genuinely difficult: my last job, for example, involved programming for huge, expensive parcel machines. But that seems like a rather unusual case.)

How we do it

My company employs some people in Europe and some in the US, and most of us don’t work fixed hours, so synchronous communication is not always possible. We have an IRC channel where we usually announce when we’re there and when we’re gone, and have quick technical discussions, but it’s not really enough. So a lot of our communication was happening via email, which can get messy. We have recently set up Discourse for internal asynchronous communication, and it’s working much, much better.

We use Trello to communicate what we’re working on. It’s voluntary, but most people do it; knowing what everyone is working on is quite important when remoting. But the single most important thing we do to make remoting work is mandatory code reviews. We use Google Code Review (basically Rietveld on a Google Apps account), but there are other promising options like Gerrit or GitHub pull requests. If you’re not used to doing code reviews, I really think you should give them a try. I’ve never seen any other process increase code quality and communication like reviews do. And it’s not really much overhead: even a review covering one or two days of work is usually done in 10 to 30 minutes. And time spent understanding the code base is not wasted at all in my book.

Since we’re a distributed team, we need all of that to work. You can get away without tools for asynchronous communication or code reviews in an office, but I’m convinced setting these things up is quite beneficial even there. What if your developers are sitting in more than one room? In my experience, there’s a lot of walking and phoning and meetings going on, and that’s highly distracting. Stuffing lots of people into a single big room is a common solution, but that just goes from bad to worse: I find it insanely distracting to work in a room with more than two other people in it.

Not all of my new colleagues work remotely, but almost all of the developers do, including the lead developer and the CTO. The remaining people usually work in the office, and everyone living close to the office is encouraged to join the weekly status meeting there. I almost always go; it’s really good to get together and see some people in person or do some brainstorming every now and then.

All the remotees have a call the day after the office status meeting, so someone who was there can talk a bit about what’s going on in the rest of the company, and everyone can share their status.

How I do it

Before I started my current job, I had to leave home almost every day for about 20 years straight. So I was a bit nervous that this big change in my life would confuse me, or that I wouldn’t be motivated enough to do a good job, or that I just wouldn’t like it. Turns out none of that happened; it’s been the happiest time of my life so far. Getting to see my wife and kids whenever I want to, not missing any milestones and doing my share of parenting is brilliant beyond words. If you’re a parent, please consider trying this if you have the chance.

Even after six months, I haven’t fully gotten used to this vast new freedom. I just need to do about 40 hours of work per week; it doesn’t matter when or where. Yet I still usually work about 4 hours straight, have a lunch break and then work another 4 hours, just as I used to in an office. But when the weather is really amazing, like when it first snowed last year, I do leave for a while. Or when one of us needs to go see the doctor; that’s much less stressful now than it used to be.

I’m not really working in a fixed place anymore; I frequently move from the study to the living room, from the table to the sofa, or onto the balcony when the weather is nice. I also sometimes work at a cafe close to the office when I arrive early for the status meeting (I haven’t felt the need to ask for a key so far). I think if I didn’t have a family, I’d be working in a cafe on a daily basis. Even if there are lots of people in it, it’s quite different from a crowded office. I don’t really know why; maybe because I can be sure that nothing being said is relevant to me.

I also noticed changes in the way I consume information. I no longer read Twitter as thoroughly as I used to, and I’m seriously behind on my RSS feeds. I’m not on a crowded train with a crappy internet connection anymore; I’m at home, where all the things I like are. Why read Twitter for 10 minutes when I can use the time to play with the boys, play video games or work on a private project? The same goes for procrastination: why surf around aimlessly when I need a break from work? There are so many better things I can do.

All in all, I’m really enjoying remoting, and I’m quite determined not to work in an office again if I can help it. And given how the number of remote offers for developers seems to be rising steadily, I don’t see why I’d ever have to.

Reimplementing the wheel

Assuming you’re a programmer, I bet you’ve heard the phrase “reinventing the wheel” before. I usually cringe when I hear someone say it, for two reasons:

1. It’s actually reimplementing the wheel

The phrase is almost always used when someone is implementing something that has already been implemented by a library or framework. (That’s at least what my anecdotal evidence suggests.) So it’s reimplementing, not reinventing. Reinventing would be someone implementing a sort function while completely ignoring all the research done in that area, trying to come up with their own algorithm. I don’t think that’s common.

But there are valid reasons for implementing something that has been implemented before; here are just a few:

  • You want to actually understand what’s going on, and be able to tune freely, because it’s an important part of your application
  • Libraries age far more quickly than research, and are not available in all environments
  • You just want to keep things simple (note that I’m not using simple synonymously with easy here)
Of course, there are valid reasons against doing that as well, for example:

  • Using the existing implementation is usually faster
  • You and your colleagues don’t need to learn and remember the theory
  • Depending on the library and the programmers in question, the result is often more reliable

So it’s sometimes the right thing to do, sometimes not.

2. It’s condescending

It might be said with the best of intentions, to save someone else from wasting time, but it’s still saying they’re wasting their time. If you don’t have a clue why they chose to implement something themselves instead of using an existing implementation, that’s downright condescending.

Making assumptions about something you don’t know is generally not a good idea. Why not assume people actually know what they’re doing unless evidence suggests otherwise?

Unit tests versus code quality

Do unit tests improve code quality? Some famous consultants would say yes, but I don’t think they do. Testable code isn’t automatically better code. Depending on the capabilities of your language, it’s probably worse.

Now don’t get me wrong: unit testing is a good thing. But I think we need to realise that we’re often making a trade-off between simplicity and testability. To me, simplicity is the most important factor in code quality, but many people lean towards testability and are very successful with that. Maybe a complex, well-tested system is better than a simpler system with less test coverage; I can’t answer that. But you can’t have both, at least not in static languages.

Let me explain what I mean: in a unit test, you’re testing a small part of your system in isolation. You’re ensuring that a single module, class or function works as expected, now and in the future. You only test the unit as it can be used from the outside, and that’s good, because its implementation details shouldn’t matter to the rest of the system. But it’s also a problem.

It’s a problem because you often have to actually change the unit to make it testable. There are plenty of examples of this, but I’ll stick to one in this post: you’re testing a unit that uses a random number generator. Since the behaviour of the unit will be different every time you use it, you need a way to take control of that random number generator in order to test the unit reliably.

If your language supports object-oriented programming, the common approach is to introduce an interface for the thing you need to control and inject it from the outside. Let’s say we create a RandomNumberGenerator interface and pass an instance of it to our unit. We can then create a fake implementation that does just what we want, and pass that one to the unit from the tests. Now we can make sure that the unit works fine for various random numbers.
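
A minimal sketch of that approach in TypeScript might look like this (the Die class and all the names are made up for illustration, not taken from any real code base):

    // The interface we introduce purely so the unit can be tested:
    interface RandomNumberGenerator {
      next(): number; // a value in [0, 1)
    }

    // The production implementation is a thin facade over the
    // standard library:
    class StandardRandomNumberGenerator implements RandomNumberGenerator {
      next(): number {
        return Math.random();
      }
    }

    // The unit under test gets its generator injected from outside:
    class Die {
      constructor(private rng: RandomNumberGenerator) {}

      roll(): number {
        return Math.floor(this.rng.next() * 6) + 1;
      }
    }

    // In the tests, a fake implementation gives us full control:
    class FakeRandomNumberGenerator implements RandomNumberGenerator {
      constructor(private value: number) {}

      next(): number {
        return this.value;
      }
    }

    const die = new Die(new FakeRandomNumberGenerator(0.99));
    console.assert(die.roll() === 6, "highest value should map to 6");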

However, we have just added to the system’s complexity. We have created a facade for a random number generator that is very likely already available in your language’s standard library. Anyone working on the code base will now have to know that this facade has to be used instead of the standard method. We have also introduced an interface that doesn’t make much sense right now: ignoring the tests, there is only one random number generator – why have an interface with only one implementation? That’s nothing but unnecessary code other poor souls will have to wrap their heads around. And maybe you even introduced a dependency injection framework, which is going to make your code base a lot more complex.

Languages that support monkey patching (most dynamic languages, e.g. JavaScript) are an entirely different matter: you can simply rebind the dependencies in your tests. I think that’s how testing is supposed to be: we should just write simple and clean code and be able to test it, without having to think about how to test it and what trade-offs to make. But static languages are still around, popular, and the only option for many applications, so I guess we’ll have to make such trade-offs for quite some time. Let’s at least be honest about it: it sucks.
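
For contrast, here’s a minimal sketch of the monkey-patching approach, again in TypeScript (roll is a made-up example, and a real test suite would do the rebinding in its setup and teardown hooks):

    // The unit uses the standard library directly: no facade,
    // no injected interface.
    function roll(): number {
      return Math.floor(Math.random() * 6) + 1;
    }

    // In the test, we simply rebind the dependency...
    const originalRandom = Math.random;
    Math.random = () => 0.99;
    try {
      console.assert(roll() === 6, "highest value should map to 6");
    } finally {
      // ...and restore it so other tests see the real thing.
      Math.random = originalRandom;
    }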

TimTim 1.3

Remember TimTim? I haven’t worked on it for a while but I noticed some issues on the Galaxy Nexus I got earlier this month.

While I was at it, I ended up fixing a couple of other issues. Here’s the change log:

  • Improved rotation algorithm.
  • Fixed alarm/vibration issues on Android 4.0 and 4.1.
  • Added xhdpi assets to support WXGA.
  • The dial now moves continuously during countdown.
  • Selecting a preset now works when the timer is already active.
  • The soft keyboard is shown when adding/editing a preset.

I’m especially happy with the new rotation algorithm. It finally feels just right: smooth and usable.

I still haven’t added an indication that the timer is ticking. That’s probably the number one complaint I get, but I’m not happy with the solutions I’ve come up with so far, so I’ll brood some more on that one.

The next thing I’ll do with TimTim is port it to iOS, now that my wife has an iPhone.

If you’ve got an Android, check out TimTim and TimTim Free on Google Play. Otherwise, stay tuned.

Why we used Clojure and ClojureScript for Flurfunk

“Why have you decided to use Clojure, and are you still happy with your choice?”

This question has been asked more than once now, and although I answered it in a Google+ comment, it seems you can’t link to those, so here’s a blog post.

I wouldn’t call myself an “old Lisper” (as Thomas did), but I had some experience with Clojure and other Lisps and thought it might be a good choice. We went with Clojure in particular because it runs on the JVM, and we wanted to integrate well with our company’s Java environment.

Clojure code tends to be succinct, readable and easy to change, which was useful since we didn’t have a very clear picture of where Flurfunk should go when we started, nor did we have much time.

ClojureScript, which was only a month old when we began working on Flurfunk, is a different matter. It wasn’t exactly hassle-free. I think we might have been faster if we had used JavaScript, plus I wouldn’t have been the only one working on it. But I’ve seen ClojureScript get better every month, and I was pretty productive once the initial problems were solved. A rewrite wouldn’t have paid off so far.

So to answer the question: yes, we’re happy with our choices. I won’t claim we couldn’t have done it without Clojure, but it certainly played its part. If nothing else, it kept me motivated. On the other hand, I think we would have seen more collaboration (both within our team and now that it’s open source) if we had picked (J)Ruby and JavaScript.

These decisions are hard, and I don’t think there are clearly right and wrong choices here. We both like Clojure; it probably comes down to that.

New job

So this is it, my last week at Viaboxx.

I’ll really miss them, more than any other company I’ve worked for. I had a great time, worked with a great team and learned a whole lot.

If you’re looking for a fine company to work for in the Cologne/Bonn area: they’re hiring to replace me. You can drop me a line if you have any questions.

But all good things must come to an end, and I got an interesting offer from Eyeo, where I’ll mostly be working on Adblock Plus, its various ports and the infrastructure supporting it.

I don’t hate ads – I’m earning some money with ads myself. What I don’t like are overly obtrusive ads, which is why I find Eyeo’s new acceptable ads concept very promising. I’m excited to work on something that helps millions of people experience a better web while enabling website owners to avoid ad blocking by using non-obtrusive ads.

Another exciting aspect is that I’ll be working from home full-time, something I’ve been longing to do for a while now. Viaboxx was awesome and allowed me to work from home one day per week, but more than that wouldn’t have worked out: their core product is software for large, expensive machines, not something you can reasonably work with from home. Anyway, I’m very happy that I’ll soon get to spend more time with my lovely kids – they only grow up once.