Linux on the Desktop 2021 (and 2022, and 2023, and 202x)

March 9, 2021

For almost 20 years, I have usually had two operating systems running: Linux for development stuff, and a Windows installation for everything else. A long time ago it was a dual-boot system, but nowadays I usually have a separate notebook with Linux for development.

Every once in a while I consider switching to Linux as my main desktop operating system. And every time there are some significant showstoppers that prevent me from doing so.

From 2010 to about 2020, the main issue was hardware video acceleration in the browser. You know, watching YouTube and such without your notebook constantly running at 100% CPU or choking on Full HD videos. Battery running low. That stuff.

Way back, YouTube was based on Flash video, and Adobe just gave up on trying to ship anything accelerated due to the fsck*** mess of competing video acceleration interfaces on Linux: VDPAU (Nvidia), VA-API (Intel) and XvBA (AMD).

> "The good thing about standards is that there are so many to choose from." – from the book Computer Networks by Andrew S. Tanenbaum

I thought the situation would improve when Flash went away and everyone switched to HTML5. And right I was: fast forward ten years, and now Firefox 80 ships with optional GPU acceleration. Well, at least for VA-API, but not for VDPAU; so if you have, say, an Nvidia GPU for machine learning, you’re out of luck. Also, it’s optional – you have to activate it manually. And it’s not available in Chromium.

Just ten years, so I guess sometimes you just have to be patient and problems will eventually solve themselves.

Other showstoppers for me right now:

  • a USB microscope (DinoLite) not working anymore. Apparently there was some regression w.r.t. USB or camera drivers, which were removed… I didn’t dig very deep here, but it suddenly stopped working.
  • A Ricoh Aficio SP300DN not working properly in Linux. I did extensive research before buying this one, consulting various magazines and the net. One magazine (c’t) reported compatibility with Linux, as the Mac driver ships with a PPD. The PPD installs, and printing PDFs works 99 out of 100 times. But once in a while, the printer suddenly stops with a PostScript error and goes berserk: it feeds in every sheet of paper in the tray until the tray is empty and prints one line of garbage on each sheet (so you can’t reuse it anymore). Probably some issue with how poppler converts the PDFs… no clue. The printer works fine with the same PPD in macOS, though.
  • the most read entry of my blog is actually the one about my fight with getting WiFi working on a Thinkpad E470. Again I did extensive research before buying this, but couldn’t find any warning w.r.t. the WiFi chip in advance…
  • Aggressive Link Power Management (ALPM) is not working. On my development notebook this means that the SSD is always at full power. This not only reduces battery life, but also makes the left palm rest (metal) really hot. While the flash chips of the SSD should not be affected, I do worry about the controller chips and the overall life-span of the SSD. There is experimental support for ALPM, but it is deactivated by default in most distributions due to potential data loss. And I really don’t want to find out on my production machine whether I am affected and my machine suffers subtle and unexpected data loss…
  • There is no proper echo cancellation in Linux. If you are using Skype or some other conferencing system with the built-in microphone and speakers (and not a headset), you’ll have annoying echo and latency issues when video-conferencing. For work I of course use a headset, but for family video calls with my parents this is an absolute no-go. Again, there seems to have been no progress for several years now.
  • My tax software is not available for Linux. In fact, no tax software is available for Linux in Germany. This is the least of the issues, of course, as I could use another system for taxes, dual-boot, or use a VM. But still, it’s annoying.
  • There are other small tools which are not available: Exact Audio Copy, DVDShrink, IdaPro5. There are, however, workarounds with Wine or similar tools.

Trying to categorize the above issues, there are mostly two major points:

  • Driver support
  • Availability of Commercial Software

Unfortunately, looking back twenty years, the situation was exactly the same; only the details vary. Back then printers were an issue – anyone remember GDI printers? Sound was an issue, too – anyone remember the OpenSoundSystem, or that you could only play one sound at a time in Linux, e.g. you would not receive notification sounds from your instant messenger while listening to music? WiFi – anyone remember ndiswrapper?

Uh, and there was also no tax software.

But then again, with everything moving to the browser, the software issue is indeed becoming slightly less important.

But why is the overall situation not improving?

In my humble opinion, it is due to political reasons. The market reality and the roles and responsibilities of product development are not taken into account. And unfortunately this is why I think the situation won’t change anytime in the near future.

Let’s take the viewpoint of a device manufacturer. You’ve designed and built your device, and now you have to develop a driver and ship the product. You have a tight schedule, as your competitors also have a product in the pipeline. First you target Windows, as this gets you 90% of the operating system market share:

  • You license some generic driver package for some of the chips you use in your device. The chip manufacturer provides these drivers under a commercial license.
  • Based on that generic driver package you have your developer create a Windows device driver. You estimate that it will take about three months to program the driver.
  • After that you get your driver WHQL-signed to make sure it works flawlessly in Windows. You estimate another three months for this process. Microsoft used to charge a small fee for WHQL testing (negligible), but nowadays it’s even free. There is also a small fee for the driver signing certificate – again negligible compared to the overall cost of product development.
  • You finish on time. The driver meets all quality standards, and Microsoft’s static analysis and test tools for Windows drivers – Static Driver Verifier, Code Analysis for Drivers, CodeQL and others – help make sure that your driver runs stably and without bugs.
  • You ship your product. The Windows driver model changes very infrequently; you can be assured that consumers can likely use your product for 5+ years, most likely even 10+, even with newer Windows versions – Microsoft cares a lot about backward compatibility.

Your device is a market success; now you also want to grab another 1% market share and target the Linux desktop.

  • Shipping a binary driver is impossible, as there is no stable ABI. The only option is to do nasty things like writing an abstraction layer, as Nvidia does. However, its legality w.r.t. the GPL is questionable. It is also a hassle for users, and due to the abstraction layer, frequent updates and testing are required throughout the life-cycle of the product.
  • Alternatively, drop the generic driver package that you licensed, redevelop everything from scratch, and open-source the driver. You fear, however, that some company from the Far East will create a clone of your device and simply copy your driver. Essentially that clone manufacturer gets a driver for free, plus quite some insight into how your device works by reading the driver source code. You fear they will beat you on the market, as consumers will buy the cheaper clone.
  • Develop a driver, pitch the driver to the kernel guys
  • Get rejected because your code doesn’t meet the kernel code style and quality guidelines
  • Do that back and forth until your driver is eventually included in the kernel
  • Now, at some undefined future time, distributions will pick up the new kernel revision. You have no clue, nor any control over, when this is going to happen. Your product is late to market, and nobody wants it anymore.
  • After all this is done, your product works perfectly. But then there is some regression in the kernel, because someone decided to redo the USB stack. Due to some subtle bug, your device no longer works on one of the major distributions. You work with both the kernel guys and the distro maintainers to get your device working again.
  • You also have to test whether your device works on Ubuntu 18.04, Ubuntu 20.04, Arch Linux, Debian Testing and Unstable, Red Hat Enterprise, SUSE Enterprise and several more major distributions and versions to make sure your device doesn’t have subtle bugs on Linux.
  • It’s really hard to plan for all of this; there is no defined process for driver inclusion. You have to talk to a lot of people, like the kernel guys. And hope everything will work out eventually.

Hypothetical? Well, these are the issues that occurred w.r.t. the DinoLite (apparent regression) and the Ricoh printer. Granted, the drivers were not specifically written by the manufacturer but by third parties, but still – these are the issues you will run into.

As for software: Let’s say you’re an ISV developing a particular software package – say, a CAD tool for a very specific industry. The software is your product, so there really is no way to open-source it and sell support.

A software developer who wants to ship a product

  • has nothing stable to build against. There are various versions and releases of Gtk, Qt and other libraries across a bazillion distributions. There is no stable ABI like Win32 or Cocoa.
  • has to package all libraries themselves. However, since some of these libraries are shipped by the distributions in different versions, you have to make sure that only your libraries are loaded during program startup. You also have to test this on at least 20 different distributions and distribution versions.

Hypothetical? I challenge you to ship a binary package of a small C++/Qt application for four to five major Linux distributions. I did it. It’s possible, but the effort is huge compared to, say, Windows or macOS. Just google things like ‘Failed to load platform plugin "xcb"’.

I have to say, Snap and Flatpak have changed the situation slightly for the better though.

In other words: the lack of a kernel driver ABI and of an application ABI hinders the adoption of Linux on the desktop.

The same situation, by the way, is happening on Android. The chipset manufacturer ships a proprietary board support package, which will definitely never end up in the mainline kernel. It targets one specific kernel version, which is then frozen – that’s their ABI. Just look at the Android kernel of your phone. My phone is running kernel 3.18.71. Apparently, an ABI is needed.

Due to the lack of one, right now everyone just ships binary blobs and freezes the kernel. Google is trying to abstract the hardware and create something like an ABI with Project Treble and other initiatives. Let’s see what comes out of it.

What’s the kernel developers’ opinion on that? Well, there is this document called "Stable API Nonsense".

> You think you want a stable kernel interface, but you really do not, and you don’t even know it.

Well, they should talk to those Google guys, because it seems they want a stable kernel interface after all and are putting millions of dollars into it. But it seems these Google guys really don’t want one – they just don’t know it yet!

Someone should tell them. They could save millions of dollars!!!

I mean, we just need to tell these Qualcomm, Broadcomm, and Mediatek guys that they should open source everything and create open kernel drivers! Easy, isn’t it?

In all seriousness, I think this "stable api nonsense" document is not very honest.

The technical argument against a stable ABI is essentially: we don’t want to do it, because it’s too much work. However, Microsoft and Apple show how it can be done. And Google shows how it can be done even with the Linux kernel.

And then there is some political argument, which is not mentioned in the document, but which is probably the major reason against a stable ABI from the perspective of the kernel guys and open source community:

Not having a stable ABI is intended to actively force vendors to honor the GPL license and upstream free code into the kernel – to prevent manufacturers from creating closed-source drivers and to encourage open-source ones.

And it kind of works. Especially for big-iron stuff. Think of how Intel supports open source now, and how the situation was ten or twenty years ago. Intel does not do this out of pure goodwill though – there is $$$ involved. If some big data center wants to run Linux for heavy computation, then either Linux works with your stuff, or they will go with AMD or some other vendor.

However, it only kind of works. Nvidia does all this complex proprietary driver abstraction because customers want fast Linux drivers and pay for them. So apparently even that is more economical for them than creating and shipping open-source drivers.

And for other devices from smaller manufacturers, especially those targeting the desktop, it is even less economical.

And this is why I will still maintain two machines for the foreseeable future: one for development/number crunching with Linux, and one for general-purpose computing with Windows – the latter being my main machine.

M.U.L.E. – Input Lag (Delay Testing)

March 11, 2020

If you’d ask me what the best multiplayer computer game is, then without doubt both StarCraft: Brood War and M.U.L.E. come to my mind.

M.U.L.E. is an incredibly entertaining turn-based strategy game, originally developed for the Atari 800 and then ported to various home computer systems. The C64 port is by far the most popular version of the game.

The gameplay centers around settling the faraway planet IRATA and producing four goods: food, energy, smithore and crystite. The first three are for direct consumption; crystite is a luxury item. In each turn, a player chooses which of these goods to produce, how to increase his production capacity, and so on.

M.U.L.E. also adds some real-time elements. In particular, goods can be traded after each turn, similar to a trading exchange: four players negotiate prices and the best deal in real time with their input devices. Depending on the trades and on what each player specializes in, prices can vary a lot. Observing what goods your competitors bet on, what they produce and how things will turn out is part of what makes this so much fun.

Actually thinking of it, M.U.L.E. feels almost like a classical board game, augmented by the capabilities of a home computer:

  • Classical board games are usually turn-based and have no real-time strategy aspect. Here, players have to act within a limited amount of time, and their joystick skills come into play. This makes you focus on your next turn far more than in a typical board game situation, and adds a nice tenseness. It often happens that players sit silently in front of the screen while one of them concentrates on his next moves.
  • Classical board games – say, Avalon Hill’s famous Civilization – also often include trading of goods. But here the computer adds a lot: first, in the structured way trades are conducted under time pressure – as mentioned, similar to a real stock exchange – and second, because the computer computes the resulting market prices in real time via complex formulas. Such calculations would be far too tedious to carry out manually in a board game.

I think this, together with the fact that the game is simply extremely well-balanced – rumor has it that the developers spent an incredible amount of time beta-testing with friends at their private home – is the reason why the game has aged so well.

In fact, a few friends and myself regularly meet to play a game of M.U.L.E. For decades, this happened at Matthew’s home; he owns several C64 computers and 1541 floppy drives, all still in working condition. It is, however, getting more and more difficult to keep C64s running – the excellent YouTubers Jan Beta and Adrian Black spend quite some time doing so. C64s have some fragile parts and design flaws: first, some chips are known to fail quite often, afaik that’s the CIA and SID chips. Another issue is the original power supply. It provides, among others, a 5 volt rail that is fed directly to the chips. Unfortunately this power supply gets quite hot and has no surge protection on that 5 volt rail. So in case of a fault, the 5 volts can quickly become 7, or 10, or 12 or more volts, and will fry several chips at once. (Jan Beta has two videos on how to build your own replacement power supply. It’s easier than you think.)

Another point is that some time ago Matthew bought a humongous Sony 4K TV – almost as if to compensate for something. In the old days, we would connect the C64 via its S-Video out, i.e. separate chroma and luma, through a SCART adapter to an old-fashioned tube TV. Actually it’s not really S-Video, as that standard was defined in 1987, years after the C64 hit the market, but it still works (the pins are slightly different). This resulted in quite good image quality – then again, a tube TV has rather average picture quality in the first place. The modern Sony 4K TV only has a composite input, which results in a noticeably worse image. We also noticed a huge input lag. By input lag I mean the time measured from pressing a button on the joystick until a visible change occurs on the screen.

All in all, this is why we looked into emulation and input lag. Here is the setup: Matthew used his iPhone and recorded a video. In that video I’d smash a joystick in one direction (or press a key on the keyboard), and in the background you’d see the screen change. We then counted frames from the point in time where the joystick reached its maximum angle, or the key was fully pressed down, until a screen change was visible. The C64 (the European PAL version that we use) runs at 50 fps. Matthew’s iPhone can record video at 240 fps, so that’s well beyond the Nyquist rate.
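Converting counted video frames into milliseconds is straightforward; a small sketch of the arithmetic (the helper name frames_to_ms is mine):

```python
def frames_to_ms(frames: float, camera_fps: int = 240) -> float:
    """Convert a frame count from the 240 fps iPhone video into milliseconds."""
    return frames / camera_fps * 1000

# Two of our measurements:
print(frames_to_ms(37.5))  # joystick, TV in standard mode -> 156.25 ms
print(frames_to_ms(8))     # joystick, TV in game mode -> ~33 ms
```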

In fact, I’d show you the video or some frames of it, but Matthew voted against it, citing that his secret gay porn collection as well as details on his Grindr account are visible in the background. Which I can understand but:

Matthew, it’s okay. We like you the way you are.

Anyway, I tend to rant too much; the table below shows the results. The Sony TV is a Bravia KD-65XE9005.

| Setup | Lag (frames, video @ 240 fps) | Lag (ms) |
|---|---|---|
| Joystick button, original C64 @ Sony 4K TV via composite, TV in standard mode | 37.5 | 156 |
| Joystick button, original C64 @ Sony 4K TV via composite, TV in game mode | 8 | 33 |
| Keyboard button, original C64 @ Sony 4K TV via composite, TV in standard mode | – | – |
| Keyboard button, original C64 @ Sony 4K TV via composite, TV in game mode | 12 | 49 |
| VICE 3.1 (x64.exe) on Windows 10/Thinkpad R500, joysticks via e4you RetroFun! Twin USB adapter, TV in game mode | ~20 | 83 |
| 8BitGuy’s measurement, C64 mini on unknown LCD TV in game mode | – | 90 |
| 8BitGuy’s measurement, C64 maxi on unknown LCD TV in game mode | – | 90 |

We have several sources of lag:

  • The joystick controller (probably negligible on a C64, but can be an issue with joysticks connected via USB). We unfortunately didn’t manage to test my recently upgraded poor man’s joystick, to which I added one of these zero-delay encoder boards. I don’t expect much difference from the RetroFun! Twin, though.
  • Processing during emulation (i.e. lag caused by VICE or some other emulator)
  • Image processing by the TV

It’s unfortunate that we cannot establish a base delay, as no one has a tube TV anymore – buying one just for testing is simply overkill. On the other hand: if a tube TV runs at 50 Hz (i.e. 50 half-frames per second), and the cathode ray starts at the upper left corner at time 0 and ends its run at 1/50 seconds in the lower right corner, we can roughly expect it to hit the middle of the screen at half that time, i.e. (1/50)/2 = 0.01 seconds = 10 milliseconds. Another question is how often the joystick ports are polled by the implementation. In other words, there is likely an inherent delay, not a zero-delay baseline.
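That back-of-the-envelope estimate in code (a sketch of the arithmetic above, not a measurement):

```python
# A PAL tube draws one field in 1/50 s. A change in the middle of the
# screen becomes visible roughly half a field after the raster starts.
field_time_s = 1 / 50               # 20 ms per field (PAL, 50 Hz)
mid_screen_delay_ms = field_time_s / 2 * 1000
print(mid_screen_delay_ms)          # ~10 ms inherent CRT "delay"
```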

Interestingly, for the SNES someone established an inherent delay of 50 ms – that is, even with an original SNES and a wired controller connected to a classic tube TV (I expect this not to be the case with a C64).

From the stats above and using this particular 4K TV, we can see that the baseline with original equipment is between 33 ms and 49 ms.

With emulation via VICE we are somewhere around 80 ms to 90 ms. That’s probably like having 3.6 on your dosimeter during a nuclear accident: not great, not terrible. I had hoped something below 50 ms would be achievable. But then again, in the end it’s about how you perceive the delay, i.e. whether it impacts the gameplay. And we all agreed that these ~90 ms were not noticeable. It felt original. I could even imagine playing Katakis with this setup.

Some comparisons:

  • Here they achieved 70 ms with RetroArch (SNES emulation), with Run-Ahead latency reduction set to 2 frames and a wired Xbox controller.
  • Here is a screenshot and discussion from 8BitGuy’s video. Note that in our setup we didn’t measure audio lag, but didn’t notice any when using a wired connection. Bluetooth audio is incredibly laggy, though.
  • Here is a more detailed explanation of what lag to expect from a tube (i.e. CRT).
  • Some more latency analysis w.r.t. emulation and RetroPie. They report delays of 32 ms and 50 ms (original NES/SNES connected to tube), delays of 95 ms and 93 ms with the NES Classic / SNES Classic re-imaginations, and 122 ms and 143 ms with RetroPie NES/SNES, all on a Dell 2007FP Monitor (Delays on a Samsung TV in game mode were worse).
  • Here are some stats from RetroPie. They don’t mention the frame rate, but as all typical home computer systems and consoles of the past run with 50 fps in Europe, I assume 50 fps. With all optimizations turned on they measure an average frame delay of 5.51, resulting in a delay of approx 110 ms.

Assuming the USB controller lag from the RetroFun Twin cannot be improved (maybe we’ll compare with a gaming keyboard or with the zero-delay encoder from my home-grown joystick), the only other source of lag we could improve on is the choice of the emulator. But in our setup, VICE adds at most 60 ms delay (more realistically 40 ms to 50 ms). Measuring that in frames and assuming 50fps (20 ms / frame), we can state that in this setup VICE adds 60 ms = 3 frames @ 50 fps.
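To double-check that attribution, here is the arithmetic behind it (my own sketch, using the measured numbers from the table):

```python
# Rough attribution of lag in the VICE setup.
total_vice_ms = 83                  # VICE + USB adapter + TV in game mode
baseline_ms = (33, 49)              # original C64 on the same TV

added_min = total_vice_ms - baseline_ms[1]   # emulation stack adds at least this
added_max = total_vice_ms - baseline_ms[0]   # ... and at most this
print(f"VICE + USB adapter add roughly {added_min} to {added_max} ms")

frame_ms = 1000 / 50                # 20 ms per frame at 50 fps (PAL)
print(60 / frame_ms)                # the 60 ms upper bound = 3 frames @ 50 fps
```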

And then there is the Ultimate64, a C64 redone using an FPGA. Quoting from their homepage:

> What are the frame delays of the digital HDMI port? None. There is no frame buffer, so there is no need to worry.

I didn’t buy one due to the price tag, but I probably should. Or maybe my friends and I can share the burden and put some money together…

Oh, and no blog post about M.U.L.E. is complete without mentioning World Of M.U.L.E., an excellent resource on M.U.L.E. and all its ports and remakes. There is a Japanese version. And there is even a physical board game.


March 2, 2020

A few quick thoughts on the coronavirus. Sometimes I don’t quite understand my fellow humans. For example, several colleagues – intelligent people – said things last week along the lines of: "I don’t understand why there is so much fuss about the coronavirus. It’s not much worse than the flu. Far more people die of the flu in Germany."

All intelligent people who somehow can’t do percentages.

> The probability of dying from the flu is 0.1 to 0.2 percent, RKI president Lothar Wieler said on Thursday. According to the numbers known so far, the rate for the virus Sars-CoV-2 is almost ten times as high – at one to two percent. 80 percent of those infected have only mild symptoms, but 15 percent develop a severe case of the lung disease Covid-19. "That is a lot," Wieler said.

Source: here.

So let’s do the math. 15 percent of all patients develop severe pneumonia in which the blood oxygen level drops dangerously, i.e. they need supplemental oxygen. Let’s assume that in a small town, 10,000 people fall ill more or less simultaneously during an outbreak. That means we need 10000 × 0.15 = 1500 beds plus oxygen for these people. 100 to 200 people will die.
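The arithmetic is trivial, but here it is spelled out anyway (my own sketch of the numbers above):

```python
# Back-of-the-envelope outbreak numbers for a small town.
infected = 10_000                  # simultaneous cases (assumed)
severe_rate = 0.15                 # severe Covid-19 cases needing oxygen (RKI)
fatality_low, fatality_high = 0.01, 0.02   # 1-2 % case fatality rate

beds_needed = round(infected * severe_rate)
deaths_low = round(infected * fatality_low)
deaths_high = round(infected * fatality_high)
print(beds_needed)                 # 1500 beds plus oxygen
print(deaths_low, deaths_high)     # 100 to 200 deaths
```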

And it doesn’t necessarily hit only the old and weak, but also relatively young people, such as Li Wenliang.

By January at the latest, given the case numbers and R0 = 2.28 or higher, it was fairly clear that this thing would also reach Germany (that was the point when I went stockpiling a bit, i.e. a few things so that you don’t have to leave the house for two weeks – and yes, that included toilet paper and lots of soap – and ordered a few masks). And no, I am not a prepper. But maybe I just read too much foreign press?

And what did Germany do? No idea – maybe a lot behind the scenes. But otherwise essentially nothing, apart from assuring everyone that it’s all not that bad and that 50 people have already died of the flu (@11:16).

Then, despite the situation in Iran, China etc., for a very long time no form of screening, questioning or information at the airports. Now they are handing out little cards and want to buy masks centrally:

> The crisis task force also decided to build up a stockpile of protective equipment such as respirator masks and special suits – not only for medical personnel. Central procurement by the federal government is to be prepared for this.

Source: here. I guess they can place a group order on Amazon.

And then the thing with the masks. "Masks are nonsense" is peddled e.g. here. The gist:

  • N95/FFP2 masks do help, but you can only wear them for 30 minutes anyway before gasping for air.
  • Surgical masks don’t protect you from infection anyway; they only help protect others if you are sick yourself.

Background for my annoyance: in Japan, people (and I back then, too) often wear a mask, especially during flu season. Not that Japan has handled the current situation particularly cleverly, but at least subjectively I got coughed and sneezed at far less.

A little thought experiment: if nearly everyone wore a mask, then those who are currently sick would wear one too – even those who are infected but asymptomatic. I mean, logic and set theory are not that hard.

Incidentally, 30 minutes is the length of my daily commute on chronically overcrowded public transport.

And finally: there is indeed no proof that wearing a surgical mask protects against infection. But there is also no clear evidence that it doesn’t. The problem is simply that it is very difficult to run a controlled scientific study here that excludes other factors. The setting (e.g. in a hospital, at school, in a small village) plays a role. Masks do protect against unconsciously touching your face. And there are definitely studies suggesting that such masks also help (passively), e.g. here, here, or, for a change, here with the gist of "doesn’t really help that much".

So, all in all: no, no panic, no zombie apocalypse; we are not all going to die.

But probably 0.5 to 2 percent of all infected will, and that is really quite a lot. And not just the old and weak, but also people in the prime of life, like the 47-year-old man from Gangelt near Heinsberg, who is apparently fighting for his life right now.

Enough. Now I really need to go stockpiling again next week.

Because disinfectants are important in a crisis like this. I think I’ll mainly rely on alcohol solutions with 4.8 percent by volume.

The Bonpflicht (Receipt Mandate)

January 30, 2020

Nothing annoys me more at the moment than the discussions in newspapers and on social networks. One of the most recent idiotic headlines was "Boxen gegen die Bonpflicht" ("Boxing against the receipt mandate").

These discussions always combine an unbelievably large amount of ignorance and stupidity, a diffuse "the state is so dumb", "they just want to boss us around", strange environmental notions (parts of Fridays for Future are riding this wave, too), and sometimes dangerous technical half-knowledge.

Briefly, the facts: as of now, respectively as of October 1, 2020, the following applies in Germany:

  • Merchants must issue a receipt (effective now)
  • Cash registers must be equipped with a technical security module (effective October 1, 2020)

To understand why this was introduced, you should first understand the problem. Think back to your last restaurant visit. There are different variants, but the inclined reader has surely noticed one of the two:

  1. Variant: you get no receipt at all; instead, e.g. everything is jotted down on a beer mat or, in an ice cream parlor, on a napkin.
  2. Variant: "Do you need a receipt?" Most people answer "no" (unless they want to, or can, deduct the restaurant visit from their taxes).

In the first case, the owner never enters the transaction into the register at all. The restaurant visit never happened. Accordingly, the owner doesn’t have to pay VAT for this visit, because it simply never happened.

In the second case, the transaction is usually already entered, but the owner simply cancels it. The restaurant visit never happened. Accordingly, no VAT has to be paid, because the visit never happened.

This happens almost everywhere, by the way. In the ice cream parlor, in restaurants, but also at the bakery, the kebab shop, the Chinese restaurant. In taxis. Everywhere where people pay cash and the number of visits/taxi rides/meals sold is hard to verify from the outside – at best it can be estimated.

When you pay by debit card, by the way, you practically always get a receipt. Because once the transaction shows up on the account, the tax office gets wind of it, too.

When I wanted to pay for a rental car at my car dealership, I was once asked: "Do you happen to have 30 euros in cash?" "Uh, sorry, not in cash right now, can I pay by card…?" "Uh, then it’s 35.70 euros."

The situation is particularly difficult because in the restaurant business this practice is – presumably – so widespread that an honest restaurant owner can hardly survive anymore.

So how do you counter all this? Well, you have to ensure two things:

  1. A business transaction must actually be entered into the register by the operator.
  2. Once entered, a transaction must not be manipulable after the fact (e.g. deleting every second transaction, shifting bookings forward or backward in time, quietly adding entries when the tax office announces an audit, and so on).

The first problem can hardly be solved technically, only organizationally. And that’s why the receipt mandate exists. No receipt means tax fraud, plain and simple. No ifs and buts.

Many countries, by the way, run a lottery so that people actually take the receipts and scan them or send them in somewhere, giving the tax office a larger data set to work with. I think that’s a nice idea. You don’t have to participate in the lottery – keyword: data protection – it’s just an incentive. But the operator can never be sure that a given receipt won’t end up in the lottery.

The second problem can be solved technically, and that’s why there are now so-called certified technical security devices. These can’t be tricked easily, because they contain security chips like the ones used for pay TV or debit cards. So probably not completely impossible to defeat either, but so much effort that hardly anyone does it, or manages to at all.

Technically, by the way, this is not based on a blockchain, as is often stupidly reported, but old-school on digital signatures. Every register gets an asymmetric key pair: the public key is registered with the tax authority, and the private key sits inside the security chip. Every transaction is then signed with the private key. The signature covers the transaction data itself, but also a timestamp (from a clock inside the security chip) and a counter that simply keeps track of how often the key has already been used. The transaction data + time + counter + signature are also printed on the receipt as a QR code.
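The scheme just described can be sketched in a few lines. A minimal sketch with invented field names, where an HMAC over a device-local secret stands in for the asymmetric signature made by the security chip (the real device signs with a private key whose public half is registered with the tax authority):

```python
import hashlib
import hmac
import json
import time

# Minimal sketch of the signing scheme described above. All field names are
# invented for illustration; HMAC stands in for the real asymmetric signature.
class CashRegister:
    def __init__(self, device_secret: bytes):
        self._secret = device_secret   # would live inside the security chip
        self._counter = 0              # counts how often the key was used

    def sign_transaction(self, amount_cents: int, items: list[str]) -> dict:
        self._counter += 1
        record = {
            "amount_cents": amount_cents,
            "items": items,
            "timestamp": int(time.time()),  # chip-internal clock in reality
            "counter": self._counter,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(self._secret, payload,
                                       hashlib.sha256).hexdigest()
        return record  # data + time + counter + signature go into the QR code

register = CashRegister(b"demo-secret")
receipt = register.sign_transaction(350, ["Espresso"])
```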

As a tax auditor, you can then do two things. A quick check: buy something, and look at the receipt to see whether the register actually signed (i.e. booked) the purchase, and whether what was bought roughly matches what’s printed on the receipt.

Or the big variant, where you collect all the signatures (which the taxpayer has to store, e.g. on a USB stick with the solutions linked above) and then check whether everything can possibly add up. For example, if one transaction is dated 2020-01-13 with signature counter = 10, and the next stored transaction is dated 2020-02-27 with signature counter = 250, then someone used the signing key 240 more times, but the corresponding transactions are missing, probably deleted after the fact. With conventional registers, by the way, this has so far worked without any problem; it’s called „zapper“ software.
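The counter check from this paragraph is mechanical enough to sketch. A minimal version, with invented field names, that flags any jump in the signature counter between consecutively stored transactions:

```python
# Sketch of the "big variant" audit described above: consecutive stored
# transactions must have consecutive signature counters; any gap means the
# signing key was used for transactions that are no longer in the records.
# Field names are invented for illustration.
def find_counter_gaps(transactions: list[dict]) -> list[tuple[int, int]]:
    ordered = sorted(transactions, key=lambda t: t["counter"])
    return [(prev["counter"], cur["counter"])
            for prev, cur in zip(ordered, ordered[1:])
            if cur["counter"] != prev["counter"] + 1]

log = [{"date": "2020-01-13", "counter": 10},
       {"date": "2020-02-27", "counter": 250}]
print(find_counter_gaps(log))  # [(10, 250)]: the key uses in between are missing
```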

You can look at the timestamps the same way. If, say, a merchant’s busy sales always happen only after closing time, that suggests the merchant doesn’t enter everything during the day and books it all at night (which of course customers would also notice, since they’d never get a receipt right away; all receipts would be printed at the end of the day). And then there are all kinds of statistical anomalies you can look at, keyword Benford’s law, for example.
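As a sketch of what such a statistical check could look like (the helper is illustrative, not any tax authority’s actual method): Benford’s law predicts that the leading digit d of naturally occurring amounts appears with probability log10(1 + 1/d), so comparing observed leading-digit frequencies against that distribution can flag invented figures:

```python
import math
from collections import Counter

# Illustrative Benford-style check: compare the leading-digit frequencies of
# booked amounts against the distribution log10(1 + 1/d) for d = 1..9.
def leading_digit_frequencies(amounts: list[float]) -> dict[int, float]:
    digits = [int(next(c for c in str(abs(a)) if c in "123456789"))
              for a in amounts if a != 0]
    counts = Counter(digits)
    return {d: counts[d] / len(digits) for d in range(1, 10)}

benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
observed = leading_digit_frequencies([1.99, 12.50, 105.0, 3.20, 1.10])
# A large deviation between observed and benford is a red flag for auditors.
```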

So, once again, in summary:

  1. Receipt requirement: so that the purchase actually gets entered into the register
  2. Secure registers: so that the booking can’t be manipulated after the fact

Anyone who says „abolish the receipt requirement“ is also saying „VAT fraud is okay“.

And some openly demand „abolish the receipt requirement“, like the FDP.

On social media, on the other hand, you often read: „They’re just picking on the little guy, because Cum-Ex, and anyway.“ Which makes about as much sense as saying: „My neighbor made millions with the grandparent scam and wasn’t caught. Therefore bank robbery should no longer be a crime. Because, you know, someone has to do something for the little guy.“

And that really pisses me off, because I pay my income tax regularly, and that money somehow keeps half the state running. Nobody can cheat there; the state takes it directly. But when merchants and restaurants don’t pay their taxes, that’s suddenly okay? And the merchants and restaurant owners who honestly pay their taxes can practically shut down because they can’t compete on price?

Latex vs Word, Revisited

January 28, 2020

The whole Microsoft Office suite has had some „Latex-like“ capabilities for some time now. There was an interesting blog by Microsoft’s Murray Sargent — now archived, available here — on how this was implemented and what considerations led to the design choices. It’s always very interesting to read such documents from Microsoft insiders, as they illustrate that there are indeed very capable and smart people trying to do their best at Microsoft. Right next to those people at Microsoft who decide to sneakily change your search engine without asking you, or to install software on your computer without asking.

Before a Tex-like syntax for mathematical formulas was introduced in Office 2007, there were (and are) plugins that make it possible to write Tex within PowerPoint – like the excellent IguanaTex – but of course it’s always better to have such features implemented directly in the core program.

Recently I authored a paper (scientific conference, thus Latex) with a colleague and had to present the results. Since you can now type Latex formulas in Microsoft Office, in particular in PowerPoint, I did the presentation in PowerPoint. This has several advantages over beamer, notably speed and graphics capabilities. Sure, you can do amazing graphics and illustrations in beamer with e.g. TikZ; unfortunately it takes ages, so it is quite often avoided. The result is that presenters quite often simply copy the most important formulas from their scientific paper into beamer, and then more or less read the paper aloud.

For example, when googling for „tex beamer talk“, the third result was this document. It is very typical for an average beamer talk. The result looks nice, but it consists mostly of bullet points and lists. There are zero illustrations (arrows, graphics, icons, etc.).

Of course, quickly finding a PowerPoint counter-example is not easy either, as there are just too many overwhelmingly bad PowerPoint presentations.

These Tex-like features made me wonder whether it would be worthwhile, and perhaps more productive, to write the paper itself in Word from now on. So I set up a small experiment, intended to check the workflow I have when authoring a paper:

  • there is an existing stylesheet provided by the journal/proceedings, like the notorious LNCS stylesheet
  • I have some specific thoughts on what to write, which often includes complex mathematical formulas
  • there will be some graphics and illustrations, and they should be easy to paste and look nice. I didn’t bother with this one, since I already knew the result.

So I took the LNCS style sheets (the Word version as well as the LaTex version) and tried to write down the first proof from the book [1]. I did not intend to recreate the layout given there – I was writing with the LNCS stylesheets anyway – but wanted to (re)create the text and the formulas. As said, for me that’s a typical real-world scenario.
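For comparison, the LaTeX side of such an experiment needs very little boilerplate once the stylesheet exists. A minimal sketch, assuming Springer’s llncs.cls sits next to the file; the formula is a stand-in, not the actual proof text from the book:

```latex
% Minimal LNCS document with one display formula, assuming llncs.cls is present.
\documentclass{llncs}
\usepackage{amsmath}
\begin{document}
\title{A Small Experiment}
\author{}
\institute{}
\maketitle
Suppose the primes were a finite set $\{p_1, p_2, \dots, p_r\}$. Then
\[
  N = p_1 p_2 \cdots p_r + 1
\]
is divisible by none of the $p_i$.
\end{document}
```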

Btw, there is an interesting study, published in the very best of all quality journals, namely PLOS One, that investigated the productivity of Latex vs Word [2]. The task given to users of varying experience with Word and Latex was to recreate specific texts, including the texts’ layouts. They concluded that Latex users are delusional and suffer from Stockholm syndrome: even though they are vastly less productive and struggle much more, they are more convinced of and happier to work with Latex.

Personally, I think this study is heavily flawed. First, I already mentioned in a previous blog post that creating a good-looking layout from scratch takes time in Latex, and is usually not worth the time if you just want to create a small document. But that’s not the typical workflow you have when you want to publish: there will already be a layout created by the journal/proceedings, like the LNCS template, and there is no need to recreate it. Aside from that, they imho forgot other quite important factors. I will illustrate the pros and cons of Latex vs Word in this completely objective table. Let’s also forget about things like Adobe InDesign or PageMaker for a moment.

                                                           LaTeX      Word
  caters to people who are smart                           YES        NO
  caters to everyone                                       NO         YES
  can handle vector graphics properly                      YES        NO
  can handle math properly                                 YES        NO
  has a lot of bugs that result in unpredictable behavior  NO         YES
  result looks like a turd                                 SOMETIMES  ALWAYS [3]

A fair comparison of features between LaTeX and Microsoft Word

I have to say that I created this table after doing the small experiment. And I seriously went into it unbiased; I am always looking for new and more effective ways to work.

Let’s have a look. I wrote these documents as quickly as I could, without investing much in optimizations. This is the result for Latex LNCS, this is the one for Microsoft Word.

Now, with Word 2007 Microsoft introduced a new (actually quite nice-looking) font dubbed Cambria. All their math symbols are rendered in Cambria; however, the Springer stylesheet uses Times New Roman. So you have Cambria math mixed with text in Times New Roman, which of course creates a visual clash. Hence I created another version where the text is formatted in Cambria as well. All in all, the Word version just looks horrible:

  1. In the first line (N = { …. }) on page 1, the mathematical numbers are larger than the text next to them, due to the non-matching math/text fonts. Word does no automatic adjustment. The Cambria version looks better, though.
  2. „in a finite set {p_1, p_2, …, p_r }“, page 1. Here the spacing between the left bracket and p_1, and between p_r and the right bracket, is off. Word doesn’t adjust for the space taken by the index in p_r.
  3. Page 2: „the last sum is equal to“. The display formula below it has a way too large margin between the text and the formula. (There is no newline after „is equal to“.)
  4. In the large products (log x <= …), the indices below the product signs have a margin that is way too small. In Acrobat Reader at some zoom levels, the indices even tend to overlap with the product signs.
  5. Sometimes Word sets a margin before text that follows a formula (e.g. before „Now clearly…“), sometimes not („and therefore“). I was unable to remove the margin before „Now clearly“. If I tried to change anything at that position (e.g. reset the paragraph margin, try to delete that part, etc.), Word would automatically convert the formula into inline mode. It looked like one of those formatting bugs that randomly occur in Word: you change something at some point, and due to complicated interactions of various formatting rules, something breaks nondeterministically.

Time-wise, it took me longer to write the Word version, but maybe that’s because I am more familiar with Tex.

  1. can handle math properly: all of the items above, proven.
  2. has a lot of bugs that result in unpredictable behavior: item 5 above, proven. Or just ask anyone about their experience handling complex (> 100 pages) documents in Word.
  3. result looks like a turd: proven, just see the PDFs.

More shitty looking math is available from NIST (chosen here, since I can probably copy and paste small portions under fair use). See e.g. this NIST standard.

As for vector graphics: Word will simply rasterize any graphic given to it, even if the source is a vector graphic (like a PDF, EPS or even EMF). Even if it’s stored vectorized within Word (I think this can be done to some extent with WordArt in later Word versions), once you convert to a PDF, it’ll be rasterized. Another point is that when graphics are created and later scaled, you will inevitably end up with fonts within the graphic that do not match the running text. Line widths in graphics will also be odd, sometimes way too thick, sometimes way too thin.

Case in point: Consider this NIST standard:

Compare the graphic above to this example from TikZ, namely this PDF. So the item about vector graphics: proven, as above.

Then there is the „caters to everyone vs. caters to the smart ones“. It’s much easier to create a shitty diagram like the one above than to create one with TikZ. For the latter you have to have a rough understanding of programming, and most people don’t. Also, it is surely quicker to generate the shitty-looking graphic above. As said in the study, you are probably quicker. But then again, do you care that you get a quick result and others suffer when reading your shitty-looking document, or do you invest more time so that others have an easier time reading and understanding it? Also a matter of perspective, which is not addressed in the study.

Another example: consider this box plot from the study mentioned. Rasterized, blue/red color scheme. The font is sans-serif, but the font sizes don’t match the running text, as can be seen in the PDF. My bet would be that this was generated with Excel. Compare that to this really nice-looking bachelor thesis by the author of the excellent texfig; in particular, consider this illustration. Fully vectorized in the PDF. Q.E.D.

All said and done, what is surprising to me is that quite often other people don’t even see that this is bad layout and design. Take for example the picture from the NIST draft above. The conversation then goes something like this:

MasterChief: „This looks like shit compared to the TikZ figure“

Tinkerbell: „I don’t think so. I think it looks nicer. It has colors.“

MasterChief: „But why? Why do you think so?“

Tinkerbell: „This has colors! It’s so much nicer. It has colors!“

MasterChief: „But you don’t recognize anything. Especially when you print it in black/white.“

Tinkerbell: „But it has colors!“

MasterChief: „Why do you need colors in this picture? There are so few categories, it doesn’t add any clarity. Also orange/light blue is a horrible choice for a color scheme. Think of those who are colorblind.“

Tinkerbell: „But it has colors!“

So taste is a difficult thing. Among professionals who do layout there is perhaps a rough consensus on what is really bad design and what is just a matter of taste, but with Word everyone feels like a designer. So most will simply ignore basic design rules.

Uh, and if you still didn’t get my point: use LaTex if your document is reasonably complex (> 30 pages), has math in it, or graphics.

[1] It’s four, I knew you would ask, and you can compute yours here.

[2] It is interesting that everyone compares Latex to Word, but nobody in their sane mind would compare InDesign with Word and then come to the conclusion that Word works „just as well“, that „it’s faster“, and that you should switch over. Any magazine layouter/typesetter would just laugh at you.

[3] There are a lot of bad-looking LaTex documents, too. Especially when it comes to tables. Note the excellent, eye-opening presentation „Small Guide to Making Nice Tables“ by Markus Püschel.

Cracking Jurassic Park (DOS, Ocean)

January 20, 2020

Another issue I have is that I simply can’t let things go.

So this is how to remove the copy protection from Jurassic Park (DOS) from Ocean Software.

I’ve already written in a previous blog post about how I unsuccessfully attempted to crack the copy protection mechanism, namely Rob Northen’s ProPack (sometimes abbreviated RNC). I’ve also written that I was able to unpack the INSTALL.EXE binary with Universal Program Cracker. In the meantime I’ve also found UNP, a universal DOS unpacker that supports dozens of formats. This one works as well, and seems to be the gold standard of DOS unpacking.

Next up is the copy protection mechanism itself. In my memory, you had to have Disk #1 in the drive to start up the game. But looking at this review from the German magazine PC Player, the review says w.r.t. copy protection: „can only be installed from original disks“. So the step I stumbled over last time – the disk check during installation – seems to be the actual copy protection mechanism, and after managing the installation, we should be done. I confirmed this by copying a successful installation from Dosbox to a FreeDos VirtualBox instance, and the game would still boot up.

Now, as a copy protection mechanism this doesn’t make any sense at all, because you can simply install Jurassic Park once, then zip (or arj or rar) the game directory manually and give it to someone else. And we were young but not stupid back then. So maybe there were some additional run-time checks back then. Or I didn’t actually crack the game and there is some nifty hidden catch, like a boss fight deep in the game that you can’t win anymore… but at least right now it seems that if you break the disk check during installation, you’re done. Ok, so let’s do that then.

Step 1

Unpacking the INSTALL.EXE file. As mentioned, this can be done with either UPC or UNP.

Step 2

Debug the unpacked INSTALL.EXE. You can either go the classic route via Turbo Debugger or … even (?), or simply use the DosBox debugger. During this whole process I have to say, looking back: USE MODERN TOOLS.

Fun fact: with Turbo Debugger you of course can’t easily break into a graphical program, so this makes things difficult. You can of course do remote debugging, i.e. execute the program on one PC (graphically, as it would normally run), use a null-modem connection, and run Turbo Debugger on another machine. There is an interesting article about doing that with DosBox, but it’s very slow.

Fun fact 2: back in the day, most folks could probably only dream about remote debugging … like, owning two PCs? How rich do you have to be? … but back then my father was so annoyed that I was always using his 386 that I got my own computer (a used 286). So technically, I could have done remote debugging back then. However, I didn’t know what debugging was in the first place, so that’s a bummer. In any event, if I’ve learned anything during this process it is: unless you really wanna rock like it’s 1992, use modern tools instead. Makes things much easier.

With the DosBox debugger (dbd) I set a breakpoint at int 13 with ah=00. That’s „reset disk system“. Setting the breakpoint in dbd is ‚bpint 13 00‘. Then F5 to execute the program.

The installer checks for disks in drives A and B, so I thought that might be a good starting point.

Now what the f*“§!“$ is that? There is an int3 instruction, which is a software breakpoint. If you used Turbo Debugger, things would already become difficult at this point, since I think TD relies on software breakpoints.


Short background lecture: int3 (0xCC) is a debugging interrupt. Basically, a (simple) debugger works like this: say you want to set a breakpoint at position X. The debugger looks up the assembly instruction at position X, remembers it, and replaces it with int3. During execution, interrupt 3 is raised, and the debugger hooks that interrupt. You can then inspect the entire program state, and if you want to continue execution, the debugger puts the original instruction back at X and continues. So usually int3 is reserved for debugging, and you won’t find it in a normal program.
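The mechanism just described can be simulated on a plain byte buffer. A minimal sketch (opcodes are real x86 encodings, but a real debugger of course does this on a live process image, not a bytearray):

```python
# Simulation of the software-breakpoint mechanism described above, operating
# on a bytearray instead of a live process image.
INT3 = 0xCC  # one-byte int3 opcode

def set_breakpoint(code: bytearray, addr: int) -> int:
    """Replace the byte at addr with int3; return the saved original byte."""
    saved = code[addr]
    code[addr] = INT3
    return saved

def resume(code: bytearray, addr: int, saved: int) -> None:
    """Restore the original instruction so execution can continue."""
    code[addr] = saved

code = bytearray([0xB8, 0x01, 0x00, 0xC3])  # MOV AX,1 / RET
orig = set_breakpoint(code, 0)
assert code[0] == INT3   # the CPU would now trap into the debugger here
resume(code, 0, orig)
assert code[0] == 0xB8   # original MOV opcode back in place
```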

What apparently happens here is that the installer has set its own interrupt handler for int3. During normal execution, that int3 handler is called. When a debugger hooks int3 instead, the debugger’s handler is called. This is of course different program code than the installer’s int3 handler, so the installer notices that the debugger stepped in, and thus detects the debugger.

What’s more bothersome is that the code around this position is apparently encrypted, basically illegal garbage. The int3 handler of INSTALL.EXE decrypts the program code at runtime. You can see in the next screenshot that IDAPro fails to interpret the assembly code, as it’s encrypted.


Fabulous Furlough has an interesting article on how even tougher versions of RNC worked (in this INSTALL.EXE I could not identify any int1 hooks). In his blog post, however, FF talks about some soccer manager game (maybe Graeme Souness Soccer Manager? I need to check this out), not Jurassic Park.

I was completely stuck and almost gave up, but if you keep stepping through the program, at some point you reach a position where an „insert original disk“ dialog appears. Somewhere around here, shortly before the „insert disk“ prompt:


Then here we reach the „insert disk“ prompt:


I then looked this up in IDAPro.


Could this be a position to patch? Here, „call sub_1C5B“ is the call that makes the nag screen appear. And „call loc_5AE6“ seems to be a check call (is this the disk check?!), whose result is verified; depending on the result, the nag screen appears.

Moreover, the whole function that IDA shows either returns 1 (i.e. mov ax,1 and then return) or, on the other code path, returns 0 (xor ax,ax, and then return). My thinking was that the protection probably works like this:

loop a few times {
  if (some complicated disk check verifies) {
    return 1;
  } else {
    request the user to insert the original disk;
  }
}
// user was asked several times, still not the correct disk
return 0;

Step 3

What would we be without the National Security Agency? Well, less spied upon, that’s for sure, but also without an extremely useful and cool reversing tool, namely Ghidra. In particular, Ghidra comes with an extremely powerful decompiler, completely free as in FLOSS. And it supports DOS binaries. Throwing in INSTALL.EXE, we get:


Wow, I should’ve done that in the first place. Now it becomes clear: the main program just checks whether the function CHECK_FOR_ORIGINAL_DISK (here called disk_check_FUN_1000_0bf3) returns 0 or not. If (!0), we do all kinds of installation routines; otherwise, abort.


So how do we make sure that disk_check_FUN_1000_0bf3 always returns 1? Looking at the disassembly in Ghidra makes this really easy, as you can see the decompiled code side by side. The check

if (lVar2 == CONCAT22(param_2,param_1)) {
  return 1;
}

results in this assembly:

                       LAB_1000_0c33                                   XREF[1]:     1000:0c45(j)  
1000:0c33 eb 12           JMP        LAB_1000_0c47  // just directly go below and return from function

                      LAB_1000_0c35                                   XREF[2]:     1000:0bf7(j), 1000:0c2f(j)  
1000:0c35 e8 ae 4e        CALL       FUN_1558_0566                                    undefined FUN_1558_0566()
1000:0c38 3b 56 06        CMP        DX,word ptr [BP + param_2]
1000:0c3b 75 bc           JNZ        LAB_1000_0bf9   // if comparison fails, jump back to loop @0bf9. Don't want!
1000:0c3d 3b 46 04        CMP        AX,word ptr [BP + param_1]
1000:0c40 75 b7           JNZ        LAB_1000_0bf9  // if comparison fails, jump back to loop @0bf9. Don't want!
1000:0c42 b8 01 00        MOV        AX,0x1        // write 0x01 in return register
1000:0c45 eb ec           JMP        LAB_1000_0c33 // return via 0c33 which jumps to 0c47
                      LAB_1000_0c47                                   XREF[1]:     1000:0c33(j)  
1000:0c47 5e              POP        SI
1000:0c48 5d              POP        BP
1000:0c49 c3              RET    // return from function

The crack is then quite easy. Just NOP out both JNZ LAB_1000_0bf9 jumps, i.e. go to the file offsets and replace 75 bc with 90 90, and 75 b7 also with 90 90. And… drum roll… it WORKS!
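As a sketch, the two byte replacements could also be scripted. The demo bytes are the instruction sequence from the listing above; the `nop_out` helper is hypothetical, and against the real unpacked INSTALL.EXE you would patch at the known file offsets rather than blindly replacing every pattern match:

```python
# Sketch of the patch described above: overwrite both conditional jumps back
# into the check loop with NOPs (0x90).
def nop_out(image: bytearray, pattern: bytes) -> int:
    """Replace each occurrence of pattern with NOPs; return the match count."""
    count, pos = 0, image.find(pattern)
    while pos != -1:
        image[pos:pos + len(pattern)] = b"\x90" * len(pattern)
        count += 1
        pos = image.find(pattern, pos + len(pattern))
    return count

# CMP DX,[BP+6] / JNZ / CMP AX,[BP+4] / JNZ / MOV AX,1 (from the listing above)
code = bytearray(b"\x3b\x56\x06\x75\xbc\x3b\x46\x04\x75\xb7\xb8\x01\x00")
nop_out(code, b"\x75\xbc")   # first JNZ LAB_1000_0bf9
nop_out(code, b"\x75\xb7")   # second JNZ LAB_1000_0bf9
assert b"\x75" not in code   # both jumps gone; MOV AX,0x1 is always reached
```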

Conclusion and Web-Links

So it seems that unpacking INSTALL.EXE is probably the more difficult task. On the other hand there are tools like UNP and UPC that work well, so I don’t think I will investigate any further in this direction.

The original question was: could I have done this in 1993? Definitely not. There was no unpacker, no DosBox debugger, no IDAPro… and probably (we’ll never know for sure, I guess 🙂) no Ghidra. Actually, thinking of it, there was certainly no Ghidra either, since Java and Swing didn’t exist in 1993; the first JDK was released in ’96.

There are other useful tools and links I found in the process. I have to say, though, that the DosBox debugger + IDAPro 5 (free version available @ScummVM) and Ghidra turned out to make the best workflow. If you have a full version of IDA, Ghidra can probably be omitted (not sure whether the full version of IDA can still handle DOS binaries).

  • Sourcer is a DOS disassembler from the good old days. The latest version I could find (Sourcer 8.01) is from the late 90s and thus very advanced. It can even produce assembly code that compiles with MASM or TASM. There is a nice blog post introducing Sourcer. Of course, recompilable code means you can really mess with the source code, which is great if you want to deeply understand the game itself, e.g. its game logic. But to just remove some protection – for compatibility reasons – like I did here, I very much prefer IDA. Nothing beats IDA’s graph mode.
  • Syncing IDA and DosBox. My way of „syncing“ was simply taking the DosBox output and searching for sequences of assembly instructions (i.e. hex codes). That’s cumbersome, but I am not aware of another method with the free version of IDA. There is an IDA plugin for DosBox, but you need the commercial version of IDA.
  • There is a nice page that gives an overview of tools that remove disk protection schemes for DOS. Apparently crack-collection programs were available back in the day. If only I had had one of those back then 🙂 Interestingly, a tool called Crock was also able to crack the Jurassic Park installer. I checked the binary changes it applies to confirm whether I did the right thing. The patches are different but essentially achieve the same result, namely that the check function always returns 1. They patched the whole function so that directly after entry, the code jumps to MOV AX,0x1 and returns.
  • There is also Insight, a free DOS Debugger. Haven’t tried it, but might be an alternative to Turbo Debugger.

And while we are at it: Cracking Need for Speed 3: Hot Pursuit

November 15, 2019

With 3D gaming titles like Doom (btw, an absolute MUST READ: Masters of Doom), Duke Nukem 3D, and Tomb Raider, 3D acceleration suddenly became a big thing in the mid-90s.

However, since nothing was standardized in DOS, let alone fully developed, both software developers and hardware makers began experimenting with how to accelerate 3D graphics. Since all engines were custom-developed and used all sorts of hacks, it wasn’t even clear how to effectively accelerate them, or how to create better-looking graphics. A huge field of experimentation. Fabien Sanglard has some really interesting articles about these early chipsets, like the Voodoo1 and the Rendition Vérité 1000, and how to program them.

Thus it was also a huge field of experimentation for early adopters. Of the expensive kind.

The typical approach by hardware manufacturers in the beginning was to design a 3D chip, have a few selected software developers adapt their titles for it, and ship those bundled with the board.

The first 3D-accelerated card I bought was based on the S3 Virge/DX. It was marketed as a solid 2D graphics chip with 3D features. The problem with this chip was that, on the one hand, its 3D acceleration capabilities were very limited, and on the other hand, the 2D acceleration had issues too. This was at a point in time when video game makers were moving beyond the typical VGA resolution of 320×200 at 256 colors to higher resolutions like 640×480 or 800×600. While there were standardization efforts by VESA, a lot of manufacturers didn’t implement all VESA standards, so there were issues with higher resolutions. There was UniVBE, but I still had a lot of issues, especially with DukeNukem3D (based on the Build engine).

Thus I replaced it with the best card for DOS I could think of, the Matrox Mystique. This solved most of the issues, but it lacked decent 3D acceleration features.

My next try was an add-on board. From an early adopter’s perspective, it wasn’t really clear yet that 3dfx’ Voodoo 1 would win the race, so I opted for an NEC Power VR PCX-2 chip in the form of a Matrox m3D instead. It shipped with Ultim@te Race, a pretty boring racing game, and there were also patched versions of Flight Unlimited and Tomb Raider 1 (see here for a comparison of the Voodoo 1 and Power VR). Unfortunately, performance wasn’t really that great, and the games looked only slightly better than with software rendering. And the three games mentioned were basically it. Turns out I bet on the wrong horse, as 3dfx’ Voodoo 1 won the race.

Thus, when I upgraded from my Cyrix 6×86 P166+ (with its shitty FPU performance), mostly used for DOS gaming, to a Pentium II 300-based system mostly intended for Windows gaming, I opted for the best of the best: 3dfx’ Voodoo 2.

The Voodoo 2 was of course the most expensive option. Not only was the Voodoo 2 board itself quite expensive, you also needed a separate 2D board. And when Bjurn and Winnie the Pooh went for new systems and opted for a bargain ALDI PC, it was only natural for Matthew and me to mock them for doing that instead of building a machine on their own. However, the ALDI PC was actually quite decent, and it came with an onboard Nvidia RIVA 128 ZX.

And, long story short, this is where Need for Speed 3: Hot Pursuit enters the story. By that time, 3D graphics were standardizing more and more, and 3dfx’ proprietary Glide interface was increasingly being replaced by DirectX/Direct3D. And Nvidia not only concentrated on accelerating DirectX, but also continually improved on the driver front. While NFS3 still looked better on our Voodoo 2 boards, it looked really decent and ran quite well on the Nvidia chips. Which were several magnitudes cheaper, as they combined a 2D and a 3D board and were still cheaper than the Voodoo 2 alone. Nvidia continued their success with the TNT, TNT2, and GeForce 256, and the rest is history. Seeing NFS3 on the Riva 128 really made me think: hm… the Voodoo 2 looks better for sure, but this is quite OK, and I paid what? Next time Nvidia, that’s for sure…

Need for Speed 3 made a huge impact on us. The pursuit system was incredible fun in multiplayer mode over a local area network; the graphics look nice even today; the techno soundtrack was great; and in the German version some in-game voice-overs were done by Egon Hoegen. Egon Hoegen also narrated the very popular traffic-safety television series Der 7. Sinn, which gave the whole thing a funny and humorous note.

Unfortunately, Need for Speed 3: Hot Pursuit requires a CD drive, which my current notebook doesn’t have. Also, the installer is a 16-bit program, which doesn’t even execute anymore on 64-bit Windows. So let’s start up IDAPro 7 Free and do the whole GetDriveTypeA thing.


Looking for references to GetDriveTypeA, we end up at the function @0x4F9410. Checking where this function is called from, we find a function around @0x4B6394, which, depending on some jumps, creates some message boxes (MessageBoxA). So it is not hard to assume that these are the calls for the error messages („Please insert the NFS3 CD“). Then it’s a little guesswork, and after some trial and error it turns out that turning the JNZ marked in the picture below into a JMP instruction suffices to circumvent all the CD-check code.
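The one-byte nature of this patch is worth spelling out: a short conditional jump JNZ is opcode 0x75 followed by a displacement byte, and the unconditional short JMP is 0xEB with the same displacement encoding. A minimal sketch (the helper and the demo bytes are illustrative, not taken from the actual NFS3 executable):

```python
# Sketch of the one-byte patch described above: JNZ short (0x75) becomes
# JMP short (0xEB); the displacement byte after it stays unchanged.
JNZ_SHORT, JMP_SHORT = 0x75, 0xEB

def force_jump(image: bytearray, offset: int) -> None:
    """Turn the short JNZ at offset into an unconditional short JMP."""
    assert image[offset] == JNZ_SHORT, "expected a short JNZ at this offset"
    image[offset] = JMP_SHORT  # always take the jump, skipping the CD check

demo = bytearray(b"\x75\x10")  # JNZ +0x10
force_jump(demo, 0)
assert demo == bytearray(b"\xeb\x10")  # now JMP +0x10
```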


So, a very simple one. These were the times…

Oh, and this is for education only. If you actually want to play the game, I suggest heading over here and downloading that patch package, as well as following the instructions, since NFS3 also requires some registry patches. Using a file compare on my patched .exe and the one provided in the link, I noticed that the other crack had also patched jumps at other locations, all near the above-mentioned check locations, resp. the procedure around @0x4B6394. On my system they were not necessary for the game to run, however.

It’s really time for the old NFS games to be released on GOG.

Challenge #2 (Success): Dark Reign (Windows 95)

November 6, 2019

As I failed to solve challenge #1, I really didn’t want this to be another failure.

The first difficulty here was actually getting an original copy of Dark Reign. I wanted to have the original ISO file, as this was the version I had back in the day. As I don’t have the CDs anymore, this proved to be tricky, but after some googling I managed to get a copy.

The original version of Dark Reign doesn’t work on modern systems anymore, due to DirectX and DirectDraw issues. Dark Reign used DirectX 3. Fond Memories. C&C Red Alert… I am getting old.

I digress.

There is a patch from version 1.0 to version 1.2, and applying this patch to version 1.0 made the game run in compatibility mode. The problem here is that dkreign.exe is now exe-packed; Detect-It-Easy reports Neolite 1.01. I’ve found no tool to automatically unpack Neolite 1.01. There is a tutorial (link blocked by Firefox, use at your own risk) on how to do it manually using SoftICE, but then again, the days of SoftICE are long gone, and I have no Windows 98 machine here.

I copied the original 1.0 version of dkreign.exe into the 1.2-patched folder, replacing the 1.2 exe. Dark Reign would still start, so I focused my efforts on this one. I don’t consider this cheating, since I am very sure that the CD I owned back in the day contained the original version 1.0, i.e. the unpacked exe.

To disassemble dkreign.exe I’ve used state-of-the-art IDAFree 7. Ghidra with its powerful decompiler is probably worth a try, too; but let’s start with the well-proven route. Unfortunately, the IDAFree debugger failed to work (it’s incredibly helpful to have graph-mode available when debugging), and would hang with some runtime errors in apphelp.dll – I attribute this to the Compatibility settings for that old exe.

x64dbg somehow worked, but I am not very familiar with it (something I really need to work on), and I wasn’t sure how to set breakpoints on certain imports, especially on GetDriveTypeA (more below).

So everything was done with the IDAPro disassembly and a hex editor (I use HxD). I stuck to a “poor man’s code analyzer”: to identify code flows in IDA (say, to determine whether some JNZ was taken, i.e. whether the Zero flag was set or not), I’d zero out parts of the code at one of the jump locations. If the patched exe still ran, the jump didn’t reach that location; if the program crashed, it apparently did.

Remembering the good old days (before such nasty things as SecuROM or SafeDisc), CD checks were typically implemented like this:

  • call GetDriveTypeA (a Win32 Kernel Import).
  • check if the function returns DRIVE_CDROM (5) or something else like DRIVE_FIXED (3).
  • If 5 is returned, then read some files from the CD and check their content; if they match, run the program.

Otherwise prompt the user to enter the CD. Something like this (taken from the actual disassembly here):


Unfortunately, things are more complex here than just patching the final jnz above. So anyway, I looked for imports of GetDriveTypeA. Other suspicious candidates are GetVolumeInformationA and GetLogicalDrives.
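The generic check pattern from the list above can be simulated in a few lines. Note that get_drive_type below is only a stand-in for the real Win32 GetDriveTypeA (which lives in kernel32) so the sketch runs anywhere; here D:\ plays the CD-ROM drive:

```python
from typing import Optional

# DRIVE_* constants as defined in the Win32 headers.
DRIVE_FIXED = 3
DRIVE_CDROM = 5

def get_drive_type(root: str) -> int:
    """Stand-in for the Win32 GetDriveTypeA; the real call asks the OS."""
    return DRIVE_CDROM if root.upper().startswith("D") else DRIVE_FIXED

def find_cdrom() -> Optional[str]:
    """Scan drive letters A: to Z: for a CD-ROM, as the old checks did."""
    for letter in map(chr, range(ord("A"), ord("Z") + 1)):
        if get_drive_type(letter + ":\\") == DRIVE_CDROM:
            return letter
    return None

drive = find_cdrom()
if drive is None:
    print("Please insert the CD.")  # the message box we want to avoid
else:
    print("Found CD-ROM at %s:\\ - now read and verify files on it" % drive)
```

The real game then reads some files from the found drive and only continues if their content matches; patching out the check means forcing the “found” branch regardless of what get_drive_type returns.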


Bingo. GetDriveTypeA leads us to 0x57f290, apparently some check-routine.


Checking cross-references (i.e. positions, where the sub-routine at 0x57f290 is called) leads us to 0x57aa50.


Wow. The function at 0x57aa50 looks huuuge. After looking around, it appears that some checks are happening, and then there is some large jump table. This all looks as if the main game menu is processed here.

Now it got nasty. It took me approximately two evenings and another full day to make sense of the code flows here. As mentioned, I patched various locations to provoke crashes and identify which code paths were taken.

First there was a string reference at 0x57AE46 (“Credits”). Random manipulations here resulted in crashes when clicking on “Credits” in the main menu. Bingo, so I was correct.

The game complained for almost every other option in the main menu that there was no original CD. So I focused first on the program flow of clicking “Single Player” → “New Game” → getting a message with „Please insert your CD“.

Trying to find this code flow was very frustrating. But at some point I was sure that when you click on “New Game”, you end up at 0x57AE05, and the sub-menu where you can actually start a new game is executed below with a call to 0x572130.


But… what is this string reference at 0x5725E5? „SS_NO_CD“. There are similar code blocks at 0x5726C5 and 0x572505. These very much look like code blocks that generate a “Please insert your CD” message. We surely don’t want to get there! So again, after wrangling with the code paths and patching here and there to get a controlled flight into terrain (a controlled crash), I was able to identify the following jump:

loc_572669: ; patch jump, to start game in single player, new game
call mc_checks_sub_401470
cmp eax, 3
jz loc_572737 ; patch this to jnz, i.e. from 0F 84 ... to 0F 85 ...

This resulted in the “Please enter your CD” message vanishing; instead, the intro video and the New Game options showed. I was able to start a new game, but there were apparent graphical glitches.
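The 0F 84 → 0F 85 trick generalizes nicely: all near conditional jumps are encoded 0F 80 to 0F 8F, and the lowest bit of the second opcode byte selects the negated condition, so XOR-ing it inverts any of them (jz ↔ jnz, jb ↔ jnb, and so on). A sketch on a toy buffer – the real patch of course goes to the corresponding file offset in dkreign.exe:

```python
def invert_near_jcc(data: bytearray, offset: int) -> None:
    """Invert a near conditional jump (0F 80..0F 8F rel32) in place.

    Jcc conditions come in negated pairs that differ only in the low
    bit of the second opcode byte, so XOR-ing that bit flips e.g.
    jz (0F 84) to jnz (0F 85) and back. The rel32 target is untouched.
    """
    if data[offset] != 0x0F or not 0x80 <= data[offset + 1] <= 0x8F:
        raise ValueError("no near Jcc at this offset")
    data[offset + 1] ^= 0x01

patch = bytearray(b"\x0f\x84\x10\x00\x00\x00")  # jz +0x10 (toy bytes)
invert_near_jcc(patch, 0)
assert patch[:2] == b"\x0f\x85"  # now jnz; displacement unchanged
```

Because it is a pure condition flip, running the function twice restores the original bytes – handy when experimenting with a hex editor on a backup copy.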

Now I am 99% sure that it’s due to DirectX issues and not the CD check, but of course I cannot be fully certain unless the game is fully playable.

Nevertheless, at this point I stopped. The rest would be just routine work: getting through all remaining code paths (i.e. checking all options of the main menu), identifying the other check positions – which will be similar to the “New Game” one – and patching them out.

Disclaimer: During all this work, I left the Dark Reign ISO mounted as a Windows drive. It could be that without the disc present the game doesn’t start. But at least the “Is it a genuine disc?” verification is patched, i.e. this would’ve solved my problem back in the day with the copied CD-R.

Would I have been able back then to crack the game, if I had the right tools or information?

Very difficult to say. Tools of the trade back then were W32Dasm and SoftICE. W32Dasm didn’t have a graph mode; I think this was introduced in IDA in what… 2005 or 2006? So basically you’d have to draw your own control-flow graphs with pen and paper. Probably one reason why I could never make sense of all this back in the day – not that I seriously tried, let alone tackled Dark Reign. Maybe SoftICE would’ve helped, but I never really figured that one out.

All in all, this was very time-consuming; time which I don’t really have. But reversing is incredible fun, and I sometimes think I should move my career more towards malware analysis… Currently my lack of debugging skills is definitely preventing this, though…

IDAPro database and patched exe are available for educational purposes here. Password for the zip file is ‚fckgw‘. Note that the IDAPro database files contain a lot of unclean comments, i.e. comments like “patched” indicate that I patched there at some point in time to identify a code flow. But really, the only patch necessary to start a single player game is at 0x572669 to change the jz to a jnz.

Challenge #1 (Fail): Jurassic Park (DOS)

November 6, 2019

I give it away already in the title. But here is the chain of events.

You can get Jurassic Park (DOS) cracked from various abandonware sites. No links here. Unfortunately it is – as of writing this – not on GOG.

The first difficulty for me was actually reading my old original 3.5-inch floppy disks. I bought an external USB floppy drive from Amazon – but whether it was driver issues or the drive being broken, I couldn’t read a single disk. I was on the verge of giving up when I found this drive from Conrad, which was able to read not only all my 1.44 MB HD floppy disks but also the 720 KB DD ones.

I used WinImage to read out the disks and create IMG files. I mounted the images in DOSBox, started the installer and…

the installer complained that I did not have the original disk in the drive. So even the installer did a copy protection check. Sucks.

I used Detect-It-Easy to detect it easily and identified Rob Northen ProPack on the installer. So it seems Jurassic Park is secured with Rob Northen’s Copylock, the DOS version.

There seem to be two issues here: First, the disk archive files (.RNC) seem to be packed with ProPack. The installer exe seems to be exe-packed with ProPack, too. I suspect that the actual game exe is secured with Copylock. To actually work on the game, I first have to install it somehow on hard disk and unpack the .RNC files. There are some online resources for ProPack here, here and here. According to Detect-It-Easy, the versions match, but I was unable to unpack the archive files with any of the tools.

My idea was then to unpack the installer-exe, and then try to patch the installer so that it unpacks the RNC files despite not identifying the disks as genuine. The ProPack tools above however also failed to unpack the installer-exe.

Now, I was never an expert on (EXE) packers, so my only hope was to find an existing tool. After googling, I found the Universal Program Cracker (UPC), which seems to be a universal unpacker for DOS programs created with Turbo C and similar compilers. It (seemingly) succeeded in unpacking the installer exe.

Next I disassembled the installer exe with IDAPro 5 Free.

It confirmed my guess that UPC was successful, but I couldn’t make any sense of the disassembled code.

The next steps would be either to buy an original DOS machine to just install the frickin’ game, or use a patched version of DOSBox to debug the thing. But the comfort level of the debugging abilities of DOSBox compared to say, IDAPro, is quite… eh different.

In the end, I simply gave up.

Would I have been able back then to crack the game, if I had the right tools or information?

I can firmly assert that no, not at all. Back then I didn’t even know what a debugger was. I’d go so far to say that even for a very skilled programmer with access to the right tools back then, it would’ve been very very difficult. So unfortunately, a full failure.

It makes me appreciate the guy that cracked Indy3 for DOS even more.

Dealing with Traumatic Childhood Issues

November 6, 2019

Back in the day, computer games were expensive. I mean ridiculously expensive. There were basically two sources of acquisition: The straightforward one was your local computer shop, which actually dealt in hardware but also ordered software on demand from its wholesaler. Everyone wanted their share, so each step in this chain – from the developer via the publisher, distributor and wholesaler to the local shop – added something. I remember that when Monkey Island 2 was released, the price at the local store was 120 German Marks.

According to Wikipedia, Monkey Island 2 was released in December 1991 – so probably in 1992 in Germany. There are some online tools to calculate historical inflation: 120 German Marks would be worth approximately 91 euros today; but from a more personal perspective, I’d put it at 120 to 150 euros nowadays. That’s a lot of money, especially for a middle-school student with no real source of income.

The other way of getting games was ordering them via classical mail-order business. The issue here was the method of payment and shipping costs. A game like Monkey Island 2 was around 90 Marks via mail order, but cash-on-delivery and shipping easily added 25 Marks. The cheapest way of payment was to convince my dad to write an account-only cheque, send it as a letter to the mail-order business, and hopefully get back my game; this brought transaction fees and shipping costs down to around 10 Marks. Probably 90% of all cheques my dad has written in his life so far have been for ordering PC games.

With such prices, it was usual to share my games with my middle-school friends, and they would share their games with me. Say, on my birthday I’d ask my parents to buy Zak McKracken and share it with Bjurn, and then when Bjurn got Microprose’s The Lost Files of Sherlock Holmes, he’d share a copy with me.

Enter copy protection.

The first time we were confronted with copy protection was when Bjurn got a copy of Zak McKracken, but without the code book. You can play parts of the game without the code book, but at some point you need the codes to progress.

I asked my dad whether he could remove the copy protection from the game. He wasn’t a programmer, just a tech enthusiast, but he tried his best. He’d open a hex editor, scroll through the binary and try to identify spots where the protection was happening, but of course he’d see nothing. Nevertheless, it all looked like magic to me.

Today of course I know that the copy protection is built deep into the game logic: Within the game, you have to travel through the world, and need to enter correct VISA codes in order to be able to board a plane [1].

All the verification stuff as well as the game logic itself is SCUMM code, which is basically its own programming language running on a dedicated game engine, i.e. the SCUMM interpreter. Without reversing the interpreter, trying to remove the copy protection by looking at the assembler code alone is futile.

Still, the game got us hooked so badly that we wanted to continue, and this resulted in my first legally bought computer game, namely the DOS port of Zak McKracken. The game was great fun, my first graphic adventure, and I never regretted shelling out the money for it. The code book was literally a code book, printed on dark brown paper to make xeroxing harder. But with a modern Xerox machine at my dad’s place of work, this was no challenge. Without any hint books available, Bjurn, Tobias and I needed around two months to complete the game and solve all puzzles. So despite the price, still good value for money 🙂

I rarely saw cracks for DOS games. Was it a lack of tools? In those years I encountered a crack only once. It was a crack for Indiana Jones and the Last Crusade (the Lucasfilm Games graphic adventure). It’s a tragedy that I lost the disks. Basically the crack was a TSR program called INDYPATC.EXE (or INDYPATC.COM, I can’t remember). After loading the TSR program, you’d start the actual game INDY3.EXE, and then at the code check (which was actually not at startup, but quite late into the game), you’d press some key combination, and the correct answer was displayed in the upper right corner [2].

Even by today’s standards this is quite remarkable:

  • It was the German EGA version of Indiana Jones, which makes it likely that the reverser was German
  • I have no clue whether Indy3 was running in real or protected mode, but essentially the tool must’ve monitored the memory locations where the answers were stored – how do you find these locations? How can you be sure they are stable? Due to SCUMM, disassembling alone would never have sufficed; a debugger must’ve been used. Up until starting to investigate all this, I didn’t even know SoftICE (for DOS!) or Turbo Debugger existed!
  • I have some suspicion that Sebastian was actually the source of this crack. Sebastian was the son of one of my dad’s colleagues, and even though only a few years older, he was sort of an uber-hacker for us kids. Matthew witnessed Sebastian fluently coding 6510 assembler on the C64. Rumor has it he was part of a C64 warez group called Acia(?) or Acer(?); he definitely had connections to the software underworld. But then again, working on a bare-metal machine like the C64 and on DOS with all its abstraction layers are quite different leagues, so maybe I’m giving false credit here.

In fact, there were no help files or instructions, and the whole approach was so unfamiliar to me that I couldn’t really figure out what to do – after all, when you started INDYPATC.EXE nothing ever happened. It was a simple TSR without any screen output or instructions on what to do.

I managed to get the actual code book via other means (Tobias had a friend who had a xeroxed copy), and only some years later I figured out how to actually use the crack.

The Jurassic Park Traumata

Going to the movies back in the 90s wasn’t an easy thing. Interesting movies had parental guidance ratings, which were actually enforced at our local cinema. Then there were the logistics – you had one of your parents drive you to the cinema and give you a ride back home. Simply getting to know which movies were running at all was something you had to look up in the newspaper, and then you had to somehow acquire information on what that movie was actually about. And whether it was worth the hassle.

I distinctly remember the following movies from the 90s as leaving a huge impression on me:

There were other good movies of course, but I didn’t watch them in the cinema.

Jurassic Park was the first one of them all. It was just mind-blowing. I haven’t seen it lately, but I’d bet that it probably aged well. It was just awesome [3].

So awesome that I just had to have the game. Sure, it was from Ocean Software, which had a history of making very average movie conversions. But Jurassic Park was reviewed in PC Games Germany with 79%, and that was good enough for me. So I managed to somehow get the money and buy the game (Mom, Dad – this could be my birthday and Christmas present combined, what do you guys think? Pretty Pleeeeease!).

But there was Rob Northen, who apparently had issues with sharing. Jurassic Park didn’t have one of those easily xeroxable code books or manuals, where you had to type in a code or a specific word from the manual at startup to verify that you were a legitimate owner. This one was copy protection at the disk level: you had to have floppy disk #1 in your 3.5-inch disk drive, otherwise the game wouldn’t start. That was nasty, and a bad idea – even back then. After all, floppy disks weren’t exactly known for being reliable. And thus sharing Jurassic Park with Bjurn meant really that – I couldn’t share a copy with him, I had to physically hand him the disks. I lent him the whole game for more than a month. In fact, he was and is a more resilient gamer than I am; he got quite far, but couldn’t pass the 3D stages. I should probably finally play this ‚till the end screen with a walkthrough, if I ever find the time.

So long story short, my first childhood issue is Rob Northen’s frickin‘ copy protection system. I always wondered: Would it have been possible for me to crack it back then, had I had more knowledge? That’s an interesting challenge, namely

Challenge #1: Remove the copy protection from Jurassic Park (DOS)

Fast forward to the late 90s. Things had changed significantly by then. You could buy PC games actually at large electronic chains, such as Saturn and MediaMarkt (in Germany). Prices had also dropped. I remember buying my original copy of Starcraft – Protoss Box – for 66 Marks. I mean they had PC Games(!!!!). Not just console games, and they would even sell PCs!!! PCs were so mainstream, that you didn’t have to go to specialized shops. Insane. Unbelievable. Who would’ve thought?

Then there was the Internet. In fact, I had a fat PC with a 14.4 modem, a Creatix SG 144, soon replaced with a 64kbit ISDN connection. Nevertheless, the net was not nearly what it is today, i.e. information was still sparse and scattered, especially in German.

What I mean is, to program or reverse games you need certain qualities:

  • an above average IQ
  • be stress-resistant – reversing is an incredibly frustrating experience with a steep learning curve
  • have a high motivation (this goes hand in hand with 2)
  • have information, i.e. tutorials, books, articles
  • have the right tools

Of course, you don’t need all qualities to be somewhat successful. For example my IQ, that is my raw calculation ability, is very average – something I painfully understood when doing my PhD in Formal Methods or playing chess. There were a lot of folks who were simply smarter than me; a very humbling experience. On the other hand, I score high on stress resistance and motivation, considering that I survived and succeeded in a very stressful, high-pressure-to-perform lab environment.

However, I want to focus on the information and tools part. The point is, back in the day getting information and the right tools was incredibly difficult, especially if you weren’t well connected in certain circles.

I very much remember that in the 90s I wanted to learn programming. I knew a few bits and pieces of Pascal. I got a copy of Turbo Pascal, and then some tutorials from my dad. He was not much into programming himself, but he had some teaching material for Pascal on CP/M machines. This didn’t really fit with Turbo Pascal: While there was a CP/M version of Turbo Pascal, the instructions and commands didn’t really match, we had no manual for the DOS version, and the tutorials were very outdated, i.e. they didn’t consider things like graphics output. And so I was quite stuck. Especially in the early 90s, the PC community was incredibly small – most home users had C64s and Amigas and weren’t into programming, but rather into games – and I had no connection to people who could actually program.

Books were hard to come by. Libraries had only outdated stuff. Bookstores rarely had computer books, and if they did, they were books targeting beginners. There were very few book reviews available, and my English wasn’t up to par. Still, I wanted to learn C or C++ programming. In fact, I didn’t know the difference back then, but Tobias said he had heard that Sid Meier used C to program Civilization, so it MUST be good, right? Right!

I bought Borland C++ 5. Das Kompendium by Dirk Louis. I didn’t own Borland C++, but hoped to get a copy somewhere somehow via my dad’s colleagues. Unfortunately that never happened; Borland’s C++ simply wasn’t that widespread.

The book was 100 German Marks, and it’s a really bad book. Half of it is spent on the IDE itself, explaining its GUI options. Some part covers C, another covers C++. I am not sure whether the author actually understood object-oriented programming himself back then.

I’d never heard of these guys called Ritchie and Kernighan, or that they wrote a book that might have been useful for me. I’d never heard about GNU, or that there was a free C compiler.

By the way, another shitty book that comes to mind is Computerspiele selbermachen (Make Your Own Computer Games). There was no review; I only saw the title in some catalogue and bought it. Going by the title, my expectation was that it would teach you how to program games, starting from simple things like Pong and Space Invaders up to something slightly more complex, say Doom.

Except it didn’t teach you programming at all. It mentioned POV-Ray for DOS (without actually explaining much) and shipped with a trial version of Game Builder Lite.

In any event, the author created some ray-tracing pictures, added some notes on story creation, added a CD with various freeware and trial versions, and compiled all that into a book. In other words, another 40 Marks wasted.

My point being: Whereas today there is a plethora of (free) information out there, back then, having access to the right information and tools was an incredible competitive advantage.

Enter Nils

Aside from Sebastian, Nils was also a supposed uber-hacker back then. In retrospect I think he knew shit (as opposed to Sebastian), and I suspect he was just well connected, had some tools, and barely knew how to use them. But that was sufficient, and for us he was the uber-hacker.

I mean, we once heard a story that Nils was capable of creating a virus that would trash your hard disk with a head crash. I think he actually told and spread the story himself. Allegedly he would do this either via a firmware hack or by going very low-level on DOS, spinning up the drive and suddenly parking it, or sending other inconsistent command sequences that would physically trash your hard disk.

I discussed it with Matthew back then. I was very sceptical. Matthew was too, but since he had seen Nils “hacking” in action a few times, he took it more seriously. I mean, even today I am 99% sure it’s bullshit, but then again, there were and are some doubts – the simple point is I don’t have a clue about the hard-disk firmware of 90s drives, or what kind of low-level disk commands were possible in DOS. But I guess that is the very thing – people who actually know something will double- and triple-check before making bold statements, whereas amateurs will make bold statements without blinking, even when they are completely wrong.

I mean think Trump.

Later, when the whole DivX thing took off with DivX 3.11alpha, I met with Nils once again to trade ripped movies. We spent two days copying CDs, and since we had a lot of time, we started talking and he proudly showed me a Visual Basic program. He started it, and there was a small window where you could enter something into an input field, and the program would then start Internet Explorer and google the term for you. This was around the time I was about to start, or had already started, studying computer science.

Needless to say, I was not impressed.

The DarkReign Complex

In any event, in the mid and late 90s, real-time strategy games were the thing: Dune II, Command & Conquer, Warcraft, StarCraft, you name it. Copy protection back then usually consisted of some checks on whether a CD was physically present in the drive. As CD burners were extremely expensive (initially an HP 2x CD burner was around 1500 Marks, a no-name CD-R approx. 12 Marks, a brand-label CD-R approx. 20 Marks), copying a game was almost as expensive as buying the original.

But there was a short window around the break-even point where copying and selling games on CD-Rs was profitable at a small margin. Nils got a CD burner back then. In fact I had one too – but I never went into the copied-game-selling business. But he had something else: Dark Reign, brand new, just released. He charged me (and I think Bjurn and Matthew) 20 Marks each in advance for copies. We eagerly awaited the release. Got the CD-Rs from Nils. And…

there was some copy protection on this one that identified the CD-R copies as non-genuine.

We were pissed. Nils assured us that “soon there will be a crack available”. But this was bullshit of course. If there had been a NoCD crack, we wouldn’t have needed the CD-R copies in the first place. Also, if Nils was this uber-hacker, why didn’t he crack it himself? After all, he had already gotten our money!

Obviously, I never bought a game from him again. Today, however, I wonder whether I could’ve done the crack myself. So

Challenge #2: Remove the copy protection (i.e. create a NoCD patch) of Dark Reign

Or to summarize the motivation for the challenges: Given the right tools and good information back then, would I have been able to crack Jurassic Park (DOS) and Dark Reign? What were – and what are – the challenges involved?

For these challenges I will use modern tools. First, I cannot recreate those old times and in any event don’t have the old hardware available, so I have to rely on emulation, Windows compatibility modes and the like. Also, it’s both difficult and cumbersome to acquire and get to know the old tools (I am talking Turbo Debugger, SoftICE for DOS/Windows, and W32Dasm here). Thus I will instead use modern tools like IDAPro 7 Free. The next two posts will investigate Challenges #1 and #2.

[1] I see some parallel to the US VISA waiver program here. By the way, do you plan to execute a terrorist attack on the United States? Answer in the comments below – you can win a free journey to some CIA black site somewhere in rural Eastern Europe!

[2] Interestingly, I found this NFO when googling INDY3.EXE for this article, which is also quite remarkable. I saved a local copy here as a LibreOffice ODT.

[3] AWESOME, like in: Getting better flight connection by some super-gay check-in agent.

[4] The word „crack“ was common terminology back then, but nowadays it just seems strange to use it. It feels like an ancient word from the past. Like Wares. Or Warez.