Elantech Touchpad on Fujitsu Notebook in Ubuntu

November 30, 2021

So I’ve got a brand-new notebook – though I could not choose the device myself, as it was bought by my company.

It’s a Fujitsu E5410. A decent machine, even though I expected more w.r.t. battery life. Also, the case doesn’t feel that sturdy – too much plastic. Personally, I still prefer the Thinkpad T line of models.

One problem is that the Fujitsu E5410 uses an Elantech touchpad. And I had quite some trouble getting it to work in Ubuntu 20.04. The touchpad is not recognized at all in Ubuntu. Moreover, when using the stock kernel, sometimes the keyboard would not work either. So you are at the login screen with both keyboard and touchpad dead; and without a USB keyboard there is no way to even shut down the computer properly.

The reason is that modern touchpads use a weird HID-over-I2C protocol. It was developed by Microsoft, and rumor has it that manufacturers don’t follow that protocol strictly.

Here is how to get it working:

1.) Install the Ubuntu OEM kernel:

sudo apt install linux-oem-20.04b 

2.) Load the i2c_hid and i2c_hid_acpi modules at startup by creating a text file that contains the names of the kernel modules, one per line:

cd /etc/modules-load.d
# the redirect is performed by the non-root shell, so write the file via sudo tee
printf "i2c_hid\ni2c_hid_acpi\n" | sudo tee i2c_hid.conf
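
After a reboot, a quick sanity check that the modules were actually picked up (not required for the fix itself, just for verification; the exact device name may differ):

# check that the i2c_hid modules are loaded
lsmod | grep i2c_hid

# list input devices and look for the Elantech touchpad entry
grep -i -A 4 elan /proc/bus/input/devices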

And that’s it. I tried other kernels, like the default Ubuntu kernel or a mainline kernel, but then either the keyboard would not work anymore, or tap-to-click would not work on the touchpad. Another advantage of the OEM kernel over mainline kernels is that it is signed, so there is no need to deactivate Secure Boot. Now, I am not a fan of Secure Boot, but it’s still better than nothing…

A Little Primer on Grand Pianos

September 30, 2021

Update 2023: I wrote below that I would put this on resubmission after two years. We bought our used grand back then at Pianovum, were completely satisfied with it at the time and still are today. A clear recommendation.

I’m going to post something about Flügel – grand pianos, that is, not the Red Bull kind. Because when we wanted to buy a grand, I naturally searched the net for information. For example, which pieces are particularly suited to testing the full range of sound. How to check whether the hammer action is in order. How best to find cracks in the soundboard.

That kind of thing.

Instead, all I found were helpful hints like “Don’t buy a grand piano sight unseen on the internet. Look at it and play it in person. Inspect the instrument. Are all the keys there?”.

That kind of thing.

That’s why I would have liked to find an article like this one, so I’m writing down a few thoughts here. But first, a warning: don’t believe what you read on the internet. Clavio or similar places. Why? Some people, very convinced of their opinion, post very sensible things. Some people, equally convinced of their opinion, post complete nonsense. Everyone has opinions. Few have a clue.

Why? Well, because Yuja Wang might be online for five minutes in the evening to shop for new miniskirts, but certainly not to announce her opinions on grand pianos and piano brands. The rest of the time she practices. Or performs at a concert. She has absolutely no time for that. Professionals rarely have time to post anything on the internet.

The same goes for master piano builders. Although they are not exactly the typical internet crowd anyway. This also applies, by the way, to what I write here. It’s all nonsense. It’s my opinion, but I don’t really have a clue about any of this either. Which is why I proclaim it all the louder. So don’t believe it.

My opinion is essentially based on a question-and-answer game with the HzB. And just so you can judge her qualifications – and so I can show off, of course: she did make it to Geidai. That way you can perhaps better place all of this, and decide whether such an opinion makes sense for you or not.

Now, on to the question-and-answer game.

What distinguishes a beginner from someone who has been doing this a bit longer?

The beginner reads a C and then plays a C. Done. Once you have been doing this a bit longer, you notice and hear that there are numerous ways to play that C. These are analog instruments. Then you start looking at things like: What comes before? What comes after? What was the composer thinking? It’s all like German class in school – it keeps catching up with you.

In the end, every performance is a finely nuanced interpretation. And you simply have to learn to really listen. Otherwise all you hear is a C.

Incidentally, this is also the reason why many piano teachers don’t like it when their students play on a keyboard. On a keyboard you press the C at, say, 32.356 percent and always get exactly the same tone. So you learn: at this spot in the piece I have to press the C at 32.356 percent, and then it fits. On an analog instrument there are not only more nuances, it can also sound one way one day and different again the next. On a different grand anyway. So you are forced to listen to what is actually coming out. And to play accordingly, so that it fits, so that the tones you had in mind, the ones that are supposed to sound, actually come out.

What does the grand piano market look like today?

Essentially, there are three options.

  1. You are rich. Then you buy a new Steinway. Or maybe a Fazioli. Or a Bösendorfer. But if you are spending that much money, you might as well buy a Steinway. So probably a Steinway. And the Fazioli as a second grand or something…
  2. You are not rich and buy a used grand. Probably a Steinway. Or a grand from German production, e.g. an old Schimmel. Maybe a Bechstein. Or a Bösendorfer. Let’s say the German-speaking world. Ibach. Whatever.
  3. You are not rich and buy a new grand. Then you buy a Yamaha. Or maybe a Kawai.

And then there are countless manufacturers with Chinese involvement. Almost all German makers have been through difficult times over the years, and so either everything or parts are now produced in China. Mostly for the Chinese market too, actually – all with a German logo.

In China, price plays a big role, and not necessarily quality. The European market, with its handful of newly sold grands, waning interest and a huge used market, doesn’t really matter there. In the old days, if you wanted to be somebody, you bought a piano or a grand and put it in your living room. Today you buy a 4K TV and an SUV. I won’t list all the brands here – that’s what the internet is for.

What distinguishes a cheap grand from China from a good grand?

Building a grand piano involves a great deal of manual labor. And it depends on the quality of the materials used. Of the wood.

Some say, for example, that Yamaha uses different types or sources of wood for different continents (climate zones). Whether that’s just a rumor I don’t know, but the point is that it isn’t absurd to imagine.

There is nothing that couldn’t be made a little bit worse and a little bit cheaper. China has taken that to heart.

In the end, though, I don’t really know where the difference in quality comes from. And the hzB said she has actually hardly ever played a new low-cost grand, almost only used ones, and those sounded bad. And she has essentially never played a used grand from, say, Steinway (i.e. an originally expensive, quality instrument) that sounded bad. Maybe not the way she liked, but not worn out and done for. In contrast, plenty of dreadful used grands from the usual “Made in China” manufacturers.

But what is the difference in sound?

The hzB explained it to me, the nuts-and-bolts engineer, like this – adding right away: this is my opinion; other pianists may see it completely differently.

Think back to old graphics standards, DOS-era. There were EGA and VGA. EGA could display 16 colors simultaneously, VGA 256. Made in China = EGA, Steinway = VGA, to put it in completely simplified terms.

Translated, that means: with 256 colors you simply have far more nuances and control over the sound, and so you can play a piece with far more nuance.

On the other hand, with both EGA and VGA you could choose the actual colors from a palette. So only 16 colors at a time, but one computer game uses a palette that can express shades of red with nuance, another one with more shades of green, and so on.

Cheap vs. expensive grand is like EGA vs. VGA. A grand from manufacturer X vs. one from manufacturer Y is like palette A vs. palette B. They may be equivalent in a certain sense, but they play completely differently. Not necessarily worse-sounding – maybe it’s simply a matter of taste.

This is actually really interesting when you stand next to it as an absolute layman and listen. Even I can hear the difference then. Even though I can’t evaluate or categorize it, you immediately hear that something is different… You think: huh, that sounds fine – and then you hear another instrument in comparison, and only then do you realize that the first one didn’t sound that good after all.

Why Steinway?

When you have a performance, there is a Steinway on stage. When you listen to a recording, whether Deutsche Grammophon or Naxos, and want to find out how a piece is played, or how a piece could be played, that recording was probably made on a Steinway.

And when you then want to practice, you want to practice on a Steinway at home. Partly because it sounds good. But also because you don’t have to perform that act of translation – in computer graphics terms, the mapping of palettes back and forth, which never quite fits, e.g. when the distances between two colors in one palette are so different from the equivalent colors in another palette. And then a Yamaha can be as good as it likes – it hardly matters. At that point it’s no longer just about the sound.

The Steinway color palette is … complex and hard to master. But on the other hand very versatile. The Yamaha color palette is easier to master and “nicer” to the pianist. So says the hzB – with the caveat that this is her very personal interpretation. Everyone has their own opinion there… This also has to be seen in another context, namely teaching. There it may first of all be about the student developing any awareness of the palette at all. And for that a Steinway isn’t necessarily that good, because it overwhelms the student…

What is the used Steinway market like?

Restoring Steinway grands is a double-edged sword anyway. Steinway’s biggest competition is used Steinways.

Steinway says: anything not restored by us, with a certificate of authenticity, is complete rubbish. It has to go to Hamburg. On the one hand that’s true – there are many Eastern European restorations of inferior quality. These are then sold cheaply to people who want a Steinway in their living room but hardly play (or can hardly play). They want a Steinway (matches the Benz), not a Yamaha grand (nothing from China? What, Japan? It’s all Asia, same thing!).

On the other hand, there are of course also good master piano builders who restore well. And they really do ruin Steinway’s business. The task now – unless you are rich – is to find such a piano builder. And that is not at all easy and works only via word of mouth and the like. Google Maps reviews are of no help here…

Which grand for the home?

Small grands (baby grands of 150 to 160 cm) are subject to higher mechanical stress. If they are too small, the real question is whether a good upright isn’t the more sensible choice. Something between 170 cm and 200 cm works for the home, depending on the size of the room where it will or can be placed. Sure, if you have a concert hall attached to your house, you can put something bigger there, but very few people have that.

How expensive is a good grand?

I’ll boldly claim the reference here is a Yamaha C3X. Inexpensive and good. New, it can usually be had for about 30k. 186 cm long, so it fits well into a music school and also into a home, and it is correspondingly common.

Below that there is a clearly audible difference in sound, even for me as an absolute layman. The “few cm” down to the C2X make quite a difference in sound. And anything bigger usually no longer fits in the room, unless you have your own concert hall attached. But if you can afford that, you just buy a new Steinway and a Fazioli as a second instrument.

The C3X is also somehow the reference against which a lot is measured – at least that was our impression. Yamaha simply makes good grands. But they are not Steinways either. And so the question for us became: something used, or a new Yamaha?

Isn’t that really expensive, somehow?

Yes.

No really, isn’t that seriously expensive?

Well, in our neighborhood an SUV is practically mandatory. Or at least an Audi. Better a Mercedes.

I don’t know how we ended up here either. There is one other neighboring couple where the husband drives a used Volvo V40 and the wife a Ford Mondeo. Those are the poor ones. Together with us.

I don’t know any of this from my hometown.

We only have one car, a Toyota. I take the bus and the train – that’s cheaper. Everyone has their priorities.

How long does a grand piano last?

No idea. It also depends on how often you play it, how well you regulate the environment (especially humidity), and so on. But since there is a lot of mechanical stress, something will inevitably break or wear out at some point. And more than just a string.

How expensive is the restoration of a grand?

I think a comprehensive restoration will come to roughly 30,000 euros. There is a price list here, for example.

That of course depends on what exactly is done and where. Is the work carried out in Germany, for example? Particularly with Steinways, a lively trade has established itself that buys old grands, has them restored cheaply in Eastern Europe (e.g. Poland), and then resells them here. The quality of those restorations naturally varies…

Why is repairing/restoring so expensive? Here you can get a rough idea of everything that is done and how much effort it is. And labor (and of course the spare parts) simply costs a lot of money.

Why are there no used Yamahas or Kawais, if they are so good?

That’s a good question, and I can only speculate. But I think it’s simply not economical.

First of all, Yamaha has improved a lot over the years. The grands really have gotten better, also in quality (like many Japanese products). So a Yamaha grand from the 50s may not be all that desirable. But say there is a good mid-sized grand (180 cm). The dealer buys it for a few thousand (?) euros, restores it, and puts 25,000 euros of material and labor into it. And then a customer comes along and faces the choice of buying an ancient restored Yamaha for 30,000 euros, or a new C3X for 30,000 euros…

How do I check whether a grand is in good condition?

Quite simple. Really quite simple: play it 🙂

If it sounds good, it is good. At least at the moment of purchase. That of course says nothing about durability.

But that is definitely the hzB’s method; that is how she picked the current Schimmel upright, for example. We got lucky with it – it has proven very robust and has now been given its place in the living room…

In any case, we have now bought a used Steinway M. Restored. Did the dealer rip us off? Was it actually restored in Poland and not in Germany as we were promised? Will it last, or will we notice in a year that the restoration was botched and everything falls apart?

The soundboard at least is original, or rather restored: the cracks glued and sanded. We can only hope that we haven’t sunk a not exactly insignificant amount of money.

In any case, I decided to put this on resubmission. If the thing is still OK in two years, I will name the dealer here with praise. If not, I will name him too, but in terms so convoluted that you can still figure it out, yet nobody can come after me.

Still, I would have taken the C3X.

But then, I would also buy a new Lexus over a used Ferrari any day. That’s just how I am.

Linux on the Desktop 2021 (and 2022, and 2023, and 202x)

March 9, 2021

For almost 20 years, I have usually had two operating systems running: Linux for development stuff, and a Windows installation for everything else. A long time ago it was a dual-boot system, but nowadays I usually have a separate notebook with Linux for development.

Every once in a while I consider switching to Linux as my main desktop operating system. And every time there are some significant showstoppers that prevent me from doing so.

From 2010 to about 2020, the main issue was hardware video acceleration in the browser. You know, watching Youtube and stuff without your notebook constantly running at 100% CPU or choking on Full HD videos. Battery running low. This stuff.

Way back, Youtube was based on Flash video, and Adobe just gave up on trying to create anything accelerated due to the fsck*** mess of video acceleration interfaces in Linux. There is VDPAU (Nvidia), VA-API (Intel) and XvBA (AMD).

> "The good thing about standards is that there are so many to choose from." – from the book Computer Networks by Andrew S. Tanenbaum

I thought the situation would improve when Flash went away and everyone switched to HTML5. And I was right: fast forward ten years, and now Firefox 80 ships with optional GPU acceleration. Well, at least for VA-API, but not for VDPAU; so if you have, say, an Nvidia GPU for machine learning, you’re out of luck. Also, it’s optional – you have to activate it manually. And it’s not available in Chromium.

Just ten years, so I guess sometimes you just have to be patient and problems will eventually solve themselves.

Other showstoppers for me right now:

  • A USB microscope (DinoLite) not working anymore. Apparently there was some regression w.r.t. USB or camera drivers that were removed… I didn’t dig very deep here, but it suddenly stopped working.
  • A Ricoh Aficio SP300DN not working properly in Linux. I did extensive research before buying this one, looking through various magazines and the net. One magazine (c’t) reported compatibility with Linux, as the Mac driver ships with a PPD. The PPD installs, and printing PDFs works 99 times out of 100. Just once in a while, the printer suddenly stops with a Postscript error and goes into berserk mode: it feeds in every sheet of paper in the tray until the tray is empty and prints one line of garbage on each sheet (so that you can’t reuse it anymore). Probably some issue with how poppler creates PDFs… no clue. The printer works fine with the PPD in macOS, though.
  • The most-read entry of my blog is actually the one about my fight to get WiFi working on a Thinkpad E470. Again I did extensive research before buying it, but couldn’t find any warning w.r.t. the WiFi chip in advance…
  • Aggressive Link Power Management (ALPM) is not working. On my development notebook this means that the SSD is always at full power. This not only reduces battery time, but also makes the left palm rest (metal) really hot. While the flash chips of the SSD should not be affected, I do worry about the controller chips in the SSD and its overall life span. There is experimental support for ALPM, but it is deactivated by default in most distributions due to potential data loss (a quick way to check the current policy is sketched after this list). However, I really don’t want to find out on my production machine whether I am affected and my machine has subtle and unexpected data loss…
  • There is no proper echo cancellation in Linux. If you are using Skype or some other conference system with the built-in microphone and speakers (and not a headset), you’ll have annoying echo and latency issues when video-conferencing. For work I of course use a headset, but for family calls with my parents this is an absolute no-go. Again, there seems to have been no progress for several years now.
  • My tax software is not available for Linux. In fact, no tax software is available for Linux in Germany. This is the smallest issue of course, as I could use another system for taxes, dual-boot, or use a VM. But still, it is annoying.
  • There are other small tools which are not available: Exact Audio Copy, DVDShrink, IdaPro5. There are, however, workarounds with Wine or equivalent tools.
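
For the ALPM point above, here is a rough sketch of how to inspect and (at your own risk) change the policy through sysfs. The sysfs path is the standard one exposed by the kernel’s AHCI/SATA driver, but the available policy names depend on the kernel version:

# show the current link power management policy for each SATA host
cat /sys/class/scsi_host/host*/link_power_management_policy

# temporarily switch one host to a power-saving policy (reverts on reboot);
# only do this on a machine where occasional data loss would be acceptable
echo med_power_with_dipm | sudo tee /sys/class/scsi_host/host0/link_power_management_policy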

Trying to categorize the above issues, there are mostly two major points:

  • Driver support
  • Availability of Commercial Software

Unfortunately, looking back twenty years, the situation was exactly the same, just the details vary. Back then printers were an issue – anyone remember GDI printers? Sound was an issue, too – anyone remember OpenSoundSystem, or that you could only play one sound at a time in Linux, e.g. you would not receive notification sounds from your instant messenger while you were listening to music? WiFi – anyone remember ndiswrapper?

Uh, and there was also no tax software.

But then again, with everything moving to the browser, the software issue is indeed becoming slightly less important.

But why is the overall situation not improving?

In my humble opinion, it is due to political reasons. The market reality and the roles and responsibilities of product development are not taken into account. And unfortunately this is why I think the situation won’t change anytime in the near future.

Let’s take the viewpoint of a device manufacturer. You’ve designed and built your device, and now you have to develop a driver and ship the product. You are on a tight schedule, as your competitors also have a product in the pipeline. First you target Windows, as this gets you 90% of the operating system market share:

  • You license some generic driver package for some of the chips you use in your device. The chip manufacturer provides these drivers under a commercial license.
  • Based on that generic driver package, you have your developer create a Windows device driver. You estimate that it will take about three months for your developer to program the driver.
  • After that you get your driver WHQL-signed to make sure it works flawlessly in Windows. You estimate another three months for this process. Microsoft used to charge a small fee for WHQL testing (negligible), but nowadays it’s even free. There is also a small fee for the driver signing certificate – again negligible if you take into account the overall cost of product development.
  • You finish on time. The driver meets all quality standards, and static program analysis tools for Windows like Static Driver Verifier, Code Analysis for Drivers, CodeQL and other test tools make sure that your driver works stably and without bugs.
  • You ship your product. The Windows driver model changes very infrequently; you can be assured that consumers will be able to use your product for likely 5+ years, most likely even 10+ years, even with newer Windows versions – Microsoft cares a lot about backward compatibility.

Your device is a market success, now you also want to get another 1% market-share, and target the Linux desktop.

  • Shipping a binary driver is impossible, as there is no stable ABI. The only possibility would be to do nasty things like writing an abstraction layer, as Nvidia does. However, the legality w.r.t. the GPL is questionable. It is also a hassle for users, and due to the abstraction layer, frequent updates and testing are required throughout the life cycle of the product.
  • Alternatively, drop the generic driver package that you licensed, re-develop everything from scratch, and open-source the driver. You fear, however, that some company from the Far East will create a clone of your device and simply copy your driver. Essentially that clone manufacturer gets a driver for free, and probably also quite some insight into how your device works by reading the driver source code. You fear that they will beat you on the market, as consumers will buy the cheaper clone.
  • Develop a driver, pitch the driver to the kernel guys
  • Get rejected ’cause your code doesn’t meet the kernel code style and quality guidelines
  • Do that back and forth until your driver is eventually included in the kernel
  • Now at some undefined future time some distributions will include the new kernel revision. You have no clue, nor any control over, when this is going to happen. Your product is late to market, and nobody wants it anymore.
  • After all this is done, your product works perfectly. But then there is some regression in the kernel, because someone decided to redo the USB stack. Due to some subtle bug, your device no longer works on one of the major distributions. You work with both the kernel guys and the distro maintainers to get your device working again.
  • You also have to test whether your device works on Ubuntu 18.04, Ubuntu 20.04, Arch Linux, Debian Testing and Unstable, Red Hat Enterprise Linux, SUSE Enterprise and some more major distributions and versions to make sure your device doesn’t have subtle bugs in Linux.
  • It’s really hard to plan for all of this; there is no defined process for driver inclusion. You have to talk to a lot of people, like the kernel guys. And hope everything will work out eventually.

Hypothetical? Well, these are the issues that occurred w.r.t. the DinoLite (apparent regression) and the Ricoh printer. Granted, the drivers were not specifically written by the manufacturer but by third parties, but still – these are the issues you will run into.

As for software: let’s say you’re an ISV and develop a particular software package. Let’s say it’s software for some specific purpose, say a CAD tool for a very specific industry. The software is your product, so there is really no way to open-source it and sell support.

A software developer who wants to ship a product

  • has nothing stable to ship against. There are various versions and releases of Gtk, Qt and other libraries in a bazillion distributions. There is no stable ABI like Win32 or Cocoa.
  • has to package all libs themselves. However, since some of these libs are shipped by the distributions in different versions, you have to make sure that only your libs are loaded during program startup. You also have to test this for at least 20 different distributions and distribution versions.

Hypothetical? I challenge you to ship a binary package of a small C++/Qt application for four to five major Linux distributions. I did it. It’s possible, but the effort is really huge compared to, say, Windows or macOS. Just google things like ‚Failed to load platform plugin "xcb"‘.
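
To give an idea of what packaging all the libs yourself looks like in practice, here is a minimal, hypothetical launcher script of the kind you typically ship next to the bundled Qt libraries and plugins (all paths and the binary name myapp are made up for illustration):

#!/bin/sh
# resolve the directory this script lives in
APPDIR="$(cd "$(dirname "$0")" && pwd)"

# prefer the bundled Qt libraries over whatever the distribution ships
export LD_LIBRARY_PATH="$APPDIR/lib:$LD_LIBRARY_PATH"

# tell Qt where to find the bundled platform plugins (e.g. libqxcb.so)
export QT_QPA_PLATFORM_PLUGIN_PATH="$APPDIR/plugins/platforms"

exec "$APPDIR/bin/myapp" "$@"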

I have to say, Snap and Flatpak have changed the situation slightly for the better though.

In other words: the lack of a kernel driver ABI and of an application ABI hinders the adoption of Linux on the desktop.

The same situation, btw., is happening on Android. The chipset manufacturer ships a proprietary board support package, which will definitely never end up in the kernel. It targets one specific kernel. Then they freeze it. That’s their ABI. Just look at the Android kernel of your phone. My phone is running kernel 3.18.71. Apparently, an ABI is needed.

Due to the lack of one, right now everyone just creates binary blobs and freezes the kernel. Google tries to abstract the hardware and create something of an ABI with Project Treble and other initiatives. Let’s see what comes out of it.

What’s the kernel developers’ opinion on that? Well, there is this document called "Stable API Nonsense".

> You think you want a stable kernel interface, but you really do not, and you don’t even know it.

Well, they should talk to those Google guys, because it seems that they want a stable kernel interface after all and put millions of dollars into it. But it seems these Google guys really don’t want one – they just don’t know it yet!

Someone should tell them. They could save millions of dollars!!!

I mean, we just need to tell these Qualcomm, Broadcom, and Mediatek guys that they should open-source everything and create open kernel drivers! Easy, isn’t it?

In all seriousness, I think this "stable api nonsense" document is not very honest.

The technical argument of not creating a stable ABI is essentially: We don’t want to do it. Because it’s too much work. However, Microsoft and Apple show how it can be done. And Google shows how it can be done even with the Linux kernel.

And then there is some political argument, which is not mentioned in the document, but which is probably the major reason against a stable ABI from the perspective of the kernel guys and open source community:

Not having a stable ABI is intended to actively force vendors to honor the GPL license and upstream free code into the kernel. It prevents manufacturers from creating closed-source drivers and encourages them to create open-source drivers.

And it kind of works. Especially for big iron stuff. Think of how Intel supports open source now, and how the situation was ten or twenty years ago. Intel does not do this out of pure goodwill, though – there is $$$ involved. If some big iron data center wants to use Linux installations for heavy computation, it’s either "Linux works with your stuff" or "we will use AMD or some other vendor".

However, it only kind of works. Nvidia is doing all this complex proprietary driver abstraction because customers want fast Linux drivers and pay for them. But still, this is apparently more economical for them than creating and shipping open-source drivers.

And for other devices from smaller manufacturers, especially those targeting the desktop, it is even less economical.

And this is why I will still maintain two machines in the foreseeable future: One for development/number crunching with Linux, and one for general purpose computing with Windows – being my main machine.

M.U.L.E. – Input Lag (Delay Testing)

March 11, 2020

If you asked me what the best multiplayer computer game is, then without doubt both StarCraft: Brood War and M.U.L.E. come to mind.

M.U.L.E. is an incredibly entertaining turn-based strategy game, originally developed for the Atari 800 and then ported to various home computer systems. The C64 port is by far the most popular version of the game.

The gameplay centers around settling on the far-away planet IRATA and producing four goods, namely food, energy, smithore and crystite. The first three are for direct consumption, and crystite is a luxury item. In each turn, a player can choose which of these goods to produce, how to increase his production capacity, and so on.

M.U.L.E. also adds some real-time elements. In particular, goods can be traded after each turn, similar to a trading exchange. The four players then negotiate in real time with their input devices for good prices and the best deal. Depending on the trades and what each player specializes in, prices can vary a lot. Thus observing what goods your competitors bet on, what they produce and how things will turn out is part of what makes this so much fun.

Come to think of it, M.U.L.E. feels almost like a classical board game, augmented by the capabilities of a home computer:

  • Classical board games are usually turn-based, but have no real-time strategy aspect. Here, players have to act in a limited amount of time, and their joystick skills come into play. This results in focusing on your next turn way more than in a typical board game situation, and adds some nice tension. It often happens that players sit silently in front of the screen while one player focuses on his next moves.
  • Classical board games, say Avalon Hill’s famous Civilization, also often have trading of goods somewhere. But here the computer adds a lot: first, the structured way trades are conducted in a time-limited manner – as mentioned, similar to a real stock exchange – and second, the computer computes the resulting market prices in real time via complex formulas. Such calculations would be too tedious to carry out manually in a board game situation.

I think this, together with the fact that the game is simply extremely well balanced – rumor has it that the developers spent an incredible amount of time beta-testing with friends at their private home – is the reason why the game has aged so well.

In fact, a few friends and I regularly meet to play a game of M.U.L.E. For decades this was at Matthew’s home; he owns several C64 computers and 1541 floppy drives, all still in working condition. It is, however, getting more and more difficult to keep C64s in working condition – the excellent YouTubers Jan Beta and Adrian Black spend quite some time doing so. C64s have some fragile parts and design flaws: first, some chips are known to fail quite often, afaik that’s the CIA and SID chips. Another issue is the original power supply. It provides, among others, a 5-volt rail that is fed directly to the chips. Unfortunately this power supply gets quite hot and has no surge protection on that 5-volt rail. So in case of a fault, the 5 volts can quickly become 7, 10, 12 or more volts and will fry several chips at once (Jan Beta has two videos on how to build your own replacement power supply. It’s easier than you think).

Another point is that some time ago Matthew bought a humongous Sony 4K TV – almost as if to compensate for something. In the old days, we would connect the C64 via its S-Video out, i.e. separate chroma and luma, through a SCART adapter to an old-fashioned tube TV. Actually it’s not really S-Video, as that standard was defined in 1987, years after the C64 hit the market, but it still works (the pins are slightly different). This resulted in quite good image quality – then again, a tube TV has quite average picture quality in the first place. The modern Sony 4K TV only has a composite input, which resulted in a noticeably worse image. We also noticed a huge input lag. By input lag I mean the time measured from pressing a button on the joystick until a visible change occurs on the screen.

All in all, this is why we looked into emulation and input lag. Here is the setup: Matthew used his iPhone and recorded a video. In that video I’d smash the joystick in one direction (or press a key on the keyboard), and in the background you’d see the screen change. We then counted frames from the point in time where the joystick was at its maximum angle, or the key fully pressed down, until a screen change was visible. The C64 (the European PAL version that we use) runs at 50 fps. Matthew’s iPhone can record video at 240 fps, so that’s well beyond the Nyquist rate.
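
Converting counted frames in the 240 fps recording into milliseconds is just lag_ms = frames * 1000 / 240. A tiny helper of the kind we used (illustrative, not the original script):

# convert a frame count from the 240 fps recording into milliseconds
frames_to_ms() {
    awk -v f="$1" 'BEGIN { printf "%.0f ms\n", f * 1000 / 240 }'
}

frames_to_ms 37.5   # -> 156 ms
frames_to_ms 8      # -> 33 ms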

In fact, I’d show you the video or some frames of it, but Matthew voted against it, citing that his secret gay porn collection as well as details on his Grindr account are visible in the background. Which I can understand but:

Matthew, it’s okay. We like you the way you are.

Anyway, I tend to rant too much; the table below shows the results. The Sony TV is a Bravia KD-65XE9005.

Setup | Lag (#frames, video @ 240 fps) | Lag (ms)
Joystick button, original C64 @ Sony 4K TV via composite, TV in standard mode | 37.5 | 156
Joystick button, original C64 @ Sony 4K TV via composite, TV in game mode | 8 | 33
Keyboard button, original C64 @ Sony 4K TV via composite, TV in standard mode | 39 | 162
Keyboard button, original C64 @ Sony 4K TV via composite, TV in game mode | 12 | 49
VICE 3.1 (x64.exe) on Windows 10 / Thinkpad R500, joysticks connected via an e4you RetroFun! Twin USB adapter, Sony 4K TV in game mode | ~20 | 83
8BitGuy’s measurement, C64 mini on unknown LCD TV in game mode | – | 90
8BitGuy’s measurement, C64 maxi on unknown LCD TV in game mode | – | 90

We have several sources of lag:

  • The joystick controller (probably negligible on a C64, but it can be an issue with joysticks connected via USB). We unfortunately didn’t manage to test my recently upgraded poor man’s joystick, to which I added one of these zero-delay encoder boards. I don’t expect much difference to the RetroFun Twin, though.
  • Processing during emulation (i.e. lag caused by VICE or some other emulator)
  • Image processing by the TV

It’s unfortunate that we cannot establish a base delay, as no one has a tube TV anymore – buying one just for testing is simply overkill. On the other hand: if a tube TV runs at 50 Hz (i.e. 50 half-frames per second), and the cathode ray starts at the upper left corner at time point 0 and ends its run at 1/50 seconds in the lower right corner, we can roughly expect it to hit the middle of the screen at half that time, i.e. (1/50)/2 = 0.01 seconds, i.e. 10 milliseconds. Another question is how often the joystick ports are polled by the implementation. In other words, there is likely an inherent delay, not a zero delay, as the baseline.

Interestingly, for the SNES someone established that there is an inherent delay of 50 ms – that is, even with an original SNES with a wired controller connected to a classical tube TV (I expect this not to be the case with a C64).

From the stats above and using this particular 4K TV, we can see that the baseline with original equipment is between 33 ms and 49 ms.

With emulation via VICE we are somewhere around 80 ms to 90 ms. That’s probably like having 3.6 on your dosimeter during a nuclear accident: not great, not terrible. I had hoped something below 50 ms would be achievable. But then again, in the end it’s always about how you perceive the delay, i.e. whether it impacts the gameplay. And we all agreed that these 90 ms were not noticeable. It felt original. I could even imagine playing Katakis with this setup.

Some comparisons:

  • Here they achieved 70 ms with Retroarch (SNES emulation) with Run-Ahead latency reduction set to 2 frames and a wired XBox controller.
  • Here is a screenshot and discussion from 8BitGuy’s video. Note that in our setup we didn’t measure audio lag, but didn’t notice any when using a wired connection. Bluetooth audio is incredibly laggy, though.
  • Here is a more detailed explanation of what lag to expect from a tube (i.e. CRT).
  • Some more latency analysis w.r.t. emulation and RetroPie. They report delays of 32 ms and 50 ms (original NES/SNES connected to tube), delays of 95 ms and 93 ms with the NES Classic / SNES Classic re-imaginations, and 122 ms and 143 ms with RetroPie NES/SNES, all on a Dell 2007FP Monitor (Delays on a Samsung TV in game mode were worse).
  • Here are some stats from RetroPie. They don’t mention the frame rate, but as all typical home computer systems and consoles of the past run with 50 fps in Europe, I assume 50 fps. With all optimizations turned on they measure an average frame delay of 5.51, resulting in a delay of approx 110 ms.

Assuming the USB controller lag from the RetroFun Twin cannot be improved (maybe we’ll compare with a gaming keyboard or with the zero-delay encoder from my home-grown joystick), the only other source of lag we could improve on is the choice of emulator. But in our setup, VICE adds at most 60 ms of delay (more realistically 40 ms to 50 ms). Measuring that in frames and assuming 50 fps (20 ms per frame), we can state that in this setup VICE adds at most 60 ms = 3 frames @ 50 fps.

And then there is the Ultimate64, a C64 redone using an FPGA. Quoting from their homepage:

> What are the frame delays of the digital HDMI port? None. There is no frame buffer, so there is no need to worry.

I didn’t buy one due to the price tag, but I probably should. Or maybe my friends and I can share the burden and put some money together…

Oh, and no blog post about M.U.L.E. is complete without mentioning World Of M.U.L.E., an excellent resource on M.U.L.E. and all its ports and remakes. There is a Japanese version. And there is even a physical board game.

Corona

March 2, 2020

A few quick thoughts on the coronavirus. Sometimes I don’t quite understand my fellow human beings. For example, last week several colleagues – intelligent people – said things along the lines of “I really don’t understand why there is such a fuss about the coronavirus. It’s not much worse than the flu. Far more people die of the flu in Germany.”

All intelligent people who somehow can’t do percentages.

The probability of dying from the flu is 0.1 to 0.2 percent, RKI president Lothar Wieler said on Thursday. According to the figures known so far, the rate for the virus Sars-CoV-2 is almost ten times as high – one to two percent. 80 percent of those infected have only mild symptoms, but 15 percent become seriously ill with the lung disease Covid-19. “That is a lot,” said Wieler.

Source here.

So let’s do the math. 15 percent of all patients develop severe pneumonia in which the blood oxygen level drops dangerously, i.e. they need supplemental oxygen. Let’s assume that in a small town 10,000 people fall ill at roughly the same time during an outbreak. That means we need 10,000 * 0.15 = 1,500 beds plus oxygen for these people. 100 to 200 people will die.

And it doesn’t necessarily hit only the old and weak, but also relatively young people, such as Li Wenliang.

By January at the latest, given the case numbers and an R0 of 2.28 or higher, it was fairly clear that this thing would also reach Germany (that was the point when I did a bit of stockpiling, i.e. a few things so you don’t have to leave the house for two weeks – yes, including toilet paper and plenty of soap – and ordered a few masks). And no, I’m not a prepper. But maybe I just read too much foreign press?

And what did Germany do? No idea – maybe a lot behind the scenes. But otherwise essentially nothing, except insisting that it’s all not so bad and that 50 people have already died of the flu anyway (@11:16).

Then, despite the situation in Iran, China etc., for a very long time no form of screening, questioning or information at the airports. Now they are handing out little cards and want to buy masks centrally:

The crisis committee also decided to build up a stockpile of protective equipment such as respirator masks and special suits – not only for medical personnel. Central procurement by the federal government is to be prepared for this purpose.

Source: here. They can always place a bulk order on Amazon.

And then the thing with the masks. “Masks are nonsense” is peddled e.g. here. The gist:

  • N95/FFP2 masks do help, but you can only wear them for 30 minutes anyway before gasping for air.
  • Surgical masks are useless for protecting yourself against infection; they only help protect others when you yourself are sick.

Background for my annoyance: in Japan, people (and I, back then) often wear a mask, especially during flu season. Not that Japan has handled the current situation particularly skillfully, but at least subjectively I got coughed and sneezed at noticeably less.

A small thought experiment: if nearly everyone wore a mask, then those who are currently sick would be wearing one too, even those who are sick but asymptomatic. I mean, logic and set theory aren’t thaaat hard.

30 minutes, incidentally, is my daily commute on chronically overcrowded public transport.

And finally: there is indeed no proof that wearing a surgical mask protects against infection. But there is also no clear data showing that it doesn’t. The problem is simply that it is very hard to run a controlled scientific study here that excludes other factors. The setting (e.g. in a hospital, at a school, in a small village) plays into it. Masks do protect against unconsciously touching your face. There are indeed studies suggesting that such masks also help (passively), e.g. here, here, or, for a change, here with the gist “none of it really helps that much”.

So, all in all: no, no panic, no zombie apocalypse; we are not all going to die.

But probably 0.5 to 2 percent of all those infected will, and that really is quite a lot. And not only the old and weak, but also people in the prime of life, like the 47-year-old man from Gangelt near Heinsberg who is apparently fighting for his life right now.

Enough. I really have to do some more stockpiling next week.

Because disinfectants are important in a crisis like this. I think I will primarily rely on alcohol solutions at 4.8 percent by volume.

The Receipt Requirement (Bonpflicht)

January 30, 2020

Nothing annoys me more at the moment than the discussions in newspapers and on social networks. One of the most recent and most idiotic headlines was “Boxen gegen die Bonpflicht” (boxing against the receipt requirement).

These discussions always combine an incredibly large amount of ignorance and stupidity, a diffuse “the state is so dumb” and “they just want to boss us around”, strange environmental arguments (parts of Fridays for Future, for example, are riding the wave), and sometimes dangerous technical half-knowledge.

Briefly, the facts: as of now, and as of October 1, 2020 respectively, the following applies in Germany:

  • Merchants must issue a receipt (as of now)
  • Cash registers must be equipped with a technical security device (as of October 1, 2020)

To understand why this was introduced, one should first understand the problem. Think back to your last restaurant visit. There are different variants, but the inclined reader has surely come across one of these two before.

  1. Variant one: you don’t get a receipt at all; instead, everything is tallied up on a beer mat, or at the ice cream parlor happily on a napkin
  2. Variant two: “Do you need a receipt?” Most people answer “no” (unless they want to, or can, deduct the restaurant visit from their taxes).

In the first case, the owner never enters the transaction into the register in the first place. The restaurant visit never happened. Accordingly, the owner doesn’t have to pay any VAT for this visit, because, well, the visit never happened.

In the second case, the transaction has usually already been entered, but the owner simply cancels it. The restaurant visit never happened. Accordingly, the owner doesn’t have to pay any VAT for this visit, because, well, the visit never happened.

This happens almost everywhere, by the way. At the ice cream café, at the restaurant, but also at the bakery, the kebab shop, the Chinese restaurant. In taxis. Everywhere people pay cash and the number of visits/taxi rides/meals sold can hardly be verified from the outside, at best estimated.

When you pay by debit card, by the way, you practically always get a receipt. Because once the transaction shows up on the account, the tax office gets wind of it too.

When I wanted to pay for a rental car at my car dealership, I was once asked: “Do you happen to have 30 euros in cash?” “Uh, sorry, not in cash right now, can I pay by card…?” “Uh, then it’s 35.70 euros.”

The situation is particularly difficult because in the restaurant business – presumably – all this is so widespread that an honest restaurant owner can barely survive anymore.

How can this be countered? Well, two things have to be ensured:

  1. A business transaction must actually be entered into the register by the operator
  2. Once it has been entered into the register, it must not be possible to manipulate the transaction afterwards (e.g. deleting every second transaction, back- or forward-dating it, quietly adding entries when the tax office announces a visit, and so on).

The first problem can hardly be solved technically, only organizationally. And that’s why the receipt requirement exists. No receipt immediately means tax fraud. No ifs and buts.

Many countries, by the way, run a receipt lottery so that people actually take the receipts with them and scan them or send them in somewhere, giving the tax office a larger data set to work with. I think that’s a nice idea. You don’t have to take part in the lottery – keyword data protection – it’s just an incentive. But the operator can never be sure that his receipt won’t end up in the lottery after all.

The second problem can be solved technically, and that’s why there are now so-called technical security devices. They are not easy to trick, because they contain security chips like those used for pay TV or debit cards. So probably not completely impossible either, but so much effort that hardly anyone does it – or manages it at all.

Technically, by the way, this is not based on a blockchain, as is often foolishly written, but quite old-school on digital signatures. Each register gets an asymmetric key pair; the public key is registered with the tax authority, and the private key sits in the security chip. Each transaction is then signed with the private key. The signature covers the transaction data itself, but also a timestamp (from a clock inside the security chip) and a counter that simply counts how often the key has already been used. The transaction data + time + counter + signature are also printed on the receipt as a QR code.
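
To make the idea concrete, here is a rough sketch with plain openssl of what “sign transaction data plus timestamp plus counter” amounts to. The real security devices use dedicated chips and a standardized log format, so this is only an illustration, and all file names and sample data are made up:

# one-time setup: create a key pair; the public key would be registered with the tax authority
openssl ecparam -genkey -name prime256v1 -noout -out register_private.pem
openssl ec -in register_private.pem -pubout -out register_public.pem

# for each sale: sign "transaction data | timestamp | signature counter"
echo "2 espresso, 5.20 EUR|2020-01-30T14:23:05|000042" > transaction.txt
openssl dgst -sha256 -sign register_private.pem -out transaction.sig transaction.txt

# an auditor can later verify the signature with the registered public key
openssl dgst -sha256 -verify register_public.pem -signature transaction.sig transaction.txt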

As a tax auditor, you can therefore do two things. A quick check: e.g. briefly buy something and look at the receipt to see whether the register actually signed (i.e. recorded) the sale, and whether what was bought roughly matches what is printed on the receipt.

Or the big variant, where you collect all the signatures – which the taxpayer has to store, e.g. on a USB stick with the solutions linked above – and then check whether it can all add up. For example, if one transaction on 2020-01-13 has signature counter = 10 and the next stored transaction, on 2020-02-27, has signature counter = 250, then someone used the signing key 240 times but the corresponding transactions are missing – probably deleted after the fact. With conventional registers, by the way, this has so far been perfectly easy; it’s called “zapper” software.
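
That counter gap check can be pictured as a trivial script over such an exported transaction log – purely illustrative, the real export format is more involved and the file name is made up:

# transactions.csv: one line per signed transaction, "date,counter"
# report jumps in the signature counter, i.e. signatures without stored transactions
sort -t, -k2 -n transactions.csv | awk -F, '
    NR > 1 && $2 != prev + 1 { printf "gap: counter jumps from %d to %d around %s\n", prev, $2, $1 }
    { prev = $2 }'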

You can look at the timestamps in the same way. If, for example, a merchant’s sales always happen only after closing time, that suggests the merchant doesn’t enter everything during the day and then books it all at night (which of course customers would also notice at the time of purchase, because he never hands out receipts directly but only prints them all at the end of the day). And then there are all kinds of statistical anomalies you can look at – keyword Benford’s law.

So, once again, in summary:

  1. Receipt requirement: so that the purchase actually gets entered into the register
  2. Secure registers: so that the transaction cannot be manipulated afterwards

Anyone who now says “away with the receipt requirement” is also saying “VAT fraud is okay”.

And some quite openly demand “away with the receipt requirement”, like the FDP, for example.

On social networks, on the other hand, you often read: “they’re just picking on the little guy, because Cum-Ex, and anyway”. Which makes about as much sense as saying: “My neighbor raked in millions with the grandparent scam and wasn’t caught. Therefore bank robbery should no longer be a crime. Because, you know, somebody has to do something for the little guy for once.”

And that really pisses me off, because I dutifully pay my wage tax, and that money somehow keeps half the state running. Nobody can cheat there; the state takes it directly. But when merchants, restaurants etc. don’t pay taxes, that’s suddenly okay? And the merchants and restaurant owners who honestly pay their taxes can basically shut down because they can’t compete on price?

Latex vs Word, Revisited

January 28, 2020

The whole Microsoft Office suite has had some “LaTeX-like” capabilities for some time now. There was an interesting blog by Microsoft’s Murray Sargent – now archived, available here – on how this was implemented and which considerations led to the design choices. It is always very interesting to read such documents from Microsoft insiders, as they illustrate that there are indeed very capable and smart people trying to do their best at Microsoft. Right next to those people at Microsoft who decide to sneakily change your search engine without asking you, or install software on your computer without asking.

Before a TeX-like syntax for mathematical formulas was introduced in Office 2007, there were (and are) plugins that make it possible to write TeX within PowerPoint – like the excellent IguanaTex – but it’s of course always better to have such features implemented directly in the core program.

Recently I authored a paper (scientific conference, thus LaTeX) with a colleague and had to present the results. Since you can now type LaTeX formulas in Microsoft Office, in particular in PowerPoint, I did the presentation in PowerPoint. This has several advantages over beamer, notably speed and graphics capabilities. Sure, you can do amazing graphics and illustrations in beamer with e.g. TikZ; unfortunately it takes ages. So this is quite often avoided. The result is that presenters quite often simply copy the most important formulas from their scientific paper, paste them into beamer, and then more or less read the paper aloud.

For example, when googling for “tex beamer talk”, the third result was this document. This is very typical of an average beamer talk. The result looks nice, but: it consists mostly of bullet points and lists. There are zero illustrations (like arrows, graphics, icons) etc.

Of course, quickly finding a PowerPoint counter-example is not easy either, as there are just too many overwhelmingly bad PowerPoint presentations.

These TeX-like features made me wonder whether it is worthwhile, and also more productive, to write the paper itself in Word from now on. So I set up a small experiment. This experiment was intended to check the workflow that I have when authoring a paper:

  • there is an existing stylesheet provided by the journal/proceedings, like the notorious LNCS stylesheet
  • I have some specific thoughts on what to write, which often includes complex mathematical formulas
  • there will be some graphics and illustrations, and they should be easy to paste and look nice. I didn’t bother with this one, since I already knew the result.

So I took the LNCS style sheets (the Word version as well as the LaTeX version) and tried to write down the first proof from the book [1]. I did not intend to recreate the layout given there – as I was writing with the LNCS stylesheets anyway – but wanted to (re)create the text and the formulas. As said, for me that’s a typical real-world scenario.

Btw, there is an interesting study, published in the very best of all quality journals, namely PLOS ONE, that investigated the productivity of LaTeX vs Word [2]. The task given to users of varying experience with Word and LaTeX was to recreate specific texts, including the texts’ layouts. They concluded that LaTeX users are delusional and suffer from Stockholm syndrome: even though they are vastly less productive and struggle much more, they are more convinced of and happier working with LaTeX.

Personally, I think this study is heavily flawed. First, I already mentioned in a previous blog post that creating a good-looking layout from scratch takes time in LaTeX, and is usually not worth the time if you just want to create a small document. But that’s not the typical workflow you have if you want to publish – there will be a layout already created by the journal/proceedings, e.g. the LNCS template – and there is no need to recreate that layout. But aside from that, they imho forgot other quite important factors. I will illustrate the pros and cons of LaTeX vs Word in this completely objective table. Let’s also forget about things like Adobe InDesign or PageMaker for a moment.

                                                          Latex       Word
caters to people who are smart                            YES         NO
caters to everyone                                        NO          YES
can handle vector graphics properly                       YES         NO
can handle math properly                                  YES         NO
has a lot of bugs that result in unpredictable behavior   NO          YES
result looks like a turd                                  SOMETIMES   ALWAYS [3]

A fair comparison of features between LaTeX and Microsoft Word

I have to say that I created this table after doing this small experiment. And I seriously went into it unbiased – I am always looking for new and more effective ways of doing things.

Let’s have a look. I wrote both documents as quickly as I could, without investing much in optimization. This is the result for Latex LNCS, this is the one for Microsoft Word.

Now, with Word 2007 Microsoft introduced a new (actually quite nice-looking) font dubbed Cambria. All of Word’s math symbols are rendered in Cambria, however the Springer stylesheet uses Times New Roman. So you have Cambria Math mixed with text in Times New Roman, which of course creates a visual clash. Hence I created another version where the text is formatted in Cambria as well. All in all, the Word version just looks horrible:

  1. In the first line (N = { …. }), page 1, the numbers in the formula are larger than the text next to them, due to the non-matching math/text fonts; Word does no automatic adjustment. The Cambria version looks better, though.
  2. „in a finite set {p_1, p_2, …, p_r }“, page 1. Here the spacing between the left bracket and p_1, and between p_r and the right bracket, is off. Word doesn’t account for the space taken by the index in p_r.
  3. Page 2: „the last sum is equal to“. The display formula below it has a way too large margin between the text and the formula (there is no newline after „is equal to“).
  4. In the large products (log x <= …), the indices below the product signs have a margin that is way too small. In Acrobat Reader at some zoom levels, the indices even tend to overlap with the product signs.
  5. Sometimes Word inserts a margin before the text that follows a formula (e.g. before „Now clearly…“), sometimes not („and therefore“). I was unable to remove the margin before „Now clearly“. If I tried to change anything at that position (e.g. reset the paragraph margin, delete that part, etc.), Word would automatically convert the formula into inline mode. It looked like one of those formatting bugs that randomly occur in Word: you change something at some point, and due to complicated interactions of various formatting rules, something indeterministically breaks.

Time-wise, it took me longer to write the Word version, but maybe that’s because I am more familiar with Tex.

So, coming back to the table:

  1. can handle math properly: all of the points above – proven.
  2. has a lot of bugs that result in unpredictable behavior: point 5 above – proven. Or just ask anyone about their experience handling complex (> 100 pages) documents in Word.
  3. result looks like a turd – proven, just see the PDFs.

More shitty looking math is available from NIST (chosen here, since I can probably copy and paste small portions under fair use). See e.g. this NIST standard.

As for vector graphics: Word will simply rasterize any graphic given to it, even if the source is a vector graphic (like a PDF, EPS or even EMF). Even if it’s stored vectorized within Word (I think this can be done to some extent with WordArt in later Word versions), once you convert to PDF, it’ll be rasterized. Another point is that when graphics are created and later scaled, you inevitably end up with fonts within the graphic that do not match the running text. Line widths in graphics will also be odd, sometimes way too thick, sometimes way too thin.

Case in point: Consider this NIST standard:

Compare the graphic above to this example from TikZ, namely this PDF. So the item about vector graphics: proven, as above.

Then there is the „caters to everyone vs. caters to the smart ones“ item. It’s much easier to create a shitty diagram like the one above than to create one with TikZ: for the latter you have to have a rough understanding of programming, and most people don’t. It is also surely quicker to generate the shitty-looking graphic above. As the study says, you are probably quicker. But then again: do you care more about getting a quick result while others suffer when reading your shitty-looking document, or do you invest more time so that others have an easier time reading and understanding your document? That is also a matter of perspective, and one not addressed in the study.

Another example: consider this box plot from the study mentioned above. Rasterized, blue/red color scheme. The font is sans-serif, and the font sizes don’t match the running text, as can be seen in the PDF. My bet is that this was generated with Excel. Compare that to this really nice-looking bachelor thesis by the author of the excellent texfig – especially consider this illustration. Fully vectorized in the PDF. Q.E.D.

When all is said and done, what surprises me is that quite often other people don’t see that this is bad layout and design. Take for example the picture from the NIST draft above. The conversation then goes something like this:

MasterChief: „This looks like shit compared to the TikZ figure“

Tinkerbell: „I don’t think so. I think it looks nicer. It has colors.“

MasterChief: „But why? why do you think so?“

Tinkerbell: „This has colors! It’s so much nicer. It has colors!“

MasterChief: „But you don’t recognize anything. Especially when you print it in black/white.“

Tinkerbell: „But it has colors!“

MasterChief: „Why do you need colors in this picture? There are so few categories, it doesn’t add any clarity. Also orange/light blue is a horrible choice for a color scheme. Think of those who are colorblind.“

Tinkerbell: „But it has colors!“

So taste is a difficult thing. Among professionals who do layout for a living there is perhaps a rough sense of what is genuinely bad design and what is just a matter of taste, but with Word everyone feels like a designer. So most will simply ignore basic design rules.

Uh, and if you still didn’t get my point: use LaTex if your document is reasonably complex (> 30 pages) or has math in it. Or graphics.

[1] It’s four, I knew you would ask, and you can compute yours here.

[2] It is interesting that everyone compares Latex to Word, but nobody in their sane mind would compare InDesign with Word and then come to the conclusion that Word works „just as well“, that „it’s faster“, and that you should switch over. Every magazine layouter/typesetter would just laugh at you.

[3] There are a lot of bad-looking LaTex documents, too. Especially when it comes to tables. Note the excellent, eye-opening presentation „Small Guide to Making Nice Tables“ by Markus Püschel.

Cracking Jurassic Park (DOS, Ocean)

January 20, 2020

Another issue I have is that I simply can’t let things go.

So this is how to remove the copy protection from Jurassic Park (DOS) from Ocean Software.

I’ve already written in a previous blog post about how I unsuccessfully attempted to crack the copy protection mechanism, namely Rob Northen’s ProPack (sometimes abbreviated RNC). I’ve also written that I was able to unpack the INSTALL.EXE binary with Universal Program Cracker. In the meantime I’ve also found UNP, a universal DOS unpacker that supports dozens of formats. This one works as well, and seems to be the gold standard of DOS unpacking.

Next up is the copy protection mechanism itself. As I remember it, you had to have Disk #1 in the drive to start the game. But looking at this review from the German magazine PC Player, the review says w.r.t. copy protection: „can only be installed from original disks“. So the step I stumbled upon last time – the disk check during installation – seems to be the actual copy protection mechanism, and after getting through the installation, we should be done. I confirmed this by copying a successful installation from Dosbox to a FreeDos VirtualBox instance, and the game would still boot up.

Now this doesn’t make any sense at all as a copy protection mechanism, because you can simply install Jurassic Park once, then zip (or arj or rar) the game directory manually and give it to someone else. And we were young but not stupid back then. So maybe there were some additional run-time checks back then. Or maybe I didn’t actually crack the game and there is some nifty hidden catch, like a boss fight deep in the game that you can’t win anymore… but at least right now it seems that if you break the disk check during installation, you’re done. Ok, so let’s do that then.

Step 1

Unpacking the INSTALL.EXE file. As mentioned, this can be done with either UPC or UNP.

Step 2

Debug the unpacked INSTALL.EXE. You can either go the classic route via Turbo Debugger or … even debug.com (?), or simply use the DosBox debugger. Looking back at this whole process, I have to say: USE MODERN TOOLS.

Fun Fact: With Turbo Debugger you of course cannot easily break into a graphical program, which makes things difficult. You can of course do remote debugging, i.e. execute the program on one PC (graphically, as it would normally run) and then use a null-modem connection to run Turbo Debugger on another machine. There is an interesting article about doing that with DosBox, but it’s very slow.

Fun Fact 2: Back in the days, most folks could probably only dream about remote debugging … like, owning two PCs? How rich do you have to be? … but back then my father was so annoyed that I was always using his 386 that I got my own computer (a used 286). So technically, I could have done remote debugging back then. However, I didn’t know what debugging was in the first place, so that’s a bummer. In any event, if I’ve learned anything during this process it is: unless you really wanna rock like it’s 1992, use modern tools instead. Makes things much easier.

With the DosBox debugger (dbd) I set a breakpoint at int 13h with ah=00. That’s „reset disk system“. Setting the breakpoint in dbd is done with ‚bpint 13 00‘. Then F5 to execute the program.

The installer checks for disks in drives A and B, so I thought that might be a good starting point.

Now what the f*“§!“$ is that? There is an int3 instruction, which is a software breakpoint. If you used Turbo Debugger, things would already become difficult at this point, since I think TD relies on software breakpoints.

int3_large

Short background lecture: int3 (0xCC) is a debugging interrupt. Basically, a (simple) debugger works like this: say you want to set a breakpoint at position X. The debugger looks up the instruction byte at position X, remembers it, and replaces it with int3. When execution reaches X, interrupt 3 is triggered, and the debugger hooks that interrupt. You can then inspect the whole state of the program, and if you want to continue execution, the debugger restores the original byte at X and continues. So usually int3 is reserved for debugging, and you won’t find it in a normal program.
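To make that mechanism a bit more concrete, here is a tiny, self-contained C sketch of the save/overwrite/restore dance – operating on a plain byte buffer standing in for the debuggee’s code, not on a real process:

#include <stdio.h>

int main(void)
{
    unsigned char code[] = { 0x8B, 0xC3, 0x03, 0xC1, 0xC3 }; /* some stand-in code bytes */
    int x = 2;                            /* position X where we want to break */
    unsigned char saved = code[x];        /* debugger remembers the original byte */

    code[x] = 0xCC;                       /* ...and replaces it with int3 */
    printf("byte at X is now 0x%02X (int3)\n", code[x]);

    /* ...execution hits int3, interrupt 3 fires, the debugger's handler runs,
       you inspect registers and memory... */

    code[x] = saved;                      /* to continue, the original byte is restored */
    printf("byte at X restored to 0x%02X\n", code[x]);
    return 0;
}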

What apparently happens here is that the installer installs its own interrupt handler for int3 and places int3 instructions in its own code. During normal execution, the installer’s int3 handler is called. When a debugger hooks int3 instead, the debugger’s handler is called in place of the installer’s – which is of course different program code – so the installer notices that a debugger has stepped in and detects it.
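Just to illustrate the idea – the real INSTALL.EXE does this in assembly, and its handler additionally decrypts code, see below – here is a rough sketch in Borland-style real-mode C, assuming Turbo C’s getvect/setvect from dos.h:

#include <dos.h>

void interrupt (*old_int3)();   /* keep the old interrupt 3 vector */

void interrupt my_int3()
{
    /* the program's own int3 handler; under a debugger this never runs,
       because the debugger has taken over interrupt 3 */
}

int main(void)
{
    old_int3 = getvect(3);      /* save the original vector */
    setvect(3, my_int3);        /* install our own handler */

    /* ... code containing int3 (0xCC) bytes runs here; every int3 now
       ends up in my_int3 instead of a debugger ... */

    setvect(3, old_int3);       /* restore the original vector */
    return 0;
}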

What’s more bothersome is that the code around this position is apparently encrypted – basically illegal garbage. The int3 handler of INSTALL.EXE decrypts the program code at runtime. You can see in the next screenshot that IDAPro fails to interpret the assembly code, as it’s encrypted.

obfuscated_code.png

Fabulous Furlough has an interesting article on how even tougher versions of RNC worked (in this INSTALL.EXE I could not identify any int1 hooks). Note that in his blog post FF talks about some soccer manager game (maybe Graeme Souness Soccer Manager? – I need to check this out) and not Jurassic Park.

I was completely stuck and almost gave up, but if you keep stepping through the program, at some point you reach a position where an „insert original disk“ dialog appears. Somewhere around here, shortly before the „insert disk“ prompt:

before_check.png

Then here we reach the „insert disk“ prompt:

disk_prompt

I then looked this up in IDAPro.

ida1.png

Could this be a position to patch? Here „call sub_1C5B“ is the call that makes the nag-screen appear. And „call loc_5AE6“ seems to be a check call (is this the disk check?!), whose result is then verified; depending on the result, the nag-screen appears.

Moreover, the whole function that IDA shows either returns 1 (i.e. mov ax,1 and then return) or, on the other code path, returns 0 (xor ax,ax, and then return). My guess was that the protection works roughly like this:

function CHECK_FOR_ORIGINAL_DISK {
  loop a few times {
    if (some complicated disk check verifies) {
      return 1;
    } else {
      request the user to insert the original disk;
    }
  }
  // user was asked several times, still not the correct disk
  return 0;
}

Step 3

What would we be without the National Security Agency? Well, less spied upon, that’s for sure, but also without an extremely useful and cool reversing tool, namely Ghidra. In particular, Ghidra comes with an extremely powerful decompiler, completely free as in FLOSS. And it supports DOS binaries. Throwing INSTALL.EXE at it, we get:

ghidra.png

Wow, I should’ve done that in the first place. Now it becomes clear: the main program just checks whether the function CHECK_FOR_ORIGINAL_DISK (here called disk_check_FUN_1000_0bf3) returns 0 or not. If it’s non-zero we run all kinds of installation routines, otherwise we abort.

ghidra2.png

So how do we make sure that disk_check_FUN_1000_0bf3 always returns 1? Looking at the disassembly in Ghidra makes this really easy, as you can see the decompiled code and the assembly side by side. The check

if (lVar2 == CONCAT22(param_2,param_1)) {
  return 1;
}

results in this assembler (CONCAT22(param_2,param_1) is Ghidra’s way of saying that the two 16-bit parameters are treated as one 32-bit value, which is why the assembly compares DX and AX separately):

                       LAB_1000_0c33                                   XREF[1]:     1000:0c45(j)  
1000:0c33 eb 12           JMP        LAB_1000_0c47  // just directly go below and return from function

                      LAB_1000_0c35                                   XREF[2]:     1000:0bf7(j), 1000:0c2f(j)  
1000:0c35 e8 ae 4e        CALL       FUN_1558_0566                                    undefined FUN_1558_0566()
1000:0c38 3b 56 06        CMP        DX,word ptr [BP + param_2]
1000:0c3b 75 bc           JNZ        LAB_1000_0bf9   // if comparison fails, jump back to loop @0bf9. Don't want!
1000:0c3d 3b 46 04        CMP        AX,word ptr [BP + param_1]
1000:0c40 75 b7           JNZ        LAB_1000_0bf9  // if comparison fails, jump back to loop @0bf9. Don't want!
1000:0c42 b8 01 00        MOV        AX,0x1        // write 0x01 in return register
1000:0c45 eb ec           JMP        LAB_1000_0c33 // return via 0c33 which jumps to 0c47
                      LAB_1000_0c47                                   XREF[1]:     1000:0c33(j)  
1000:0c47 5e              POP        SI
1000:0c48 5d              POP        BP
1000:0c49 c3              RET    // return from function

The crack then is quite easy: just NOP out both JNZ LAB_1000_0bf9 jumps, i.e. go to the corresponding file offsets and replace 75 bc with 90 90, and 75 b7 with 90 90 as well. And… drum roll … it WORKS!
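If you don’t want to do that by hand in a hex editor, a little C sketch like the following does the same thing: it searches the unpacked INSTALL.EXE for the two instruction sequences from the listing above (cmp dx,[bp+6] / jnz and cmp ax,[bp+4] / jnz) and overwrites the jnz bytes with NOPs. This assumes the byte sequences are unique in the file and that the unpacked INSTALL.EXE sits in the current directory – an illustration, not a polished tool:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* 3b 56 06 = cmp dx,[bp+06], followed by 75 bc = jnz
   3b 46 04 = cmp ax,[bp+04], followed by 75 b7 = jnz
   The cmp bytes serve as an anchor; the two jnz bytes get nopped out. */
static const unsigned char pat1[] = { 0x3b, 0x56, 0x06, 0x75, 0xbc };
static const unsigned char pat2[] = { 0x3b, 0x46, 0x04, 0x75, 0xb7 };

static int nop_out_jnz(unsigned char *buf, long size, const unsigned char *pat, long len)
{
    long i;
    for (i = 0; i + len <= size; i++) {
        if (memcmp(buf + i, pat, (size_t)len) == 0) {
            buf[i + len - 2] = 0x90;   /* overwrite the jnz opcode... */
            buf[i + len - 1] = 0x90;   /* ...and its offset with NOPs */
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    FILE *f = fopen("INSTALL.EXE", "rb+");
    unsigned char *buf;
    long size;

    if (!f) { perror("INSTALL.EXE"); return 1; }
    fseek(f, 0, SEEK_END); size = ftell(f); fseek(f, 0, SEEK_SET);
    buf = malloc((size_t)size);
    if (!buf || fread(buf, 1, (size_t)size, f) != (size_t)size) { fclose(f); return 1; }

    if (nop_out_jnz(buf, size, pat1, sizeof pat1) &&
        nop_out_jnz(buf, size, pat2, sizeof pat2)) {
        fseek(f, 0, SEEK_SET);
        fwrite(buf, 1, (size_t)size, f);
        puts("both jnz instructions nopped out");
    } else {
        puts("patterns not found - is this the unpacked INSTALL.EXE?");
    }

    free(buf);
    fclose(f);
    return 0;
}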

Conclusion and Web-Links

So it seems that unpacking INSTALL.EXE is probably the more difficult task. On the other hand there are tools like UNP and UPC that work well, so I don’t think I will investigate any further in this direction.

The original question was: could I have done this in 1993? Definitely not. There was no unpacker, no DosBox debugger, no IDAPro… and probably – we’ll never know for sure, I guess 🙂 – no Ghidra either. Actually, thinking of it, there definitely was no Ghidra, since Java and Swing didn’t exist in 1993; the first JDK was released in ’96.

There are other useful tools and links I found in the process. I have to say, though, that the DosBox debugger + IDAPro 5 (free version available @ScummVM) and Ghidra turned out to make the best workflow. If you have a full version of IDA, Ghidra can probably be omitted (I’m not sure whether the full version of IDA can still handle DOS binaries).

  • Sourcer is a DOS disassembler from the good old days. The latest version I could find (Sourcer 8.01) is from the late 90s and thus quite advanced. It can even produce assembly code that compiles with MASM or TASM. There is a nice blog post introducing Sourcer. Of course, re-compilable code means you can really mess with the source code if you want to deeply understand the game itself, e.g. its game logic. But to just remove some protection – for compatibility reasons – like I did here, I very much prefer IDA. Nothing beats IDA’s graph mode.
  • Syncing IDA and DosBox. My way of „synching“ was simply taking the DosBox output and searching for sequences of assembly instructions (i.e. hex codes). That’s cumbersome, but I am not aware of another method with the free version of IDA. There is an IDA plugin for DosBox, but you need the commercial version of IDA.
  • There is a nice page that gives an overview of tools that remove disk protection schemes for DOS. There were apparently crack-collection programs available back in the days. If only I had had one of these programs back then 🙂 Interestingly, a tool called Crock was also able to crack the Jurassic Park installer. I checked the binary changes it applies to confirm whether I did the right thing. The patches are different but essentially achieve the same result, namely that the check function always returns 1. They patched the whole function so that directly after entry, the code jumps to MOV AX,0x1 and returns.
  • There is also Insight, a free DOS Debugger. Haven’t tried it, but might be an alternative to Turbo Debugger.

And while we are at it: Cracking Need for Speed 3: Hot Pursuit

November 15, 2019

With 3D gaming titles like Doom (btw, an absolute MUST READ: Masters of Doom), Duke Nukem 3D and Tomb Raider, 3D acceleration suddenly became a big thing in the mid 90s.

However, since nothing was standardized in DOS, let alone fully developed, both software developers and hardware makers began experimenting with how to accelerate 3D graphics. Since all engines were custom-developed and used all sorts of hacks, it wasn’t even clear how to effectively accelerate the engines, or how to create better-looking graphics. A huge field of experimentation. Fabien Sanglard has some really interesting articles about these early chipsets, like the Voodoo 1 and the Rendition Vérité 1000, and how to program them.

Thus it was also a huge field of experimentation for early adopters. Of the expensive kind.

The typical approach by hardware manufacturers in the beginning was to design a 3D chip, have a few selected software developers adapt their titles for it, and ship those titles bundled with the board.

The first 3D-accelerated card I bought was one based on the S3 Virge/DX. It was marketed as a solid 2D graphics chip with 3D features. The problem with this chip was that, on the one hand, its 3D acceleration capabilities were very limited, and on the other hand, the 2D acceleration had issues too. This was at a point in time when video game makers extended the typical resolution from VGA with 320×200@256 colors to higher resolutions like 640×480 or 800×600. While there were standardization efforts by VESA, a lot of manufacturers didn’t implement all the VESA standards, so there were issues with higher resolutions. There was UniVBE, but I still had a lot of issues, especially with Duke Nukem 3D (based on the Build engine).

Thus I replaced it with the best card for DOS I could think of, the Matrox Mystique. This solved most of the issues, but it lacked decent 3D acceleration features.

My next try was an add-on board. From an early adopter’s perspective it wasn’t really clear yet that 3dfx‘ Voodoo 1 would win the race, so I opted for an NEC PowerVR PCX2 chip in the form of a Matrox m3D instead. It shipped with Ultim@te Race, a pretty boring racing game, and there were also patched versions of Flight Unlimited and Tomb Raider 1 (see here for a comparison of the Voodoo 1 and PowerVR). Unfortunately, performance wasn’t really that great, and the games also looked only slightly better than with software rendering. And the three games mentioned were basically it. Turns out I bet on the wrong horse, as 3dfx‘ Voodoo 1 did win the race.

Thus, when I upgraded from my Cyrix 6×86 P166+ (with its shitty FPU performance), mostly used for DOS gaming, to a Pentium II 300-based system mostly intended for Windows gaming, I opted for the best of the best: 3dfx‘ Voodoo 2. There were two main reasons:

Voodoo 2 was of course the most expensive option. Not only was the Voodoo 2 board itself quite expensive, you also needed a separate 2D board. And when Bjurn and Winnie the Pooh went for new systems and opted for a bargain ALDI PC, it was only natural for Matthew and me to mock them for doing that instead of building a machine on their own. However, the ALDI PC was actually quite decent and came with an onboard Nvidia RIVA 128 ZX.

And, long story short, this is where Need for Speed 3: Hot Pursuit comes into the story. By that time, 3D graphics were becoming more and more standardized, and 3dfx‘ proprietary Glide interface was increasingly replaced by DirectX/Direct3D. And Nvidia not only concentrated on accelerating DirectX, but also continually improved on the driver front. While NFS3 still looked better on our Voodoo 2 boards, it looked really decent and ran quite well on the Nvidia chips – which were several magnitudes cheaper, as they combined a 2D and 3D board and were still cheaper than the Voodoo 2 alone. Nvidia continued their success with the TNT, TNT2 and GeForce 256, and the rest is history. Seeing NFS3 on the Riva 128 really made me think: hm… the Voodoo 2 looks better for sure, but this is quite ok, and I paid what? Next time Nvidia, that’s for sure…

Need for Speed 3 made a huge impact on us. The pursuit mode was incredible fun in multiplayer over the local area network, the graphics look nice even today, the techno soundtrack was great, and in the German version some in-game voice-over was done by Egon Hoegen. Egon Hoegen also narrated the very popular traffic-safety television series Der 7. Sinn, which gave the whole thing a funny and humorous note.

Unfortunately, Need for Speed 3: Hot Pursuit requires a CD drive, which my current notebook doesn’t have. Also, the installer is a 16-bit program, which doesn’t even execute anymore on 64-bit Windows. So let’s fire up IDAPro 7 Free and do the whole GetDriveTypeA thing.

getdrivetypea

Looking for references to GetDriveTypeA, we end up at the function at 0x4F9410. Checking where this function is called from, we find a function around 0x4B6394 which, depending on some jumps, creates some message boxes (MessageBoxA). So it is not hard to guess that these are the calls for the error messages („Please insert the NFS3 CD“). Then it’s a little guesswork, and after some trial and error it turns out that turning the JNZ marked in the picture below into a JMP instruction suffices to bypass all the CD-check code.

patch

So, a very simple one. These were the times…

Oh, and this is for education only. If you actually want to play the game, I suggest heading over here, downloading that patch package and following the instructions, as NFS3 also requires some registry patches. Using a file compare on my patched .exe and the one provided in the link, I noticed that the other crack also patched jumps at other locations, all near the check locations mentioned above, resp. the procedure around 0x4B6394. On my system they were not necessary for the game to run, however.

It’s really time for the old NFS games to be released on GOG

Challenge #2 (Success): Dark Reign (Windows 95)

November 6, 2019

As I failed to solve challenge #1, I really wanted this not to be another failure.

The first difficulty was actually getting an original copy of Dark Reign. I wanted the original ISO file, as this corresponds to what I had back in the days. As I don’t have the CDs anymore, this proved tricky, but after some googling I managed to get a copy.

The original version of Dark Reign doesn’t work on modern systems anymore, due to DirectX and DirectDraw issues. Dark Reign used DirectX 3. Fond Memories. C&C Red Alert… I am getting old.

I digress.

There is a patch from version 1.0 to version 1.2, and applying this patch made the game run in compatibility mode. The problem is that dkreign.exe is now exe-packed: Detect It Easy reports Neolite 1.01. I’ve found no tool to automatically unpack Neolite 1.01. There is a tutorial (link blocked by Firefox, use at your own risk) on how to do it manually using SoftICE, but then again, the days of SoftICE are long gone, and I have no Windows 98 machine here.

I copied the original 1.0 version of dkreign.exe into the 1.2-patched folder, replacing the 1.2 exe. Dark Reign would still start, so I focused my efforts on this one. I don’t consider this cheating, since I am very sure that the version 1.0 I got on CD back in the days contained the unpacked exe.

To disassemble dkreign.exe I used the state-of-the-art IDA Free 7. Ghidra with its powerful decompiler is probably worth a try too, but let’s start with the well-proven route. Unfortunately, the IDA Free debugger failed to work (it’s incredibly helpful to have graph mode available when debugging) and would hang with some runtime errors in apphelp.dll – I attribute this to the compatibility settings for that old exe.

x64dbg somehow worked, but I am not very familiar with it (something I really need to work on), and I wasn’t sure how to set breakpoints on certain imports, especially on GetDriveTypeA (more below).

So everything was done with the IDAPro disassembly and a hex editor (I use HxD). I stuck to a “poor man’s code analyzer”: to identify code flows in IDA (say, to find out whether some JNZ was taken, i.e. whether the zero flag was set or not), I’d simply zero out parts of the code at one of the jump targets. If the patched exe still runs, the jump doesn’t reach that location; if the program crashes, it apparently does.

Remembering the good old days, the typical way CD checks were implemented (before such nasty things as SecuROM or SafeDisc) was like this:

  • call GetDriveTypeA (a Win32 Kernel Import).
  • check if the function returns DRIVE_CDROM (5) or something else like DRIVE_FIXED (3).
  • If 5 is returned, read some files from the CD and check their content; if they match, run the program.

Otherwise, prompt the user to insert the CD. Something like this (taken from the actual disassembly here):

getdrivetypea
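For reference, this is roughly what that classic pattern looks like in C source. Nothing Dark-Reign-specific here – the drive letters, the file name and the check_cd_contents helper are made up for illustration:

#include <windows.h>
#include <stdio.h>

/* made-up helper: check that some file expected on the original CD is there */
static int check_cd_contents(const char *root)
{
    char path[MAX_PATH];
    sprintf(path, "%sDATA\\GAME.DAT", root);   /* hypothetical file name */
    return GetFileAttributesA(path) != INVALID_FILE_ATTRIBUTES;
}

int main(void)
{
    const char *drives[] = { "D:\\", "E:\\", "F:\\" };
    int i;

    for (;;) {
        for (i = 0; i < 3; i++) {
            /* the classic check: is this drive a CD-ROM drive at all? */
            if (GetDriveTypeA(drives[i]) == DRIVE_CDROM && check_cd_contents(drives[i])) {
                puts("original CD found, starting game...");
                return 0;
            }
        }
        /* no matching CD found: nag the user, or give up */
        if (MessageBoxA(NULL, "Please insert the original CD.", "CD check",
                        MB_RETRYCANCEL) != IDRETRY)
            return 1;
    }
}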

Unfortunately, things are more complex here than just patching the final jnz above. So anyway, I looked for imports of GetDriveTypeA (other suspicious candidates are GetVolumeInformationA and GetLogicalDrives).

imports

Bingo. GetDriveTypeA leads us to 0x57f290, apparently some check-routine.

xref

Checking cross-references (i.e. positions, where the sub-routine at 0x57f290 is called) leads us to 0x57aa50.

xrefs_getdrivetypea

Wow. The function at 0x57aa50 looks huuuge. After looking around, it seems that some checks happen first and then there is a large jump table. It all looks as if the main game menu is processed here.

Now it got nasty. It took me approximately two evenings and another full day to make sense of the code flows here. As mentioned, I patched various locations to provoke crashes and identify which code paths were taken.

First there was a string reference at 0x57AE46 (“Credits”). Random manipulations here resulted in crashes when clicking on “Credits” in the main menu. Bingo, so I was correct.

The game complained about a missing original CD for almost every other option in the main menu. So I focused first on the program flow of clicking “Single Player” → “New Game” → getting a „Please insert your CD“ message.

Trying to find this code flow was very frustrating. But at some point I was sure that when you click on “New Game”, you end up at 0x57AE05, and the sub-menu where you can actually start a new game is executed below, with a call to 0x572130.

start_new_game

But… what is this string reference at 0x5725E5? „SS_NO_CD“. There are similar code blocks at 0x5726C5 and 0x572505. These very much look like code blocks that generate a “Please insert your CD” message. We surely don’t want to get there! So, again after mangling with the code paths and patching here and there to get a controlled flight into terrain (a controlled crash), I was able to identify the following jump:

loc_572669: ; patch jump, to start game in single player, new game
call mc_checks_sub_401470
cmp eax, 3
jz loc_572737 ; patch this to jnz, i.e. from 0F 84 ... to 0F 85 ...
no_cd_check_final

This made the “Please insert your CD” message vanish; instead, the intro video and the New Game options showed up. I was able to start a new game, but there were apparent graphical glitches.

Now I suspect 99% that it’s due to DirectX issues and not the CD check, but of course I cannot be 100% sure unless the game is fully playable.

Nevertheless, at this point I stopped. The rest would just be routine work: go through all remaining code paths (i.e. check all options of the main menu), identify the other check positions – which will be similar to the one for “New Game” – and patch them out.

Disclaimer: During all this work, I left the Dark Reign ISO mounted as a Windows drive. It could be that the game doesn’t start without the disc present. But at least the “Is it a genuine disc?” verification is patched, i.e. this would’ve solved my problem back in the days with the copied CD-R.

Would I have been able back then to crack the game, if I had the right tools or information?

Very difficult to say. The tools of the trade back then were W32Dasm and SoftICE. W32Dasm didn’t have a graph mode; I think that was introduced in IDA in what… 2005 or 2006? So basically you’d have had to draw your own control-flow graphs with pen and paper. That’s probably one reason why I never could make sense of all this back in the days – not that I seriously tried, let alone tackled Dark Reign. Maybe SoftICE would’ve helped. But I never really figured that one out.

All in all, this was very time-consuming; time which I don’t really have. But reversing is incredible fun, and I sometimes think I should move my career more towards malware analysis… Currently, my lack of debugging skills is definitely preventing that, though…

The IDAPro database and the patched exe are available for educational purposes here. The password for the zip file is ‚fckgw‘. Note that the IDAPro database contains a lot of unclean comments, i.e. comments like “patched” indicate that I patched there at some point to identify a code flow. But really, the only patch necessary to start a single-player game is at 0x572669, changing the jz to a jnz.