Florian Schrofner

Mobile developer based in Salzburg, Austria, who loves cooking, cycling and coffee. I plan to blog mostly about technical stuff, but also randomly about personal opinions and experiences.

GameDev Log #1 - Getting Started

I recently realized that I'm more capable than ever of actually turning one of my childhood dreams into reality: creating my own video game. When I was a young teen I always dreamt of making my own game. I started with some first experiments in Visual Basic, but as I didn't have any experience in programming, I didn't get very far. That lasted until I discovered Game Maker, which made things quite a bit easier for me. I was hooked, especially after I got the book "The Game Maker's Apprentice" for my birthday, which covers the whole flow from start to finished (simple) game.
Using that as a basis, I created a few small games back then. I particularly remember two jump 'n' runs I made, which were Simpsons and Spongebob themed. My brothers and I drew the sprites together and I would turn them into a game. We even recorded some phrases from the TV shows to put into the game. Unfortunately, the results of our work are gone for good. I'd love to revisit them, especially as I don't remember much of the actual gameplay, but maybe it's for the better, as I'm pretty sure they weren't as great as I remember them.
Anyway, after that I didn't think much about creating my own game until recently, when I realized that I'm in a better position than ever to put that dream into action. In the meantime I've earned a Master's degree in Computer Science and gathered several years of experience as a developer, so my chances of succeeding actually look decent, at least on the programming side.

Finding the right tools

After that epiphany, the first thing I needed to do was find the right tools for the job. The great research began!
Researching and comparing solutions is actually something I enjoy a lot, so this was more of a pleasant experience than a chore for me. Most of my information came from the fantastic YouTube channel GameFromScratch, which I watched for hours and hours, comparing different game dev tools and engines.

The first thing I laid down for myself was that it has to be a 2D game. 3D would make things too complicated, in coding as well as in creating the necessary assets. The next thing I decided was that I want to use a game framework rather than an engine, so that I can at least maximize what I learn from this, even if I might fail.
Since I'm going for a pixel art style, I decided to use Aseprite to create my assets, as it provides quite a lot of convenience features and also allows you to create animations directly within the software. Additionally, I bought a huge sound FX pack from itch.io, which was discounted to about $10, as I figured I can't really create those effects on my own (maybe I'll record a bit of voice, though). Aside from that I also bought 1BITDRAGON, which makes it a lot easier for people like me to create their own music. I haven't spent too much time with it yet, but even in the short period I tested it, I got some decent results out of it, so I'm optimistic.

Another thing you'll need is an actual game engine/framework. Here is where it gets tricky. There is an abundance of game engines out there, each with its own quirks and strong suits. And as this is the area I'm most experienced in (I basically have no clue about creating game music or art), I felt a strong urge to try a bunch of them myself to see what I like most.
The first one I went for was LibGDX, as it seemed the logical choice. I use Kotlin in my day job, and as LibGDX is a Java game framework, I can write my game entirely in Kotlin. I actually liked it, and there are a lot of YouTube tutorials out there that guide you through the whole process of creating a game. One issue I had, although not an issue with the framework itself, is that you are not forced to follow a specific way of implementing things. So when I started, I just followed one guy on YouTube, and it took me a while to figure out that his way of doing things wasn't all that great. Maybe it's alright for total beginners, but I wanted to use a nice and maintainable architecture right from the start.
So I scrapped everything I had and followed another tutorial, which was way better. I really learned a lot, especially about entity component systems (ECSs), an architectural pattern I had never used before.
Although it was already going kinda well, I still ran into some issues, especially concerning the ECS, as I never felt like I had proper control over it. But that was probably more me not getting the hang of it than the ECS's fault (which in the case of LibGDX is called Ashley, by the way).
So I moved on as I wasn't totally happy and I wanted to compare some game frameworks anyway. The next ones I tried were Love2D and Bevy.
Both were very nice in their own way and I was able to achieve results quickly. Love2D probably gave me the fastest results of all the engines I tried, thanks to all the amazing Lua libraries and Lua's easy-to-learn syntax. But it did not seem like a proper fit for bigger projects: it's not type safe, so your code might fail silently at runtime, and a lot of convenience features, namely autocomplete and refactoring, don't work.
I tried to fix at least part of that by switching to a typed Lua dialect called Teal, but that in turn broke some of the libraries I was using. So I moved to the other extreme of the spectrum, the Rust library Bevy. Bevy is still under heavy development and changes quite a bit from update to update, which is probably why its documentation is almost non-existent, although they're already planning to improve that by creating the Bevy book. Aside from that I cannot say anything bad about Bevy itself, except that I don't think Rust is the perfect choice for game development (though certainly also not the worst), but that's a personal opinion.
Before I arrived at my current choice, I also had a quick peek at Korge, which is completely written in Kotlin yet can compile to native code, and DragonRuby, which lets you write games in freaking Ruby. I only checked out the latter because I got it for free when they were giving it away on itch.io for a day. I didn't implement anything in either of them, but I looked through the Korge examples and watched around an hour of Ryan Gordon implementing Tetris in DragonRuby. I wouldn't use DragonRuby for multiple reasons: it's not open source, it requires a yearly payment if you want to release on mobile, and it doesn't offer bindings to a proper physics library (there only seems to be a very outdated Box2D binding for Ruby, and I'm not even sure that works). And I later decided that it would be cool to have some proper physics in my game.
Korge, on the other hand, doesn't have any of those anti-features, but two things didn't really convince me: its level editor froze on me several times, and the whole framework is basically created and maintained by a single guy. Which is impressive, of course, but makes me worry about the project's future.

So finally, I discovered a framework called raylib, which is written in C and has a crazy amount of bindings to other languages. Although this one is also mainly created by one guy, the project itself has received a lot of contributions from others, and Ray also received an Epic MegaGrant, so I'm more optimistic about its future. Additionally, you can easily use the Physac physics engine with it. But as I don't want to write my game in C or C++, I tried to find a language I liked that has up-to-date bindings to raylib. The most promising one I found was this one here, which binds to the Nim programming language.
I had actually heard about Nim before by coincidence, or rather I knew it by its original name, Nimrod, but I bet a lot of developers have never heard of it, as it is a rather niche language, which is a shame. It's a language that compiles to C, which in turn compiles to highly optimized native code. Because of its compilation into C, it's also very compatible with any C library, while the language itself is very high-level and Python- or Ruby-like, but also borrows a lot of features from other languages. This makes it very interesting as a game development language, as you can script your game in a language that is a joy to write, while still producing performant executables that can bind to existing C libraries.
Intrigued, I decided to give the language a try, and so far I've not been disappointed. The bindings work like a charm and the syntax is very readable. One issue, though, is that there are almost no game development libraries for Nim yet, so you have to build quite a few things from scratch. For example, if you want to use LDTK as your level editor, there won't be any SDK that you can readily use. There seems to be an unofficial one for Tiled, but it's broken. Given this situation, I decided to just write my own parser for LDTK and see how I can proceed from there. The documentation of LDTK's file format is amazing, and it gave me a good first task to learn Nim, so I actually enjoyed it a lot. I will probably release the outcome once I deem it usable. After that I started rendering the map and drawing some animations, which works nicely. I have to say that I rather enjoy writing those things myself, as I get a much better understanding of how things work, and I will probably tackle my own ECS in Nim in the future. But that will be the topic of a later blog post, as this one is getting way too long already.
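To give an idea of what the parsing boils down to: an LDTK project is plain JSON, with levels containing layer instances, which in turn contain the placed tiles. Here's a rough sketch in Python (my actual implementation is in Nim); the field names follow LDTK's documented JSON schema as I understand it, so double-check them against the format docs:

```python
import json

def tile_layer(project_path, level_id, layer_id):
    """Return the tiles of one layer as (x, y, tile_id) tuples.

    Coordinates are in pixels. Field names ("levels", "layerInstances",
    "gridTiles", "px", "t") are taken from LDTK's JSON schema docs.
    """
    with open(project_path) as f:
        project = json.load(f)
    # Levels and layers are looked up by their user-defined identifiers.
    level = next(l for l in project["levels"] if l["identifier"] == level_id)
    layer = next(l for l in level["layerInstances"]
                 if l["__identifier"] == layer_id)
    return [(t["px"][0], t["px"][1], t["t"]) for t in layer["gridTiles"]]
```

From a list like that, rendering is just a matter of blitting the corresponding region of the tileset at each position.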
The next one will hopefully be way shorter, and I hope to release status updates regularly, but we'll see if I actually achieve that.

Using Wireguard on a VPS to access your NATted home network

At home I'm using the open source media center Jellyfin, hosted on a homeserver, to stream series to different clients and keep track of watched episodes. It works rather nicely, except for some minor hiccups in the Fire TV app.
One problem of hosting your media at home, though, is that you can't access it remotely by default. Thankfully, you have a few options to fix that issue.

Option 1 - The direct approach

The easiest approach would be to open up Jellyfin's port (8096) on your router so that you can directly access your media center remotely.
Additionally, assuming you don't have a static IP, you could set up a dynamic DNS service which automatically updates its records to point to your current IP (I used this successfully with my own domain, Gandi's DynDNS API + a script I found on GitHub).
As tempting as this seems, it is not really a good idea, as your security would then rely on the security of Jellyfin, which might be a good media center, but I wouldn't trust it enough to keep my network safe.
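For illustration, such a dynamic DNS updater boils down to two steps: find out your current public IP, then push it to your DNS provider's API. A minimal Python sketch, run e.g. from cron (the ipify service and the Gandi LiveDNS endpoint shape are assumptions here, not the script I actually used; verify them against the respective docs before relying on this):

```python
import json
import urllib.request

def current_ip():
    # Ask a "what is my IP" service for the current public address.
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def record_url(domain, name, rtype="A"):
    # Assumed Gandi LiveDNS v5 endpoint layout; check their API reference.
    return f"https://api.gandi.net/v5/livedns/domains/{domain}/records/{name}/{rtype}"

def update_record(api_key, domain, name, ip):
    # Overwrite the A record so the (sub)domain points at the current IP.
    body = json.dumps({"rrset_values": [ip], "rrset_ttl": 300}).encode()
    req = urllib.request.Request(
        record_url(domain, name),
        data=body,
        method="PUT",
        headers={"Authorization": f"Apikey {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    update_record("YOUR_API_KEY", "example.com", "home", current_ip())
```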

Option 2 - Using a local VPN

Your second option would be to set up a VPN on your homeserver, open up only the VPN's port on your router and then dial in via that. From there you would be able to access Jellyfin without problems.
For convenience, you could also set up dynamic DNS for this solution.

Option 3 - Using a middleman VPN

Although the second option is totally valid and a nice solution, it is not viable for my use case, as my current internet connection is behind NAT, meaning I cannot simply open up ports on my router; these are dynamically remapped and I'm sharing my public IPs with others.
If you find yourself in the same situation, your options are actually quite limited. In any case you will need some kind of middleman to be able to connect to your home network.
The solution I came up with is hosting a VPN on my VPS, which I was already using for my website anyway.
The homeserver and the client trying to access it both connect to that VPN and can then establish a connection to each other.

The solution

After some research I decided to go for Wireguard instead of OpenVPN, as it seems to be the more modern and secure solution. I also really liked the way you configure Wireguard on the client side.
An easy way to host your own Wireguard VPN is to use Debian as your distribution, activate the testing repositories and install FreedomBox (Wireguard support is unfortunately not included in the stable FreedomBox yet).
Once installed, you should be able to log in to the web interface of FreedomBox and install Wireguard from there.

To connect a client you need to generate a private and a public key for it. The Windows implementation does this automatically for you: you just add a tunnel and get the public and private key. On Linux you can use the wg command to generate them as follows:

cd /etc/wireguard
umask 077
wg genkey > privatekey
wg pubkey < privatekey > publickey

For the later configuration you will need to copy the corresponding contents of those files (see below).
Back in the web interface of FreedomBox, add a new client and paste your client's public key there.
Once you've done that, Wireguard will give you an IP address to use in your client configuration. We now have everything we need to set up the Wireguard client.
On Windows you just paste the configuration into your client; on Linux you need to create a file like /etc/wireguard/wg0.conf.
My configuration looks as follows:

[Interface]
PrivateKey = #the client's private key
Address = #the ip address you got from the server + its mask, e.g.
DNS = #any dns you like, seems to be required on windows

[Peer]
PublicKey = #the public key of the wireguard server, can be found in the freedombox interface
AllowedIPs = #the traffic you want to route through the server. could be if you want to route all traffic including internet. if you just want it to connect to your homeserver, use the ips wireguard is handing out + appropriate mask, e.g.
Endpoint = #domain/ip + port of your server
PersistentKeepalive = 25 #should be added for clients behind a NATed connection

On Windows just use the UI to enable the connection, on Linux use the wg-quick command like: wg-quick up wg0

I did this for both my client and the homeserver and they were able to ping each other using the Wireguard IP after that. I was not able to access Jellyfin, though.

After quite a bit of searching, I found out that the firewall on FreedomBox was causing the issue. To allow clients to access Jellyfin via the VPN network, I SSHed into the VPS hosting FreedomBox and added a new firewall service configuration at /usr/lib/firewalld/services/jellyfin.xml.

<?xml version="1.0" encoding="utf-8"?>
<service>
  <description>The open source media center</description>
  <port protocol="tcp" port="8096"/>
  <port protocol="tcp" port="8920"/>
  <port protocol="udp" port="1900"/>
</service>

Then you can activate the service for the internal zone, which is the one used by Wireguard, with the following command:

sudo firewall-cmd --permanent --zone=internal --add-service=jellyfin

Finally, I also had to enable port masquerading in order to maintain the original ports when forwarded via Wireguard:

sudo firewall-cmd --zone=internal --add-masquerade --permanent

After reloading the firewall configuration, I was finally able to access Jellyfin from remote, yay!

sudo firewall-cmd --reload

I love this solution, as I'm now completely independent of my ISP. Additionally, Wireguard is easily configured so that only traffic targeted at my homeserver is routed via Wireguard and the rest stays untouched.
If you're using Gandi for your domains, you can even set up URL forwarding to point one of your subdomains at the Jellyfin server, including the port, which makes it even more convenient.

Finally a tool to properly mirror the screen of your Android device

Our Android team at LOOP has envied the iOS team for a very long time now.
Not because of their toolchain or the small number of devices they need to support; no, it's a much simpler thing we were lacking that the iOS team had: a tool to properly share your mobile device's screen on a PC.
While the iOS team could simply use AirPlay, a native protocol which works flawlessly most of the time, to show the screen of an iPhone to the client, we had to resort to shady third-party solutions like ApowerMirror to have something similar in our ecosystem.
Additionally, the tools we found are usually paid software, of which we only used the free version, and having watermarks in your shared screen is not necessarily the most professional impression you can make.

We had already thought about implementing such a tool on our own, as it seems like an achievable goal, but thankfully, I recently discovered a tool called scrcpy (better write that down if you might want to use it in the future): an open source solution which does not require you to install an app on your phone, but instead uses classic adb to connect to your phone and mirror its screen.
The video streaming is super smooth and resizing/scaling works like a charm. Even better: as you can use adb wirelessly, you can also use scrcpy wirelessly!
So in principle we got a similar solution as iOS now, it just took us quite a bit longer to find it :)

How StandardNotes Solves 2FA

Two-factor authentication has been around for quite a while now and I guess almost everybody who spends some time on the internet has already encountered it in one form or another.
The basic idea of 2FA is to increase the security of your account by forcing you to authenticate yourself using a second factor. This is normally done by requiring you to enter a code that you received via a different channel. So far I've encountered three different ways to retrieve said codes:

  • Using an application that supports one-time passwords (OTP)
  • Sending you the code via text message
  • Sending you the code via e-mail

Arguably the best option is OTP, as it is more secure than either of the other two and cheaper than sending a text message. In fact, the cost of text messages is probably the reason why I've only seen that option on really popular sites (e.g. Facebook), although most of those additionally offer authentication via OTP. Interestingly, retrieving the code via e-mail is the only 2FA option on GOG; I'm still not sure why they did it that way.
The smaller websites implementing 2FA all use OTP exclusively. As it is an open standard, there is a vast array of different OTP applications, the most popular one probably being the Google Authenticator, but there are also open source solutions like FreeOTP, which is developed by Red Hat.
Once you've set up the OTP app, it will continuously generate new codes for you, so the next time you want to sign in, you check the currently valid code and enter it to complete the login.
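Under the hood, these codes are nothing magical: the app and the server share a secret, and both compute an HMAC over the number of 30-second intervals since the Unix epoch (TOTP, specified in RFC 6238, building on HOTP, RFC 4226). A minimal Python sketch of what an authenticator computes:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, timestep=30, now=None):
    """Compute an RFC 6238 time-based one-time password (SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The secret is shared once during setup (usually via a QR code); afterwards both sides can derive the same short-lived code independently, which is also why losing the device that holds the secret locks you out.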

The problem

Now to the tricky part: as your codes are generated solely by the OTP application, this inherently means that once you break your device or simply uninstall the app, your second factor is gone for good. You have to use one of the backup codes provided to you to sign in again and then reconfigure your app.
That's exactly the reason why I haven't been using 2FA to its full extent. Since I'm working as an Android developer and try out different custom ROMs from time to time, I regularly reset or break my phone. I don't really mind, that's just the way it is, but setting up 2FA again was actually one of the worst parts. That's why I even changed my 2FA method on Facebook to text messages, as I just didn't want to set it up each and every time.

StandardNotes' solution

StandardNotes, an awesome, encrypted note service, recently added a new extension called TokenVault, which solves exactly this problem. It basically adds an OTP client on top of their end-to-end encrypted service, showing you the one-time passwords you need. Synchronisation across devices is handled automatically by their service, so you can use it on your PC as well as on your mobile phone, resting assured that only you can access those codes (they've completely open-sourced their code).
To me this is likely the ideal solution, as it allows me to freely break my devices without losing access to my online accounts. In fact, I can still access my web accounts without having access to my phone at all, as access to StandardNotes' web client is sufficient.

Now I'm only waiting for somebody to implement a bookmark manager on top of StandardNotes...

Domain Switch & Leaving Posteo Behind

Recently I switched my domain name (though flosch.at will continue to work for a few months) as well as my main mail address & provider. This change came quite rapidly, but I think in one way or another it makes sense as a next step from where I started.
In this blog post I'll try to lay out the reasoning behind my decision.

The realisation

It all started when a colleague of mine recently began researching e-mail providers that value his privacy more than his current one does (which aren't hard to find if your provider is called Google). As far as I know he still hasn't decided which one to go for, but ProtonMail and Posteo are strong contenders.
Anyway, we talked about the pros and cons of different e-mail providers, and I did some research of my own to find arguments for the provider of my choice, Posteo, which I had already been using for the past five years.
In my opinion Posteo was (and for most people still is) a totally fine choice. It only uses open source software, provides a strong, optional encryption concept, offers calendar as well as contact sync, and is as cheap as a trustworthy mail provider gets (€1 per month).
One argument against it, though, which I encountered several times online, is that it does not support using your own domains. This means that you are bound to use the ones that Posteo provides (which are all in the format of @posteo.*, where * is about any TLD you can imagine).
That didn't bother me back when I created my Posteo account, but now that I've read the discussions it caused, it got me thinking.
I didn't mind that Posteo owned the domain; I trust them enough to let me keep my e-mail address as long as they exist. But that's the point: as long as THEY exist, not me. If they cease to exist, which is not the most unlikely scenario I could imagine (see Lavabit), I would be most utterly fucked.
I'm not sure I would even be able to change my mail address all of a sudden at all the services that I use. At the very least it would be a job so cumbersome that I would probably curse myself multiple times.
After this realisation, I kept finding additional reasons why it could be a bad idea to lock yourself into one specific provider. For example, they could also increase prices to exorbitant levels (not that I assume they would do anything as bad as this) and I would simply have to pay, as I could not change my mail in time.
And finally, what if I just want to switch providers for the sake of it? What if some other provider simply offers some fancy cool feature that I would like to try out? Or a better looking interface and/or app (a topic I particularly have an eye on as a mobile developer)? I couldn't switch.
After having spent so much time researching the more privacy-conscious and open solutions for my needs, I had locked myself into one mail provider.

Finding a new domain

I already had a domain (flosch.at), but I didn't like it as much as when I first registered it. And now that I was thinking about using it as my mail address, I liked it even less. So the first thing I had to do was find a new domain that I'd also be fine with using as a mail address.
One important requirement was that it had to be short. I don't want to type my full name into the command line, nor do I want it at the end of my e-mail address. So I did some research on available top level domains and tried to find one that cleverly mixes my name with the TLD to keep it short. At first I didn't find any, but then I discovered the Finnish top level domain .fi.
Whenever there are multiple Florians in a group I'm part of, people tend to refer to me using a nickname version of my last name: Schrofi. I checked and found that schro.fi was still available and rather cheap, and it ticked all the boxes, so I purchased it.
Setting up the new domain for my website was a piece of cake: I just updated the DNS records to point to my IP, updated the nginx configuration, ran certbot to get SSL certificates and it was good to go.
Now I only had to find a new mail provider and setup mailing correctly (which I've never done before).

Research again

One thing was certain: I do not want to run my own mail server. Mail is just too important to risk doing something wrong (= stupid) and not being able to receive any mail (and yes, I know about Mail-in-a-Box).
Additionally, I would argue that most mail providers probably cost less than running your own server would, if you keep that server running for the sole purpose of hosting your mail (which I would do, to make sure it does not interfere with anything).
So, that out of the way, I started researching again to find a mail provider which supports custom domains.

The important features I was looking for were:

  • supports custom domains
  • offers calendar & contact sync
  • does not cost more than 30 - 40€ per year

This limited my choices quite a bit. ProtonMail does not offer a calendar at the moment and additionally costs more than three times as much as Posteo, so it was out of the question.
Tutanota offers neither a calendar nor contact sync (though the former is planned for release soon).
The ones that were closest to my requirements were Mailfence and Mailbox.org.
While I think Mailfence would have done an equally good job to host my mail, I simply went for Mailbox.org just because I liked the user interface better.

Setting it up

Contact and calendar sync were seamless: I exported everything from Posteo, imported it on Mailbox.org, changed my account in DAVx⁵ and off we went. Configuring the custom domain was a bit trickier, but thankfully the Mailbox.org wiki provides an extensive step-by-step tutorial on how to set it up with their service.
After following these steps, and also cross-checking with Coding Horror's blog post about e-mail, I managed to pass all checks of Port25's authentication verifier service.

Summary of Results
SPF check: pass
"iprev" check: pass
DKIM check: pass
SpamAssassin check: ham

This seems to be good enough for most mail providers, but not for Outlook.com. For whatever reason, each and every one of my mails ends up in the spam folder, even though I explicitly whitelisted the sender and marked the mails as "not spam".
Apparently, other Mailbox.org users are having the same issue. I'm still not sure whether it's my custom domain being unknown to Outlook's mail system or mailbox.org e-mails being classified as spam in general.
Anyway, I will keep watching this issue closely and hopefully it will resolve automatically. For now I'm occupied changing all of my accounts to point to my new e-mail address, knowing that this should be the last time I will ever have to do this.

Gentoo Does Not Like Elliptic Curves

I recently switched my hosting provider from HostSailor to Scaleway (using their VPS solution). Mostly because of the cheaper prices, as well as because they are Europe based (finally I can pay in Euros!) and the fact that they provide SSD servers instead of normal HDDs, which results in a performance boost. But I also made the switch because of the operating systems they provide, most notably: Arch Linux.

Unfortunately, though, I had to find out that their Arch Linux image is currently broken and cannot be built at the moment. So I figured I could start with something new and maybe try a new distribution again. I was pretty certain that I did not want a standard release distribution, as the release upgrades I had to do with my Fedora server really became a pain in the ass. If I ever come back to standard release distributions, I will certainly look for something with a less frequent release cycle, like CentOS or Debian. But for now I was looking for a rolling release distribution to drive my server. The only distro they provided that fulfilled that requirement was Gentoo.

Getting into Gentoo

After reading the Gentoo handbook to get an overall idea of how to do things, it wasn't actually too complicated to get things running. The update times were, of course, way longer compared to Arch, as Gentoo is a source based distribution (meaning that most packages are built completely from source), but that isn't actually much of a put-off for a server system, which is running 24/7 anyway.

After setting things up (openssl, openssh, mosh, tmux, emacs, nginx, etc.) everything seemed to be working fine, so I pointed my domain towards the new server. I created new certificates for Let's Encrypt using certbot, and finally verified that everything was working by running a Qualys SSL test. That's where things started getting weird.

Somehow I didn't receive a bad rating on the SSL test, but I could see that a lot of clients failed to connect at all. Interestingly, those that were able to connect only used a plain Diffie-Hellman key exchange instead of the elliptic curve variant, although I had explicitly placed the latter right at the beginning of my cipher suite. I also checked the available ciphers with sslscan and found that my server did not offer any elliptic curve suites; those either failed or were rejected. I checked my nginx configuration again and again, but couldn't find anything wrong there.

Exploring the cause

After a lot of research, I finally found the likely cause of my elliptic curve fiasco. Like apparently a lot of other people, I didn't know that elliptic curve cryptography is actually patented, so I didn't even consider this possibility. The current situation around ECC patents is unclear, so different distributions handle it differently. Most binary Linux distributions seem to include ECC in their openssl binaries by default, which is why you've probably never heard of this issue before.

Gentoo, however, does not include ECC in its binary distributions. "Wait a minute," you might say, "I thought Gentoo was a source based distribution?" Well, yes it is.. but sometimes it isn't. I didn't know that either, but Gentoo actually uses binary distributions in some cases by default as well. There is a USE flag called bindist, which makes packages safe for binary redistribution. This also applies to openssh/openssl and their cipher suites: as long as this USE flag is enabled, you won't get any ECC support.

However, if you build openssl from source, the patents seem to be fine with that. I don't know why, but that seems to be the situation for now.

Fixing Things

To get things running normally, it should actually be enough to disable the "bindist" flag for either openssl/openssh or for the whole system, whatever you prefer. One way to do this is by adding -bindist to the USE variable in your make.conf and rebuilding those two packages. After I did that, I scanned my site again using sslscan. And ECC still did not work.

I checked my nginx configuration again, restarted it several times and couldn't figure out what the heck was wrong. Finally I had the idea to re-emerge nginx, so that it knows it can use those cipher suites; apparently it detects them at compile time, not at run time. After that I finally got the ECC cipher suites, and nearly no client failed to connect in the Qualys test (except the really old ones).

So Gentoo seems to be working for now, let's see which issue I run into next.

Let's Encrypt Encrypts The Internet

Letsencrypt has launched its public beta, and I (more than a year after my last blog post) have finally found something to talk about again.

But first things first. Some of you might not even know what the hype around Letsencrypt is about, or what Letsencrypt itself actually is. So I'll try to explain briefly why it is something to keep an eye on.

Encryption on the internet

You have hopefully already encountered several sites on the internet that use encryption (recognisable by the leading https://). Encryption has been a vital part of the web for nearly two decades now, but was mostly used on sites that absolutely require it (like e-banking or shopping). There are primarily two reasons for this: first, using encryption requires more computational power; second, getting a certificate can be quite expensive.

The first issue has lost some of its relevance as technology advanced and we now have more computational power at our disposal than ever before. The second issue is the one Letsencrypt attacks.

A certificate is basically used to prove to a connecting client that you are who you claim to be. Without such a mechanism, anybody with the needed infrastructure could intercept the traffic and pretend to be someone else. So instead of visiting your website, the client could land on a completely different, maybe even malicious, website while still thinking it is yours. This would also nullify any encryption effort, as the attacker could simply set up two encrypted connections: one to the client and one to the actual server the client is trying to reach. The attacker can then forward the responses from one connection to the other, adding slight modifications to the content along the way.
To prevent this, we need someone who guarantees to the client that everything is fine and that nobody is intercepting the traffic and/or exchanging the contents of the website.
This is the role of the certificate authorities.

What they do is this: they verify the ownership of a website and then provide the owner with a certificate, which they can use to sign their traffic. The client can then check whether the traffic coming from this domain was signed with a certificate approved by a certificate authority that it (that is, the browser) trusts.

In order to verify your identity, certificate authorities traditionally require you to send them some kind of proof, like a scan of your passport. They then check your identity against the data used to register your domain (its whois entry). If the two match, they provide you with a certificate for the specified domain. Of course they charge a certain amount of money for this, not least because they have to pay the employees who verify your identity.

Letsencrypt's approach

So what does Letsencrypt do differently?
Letsencrypt does not have any employees who verify your identity, nor does it require you to send in any proof of it. Instead of comparing the domain's registration info against yours, it skips that step and directly verifies that you control the domain. It does so by having you install their software on your server, which temporarily changes the content served under your domain; their backend then checks whether it can verify those changes. Currently Letsencrypt supports Apache and nginx (still experimental), as well as a standalone approach which requires you to stop your webserver for the process. In order to guarantee security, the certificates are only valid for a remarkably short period of time (~90 days), and there are plans to lower that period even further. This clearly shows that they want you to automate the process instead of manually requesting a certificate every time.
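As a sketch of what such automation could look like nowadays, assuming the modern certbot client is installed, a single cron entry can handle renewals:

```shell
# crontab entry: try to renew twice a day; certbot only actually
# renews certificates that are close to expiry
0 */12 * * * certbot renew --quiet
```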

In my case I had to use the manual process, as the nginx component is still experimental, but I'll gladly switch to the automatic version once it is stable. The process was still pretty straightforward anyhow. After disabling nginx, I just had to enter the following commands to get started.

Update from 2019: You should now easily be able to install certbot from your distribution's package repository. There is no need to check out the git repository anymore.

git clone https://github.com/letsencrypt/letsencrypt.git
cd ./letsencrypt
./letsencrypt-auto certonly

Now you have to enter the domain names you want to get a certificate for, wait a little and then you're all set for an encrypted website.
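To verify that everything worked, a quick check with openssl can help (a sketch; the domain here is just the one used later in this post):

```shell
# show issuer and validity period of the certificate the server presents
openssl s_client -connect flosch.at:443 -servername flosch.at </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```

The issuer line should mention Let's Encrypt, and the dates should reflect the ~90 day validity period.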

Getting TLS right

SSL Labs Rating

Getting the certificate is only half of the work. Next you need to reference the certificate in your server settings and define the right ciphers, so that your website actually becomes secure. Since you are hosting your own webserver, I'll just assume that you know how to configure the certificates themselves correctly (if you are using nginx, you can take a look at the example below).

It's the ciphers where it really gets interesting. Ciphers basically define how the web traffic to your server gets encrypted. If you use ciphers that are too fancy, you risk that a lot of clients won't support them and can't reach your website; on the other hand, if you include a cipher that is too weak, it can potentially break your whole encryption effort. At the beginning of the communication, the client and the webserver agree on a certain cipher suite, which is used from then on. So if you support weak or even broken ciphers, a potential attacker can try to downgrade the connection to such a cipher and decrypt the whole communication.

A good starting point for deciding which ciphers to support is the Mozilla Wiki. I just went ahead and picked the cipher suite they recommend for "Modern compatibility". However, I had to adjust the suite a little, as they still include DHE, which is not that safe any more. At least not safe in the sense of "These keys, the paper showed, can be cracked within a year for around 100 million US dollars." To counter that issue, we simply kick out the DHE suites, so that only the elliptic curve variants of the Diffie-Hellman key exchange (ECDHE) remain. ECDHE uses a similar kind of algorithm, but different math behind it (elliptic curves), so it can provide the same level of security with much smaller numbers, or much higher security with numbers of about the same size.

Now we have a good cipher suite to start with, but it leaves a lot of clients behind. Therefore we have to add an algorithm that is old enough for these clients to support, but still secure enough that the communication cannot be decrypted. The suite that comes to our rescue is DES-CBC3-SHA. We add it at the very end of all the other cipher suites (but before the forbidden values, which are marked with a !). After that we also have to remove the !3DES rule from our configuration, because the newly added suite uses 3DES. This is what the cipher suite should look like afterwards:

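As a sketch, assuming Mozilla's "Modern" list as the base, the resulting directive could look like this (abbreviated; the exact suite names depend on your openssl version, so treat them as illustrative):

```nginx
ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DES-CBC3-SHA";
```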

Now that we have defined the cipher suite, we need to make sure that the newly added 3DES suite will not be used by old clients if they already support newer (and better) cipher suites that provide perfect forward secrecy. Unfortunately those clients often don't know what's best for them and pick the older 3DES suite, thinking it is more secure. To counter that, we simply tell our server to insist on its own order of ciphers. We do so by adding ssl_prefer_server_ciphers on to our configuration.

Finally, if you really want to secure all traffic to your server completely (and get a super fancy A+ rating on your encryption), you should add HTTP Strict Transport Security by specifying add_header Strict-Transport-Security "max-age=15724800; preload" inside your config. Note, however, that this ensures your site is only reachable via HTTPS, so you could easily break some things with it. My final configuration looks like this:

server {
  listen [::]:443;
  ssl on;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  # cipher list abbreviated here; the full string follows the pattern described above
  ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:DES-CBC3-SHA";
  add_header Strict-Transport-Security "max-age=15724800; preload";
  ssl_certificate /etc/letsencrypt/live/flosch.at/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/flosch.at/privkey.pem;
  server_name flosch.at www.flosch.at;
  index index.html;
  root /srv/http/flosch.at;
}

That's it! You should now have a secure webserver. You can test it yourself by visiting the SSL Server Test website by Qualys SSL Labs.

Switching From Pico To Metalsmith

Probably no one noticed the changes I've made in the last few months, but this website has switched to a different CMS under the hood.

In fact it isn't really using a CMS anymore, but a static site generator called Metalsmith.

The CMS I was using before was called Pico, and it actually served me quite well, but I still had some issues with it.

One of these issues isn't actually Pico's fault and is most likely even a reason for some to go for Pico in the first place: Pico uses PHP.

Don't get me wrong, PHP can be great, but in my case it just seems overpowered (I'm only hosting some static sites) and can even add a bit of insecurity to the whole webserver.

Another issue was that it was way too cumbersome to enhance Pico's functionality.

The recommended way of making a blog was to write the PHP script yourself (to be fair, they provide the whole script on their website). I somehow never managed to get it just the way I wanted it to be.

And finally, the most important issue, the one that actually motivated me to redo the whole site right after I had just finished it: Pico is dead.

Update from 2019: Pico is maybe not as dead as I thought it to be back when I wrote this blog post. There has been an update here and there, but its community is still not the most active.

Pico's dead, baby

I didn't really want to believe it, after I had just put so much work into my website, but I somehow knew that it was true.

Over the next few months I tried to slowly switch my CMS, while still keeping an eye on Pico's development.

The original Pico developers, dev7studios, have dropped development completely and have put Pico into the arms of the community.
But there doesn't seem to be anyone who's really willing to keep the project alive. They just changed the Github account that manages the repository, moved the website to a different domain and changed the licence text, leaving everything else untouched.

This has been the situation for about 7 months now and I don't really expect it to change soon.

Metalsmith to the rescue

After my decision to switch to a new CMS was final, I quickly discovered that a static site generator would perfectly fit my needs.

Some hours of research later, I made Metalsmith the static site generator of my choice, and I haven't been disappointed yet.

Metalsmith uses a pretty interesting approach to make it the best fit for nearly everyone's needs: it's completely pluggable.

That means that Metalsmith, the core itself, only provides an API to work on a bunch of files and the rest is provided by different plugins.

This website for example uses plugins like handlebars (templating engine), markdown (to parse markdown files), permalinks (to organize the link structure), convert (to convert images for different resolutions) and tags (to provide tag functionality for blog posts).
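To give an idea of how that pluggable approach looks in practice, here is a minimal build script, a sketch assuming the markdown and permalinks plugins are installed via npm (file and directory names are made up for illustration):

```javascript
// build.js - a minimal Metalsmith pipeline
var Metalsmith = require('metalsmith');
var markdown   = require('metalsmith-markdown');
var permalinks = require('metalsmith-permalinks');

Metalsmith(__dirname)
  .source('./src')          // markdown sources
  .destination('./build')   // generated static site
  .use(markdown())          // *.md -> *.html
  .use(permalinks())        // /post.html -> /post/index.html
  .build(function (err) {   // run the pipeline
    if (err) throw err;
  });
```

Every `.use()` call is just another plugin working on the same set of files, which is the whole trick behind Metalsmith's flexibility.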

These plugins are either coded by the original Metalsmith authors, or by any other individual that cares to enhance Metalsmith.

Another great advantage (for me) compared to Pico is that Metalsmith uses Node.js and therefore JavaScript, which I am at least somewhat familiar with.

All in all, Metalsmith seems like the ideal solution to me, and I hope it does not die as quickly as Pico did... but even if it does, it's not really dead: once the site is generated, it will stay online and won't break unexpectedly (something you can't guarantee for a PHP script).

Another update from 2019: Although I can technically still get it to build my website, it is very cumbersome to set up from scratch. It seems that npm packages break easily and that inter-dependencies are hard to manage, given the nature of a scripting language. If I were to choose a static site generator again, I might prefer one using a statically typed language.


The upgrade from the dying Pico to the flexible Metalsmith is now complete.

My website will now be generated by a static site generator and doesn't depend on PHP anymore.

Since the new setup allows me to structure my code pretty nicely, I also opened up a public git repository containing the build script for my website.
Maybe someone struggling with Metalsmith will find it useful to have a glance at an actually working site. I don't really expect that to happen, but it doesn't hurt to make it public either (I still have to add licences though).

You can find the source code here.

Why Libre.fm Can Not Replace Last.fm Yet

Since I started using Linux, I've constantly been trying to switch to open source alternatives whenever possible. One of the services I really rely on, but haven't replaced with a matching open source counterpart, is Last.fm. I totally love the features Last.fm offers: most of all I enjoy the music tracking & statistics, and I dig its recommendation system.

But since I'm more or less forcing myself to switch to free software, I was always on the lookout for a replacement. About two years ago I found Libre.fm, which seemed to be exactly what I wanted, but as it was still under heavy development, it appeared kind of half-baked.

Nevertheless, wanting to test it out, I tried to transfer my scrobbles from Last.fm to Libre.fm, which wasn't as easy as I thought. Being pretty inexperienced when it comes to terminal programs and (in this case) Python, I somehow couldn't do it. As I didn't want to start from zero, I just stayed with Last.fm and lived happily ever after.

Or at least I would have, but about a month ago I somehow ended up on Libre.fm again. They had kind of relaunched their whole service, or so I thought, as they've got a completely new (and in my opinion pretty nice) design. So I decided to give it another try, and this time I was able to transfer the major part of my scrobbles (~1000 tracks were lost) from Last.fm to Libre.fm using lastscrape.

The initial enthusiasm was gone pretty quickly when I saw that Libre.fm still can't display the album cover of any of the artists I scrobbled.

Libre.fm Main

But that's just an unimportant feature after all. So I checked out the other stuff that Libre.fm allows me to do... which isn't much. The stats are preset; you can't configure anything about them.

So you've got to live with:

  • your most played artists in the last 6 months
  • your top tracks in the last 6 months
  • your scrobbles per day for the last month.

That's what really bothers me about their service, as I'm mostly interested in overall statistics, not just those for the last 6 months. I'd also really like a download button for my data: they already promote that the data belongs to the user and that you can take it with you when you leave, and a download button would be rather handy for importing your playcounts into a new music manager.

What I do approve of is that they provide the option to forward every scrobble to Last.fm, which is a pretty kind step; they are playing far nicer here than Last.fm does.

Long story short: I'm really interested in switching to Libre.fm once they manage to provide some nicely presented, extensive statistics.