Florian Schrofner

@fschrofner

Mobile developer based in Salzburg, Austria, who loves cooking, cycling and coffee. I plan to blog mostly about technical stuff, but also randomly about personal opinions and experiences.


Finally a tool to properly mirror the screen of your Android device

Our Android team at LOOP has envied the iOS team for a very long time now.
Not because of their toolchain or the small number of devices they need to support; no, it was a much simpler thing we were lacking that the iOS team had: a tool to properly share your mobile device's screen on a PC.
While the iOS team could simply use AirPlay, a native protocol that works flawlessly most of the time, to show an iPhone's screen to the client, we had to resort to shady third-party solutions like ApowerMirror to get something similar in our ecosystem.
Additionally, the tools we found were usually paid software, of which we only used the free versions. Watermarks on your shared screen are not exactly the most professional impression you can make.

We had already thought about implementing such a tool on our own, as it seemed like an achievable goal, but thankfully I recently discovered a tool called scrcpy (better write that down if you might want to use it in the future). It is an open-source solution which does not require you to install an app on your phone, but instead uses classic adb to connect to the device and mirror its screen.
The video streaming is super smooth and resizing/scaling works like a charm. Even better: since adb can be used wirelessly, you can also use scrcpy wirelessly!
So in principle we now have a solution similar to the one on iOS, it just took us quite a bit longer to find it :)
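
For reference, a wireless session roughly looks like this, assuming phone and PC are on the same network and the device's IP is 192.168.0.42 (the address is just an example):

adb tcpip 5555                  # switch adb to TCP/IP mode while the phone is still connected via USB
adb connect 192.168.0.42:5555   # connect to the device over the network
scrcpy                          # unplug the cable and start mirroring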

How StandardNotes Solves 2FA

Two-factor authentication has been around for quite a while now and I guess almost everybody who spends some time on the internet has already encountered it in one form or another.
The basic idea of 2FA is to increase the security of your account by forcing you to authenticate yourself using a second factor. This is normally done by requiring you to enter a code that you received via a different channel. So far I've encountered three different ways to retrieve said codes:

  • Using an application that supports the one-time password protocol (OTP)
  • Sending you the code via text message
  • Sending you the code via e-mail

Arguably the best option is to use the OTP protocol, as it is more secure than either of the other two and cheaper than sending a text message. In fact, the cost of text messages is probably the reason why I've only seen that option on really popular sites (e.g. Facebook), although most of those additionally offer authentication via OTP. Retrieving the code via e-mail is, interestingly, the only 2FA option on GOG; I'm still not sure why they did it that way.
All of the smaller websites implementing 2FA rely solely on the OTP protocol. As it is an open standard, there is a vast array of different OTP applications, the most popular one probably being the Google Authenticator, but there are also open-source solutions like FreeOTP, which is developed by Red Hat.
Once you've set up the OTP app, it will continuously generate new codes for you; the next time you want to sign in, you check the currently valid code and enter it to complete the login.
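
If you are curious what such an app does under the hood, you can generate the same time-based codes on the command line, for example with oathtool (the base32 secret below is just a placeholder; your provider shows you the real one during setup):

oathtool --totp -b "JBSWY3DPEHPK3PXP"   # prints the currently valid six-digit code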

The problem

Now to the tricky part: as your codes are generated solely by the OTP application, this inherently means that once you break your device or simply uninstall the app, your 2FA setup is gone for good. You then have to use one of the backup codes provided to you in order to sign in again and reconfigure the app.
That's exactly the reason why I haven't been using 2FA to its full extent. Since I work as an Android developer and try out different custom ROMs from time to time, I regularly reset or break my phone. I don't really mind, that's just the way it is, but setting up 2FA again was actually one of the worst parts. That's why I even changed my 2FA method on Facebook to text messages, as I just didn't want to set it up each and every time.

StandardNotes' solution

StandardNotes, an awesome encrypted note service, recently added a new extension called TokenVault, which solves exactly this problem. It basically adds an OTP client on top of their end-to-end encrypted service and shows you the one-time passwords you need. Synchronisation across devices is handled automatically by their service, so you can use it on your PC as well as on your mobile phone, resting assured that only you can access those codes (the extension is completely open source).
To me this is likely the ideal solution, as it allows me to carelessly break my devices without losing access to my online accounts. In fact, I can still access my web accounts without having my phone at hand at all, as access to StandardNotes' web client is sufficient.

Now I'm only waiting for somebody to implement a bookmark manager on top of StandardNotes..

Domain Switch & Leaving Posteo Behind

Recently I switched my domain name (though flosch.at will continue to work for a few months) as well as my main mail address & provider. This change happened quite quickly, but I think in one way or another it makes sense as a next step from where I started.
In this blog post I'll try to lay out the reasoning behind my decision.

The realisation

It all started when a colleague of mine recently began researching e-mail providers that value his privacy more than his current one does (which aren't hard to find if your current provider is called Google). As far as I know he still hasn't decided which one to go for, but ProtonMail and Posteo are strong contenders.
Anyway, we talked about the pros and cons of different e-mail providers, and I did some research of my own to find arguments for the provider of my choice, Posteo, which I had already been using for the past five years.
In my opinion Posteo was (and for most people still is) a totally fine choice. It uses only open-source software, provides a strong, optional encryption concept, offers calendar as well as contact sync, and is as cheap as a trustworthy mail provider gets (1€ per month).
One argument against it, though, which I encountered several times online, is that it does not support custom domains. This means that you are bound to use the ones that Posteo provides (which are all in the format @posteo.*, where * is just about any TLD you can imagine).
That didn't bother me back when I created my Posteo account, but now that I've read the discussions it caused, it got me thinking.
I didn't mind that Posteo owned the domain, I trust them enough to let me keep my e-mail address as long as they exist. But that's the point: as long as THEY exist, not me. If they cease to exist, which is not the most unlikely scenario I could imagine (see Lavabit), I would be most utterly fucked.
I'm not sure if I would even be able to change my mail address all of a sudden at all the services that I use. At the very least it would be a job so cumbersome that I would probably curse myself multiple times.
After this realisation, I kept finding additional reasons why it could be a bad idea to lock yourself into one specific provider. For example, they could raise their prices to exorbitant levels (not that I assume they would do anything that bad) and I would simply have to pay, as I could not change my mail address in time.
And finally, what if I just want to switch providers for the sake of it? What if some other provider simply offers some fancy cool feature that I would like to try out? Or a better-looking interface and/or app (a topic I particularly have an eye on as a mobile developer)? I couldn't switch.
After having spent so much time researching the more privacy-conscious and open solutions for my needs, I had locked myself into a single mail provider.

Consequences

I already had a domain (flosch.at), but I didn't like it as much as I did when I first registered it. And now that I started to think about using it as my mail address, I liked it even less. So the first thing I had to do was find a new domain that I would also be happy to use as my mail address.
One important thing was that it had to be short. I don't want to type my full name into the command line, nor do I want it at the end of my e-mail address. So I did some research on available top-level domains and tried to find one that cleverly mixes with my name to keep things short. At first I didn't find any, but then I discovered the Finnish top-level domain .fi.
Whenever there are multiple Florians in a group I'm part of, people tend to refer to me using a nickname version of my last name: Schrofi. I checked and found that schro.fi was still available and rather cheap, and it ticked all the boxes, so I purchased it.
Setting up the new domain for my website was a piece of cake: I just updated the DNS records to point to my server's IP, updated the nginx configuration, ran certbot to get SSL certificates, and it was good to go.
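
For illustration, the certificate part boiled down to something along these lines, assuming certbot with the nginx plugin is installed:

certbot --nginx -d schro.fi -d www.schro.fi   # obtain certificates and let certbot adjust the nginx config
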
Now I only had to find a new mail provider and set up mailing correctly (which I had never done before).

Research again

One thing was certain: I do not want to run my own mail server. Mail is just too important to risk that I do something wrong (= stupid) and end up unable to receive any mail (and yes, I know about Mail-in-a-Box).
Additionally, I would argue that most mail providers probably cost less than running your own server would, if that server is kept running for the sole purpose of hosting your mail (which is what I would do, to make sure it does not interfere with anything else).
So, that out of the way, I started researching again to find a mail provider which supports custom domains.

The important features I was looking for were:

  • supports custom domains
  • offers calendar & contact sync
  • does not cost more than 30 - 40€ per year

This limited my choices by quite a bit. ProtonMail does not offer a calendar at the moment and additionally costs more than three times as much as Posteo, so it was out of the question.
Tutanota offers neither a calendar nor contact sync (though the former is planned to be released soon).
The ones that were closest to my requirements were Mailfence and Mailbox.org.
While I think Mailfence would have done an equally good job of hosting my mail, I went for Mailbox.org simply because I liked the user interface better.

Setting it up

Contact and calendar sync were seamless. I exported everything from Posteo, imported it on Mailbox.org, changed my account in DAVx⁵ and off I went. Configuring the custom domain was a bit trickier, but thankfully the Mailbox.org wiki provides an extensive step-by-step tutorial on how to set it up with their service.
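
Roughly speaking, the setup comes down to adding a handful of DNS records for your domain. The values below are only from memory and meant as an illustration; the authoritative ones are in the Mailbox.org wiki:

schro.fi.   IN MX  10 mxext1.mailbox.org.              ; route mail for the domain to Mailbox.org
schro.fi.   IN MX  10 mxext2.mailbox.org.
schro.fi.   IN TXT "v=spf1 include:mailbox.org ~all"   ; allow Mailbox.org to send on the domain's behalf
; plus a DKIM key record under a selector provided by Mailbox.org (value omitted here)
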
After following these steps and also cross-checking with Coding Horror's blog post about e-mail, I managed to pass all checks of Port25's authentication verifier service.

==========================================================
Summary of Results
==========================================================
SPF check: pass
"iprev" check: pass
DKIM check: pass
SpamAssassin check: ham

This seems to be good enough for most mail providers, but not for Outlook.com. For whatever reason each and every one of my mails ends up in the spam folder, even though I explicitly whitelisted the sender and marked the mail as "not spam".
Apparently, other Mailbox.org users are having the same issue. I'm still not sure if it is related to my custom domain being unknown to Outlook's mail system, or to mailbox.org e-mails being classified as spam in general.
Anyway, I will keep watching this issue closely and hopefully it will resolve itself. For now I'm busy changing all of my accounts to point to my new e-mail address, knowing that this should be the last time I will ever have to do this.

Gentoo Does Not Like Elliptic Curves

This post was originally posted on the 7th of February 2017 on my personal website.

I recently switched my hosting provider from HostSailor to Scaleway (using their VPS solution). Mostly because of the cheaper prices, because they are Europe-based (finally I can pay in euros!) and because they provide SSD servers instead of regular HDDs, which results in a performance boost. But I also made the switch because of the operating systems they provide, most notably: Arch Linux.

Unfortunately, though, I had to find out that their Arch Linux image was currently broken and could not be built at the moment. So I figured I could start with something new and maybe try a different distribution again. I was pretty certain that I did not want a standard-release distribution, as the release upgrades I had to do on my Fedora server had really become a pain in the ass. If I ever come back to standard-release distributions, I will certainly look for something with a less frequent release cycle, like CentOS or Debian. But for now I was looking for a rolling-release distribution to drive my server. The only distro they provided that fulfilled that requirement was: Gentoo.

Getting into Gentoo

After reading the Gentoo handbook to get an overall idea of how to do things, it wasn't actually too complicated to get things running. The update times were, of course, way longer compared to Arch, as Gentoo is a source-based distribution (meaning that packages are, most of the time, built completely from source), but that isn't actually much of a put-off for a server system which is running 24/7 anyway.

After setting up the usual things, like openssl, openssh, mosh, tmux, emacs, nginx, etc., everything seemed to be working fine, so I started pointing my domain towards the new server. I created new certificates using certbot for Let's Encrypt and finally verified that everything was working by running a Qualys SSL test. That's where things started getting weird.

Somehow I didn't receive a bad rating on the SSL test, but I could see that a lot of clients failed to connect at all. Interestingly, those that were able to connect only used a plain Diffie-Hellman key exchange instead of the elliptic curve one, although I had explicitly placed the elliptic curve suites right at the beginning of my cipher list. I also checked the available ciphers with sslscan and found out that my server did not offer any elliptic curve suites; those either failed or were rejected. I checked my nginx configuration again and again, but I couldn't find anything wrong there.
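
If you want to reproduce that kind of check, something along these lines should do (the domain is just an example; sslscan probes the server from the outside, while the openssl command shows whether the local build supports elliptic curves at all):

sslscan flosch.at              # list the cipher suites the server actually offers
openssl ecparam -list_curves   # list the elliptic curves the installed openssl supports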

Exploring the cause

After a lot of research, I finally found the probable cause of my elliptic curve fiasco. Like apparently a lot of other people, I didn't know that elliptic curve cryptography is partly covered by patents, so I didn't even think about this possibility. The legal situation around ECC patents is unclear, so different distributions handle it differently. Most binary Linux distributions seem to include ECC in their openssl binaries by default, which is why you may never have heard of this issue before.

Gentoo, however, does not include ECC in its binary-redistributable builds. "Wait a minute," you might say, "I thought Gentoo was a source-based distribution?" Well, yes it is... but sometimes it isn't. I didn't know that either, but Gentoo actually builds some packages in a binary-redistributable fashion by default. There is a USE flag called bindist which enables this behaviour, and it also applies to openssl/openssh and their cipher suites. So as long as this USE flag is enabled, you won't get any ECC support.
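
To check whether that flag is active for your openssl build, something like this should work (equery is part of app-portage/gentoolkit):

equery uses dev-libs/openssl   # look for a +bindist entry in the output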

However, if you build openssl from source yourself, the patents apparently are not an issue. I don't know the exact legal reasoning behind that, but it seems to be the situation for now.

Fixing Things

To get things running normally, it should actually be enough to disable the "bindist" flag, either for openssl/openssh only or for the whole system, whatever you prefer. One way to do this is to add -bindist to your make.conf and rebuild those two packages. After I did that, I scanned my site again using sslscan. And ECC still did not work.
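
A minimal sketch of that step, assuming you disable the flag system-wide in /etc/portage/make.conf:

# /etc/portage/make.conf: drop the binary-redistributable restriction globally
USE="${USE} -bindist"

# rebuild the affected packages so the flag takes effect
emerge --ask --oneshot dev-libs/openssl net-misc/openssh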

I checked my nginx configuration again, restarted it several times and couldn't figure out what the heck was wrong with it. Finally I had the idea to re-emerge nginx, so that it would pick up the newly available cipher suites. Apparently it detects them at compile time, not at run time. After that I finally got the ECC cipher suites, and almost no client failed to connect in the Qualys test (except the really old ones).
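
In other words, the last missing step was simply rebuilding nginx against the new openssl:

emerge --ask --oneshot www-servers/nginx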

So Gentoo seems to be working for now, let's see which issue I run into next.

Let's Encrypt Encrypts The Internet

This post was originally posted on the 18th of November 2015 on my personal website. Please note that these recommendations are probably outdated.

Letsencrypt has launched its public beta, and I (more than a year after my last blog post) have finally found something to talk about again.

But first things first. Some of you might not even know what the hype around Letsencrypt is about, or what Letsencrypt itself actually is. So I'll try to briefly explain why it is something to keep an eye on.

Encryption on the internet

You have hopefully already encountered several sites on the internet that use encryption (recognisable by the leading https://). Encryption has been a vital part of the web for nearly two decades now, but has mostly been used on sites that absolutely require it (like e-banking or shopping). There are primarily two reasons for this: first, encryption requires more computational power; second, getting a certificate can be quite expensive.

The first issue lost a bit of relevance as technology advanced and we got more computational power at our disposal than ever before. The second issue is the one that is being attacked by Letsencrypt.

A certificate is basically used to prove to a connecting client that you are who you claim to be. Without such a mechanism, anybody with the necessary infrastructure could intercept the traffic and pretend to be someone else. So instead of visiting your website, the client could land on a completely different, maybe even malicious, website while still thinking it is yours. This would also nullify any encryption effort, as the attacker could simply set up two encrypted connections: one to the client and one to the actual server the client is trying to reach. They could then forward the responses from one connection to the other, adding slight modifications to the content.
To prevent this, we need someone who guarantees to the client that everything is fine and nobody is intercepting the traffic and/or swapping out the contents of the website.
This is the role of the certificate authorities.

What they do is verify the ownership of a website and then provide the website owner with a certificate, which can be used to sign their messages. The client can then check whether the traffic coming from this domain was signed with a certificate approved by a certificate authority it trusts (i.e. its browser trusts).

In order to verify your identity, the certificate authorities require you to send them some kind of proof of your identity, like a scan of your passport. They then check your identity against the data that was used to register your domain (its whois entry). If the two match, they provide you with a certificate for the specified domain. Of course they require you to pay a certain amount of money, not least because they need to pay the employees who verify your identity.

Letsencrypt's approach

So what does Letsencrypt do differently?
Letsencrypt does not have any employees who verify your identity, nor does it require you to send in any proof of identity. Instead of comparing the domain info with your info, it skips that step and directly verifies that you control the domain. It does so by requiring you to install their software on your server, which then changes some content of your website reachable under that domain; their backend software then checks whether it can verify those changes. Currently Letsencrypt supports Apache and nginx (still experimental), as well as a standalone approach which requires you to stop your webserver during the process. To keep things secure, the certificates are only valid for a remarkably short period of time (~90 days), and they are already planning to shorten that period further. This clearly shows that they want you to automate the process instead of manually requesting a certificate every time.

In my case I had to use the manual process, as the nginx component is still experimental, but I'll gladly switch to the automatic version once it is stable. Anyhow, the process was still pretty straightforward. After stopping nginx, I just had to enter the following commands to get started.

Update from 2019: You should now be able to install certbot easily from your distribution's package repository. No need to check out the git repository.

git clone https://github.com/letsencrypt/letsencrypt.git
cd ./letsencrypt
./letsencrypt-auto certonly

Now you have to enter the domain names you want to get a certificate for, wait a little and then you're all set for an encrypted website.
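
With a packaged certbot (as mentioned in the update above), the rough present-day equivalent of the manual/standalone method would be something like this (example.com is a placeholder):

certbot certonly --standalone -d example.com -d www.example.com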

Getting TLS right

[Screenshot: Qualys SSL Labs rating]

Getting the certificate is only half of the work. Next you need to reference the certificate in your server configuration and define the right ciphers, so that your website actually becomes secure. Since you are hosting your own webserver, I'll just assume that you know how to configure the certificates themselves correctly (if you are using nginx, you can take a look at the example below).

The ciphers are where it really gets interesting. Ciphers basically define how the web traffic to your server gets encrypted. If you only use ciphers that are too fancy, you risk that a lot of clients won't support them and can't reach your website; on the other hand, if you include a cipher that is too weak, it can potentially undermine your whole encryption effort. At the beginning of the communication, the client and the webserver agree on a certain cipher suite, which is then used from that point on. So if you support weak or even broken ciphers, a potential attacker can try to downgrade the connection to such a cipher and decrypt the whole communication.

A good starting point for deciding which ciphers to support is the Mozilla wiki. I just went ahead and picked the cipher suite they recommend for "Modern compatibility". However, I had to adjust the suite a little, as it still includes DHE, which is not that safe any more. At least not safe in the sense of "These keys, the paper showed, can be cracked within a year for around 100 million US dollars." To counter that issue, we simply kick the DHE suites out, so that only the elliptic curve variants of the Diffie-Hellman key exchange (ECDHE) remain. ECDHE uses a similar kind of algorithm but different math behind it (elliptic curves), so it can provide the same level of security with much smaller numbers. We can use this to get much stronger security with numbers of about the same size.

Now we have a good cipher list to start with, but it leaves a lot of clients behind. Therefore we have to add a suite that is old enough for these clients to support, but still secure enough that the communication cannot be decrypted. The suite that comes to our rescue is DES-CBC3-SHA. We add it at the very end, after all the other cipher suites (but before the forbidden values, which are marked with a !). After that we also have to remove the !3DES rule, because the newly added suite uses 3DES. This is what the cipher list should look like afterwards:

ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:\
ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:\
ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:\
ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:\
ECDHE-ECDSA-AES256-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK

Now that we have defined the cipher list, we need to make sure that the newly added 3DES suite is not used by old clients that already support newer (and better) cipher suites providing perfect forward secrecy. Unfortunately, those clients often don't know what's best for them and pick the older 3DES suite, thinking it is more secure. To counter that, we simply tell our server to insist on its own cipher order by adding ssl_prefer_server_ciphers on to the configuration.

Finally, if you really want to secure all traffic to your server (and get a super fancy A+ rating for your encryption), you should add HTTP Strict Transport Security to your configuration by specifying add_header Strict-Transport-Security "max-age=15724800; preload" in your config. Note, however, that this ensures your server is only reachable via HTTPS, so you can easily break things with it. My final configuration looks like this:

server {
  listen 0.0.0.0:443;
  listen [::]:443;
  ssl on;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK";
  add_header Strict-Transport-Security "max-age=15724800; preload";
  ssl_certificate /etc/letsencrypt/live/flosch.at/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/flosch.at/privkey.pem;
  server_name flosch.at www.flosch.at;
  index index.html;
  root /srv/http/flosch.at;
}

That's it! You should now have a secure webserver. You can test it yourself by visiting the SSL Server Test website by Qualys SSL Labs.
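
If you prefer the command line, you can also do a quick sanity check with openssl (the domain is just the example from above); the output shows the negotiated protocol and cipher:

openssl s_client -connect flosch.at:443 -servername flosch.at < /dev/null | grep -E "Protocol|Cipher"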

Switching From Pico To Metalsmith

This post was originally posted on the 5th of November 2014 on my personal website.

Probably no one noticed the changes I've made over the last few months, but the website has now switched to a different CMS under the hood.

In fact it isn't really using a CMS anymore, but a static site generator called Metalsmith.

The CMS I was using before was called Pico, and it actually served me quite well, but I still had some issues with it.

One of these issues isn't actually Pico's fault and for some might even be a reason to go for Pico: Pico uses PHP.

Don't get me wrong, PHP can be great, but in my case it just seems a bit overpowered (I'm just trying to host some static sites) and can even add a bit of insecurity to the whole webserver.

Another issue was that it was way too cumbersome to extend Pico's functionality.

The recommended way of building a blog was to write the PHP script yourself (to be fair, they provide the whole script on their website). I somehow never managed to get it just the way I wanted it to be.

And finally, the most important issue, the one that actually motivated me to redo the whole site right after I had just finished it: Pico is dead.

Update from 2019: Pico is maybe not as dead as I thought it was back when I wrote this blog post. There has been an update here and there, but its community is still not the most active.

Pico's dead, baby

I didn't really want to believe it, after I had just put so much work into my website, but I somehow knew that it was true.

Over the next few months I slowly tried to switch my CMS, while still keeping an eye on Pico's development.

The original Pico developers, dev7studios, have dropped development completely and are now putting Pico into the hands of the community.
But there doesn't seem to be anyone who is really willing to keep the project alive. They just changed the GitHub account that manages the repository, moved the website to a different domain and changed the licence text, leaving everything else untouched.

This has been the situation for about 7 months now and I don't really expect it to change soon.

Metalsmith to the rescue

After my decision to switch to a new CMS was final, I quickly discovered that a static site generator would perfectly fit my needs.

Some hours of research later, I made Metalsmith the static site generator of my choice, and I haven't been disappointed yet.

Metalsmith uses a pretty interesting approach to make it the best fit for nearly everyone's needs: it's completely pluggable.

That means that Metalsmith, the core itself, only provides an API to work on a bunch of files and the rest is provided by different plugins.

This website for example uses plugins like handlebars (templating engine), markdown (to parse markdown files), permalinks (to organize the link structure), convert (to convert images for different resolutions) and tags (to provide tag functionality for blog posts).

These plugins are either coded by the original Metalsmith authors, or by any other individual that cares to enhance Metalsmith.

Another great advantage (for me) in comparison to Pico is that Metalsmith uses Node.js and therefore JavaScript, which I am at least somewhat familiar with.

All in all, Metalsmith seems like the ideal solution to me and I hope it does not die as fast as Pico did... but even if it dies, it's not really dead, since once the site is generated it will stay online and won't break unexpectedly (which you can't guarantee for a PHP script).

Another update from 2019: Although I can technically still get it to build my website, it is very cumbersome to set up from scratch. It seems that npm packages break easily and inter-dependencies are hard to manage, owing to the nature of a scripting language. If I were to choose a static site generator again, I might prefer one that uses a statically typed language.

TL;DR

The upgrade from the dying Pico to the flexible Metalsmith is now complete.

My website will now be generated by a static site generator and doesn't depend on PHP anymore.

Since the new approach allows me to structure my code pretty nicely, I also opened up a public git repository containing the build script for my website.
Maybe someone struggling with Metalsmith will find it useful to have a glance at an actually working site. I don't really expect that to happen, but it doesn't hurt to make it public either (still have to add licences though).

You can find the source code here.

Why Libre.fm Can Not Replace Last.fm Yet

This post was originally posted on the 24th of April 2014 on my personal website.

Since I started using Linux I have constantly been trying to switch to open-source alternatives whenever possible. One of the services I really rely on, but haven't replaced with a matching open-source counterpart, is Last.fm. I totally love the features Last.fm offers: most of all I enjoy the music tracking & statistics, and I dig its recommendation system.

But since I'm more or less forcing myself to switch to free software, I was always on the lookout for a replacement. About two years ago I found Libre.fm, which seemed to be exactly what I wanted, but since it was still under heavy development it appeared to be kind of half-baked.

Nevertheless, wanting to test it out, I tried to transfer my scrobbles from Last.fm to Libre.fm, which wasn't as easy as I thought. Being pretty inexperienced when it comes to terminal programs and (in this case) Python, I somehow couldn't do it. As I didn't want to start from zero, I just stayed with Last.fm and lived happily ever after.

Or at least I would have, but about a month ago I somehow ended up on Libre.fm again. They had kind of relaunched their whole service, or so I thought, as they've got a completely new (and in my opinion pretty nice) design. So I decided to give it another try, and this time I was able to transfer the major part of my scrobbles (~1000 tracks were lost) from Last.fm to Libre.fm using lastscrape.

The initial enthusiasm was gone pretty quickly when I saw that Libre.fm still can't display the album covers for any of the artists I scrobbled.

[Screenshot: Libre.fm main page]

But that's just an unimportant feature after all. So I checked out the other things Libre.fm allowed me to do... which wasn't much. The stats are preset; you can't configure anything about them.

So you've got to live with:

  • your most played artists in the last 6 months
  • your top tracks in the last 6 months
  • your scrobbles per day for the last month.

That's what really bothers me about their service, as I'm mostly interested in the overall statistics and not just those of the last six months. Also, I'd really like a download button for my data, since they already advertise that the data belongs to the user and that you can take it with you when you leave (a download button would also be rather handy for importing your play counts into a new music manager).

What I do approve of is that they give you the option to forward every scrobble to Last.fm, which is a pretty kind step on their part; they are playing far nicer than Last.fm does here.

Long story short: I'm really interested in switching to Libre.fm once they finally manage to provide some nicely presented, extensive statistics.