CarConnectivity and the ID.Buzz

I just got myself a brand new car: an ID.Buzz with seven seats so that I can fit the whole family at once. I’m very happy with the car so far, and since it has connectivity, I want to see if I can integrate it into Home Assistant.

To do this, I wanted to use the CarConnectivity project by Till Steinbach. It is a Python package that comes in a few parts: the main project, a Volkswagen connector, an MQTT bridge, and a Home Assistant MQTT discovery helper.

Having played with the software for a bit (and reported a bug that Till fixed asap – I’m impressed!) I decided to set up the whole thing on my little Raspberry Pi that runs a few small services I use around the house.

To prepare, I set up a new user and installed the software in a Python virtual environment:

sudo adduser carconnectivity
sudo su carconnectivity
cd
mkdir carconnectivity
cd carconnectivity/
python -m venv venv
source venv/bin/activate
pip install carconnectivity-connector-volkswagen==0.5a1 carconnectivity-plugin-mqtt carconnectivity-plugin-mqtt_homeassistant
vim carconnectivity.json

Using the vim command, I created the CarConnectivity configuration file. Update usernames, passwords and IPs to match your setup. I will experiment with the interval parameter, as I don’t want to discharge the 12 V battery by querying the car too often.

{
        "carConnectivity": {
                "log_level": "error",
                "connectors": [
                        {
                                "type": "volkswagen",
                                "config": {
                                        "interval": 1800,
                                        "username": "hello@example.com",
                                        "password": "secret"
                                }
                        }
                ],
                "plugins": [
                        {
                                "type": "mqtt",
                                "config": {
                                        "broker": "my-mqtt.local",
                                        "username": "user",
                                        "password": "secret"
                                }
                        },
                        {
                                "type": "mqtt_homeassistant",
                                "config": {}
                        }
                ]
        }
}
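With the venv still activated, a quick manual test run looks like this (the same invocation that the service file below wraps):

carconnectivity-mqtt carconnectivity.json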

Having configured the service (and having run it manually to fix my mistakes), I created the carconnectivity.service systemd unit shown below (in /etc/systemd/system):

[Unit]
Description=Car Connectivity to MQTT
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=carconnectivity
Group=carconnectivity
WorkingDirectory=/home/carconnectivity/carconnectivity/
Environment="LC_ALL=sv_SE"
ExecStart=/home/carconnectivity/carconnectivity/venv/bin/carconnectivity-mqtt /home/carconnectivity/carconnectivity/carconnectivity.json

[Install]
WantedBy=multi-user.target

And then I started and enabled the service.

sudo systemctl start carconnectivity
sudo systemctl enable carconnectivity

Finally, I had a look at the status and made sure that everything looked OK.

sudo systemctl status carconnectivity
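If the status looks off, the logs are available through journald:

sudo journalctl -u carconnectivity -f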

And, voilà, the car shows up as a device in Home Assistant. Magic!

Migrating Databases

Last week I decided to clean up a bit of digital cruft. That is, I moved a few of my websites onto a single VPS, saving quite a bit of monthly server hosting costs.

What I did was move my VPSes from Linode (Akamai) to DigitalOcean, but I also migrated a full web hotel (a shared hosting account) from One to DigitalOcean (converting email accounts to email forwards).

As this is something that I do very rarely, I decided to document the process here so that I don’t have to look everything up again next time around.

The grunt work was migrating a number of L*MP services to a LEMP server. There are a couple of tasks involved here, mainly the migration of databases and getting WordPress running in a subdirectory using Nginx. The rest of the exercise had to do with moving nameservers and waiting for DNS propagation so that certbot could provide certificates for the new location.

Migration of MySQL databases

The migration of a database between machines can be broken down into three stages:

  1. Dumping the old database
  2. Creating a new database and user
  3. Sourcing the database contents into the new database

I chose to do it in these three stages, as I’d like to keep the old database dump as an additional backup. The other option would be to transfer the database contents in a single step, merging steps 1 and 3 into one.
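For reference, the single-step variant would pipe the dump straight into the new server. Something like this, assuming the remote mysql can authenticate without a password prompt (e.g. via ~/.my.cnf), since stdin is already occupied by the dump:

mysqldump -u username -p databasename | ssh user@newserver mysql databasename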

Nevertheless, I use mysqldump to dump the database contents, and then bzip2 to reduce the size of the dump. This is efficient since an SQL dump is quite verbose.

mysqldump -u username -p --databases databasename | grep -vE "^(USE|CREATE DATABASE)" | bzip2 -c - > dumpname.sql.bz2

This is derived from the answer by Anuboiz over at Stack Overflow. The resulting file is then transferred to the new server using scp, together with the actual website.
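The dump transfer itself is a one-liner (hostname and target path are placeholders):

scp dumpname.sql.bz2 user@newserver:~/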

The next step is to create a new database and a new database user. Here, I assume MariaDB (using the mysql commands), as my main target is WordPress. For other database engines, e.g. PostgreSQL, please check the docs for the exact grammar, but the SQL commands should be very similar.

sudo mysql
mysql> CREATE DATABASE databasename;
mysql> USE databasename;
mysql> CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT CREATE, ALTER, DROP, INSERT, UPDATE, DELETE, SELECT, REFERENCES ON databasename.* TO 'username'@'localhost' WITH GRANT OPTION;
mysql> EXIT

Check out this DigitalOcean tutorial for details on the above commands.

The next step is to read the database contents into the new database. For this, we need to unzip the SQL dump, e.g. bunzip2 dumpname.sql.bz2, which will result in a file called dumpname.sql. Note that bunzip2 unzips the file and removes the original, zipped, file. If you want to keep the original, use the -k option.
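That is, to unpack while keeping the compressed backup around:

bunzip2 -k dumpname.sql.bz2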

Once you have the dumpname.sql file available, you can read it into the database with the newly created user using the source command as shown below.

mysql -u username -p
enter the password here
mysql> USE databasename;
mysql> SOURCE dumpname.sql;
mysql> EXIT

Now you should have a new database with the old database contents on the new server, along with an associated database user. For WordPress sites, make sure to reflect any changes in the associated wp-config.php file (the relevant lines are shown below).
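For reference, these are the relevant wp-config.php lines, using the placeholder names from above:

define( 'DB_NAME', 'databasename' );
define( 'DB_USER', 'username' );
define( 'DB_PASSWORD', 'password' );
define( 'DB_HOST', 'localhost' );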

WordPress in a subdirectory using Nginx

The other piece of the puzzle that was new to me was to run WordPress from a subdirectory, e.g. example.com/blog/, rather than from the root level, e.g. example.com/.

Stripping away most of the nginx server configuration, the following parts do the magic:

server {
        root /var/www/thelins.se;
        index index.php index.html;

        server_name thelins.se www.thelins.se;

...

        # For the root
        location / {
                try_files $uri $uri/ /index.php?$args;
        }

        # For the subdirectory
        location /johan/blog/ {
                try_files $uri $uri/ /johan/blog/index.php?$args;
        }

        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/run/php/php7.4-fpm.sock;
                fastcgi_index index.php;
                include fastcgi.conf;
        }

...
}

The trick was to ensure that the subdirectory try_files statement refers to the correct index.php. Notice that this has to be done for each WordPress instance, if you happen to have multiple WordPress installations in various subdirectories on the same domain – each one gets its own location block, as sketched below.
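For example, a second installation under a hypothetical /johan/photos/ would get its own block:

        location /johan/photos/ {
                try_files $uri $uri/ /johan/photos/index.php?$args;
        }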

Conclusions

It’s a bit of a hassle to migrate a lot of web sites at once, but the monetary savings from moving the low traffic sites onto a single VPS, and the simplification of management and monitoring from moving all VPSes to a single provider, make it worth it.

Upgrading NextCloudPi

So I finally got around to upgrading my NextCloudPi to version 20 with the hub and all. I really like it so far.

It also seems that I bumped into a known issue. All of a sudden I could not upload files over 8 kB. It turns out that for some reason the sys_temp_dir in /etc/php/7.3/php.ini had shifted from my USB media (a portable HDD) to the default location (/var/www/nextcloud/tmp) during the upgrade. After a quick move back to /media/USBDrive/data/tmp, everything was back to normal.
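That is, the relevant php.ini line should point back at the USB media:

sys_temp_dir = /media/USBDrive/data/tmp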

I found the solution over at the Nextcloud forums.

Advent of Code and Learning

So, I decided to do Advent of Code this year too. I usually get stuck part of the way, but I still think that it is a fun exercise.

This year the plan is to use Python and pytest the whole way through. Every day that I learn something that I want to remember, I add a til.txt file in that day’s sub-directory. You can follow my progress and learnings in the git repository.

The lessons so far include:

  • When using readline to read lines, the line break is included, so len(text) will be one character more than expected. Strip your strings!
  • When writing a long chain of if/elif/elif/elif, make sure to include an else, even though you know that all cases are covered. I run assert False in the else clause. Both lessons are sketched below.
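Here is a small sketch of both lessons (the file handling and function names are mine, not from the repository):

def parse_lines(path):
    # each line comes back with its trailing line break – strip it off
    with open(path) as f:
        return [line.rstrip("\n") for line in f]

def sign(n):
    if n < 0:
        return -1
    elif n == 0:
        return 0
    elif n > 0:
        return 1
    else:
        # all cases are covered above, so this should be unreachable
        assert False

def test_sign():
    assert sign(-5) == -1
    assert sign(0) == 0
    assert sign(3) == 1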

As you can see, these are on the level of small snippets of wisdom right now. I’m sure it will be more interesting as the problems become more complex.

The API wars – 16 years later

It is more than 16 years since Joel Spolsky wrote How Microsoft Lost the API War. The bonds of the Win32 API lock-in are broken and the free web is here to take over.

The web has come a long way in the past 16 years. Richer APIs, dramatic performance improvements, and a ubiquity that surpasses anything else we as a human race have experienced. Ease of deployment is king, and the easiest deployment of all is simply browsing to a web page.

Creating web apps has always been riddled with browser compatibility caveats. Various services have been around to test rendering across browsers and versions, and frameworks addressing common scenarios have evolved to create a write-once, deploy-everywhere story.

The modern web browser has become our universal runtime environment. It is what Java and .NET aspired to, at a crazy scale. However, it is not only a runtime environment. It is the perfect client-server setup for providing everything as a service.

With the focus shifting from the browser to the actual contents, the value of controlling your own browser engine has become less and less attractive, and last week Mozilla began what I think is the final downward spiral of the last alternative to the Google-led Chrome family of browsers.

(There are so many things I’d like to say about this. For instance, you should know about the Mozilla manifesto, as well as their funding being secured for the next three years. But I digress).

A browser engine is a hugely complex beast these days. It carries an incredible number of backwards-compatibility hacks while ensuring high performance in both rendering and JavaScript execution. Add a broad range of APIs for integration into the native host platform, combine that with privacy and security concerns, and the code base is starting to turn into a beast.

Now, it seems that Google controls the leading browser engine and thus holds the direction of the web as we know it in their hands. Google has not only won the search, content, and personal data collection wars. They have also won the API war.

Having a single, almost monopolistic entity controlling all these aspects of life makes me feel very uncomfortable.

I’ve started my own personal de-googling journey, and I know that there are many others doing the same: taking back ownership of their email, shifting from Google Drive and Google Apps to alternatives such as Nextcloud, but also moving from platforms such as Twitter to federated alternatives such as Mastodon.

A lot of this is probably seen as nostalgia from an earlier generation growing old. The web has moved on, and many parts of what I love about the internet are no longer in broad use. For instance, small forums have migrated to Facebook groups, IRC has been taken over by freemium alternatives such as Slack, RSS feeds become less and less common, and so on. The web is being centralized, and has been over the past decades.

However, I believe that the tide can be turned.

On the content side, early adopters are moving to federated and self-hosted services where data lock-in is impossible. Privacy concerns are becoming more common outside of the technology sector. What is needed are great alternatives that are easy to deploy. Examples that I use are Nextcloud, ttrss, fripost, and Signal.

But what can be done about the API war?

An attractive possibility, in my view, is the rise of WebAssembly. It enables the deployment of complex applications into the browser, really turning the browser into the universal run-time environment. It does so for compiled languages, and at great performance.

What about deploying a bare-bones wasm run-time environment, and then deploying the browser into it? That way, the complex beast that is the browser of today turns into the much more manageable animal that is the wasm run-time.

What would this change? Short-term, very little. Even if the Chrome engine is compiled to wasm and executed inside an outer shell, the experience and value are still delivered through a very complex code base controlled by one of the most dominant companies in human history.

Long-term, it would mean that the ease of deployment would apply to not only the web, but to the wasm run-time. We would shift from the HTML/CSS/JS world to a wasm world.

Not only would this mean that the universal run-time becomes smaller and more manageable to maintain by multiple parties, it also opens the opportunity to shift to a more optimized way to run software (the hardware requirements of the modern browser aren’t really environmentally friendly – they drive energy usage as well as hardware obsolescence).

Now, all that is needed is time. An idea without execution is merely a dream. I might be a dreamer, but I think that this is the way forward.

The Embedded Talks

The foss-north conference strives to have a varied assortment of talks. The point is that visitors should see something unexpected and that the conference should attract all types of visitors, to ensure that we as a community can meet across various industries and problem spaces.

This time I’ve selected three talks about embedded systems from foss-north 2020. The talks touch on building embedded systems around Linux. If your feed reader does not show you the embedded videos, make sure to visit the actual page or go to our conf.tube channel to see all the content.

First out was Ron Munitz’s talk on understanding and building minimal Linux systems. This talk proved to be a real deep dive into the Linux kernel – including attaching a debugger to the kernel itself.

The next embedded speaker on the program was Chris Simmonds. He discussed whether Yocto or Debian is the better choice for your embedded Linux project. This is an interesting topic – how much is customization worth compared to other aspects such as build time?

The embedded set of talks ended with Drew Fustini talking about running Linux on RISC-V. This talk dives deep into the hardware side of embedded systems, but also into Linux. By being able to run Linux on RISC-V, which is open hardware, we are very close to a completely open eco-system.

The three talks are already available on conf.tube, and the presentation material can be found by following the links to each speaker. For those of you who prefer YouTube, the talks will be made available shortly on the foss-north channel. Subscribe to get notified when they are.

foss-north – or doing many things at once

When placing this year’s foss-north event over a quarter break I knew that I would be busy both at work and at the conference. Little did I know what was beyond the horizon ;-)

As a consequence of the COVID-19 situation, the event had to be converted from a physical meeting to a virtual event. This means many things to an organizer: renegotiating all sponsorship contracts, renegotiating with the physical venue, setting up the infrastructure for a virtual event, rescheduling all speakers, and so on.

We at foss-north are lucky. All sponsors continue to stay with us and the venue was very cooperative when it came to rescheduling the event.

I have started to document our virtual conference setup so that other conferences in the same situation can learn. Pull requests are welcome!

This Sunday we decided to stress test the infrastructure by running the lightning talks. This is a good test case, as it involves the maximum number of speaker transitions, as well as more frequent Q&A sessions. From an organizer’s perspective, this is really like running a full day of the conference in 90 minutes.

I’m happy to tell you that the talks went well! You find them below. Following the links you find slides as well as recordings of the sessions.

Develop better software with usability testing by Andreas Nilsson
Running Android on the Raspberry Pi by Chris Simmonds
The Yocto Project 10 minute quick-start guide by Ron Munitz
Getting started with your smart, connected, vehicle project by Dimitris Platis
Seven years in Tibet^W^Wat Home by Kristoffer Grönlund
Linux on RISC-V by Drew Fustini
Singularity container platform by Anders Björklund

We’ve also been able to get most of the conference schedule in place and just have a few rough edges to fix before the big event. I am extremely pleased with how this has turned out. We still have a stellar speaker setup and I hope that you will all join in and watch the streams. The event is free for all and open to all and runs from March 29 – April 1.

NextCloud on Pi Adventures

I spent yesterday *finally* setting up a NextCloud instance of my own. It’s been on my todo list since I installed fiber at home and got a decent Internet connection.

I started out with Raspbian Lite and combined it with the NextCloudPi install script from ownyourbits. I then used certbot to install certificates from Let’s Encrypt before migrating the data directory using these instructions.

After that it was happy account creation time, before I realized that I could not upload files larger than ~10 kB. Very annoying.

After having duckduck-ed and browsed issues and articles for hours, I finally found that the /etc/php/7.3/fpm/php.ini file contained a reference to the data directory:

sys_temp_dir = /new/path/to/data/tmp
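For the new setting to take effect, php-fpm needs a restart (the service name assumes the PHP 7.3 packages used here):

sudo systemctl restart php7.3-fpm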

This one-liner cost me about four hours to find, so hopefully this post saves someone else that time.

Change of Plans

TL;DR: the foss-north IoT and Security Day has been cancelled, or at least indefinitely postponed, due to health reasons.

For the past three weeks (from August 11, to be exact) I have had a fever that I couldn’t really shake. At the same time my wife had pneumonia, for which she was successfully treated. Antibiotics are prescribed with care in Sweden, so I basically waited for my CRP tests to return a high enough value for my doctor to be convinced that I had an infection.

On Friday the 24th I got my first round of antibiotics. It did not help, so on the morning of the 27th I returned and got another, stronger, antibiotic. I was also told to go directly to the ER if I got any worse. I did. On Thursday morning I landed in the ER.

It turns out it was not pneumonia at all, but blood clots throughout my lungs – way too close to a proper game over for comfort. It took me four days to stop degrading, and six days before I could leave the hospital. Right now I’m on ordered rest for at least two weeks. Something I apparently need, as I’m super tired as soon as I do the smallest thing. Right now my exercise consists of walking around the block, ~400m, twice a day.

Hence, there is no way I can arrange the foss-north event planned for the end of October. I’d like to thank all the sponsors who signed up, and those with whom I postponed meetings. I would also like to thank everyone who submitted talks – the line-up would have been amazing. Finally, I’d like to thank the friendly people who helped cancel everything – it really took a heavy load off my chest.

This is a hugely frustrating situation for me as an individual – I want to work and I want to run, but I guess it is time to slow down for a while and then come back stronger. There will be another foss-north, and I will run 10 km of trail in under an hour. Just not this year.

fosdem 2019

The first weekend of February means Belgium, Brussels and fosdem. To those of you who have not been there yet: it is a huge, chaotic, crowded, but also wonderful event.

But first I was met by a huge snow storm and the ensuing chaos. :-)

I’ve been to fosdem a number of years now, and I was brave enough to take to the stage last year. In the early days, I spent most of my time in various dev rooms, either hacking myself or listening to talks. For me, fosdem has changed from this into more of a social event. I’ve spent hours talking in the K building, made sure to meet people I’ve interacted with online but never met in person, and generally hung out and enjoyed the company of a lot of smart people.

Another side mission of mine this year was to do some foss-north promotion. As you might know, I’m organizing the foss-north event, and I had the opportunity to meet with both speakers and sponsors during fosdem (the call for papers closes in ~1.5 weeks, just sayin’). I also took the opportunity to hang some flyers at the venue, so hopefully some people discovered the event that way.

As I pointed out earlier, the weather was not that great, but for a few moments on Sunday morning the sun peeked out between the clouds and you could almost feel a sense of spring in the air.

I did not attend that many talks this year, but I did really appreciate Jon “maddog” Hall’s talk Fifty years of Unix and Linux advances.

After the event I took the opportunity to visit Brussels with some friends. I finally got around to visiting Atomium. Such an amazing place! I love the mix of 1950s architecture and the contemporary exhibitions in some of the spheres. This place was way better than I expected it to be.

So fosdem delivered again. Chaos, so many meetings with new people as well as old friends and acquaintances, great content, and a generally great experience in Brussels. I’m already looking forward to next year’s event!