Akademy 2020

I had the pleasure of speaking at Akademy 2020 this weekend. This year, Akademy was virtual, but I still got the feeling of a very interactive event: interesting questions, a green room for the speakers, and a generally nice experience. A big thank you to the organizers!

The video below should start roughly when I go on stage.

For the interested listener, you can find the slides here: https://e8johan.se/presentations/2020-09%20akademy%20v1.1.pdf.

The API wars – 16 years later

It is more than 16 years since Joel Spolsky wrote How Microsoft Lost the API War. The bonds of the Win32 API lock-in are broken, and the free web is here to take over.

The web has come a long way in the past 16 years: richer APIs, dramatic performance improvements, and a ubiquity that surpasses anything else we as a human race have experienced. Ease of deployment is king, and the easiest deployment of all is to simply browse to a web page.

Creating web apps has always been riddled with browser compatibility caveats. Various services have been around to test rendering across browsers and versions, and frameworks addressing the common scenarios have evolved to create a write-once, deploy-everywhere story.

The modern web browser has become our universal runtime environment. It is what Java and .NET aspired to, at a crazy scale. However, it is not only a runtime environment. It is the perfect client-server setup for providing everything as a service.

With the focus shifting from the browser to the actual contents, controlling your own browser engine has become less and less attractive, and last week Mozilla began what I think is the final downward spiral of the last alternative to the Google-led Chrome family of browsers.

(There are so many things I’d like to say about this. For instance, you should know about the Mozilla manifesto, as well as their funding being secured for the next three years. But I digress).

A browser engine is a hugely complex piece of software these days: an incredible number of backwards compatibility hacks combined with high-performance rendering and JavaScript execution, plus a broad range of APIs for integration with the native host platform. Add privacy and security concerns on top, and the code base starts turning into a beast.

Now, it seems that Google controls the leading browser engine and thus holds the direction of the web as we know it in its hands. Google has not only won the wars over search, content, and personal data collection. It has also won the API war.

Having a single, almost monopolistic entity controlling all these aspects of life makes me feel very uncomfortable.

I’ve started my own personal de-googling journey, and I know that there are many others doing the same: taking back ownership of their email, shifting from Google Drive and Google Apps to alternatives such as Nextcloud, but also moving from platforms such as Twitter to federated alternatives such as Mastodon.

A lot of this is probably seen as nostalgia from an earlier generation growing old. The web has moved on, and many parts of what I love about the internet are no longer in broad use. Small forums have migrated to Facebook groups, IRC has been taken over by freemium alternatives such as Slack, RSS feeds are becoming less and less common, and so on. The web is being centralized, and has been for the past decades.

However, I believe that the tide can be turned.

On the content side, early adopters are moving to federated and self-hosted services where data lock-in is impossible, and privacy concerns are becoming more common outside of the technology sector. What is needed are great alternatives that are easy to deploy. Examples that I use are Nextcloud, ttrss, fripost, and Signal.

But what can be done about the API war?

An attractive possibility, in my view, is the rise of WebAssembly. It enables the deployment of complex applications into the browser, really turning the browser into the universal run-time environment – and it does so for compiled languages, at great performance.
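
As a concrete taste of this – a minimal sketch, assuming the Emscripten toolchain and with file names of my own choosing – compiling a C program for the browser boils down to:

# compile hello.c to wasm, plus an HTML/JS loader shell to run it
emcc hello.c -O2 -o hello.html

# serve the directory and point a browser at hello.html
python3 -m http.server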

What about deploying a bare-bones wasm run-time environment, and then deploying the browser into it? That way, the complex beast that is today's browser turns into the much more manageable animal that is the wasm run-time.

What would this change? Short-term, very little. Even if the Chrome engine is compiled to wasm and executed inside an outer shell, the experience and value are still delivered through a very complex code base controlled by one of the most dominant companies in human history.

Long-term, it would mean that the ease of deployment applies not only to the web, but also to the wasm run-time. We would shift from the HTML/CSS/JS world to a wasm world.

Not only would the universal run-time become smaller and more manageable for multiple parties to maintain, it would also open the opportunity to shift to a more optimized way of running software (the hardware requirements of the modern browser aren't exactly environmentally friendly – they drive energy usage, as well as hardware obsolescence).

Now, all that is needed is time. An idea without execution is merely a dream. I might be a dreamer, but I think that this is the way forward.

Adventures in (Dyn)DNS

So, I made the silly move of relying on my hardware supplier to provide me with a dynamic DNS service. Naturally, this offer expired, and I could no longer reach my home server. Thanks to Murphy, this took place while I was away from home with no access to anything.

So – how does one find the way back home?

Luckily, I have a VPS that I log in to now and then. After a quick duck-ing (duckduckgo is my friend), I found the last command, which was the first piece of the puzzle. Now I had a list of potential IPs.
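
In practice, it boils down to something like this (a sketch – the exact flags and field positions vary between last implementations):

# show recent logins with numeric source addresses
last -i

# or reduce them to a frequency-sorted list of candidate IPs
last -i | awk '{print $3}' | sort | uniq -c | sort -rn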

Did I mention that I travel a lot?

There were quite a few IPs there. Pre-COVID-19, it would have been worse. Still, I found a few likely candidates based on frequency of use. Then I found this handy list of IP blocks in Sweden. Now I could tell my mobile data provider (Telenor) from my fibre data provider (Bahnhof).

Quickly adding my home domain and the suspected IP to /etc/hosts on my laptop allowed me to confirm my suspicion. Once in, I could set up duckdns for dynamic DNS, change the CNAME record of my domain, and now everything is operational again.
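
The /etc/hosts override and the duckdns update are both one-liners (the address, domain, and token below are placeholders, not my real ones):

# /etc/hosts – pin the suspected address to the home domain
203.0.113.17    home.example.com

# once verified, a duckdns update is a single HTTP call;
# leaving ip= empty makes duckdns use the address the request came from
curl "https://www.duckdns.org/update?domains=home-example&token=YOUR-TOKEN&ip="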

I learned two things from this:

  1. Don’t rely on time-limited offers from hardware vendors for even the most trivial service. They are all trying to convert you into an as-a-Service deal and make you pay an annual fee (i.e., read the fine print).
  2. I was really happy to have used a CNAME record to point a subdomain of mine at my home server, as it meant I could switch from one dynamic DNS service to another by simply updating the record. (This was pure luck – no foresight from my side was involved.)

Also, while on the topic of experiences: if you have the possibility, you should use Bahnhof as your ISP. They have a track record of opposing surveillance laws and work to protect the privacy of their customers. Also – I’ve had zero issues with them since switching some 15 years ago, so I can recommend them from that perspective as well ;-)

It finally arrived

After waiting a bit over a month, followed by an agonizing week when the new gizmo sat at my DHL pickup point while I was some two hours by car away at our summer house, it finally arrived.

What gizmo? A Pinebook Pro!

The Pinebook comes in many layers. Like, properly many. I guess this means that it is safe during transport – at least mine arrived without any bruising despite the long journey from Hong Kong to Alingsås.

Powering the system on, it took quite a long time to reach the Manjaro logo, but once up and running, things move along at a decent pace.

Initial impressions are positive. I had to crank up the backlight a bit, but then I am sitting outdoors (it is overcast). Right now I’m installing the initial set of software updates (some 400+ MB to download) while I type this. I also set the keyboard layout to Swedish. I have the ISO keyboard model, so all the keys are there, and I don’t mind that the keycaps say something else than what they type.

On the topic of the keyboard: I was warned about the keyboard feel, and I was also told that the Pro model is better than the original Pinebook (which I’ve only used for ~5 s at FOSDEM). To be honest, the keyboard is decent, but not on par with my Dell XPS 13, nor with my Sculpt Ergonomic keyboard.

I still have the night-time hacking test to perform – will my wife accept this keyboard clicking in the early morning hours? She preferred the MacBook Pro over the XPS 13, so let’s see how this fares ;-)

I also have to see if I can adapt to Manjaro Linux, or if I’ll move to Debian, which I run on all my other machines. It has been years since I tried an alternative distro, so I’ll give it at least a few days to see how much I miss apt-get – at least it runs KDE Plasma ;-)

foss-north kdenlive workflow

As some of you might already have noticed, we’ve complemented foss-north with a new pod / vod / vlog – I’m not sure what to call it. Basically, it is a video-based podcast (making it available as an audio-only podcast is high on the todo list). Our main focus right now is a series on licenses and copyright, but there is more to come.

As a part of this, I’ve started editing videos in kdenlive on a weekly basis, and I’m very happy with it so far.

In this post, I want to share my workflow. It is probably far from ideal, but it does the job for me.

I usually start with a set of presentation slides that we’ve used to direct the discussions. These are exported as a pdf, which is then converted to 1920×1080 pngs for consumption in kdenlive.

I do this in two steps using ImageMagick, as the results seem nicer when first rendering too-large images and then scaling them down.

# step 1: render each pdf page at 300 dpi into oversized, numbered pngs
convert -verbose -density 300 ../open\ projects-1.pdf -quality 100 -sharpen 0x1.0 11.png

# step 2: scale the rendered pages down to fit 1920x1080
mogrify -resize 1920x1080 *.png

The session is recorded using OBS from our Jitsi instance, but we also encourage each participant to record their audio separately, as it makes it easier to fix things afterwards. (foss-north now self-hosts a Jitsi instance – check out https://github.com/e8johan/virtual-conf-resources to learn how to set up virtual conferences.)

You would be surprised at how many times we’ve run into issues with one or more of the sound recordings. We’ve had:

  • Too low volume (inaudible)
  • Too high gain (noisy)
  • Local echo of the rest of the participants in one recording (no use of headphones)
  • No recording (forgot to press record)

I’m sure the list will grow longer as we record more episodes :-)

Before I start cutting the recording, I use one of my favorite features in kdenlive: I set the Jitsi recording as the audio reference.

Then for each audio track, I tell kdenlive to align it to the reference. This will position it correctly in relation to the Jitsi recording, meaning that I can fade in and out of individual recordings without having to worry about any time shifts.

Finally, I select all the audio recordings and group them. This means that all editing I do (cuts, movements, etc) is applied to all channels.

Now it is just a matter of listening for trouble (you can spot awkward silences in the visualization of the audio tracks), pressing i to mark the beginning of a section, pressing o to mark the end, and then pressing shift+X to cut it out.

In general, I try to edit as little as possible, tightening some parts by removing silence, and sometimes removing failed takes when we’ve decided to start over on a section.

Finally, I add the pngs as a video stream together with our pre-recorded intro sequence and a YouTube-friendly end screen, click render, and go to bed :-)

foss-north: Enablement Talks

During foss-north 2020 we had a group of talks related to using free and open source in various settings. I call them enablement talks. Someone with a more salesy mind might have called them success stories.

This year we had three such talks: one about SVT’s (the Swedish public TV broadcaster) video streaming platform by Gustav Grusell and Olof Lindman, one from arbetsförmedlingen (the Swedish public employment service) by Johan Linåker and Jonas Södergren, and one about Screenly OSE, a digital signage solution, by Viktor Petersson.

We’ve also decided to experiment with a series of shorter videos, and we started by explaining licenses.

Photoframe Hack

Sometimes you just want to get something done. Something for yourself.

You do not intend it to be reused, or even pretty.

You build a tool.

My tool was a photoframe with some basic overlays. I wanted the family calendar, some weather information (current temperature + forecast), time, and the next bus heading for the train station.

To make this acceptable in a home environment, I built it as a photoframe. You can find the sources in the hassframe-ui repository on my GitHub.

A hidden feature is that if you tap the screen, a home automation control panel slides up. That way you can control all the lights, as well as heat in the garage and an AC in the bedroom. Very convenient.

All this is built using QML. Three somewhat useful models are available:

  • IcalModel, which takes a URL and parses whatever it gets back as ICAL data. It is a very naive parser and does not care about things such as time zones and other details.
  • YrWeatherModel, which uses yr.no’s public APIs to pull a weather forecast for a given location.
  • BusStopModel, which uses the APIs from resrobot to look for departures to the train station from two bus stops close to my home and then merges the results into a single model.

I also have a bunch of REST calls to my local home assistant server. Most of these reside in the HassButton class, but I also get the current temperature from there. These are hardcoded for my local network, so they need refactoring to be usable outside of my LAN.

All of these interfaces require API keys of one kind or another – be it a proper key, or a secret URL. These are pulled from environment variables in main.cpp and then exposed to QML. That way, you can reuse the components without having to share your secrets.
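
Launching the frame thus looks something like this (the variable names here are illustrative, not the actual ones – check main.cpp for those):

# secrets live in the environment, not in the code
export ICAL_URL="https://calendar.example.com/family.ics"
export RESROBOT_API_KEY="your-key-here"
export HASS_URL="http://192.168.1.50:8123"
./hassframe-ui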

All in all, the code is quite hacky – especially main.qml. I refactor out parts from there now and then, but the photoframe works, so it’s not something I prioritize.

Currently it runs on a Raspberry Pi on top of Raspbian. I want to build an optimized Yocto image making it less hacky and more pre-packaged. Perhaps there will be a rainy day this summer and I’ll get around to it. Burkhard has prepared the instructions needed over at Embedded Use.

The Cost of no Architecture

Like many others, I enjoy various reverse engineering and tear-down stories – things like iFixit tear-downs and Ken Shirriff’s blog – so I started following this tweet thread by foone.

This continues with another tweet sequence about getting software running on the remote control. Having enjoyed these tweets, I started thinking.

The Harmony remotes are quite expensive in my mind. I can’t find any exact numbers for how many devices have been sold, but I found this 2018 Q4 earnings report. Looking at the net sales, I guess the remotes fall under either “Tablets & Other Accessories” or “Smart Home”. These represent net sales of ~107 and ~89 MUSD, respectively, over twelve months. Let’s pick the lower number and just look at magnitudes. The Harmony 900 seems to have retailed for ~350 USD back when it was new, so if all of the Smart Home sales were Harmonies, we would be looking at roughly 250k units over a year (89 MUSD / 350 USD ≈ 250,000 units). My guess is that the real magnitude is 10k – 100k units annually – and since the Harmony 900 is from 2013, I assume it sold closer to the lower number, if not below it. The market was new back then, and so on.

Then we look at the tweets again. What have we got? Let’s put aside the security issues, the unencrypted communications, and other clear mistakes, and just look at how the device is built.

Flash to drive the UI, two web servers on-board, Lua, QNX, and whatnot. A 233 MHz CPU and ~64 MB of flash – for a remote control. From an engineering perspective, this sounds like a fun system to work on – from an architecture perspective, it looks like a ball of mud.

Back in 2013, QNX might have been a good choice compared to Linux. Today, with Yocto and similar tools for developing embedded Linux systems, it feels like an odd choice to add a license cost to such a device. But no biggie – back in the day, this was not an unreasonable choice (and still isn’t for certain applications).

Then the Flash stuff. There were alternatives back in 2013, but sure, there were plenty of Flash developers at hand, and things like Qt’s QML were probably still a bit clunky (I can’t recall its state back then – it required OpenGL ES, which I guess was a big ask at the time).

But the mix of techniques and tools. The on-board web servers. The complexity of a small system and the costs it brings in maintenance and testability. If this is the foundation for the Harmony remotes – a platform that has been used for the better part of the past decade – I wonder if the added engineering cost of architecting the platform to be more optimized early on would not have paid off in lower maintenance costs, as well as lower hardware costs.

I know how it is when you’re in a project. The deadline is there in big writing on one of the walls, and you can get something working by stringing together what you have with duct tape and glue. The question I’m asking myself is more along the lines of: how do we run embedded systems engineering projects? Where did we go wrong? Why don’t we prioritize thinking and refactoring over just getting the thing out of the door?

The answer is time to market and such, but after more than a decade of building on a ball of mud, the economics start adding up in favour of the better-engineered product. In favour of continuous improvement. In favour of spending time thinking about how to improve the system as a whole.

The Internet Talks

I’ve previously written about the licensing and embedded talks of foss-north 2020. This time around, I’d like to share the recordings of the Internet-related talks.

The Internet is a very broad topic, so it is hard to classify talks as not being Internet-related these days, but the following three talks stand out.

The first speaker is an old-time foss-north speaker, Daniel Stenberg. He has spoken at foss-north several times, but never about his main claim to fame: curl. This time he put that right by delivering a talk about how to curl better.

Maintaining privacy on the Internet is a big topic, and a field where it is hard to deliver black-or-white answers. Elisabet Lobo-Vesga presents DPella, a query language for differential privacy. Using this technology, it is possible to trade off how private the user is against how detailed the returned data is.

The Internet talks end with Patrik Fältström, one of the people who have been around the Swedish Internet scene the longest. He talks about Keeping Time – a journey into leap seconds, atomic clocks, the speed of light, and other hassles of keeping clocks in sync over a large network.

The talks are already available on conf.tube, and the presentation material can be found by following the links to each speaker. For those of you who prefer YouTube, the talks will be made available shortly on the foss-north channel. Subscribe to get notified when they are.

The Embedded Talks

The foss-north conference strives to have an assortment of talks. The point is that visitors should see something unexpected and that the conference should attract all types of visitors, so that we as a community can meet across various industries and problem spaces.

This time I’ve selected three talks about embedded systems from foss-north 2020. The talks touch on building embedded systems around Linux. If your feed reader does not show you the embedded videos, make sure to visit the actual page or go to our conf.tube channel to see all the content.

First out was Ron Munitz’s talk on understanding and building minimal Linux systems. This talk proved to be a real deep dive into the Linux kernel – including attaching a debugger to the kernel itself.

The next embedded speaker on the program was Chris Simmonds. He discussed whether Yocto or Debian is the best fit for your embedded Linux project. This is an interesting topic – how much is customization worth, compared to other aspects such as build time?

The embedded set of talks ended with Drew Fustini talking about running Linux on RISC-V. This talk dives deep into the hardware side of embedded systems, but also into Linux. By being able to run Linux on RISC-V, which is open hardware, we come very close to a completely open ecosystem.

The three talks are already available on conf.tube, and the presentation material can be found by following the links to each speaker. For those of you who prefer YouTube, the talks will be made available shortly on the foss-north channel. Subscribe to get notified when they are.