I listen to Planet Money and The Indicator pretty regularly. One theme that's come up a few times is the productivity mystery, which was the topic of yesterday's episode of The Indicator. The mystery is this: as technology advances, we should get more productive, yet over the last few decades we haven't observed that expected gain. The formula is roughly: stuff produced / time spent working. As mentioned in the episode, I believe there is no mystery.
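The ratio above can be sketched with made-up numbers (the figures here are purely illustrative, not from the episode):

```python
# Labor productivity: output divided by hours worked.
# All numbers below are hypothetical, just to illustrate the ratio.

def productivity(output_produced: float, hours_worked: float) -> float:
    """Units of stuff produced per hour of work."""
    return output_produced / hours_worked

# If output and hours both double, measured productivity is unchanged,
# no matter how much better the underlying technology got.
before = productivity(1000, 100)  # units/hour
after = productivity(2000, 200)   # units/hour
print(before, after)
```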
It's 4am on a Saturday. You jolt awake to the blaring of an air raid siren. You make a mental note to change your PagerDuty ringtone before logging in to see that the data pipeline is fucked. While working to find the root cause of the lost messages and late deliveries, you consider selling everything you have and starting a goat farm somewhere remote. Hours later, after much fruitless shuffling, the consumers have all caught up and the producers are no longer dropping messages.
I recently received my Novena desktop edition, which I ordered during the Crowd Supply campaign. I've been eagerly awaiting it since the beginning of February. The box arrived in good condition with no obvious signs of being dropped or damaged. Opening it, I was greeted immediately with the schematics booklet, which I proceeded to show off to my coworkers. I love that the Novena logo is everywhere on the hardware, too; it looks great.
I came across an interesting problem recently, made more complicated by the lack of good documentation and the inability to narrow search results because the relevant terms are so broad. It was made worse by the apparent lack of understanding of how these programs interact. The problem had to do with the way DNS resolution is handled on Linux systems: /etc/resolv.conf. This file lists the nameservers glibc consults when a program calls getaddrinfo in socket programming.
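You can poke at the same resolver path from Python, whose `socket.getaddrinfo` is a thin wrapper over the C library call (the hostname here is just an example):

```python
import socket

# socket.getaddrinfo calls into the C library's getaddrinfo, which on
# glibc systems consults /etc/nsswitch.conf and, for the "dns" source,
# the nameservers listed in /etc/resolv.conf.
for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
        "localhost", 80, proto=socket.IPPROTO_TCP):
    print(family, sockaddr)
```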
For a while now, I've wanted better insight into network behavior on my home network. While I've been a long-time advocate of OpenWRT as an alternative to proprietary embedded management systems, it's frustrating to work with such limited hardware. Things like logging and packet capture become cumbersome because you have to forward them to other machines to consume, store, or analyze. So this led me to start looking at other options for a home router.
As an experiment, I wanted to see if I could deterministically build the Linux kernel twice in a row. My goal was to have two kernels where the resulting bzImage files hash to the same SHA-256 digest. I saw that a patchset had been merged a while back, but the script provided didn't work out of the box for me. (And why should it!? It was written in 2011!) Still, it got me going in the right direction, which is all I needed to get it working.
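Verifying the result comes down to a byte-for-byte hash comparison of the two build outputs. A minimal sketch (the paths are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths to the two build outputs:
# a = sha256_of("build1/arch/x86/boot/bzImage")
# b = sha256_of("build2/arch/x86/boot/bzImage")
# print("reproducible!" if a == b else "builds differ")
```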
I attended the second cryptography meetup at Cloudflare last Wednesday and was once again impressed by the turnout. It was only the second event, and they've already had prominent members of the crypto community speaking, including Adam Langley from Google, Trevor Perrin, who worked on TextSecure, and Brian Warner from Mozilla. The talks were all fantastic, but I found Trevor's talk about application-level encryption and the challenges of group encryption the most interesting.
netstat is one of my favorite Linux utilities and is always one of the first tools I reach for when starting to debug any network-related issue. One of my favorite options netstat provides is the -p flag, which shows which programs are talking on which sockets. On to the real problem: I was investigating an issue with an OpenWrt router, which was listening on a port I wasn't expecting.
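Under the hood, `netstat -p` on Linux parses `/proc/net/tcp` and matches each socket's inode against the symlinks in `/proc/<pid>/fd`. The address column in that file is a little-endian hex dword, which is the non-obvious part; a small sketch of decoding it (the sample value is illustrative):

```python
def decode_proc_net_tcp_addr(field: str) -> str:
    """Decode an ADDR:PORT field from /proc/net/tcp (IPv4 only).

    The address is a little-endian hex dword and the port is plain
    big-endian hex, e.g. "0100007F:1F90" -> "127.0.0.1:8080".
    """
    addr_hex, port_hex = field.split(":")
    octets = bytes.fromhex(addr_hex)[::-1]  # reverse the byte order
    ip = ".".join(str(b) for b in octets)
    return f"{ip}:{int(port_hex, 16)}"

print(decode_proc_net_tcp_addr("0100007F:1F90"))  # 127.0.0.1:8080
```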
I spent a good amount of time cleaning up git-fat this weekend. I finally got around to finishing the backend interface that enables multiple backend implementations. Now it's much nicer to add another transport medium than it was when I first added HTTP as a backend. Additionally, having an interface made testing quite a bit nicer, since I can now use the local-copy backend instead of configuring rsync on the host I'm testing on.
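This isn't git-fat's actual interface, but the shape of a pluggable backend is roughly this: a tiny contract each transport implements, plus a local-copy implementation that makes tests trivial (all names here are hypothetical):

```python
import os
import shutil
from abc import ABC, abstractmethod

class Backend(ABC):
    """Hypothetical transport interface: each backend knows how to
    move an object, addressed by its content hash, to and from a store."""

    @abstractmethod
    def push(self, digest: str, local_path: str) -> None: ...

    @abstractmethod
    def pull(self, digest: str, local_path: str) -> None: ...

class CopyBackend(Backend):
    """Local-copy backend: handy for tests, no rsync or HTTP needed."""

    def __init__(self, store_dir: str) -> None:
        self.store_dir = store_dir
        os.makedirs(store_dir, exist_ok=True)

    def push(self, digest: str, local_path: str) -> None:
        shutil.copy(local_path, os.path.join(self.store_dir, digest))

    def pull(self, digest: str, local_path: str) -> None:
        shutil.copy(os.path.join(self.store_dir, digest), local_path)
```

Swapping rsync or HTTP for a plain `shutil.copy` in tests is exactly the kind of win an interface like this buys you.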