2018-09-12

I haven't been doing much technical work on the side since starting my new job at Even. There, though, I've been able to dive into some cool things, including taking point on reliability and some observability efforts. I've got a whole blog post I could write on error handling in Go (and I intend to, but we all know how that goes).

Instead of programming, I spent a few weeks:

  • backpacking in Emigrant Wilderness,
  • climbing in the Emeralds,
  • backpacking up Horsetail Falls and summiting Pyramid Peak, and
  • doing a weekend of multipitch climbs (10 pitches in total!).

I did order a new T480 that's pretty well stacked for the next couple of years: a 1 TB NVMe drive, 32 GB of memory, and an i5-8350U. With my Ansible config and home directory in git, it was pretty quick to set up and I didn't go through ops fatigue. Other than that, I haven't really done much of anything.

The itch to do more has been growing, though. I flapped around a bit figuring out what to do, starting on the book Programming Rust, working through Operating Systems: Three Easy Pieces, and so forth. I forget what actually got me to consider it, but I signed up for the Self-Driving Car intro nanodegree on Udacity. This course and I have a bit of history: I started taking the first iteration of the class in 2012, when I was stationed in Africa with the Army, but wasn't able to complete it. I'm pretty excited to be taking it. It's interesting to compare the programming interface from the 2012 version of the class (e.g. this post) with the Unity-based 3D simulation we're using now.

Importantly, completing it will guarantee admission to the Robotics Engineering and quadcopter programs. There's a risk I won't complete it (cf. the AI and ML nanodegrees), but I worked through the entire 7-week foundations course over a weekend, so it seems reasonable. We'll see, I guess.

Alongside this, I've got the bug to actually build a robot again. The problem is that I don't have an IMU or GPS for the RPi, and I'll have to build a custom mount for it. At a minimum, I could use an ultrasonic sensor array to build a world view, or put the LIDAR Lite on a pan/tilt mount, but localisation will still be difficult. I'll also have to figure out the camera situation.

That said, I did figure out that you can set up an Arduino as a ROS node, and I have ready-made shields for the Arduino that I could use to do this. I've been thinking about using this setup for the ultrasonic sensor array, too, but I'm not sold on it.
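For future reference, here's a minimal sketch of what that could look like, assuming the rosserial_arduino route and a generic HC-SR04-style ultrasonic sensor; the pins, topic name, and sensor constants are placeholders, not anything I've actually wired up:

    #include <ros.h>
    #include <sensor_msgs/Range.h>

    // rosserial node handle: the Arduino shows up as a ROS node once the
    // serial bridge is running (rosrun rosserial_python serial_node.py).
    ros::NodeHandle nh;

    sensor_msgs::Range range_msg;
    ros::Publisher range_pub("ultrasound", &range_msg);

    const int trigPin = 7;  // placeholder wiring
    const int echoPin = 8;

    void setup() {
      nh.initNode();
      nh.advertise(range_pub);

      pinMode(trigPin, OUTPUT);
      pinMode(echoPin, INPUT);

      range_msg.radiation_type = sensor_msgs::Range::ULTRASOUND;
      range_msg.header.frame_id = "ultrasound";
      range_msg.field_of_view = 0.26;  // ~15 degrees, rough guess
      range_msg.min_range = 0.02;      // metres
      range_msg.max_range = 4.0;
    }

    void loop() {
      // Trigger a ping and time the echo (the usual HC-SR04 dance).
      digitalWrite(trigPin, LOW);
      delayMicroseconds(2);
      digitalWrite(trigPin, HIGH);
      delayMicroseconds(10);
      digitalWrite(trigPin, LOW);

      long duration = pulseIn(echoPin, HIGH, 30000);  // microseconds
      range_msg.range = duration * 0.000343 / 2.0;    // metres, out and back
      range_msg.header.stamp = nh.now();

      range_pub.publish(&range_msg);
      nh.spinOnce();
      delay(100);
    }

One of these per sensor on a shield would get the array publishing into the rest of the ROS graph without the RPi having to bit-bang anything itself.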

Speaking of ROS, I've been learning how to build things with it. So far, it seems to me that it's "just" a core pubsub registry/router plus an API for sending messages and completing actions. I've been finding that I really like it, and it's an excuse to write more C++. Nothing too crazy so far: I was trying to adapt one of their examples to my own idea, but they use a weird class-based approach that I don't like, so I need to rewrite it and just haven't gotten around to it.
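To jog my memory later, this is roughly the plain, non-class-based shape of a bare-bones roscpp publisher; the node name, topic, and rate are arbitrary:

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    // Bare-bones publisher: register with the master, advertise a topic,
    // and publish in a fixed-rate loop.
    int main(int argc, char** argv) {
      ros::init(argc, argv, "talker");
      ros::NodeHandle nh;

      ros::Publisher pub = nh.advertise<std_msgs::String>("chatter", 10);
      ros::Rate rate(1);  // 1 Hz

      while (ros::ok()) {
        std_msgs::String msg;
        msg.data = "hello";
        pub.publish(msg);

        ros::spinOnce();
        rate.sleep();
      }
      return 0;
    }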

Using ROS feels more like "grown-up" robotics. The robots I've built in the past were all simple Arduino things without any real brains; this is a chance to actually do some AI programming and whatnot.

On the work front, it's the first time in a long while that I've had to work primarily in someone else's codebase. It presents its own set of challenges, but I'm finding it rewarding. The two weeks of on-call are not, but they're not bad enough to be a problem.

I've been dealing with a migraine and the corresponding lack of sleep, so this is a little less coherent than usual.

