The Simple Joy Of Learning To Play Piano

Back in January, I started to teach myself how to play piano. I'd played before when I was a kid — like many people do — but I was never very good and it didn't stick once I'd stopped taking lessons. I'd had a keyboard for years, stowed away at my parents' place, but I'd rarely ever used it. I had occasionally tried to pick up piano over the years, but just like when I was a kid: it didn't stick.

My piano setup

This time with the keyboard visible, and in the center of the room, I hoped things would be different. It's been almost four months now, and I am happy to report that I am still practicing and importantly: I'm getting better!

Luckily, I have some music theory under my belt, and I've played guitar, both solo and in bands, for years, so I know my way around musically. That said, neither of those skills prepares you to actually play the piano. It helps to know where middle C is and how to make a chord, but that knowledge doesn't help you contort your hands to actually play the notes and chords you want.

That said, I do have two tips that I'd like to share.

Set Simple Goals, Then Iterate

Back in January, I tried to start by simply practicing scales. You know? The thing everyone hates? Well, it turns out they're super useful.

I started with a simple 5-note scale. I played it over and over again: up the scale and back down as smoothly as possible, one hand at a time. Then I would move it up a key and repeat the process. Not all keys are beginner-friendly, so I would often skip complicated keys and stick to the easy ones. Once I had that down, I started adding in a few little flairs: keeping the same time and tempo but working in a few extra notes, or starting at the root and working up but ending on the 5th instead of the root on the way back down. Little changes.

When something is difficult it can be pretty demoralizing if you can't see yourself making progress. That's where small goals come in. Each time I'd sit down to practice (and after a short warm up) I would set a small goal for myself. Sometimes the goal would be so minor that it would hardly seem worthwhile, but I always tried to explicitly set a tangible goal.

It's a slow process, but I am getting measurably better and I can see the results each and every time I sit down to play.

Don't Put it Away

I learned how to play guitar in high school and while my teacher gave me lots of advice, one thing he said always stuck with me. It was advice about how to make sure you keep practicing.

Never put your guitar away, and keep it within reach. That way you can play it whenever you have even a little bit of downtime. Play it while your computer is loading, while you're waiting for a text message, or even while you're watching a video. You don't need to play a song. Even just strumming a chord, or picking a melody can help.

Those little moments of practice add up.

To this day my guitar is within arm's reach of my desk, and I play it when I need a break, when I need to think, or even just when I'm bored.

I feel the same idea has helped me learn piano. The keyboard is in the middle of my living room. I have to walk past it to get water, and whenever I do I think about playing. Oftentimes I will play before or after eating, even just for a few minutes.

The little moments really do add up.

Using Pushover For Super Simple Sysadmin Alerts

For those who don't know, Pushover is a really great tool that allows users to easily set up and send push notifications to a smartphone. The setup is super simple, and all you need is their app and a little scripting know-how.

I've used Pushover for years to help me monitor my apps and services, and over that time it has become more and more integral to how I keep track of the various apps I run. To me, Pushover has gone from a nice-to-have integration to an absolute necessity.

I use Pushover to alert me of all kinds of things. Just to give you an idea, here are a few examples of some of the things I currently use Pushover for:

  • Potential queue backups in Pine.blog
  • Reporting daily user signups for Nine9s
  • Alerts when critical background jobs fail
  • Alerts when nightly builds fail to deploy
  • Alerts when a manually-run, long-running job completes

Because Pushover is so easy to integrate with basically any codebase (and even one-off shell scripts) I use it all the time for everything from simple alerts to complex and critical reports.
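
Just to show how little code that integration takes, here's a rough sketch of what a Pushover helper might look like in Python using the requests library. The notify name, token, and user values below are placeholders of mine for illustration, not code from any of my actual projects.

import requests

# Placeholders: fill in your own application token and user key.
PUSHOVER_URL = "https://api.pushover.net/1/messages.json"
PUSHOVER_TOKEN = "xxx"
PUSHOVER_USER = "xxxx"

def notify(title, message):
    """Send a push notification via Pushover; raise if the request fails."""
    response = requests.post(PUSHOVER_URL, data={
        "token": PUSHOVER_TOKEN,
        "user": PUSHOVER_USER,
        "title": title,
        "message": message,
    })
    response.raise_for_status()
    return response.json()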

One particular use I'd like to call out from that list above is the nightly build alerts. Adventurer's Codex has a test environment that we use to sanity check our code before a full deploy. We used to have the test environment redeploy after every single merged pull request, but that system proved incredibly fickle and error prone, so we switched to a simple nightly build. The issue with any automatic build system is that unless you have a detailed live dashboard of deployment statuses (which we do not) it's hard to know if/when a given build has finished deploying or if it encountered an error. That's where Pushover comes in.

Nightly Build and Deploy Script

This script runs as a cron job every night. It attempts to deploy the latest version of the application and if that fails it sends a notification to Pushover.

PUSHOVER_USER="xxxx"
PUSHOVER_KEY="xxx"
PUSHOVER_URL="https://api.pushover.net/1/messages.json"

TITLE="AC Nightly: Build Failed to Deploy"
MESSAGE="The latest build on Nightly has failed."

log() {
  echo "[$(date)] $@";
}

alert_admins() {
  curl -X POST $PUSHOVER_URL \
    -H "Content-Type: application/json" \
    -d "{\"title\": \"$TITLE\", \"message\": \"$MESSAGE\", \
        \"user\": \"$PUSHOVER_USER\", \"token\": \"$PUSHOVER_KEY\"}"
}

./docker-bootstrap.sh upgrade --env nightly
STATUS=$?

if [ $STATUS -eq 0 ]; then
  log "🚀 Build completed successfully!"
else
  log "Uh oh. There was an issue. Alert the admins!"
  alert_admins
fi

My nightly build script for Adventurer's Codex includes a section, after the deployment has completed, that checks the exit status of the deploy command, and if it is not 0 (i.e. it failed), it sends me a notification. Bam! Now, every morning that I don't get a notification, I know things are working as intended. If I ever wake up to a notification, then I know I have work to do.

What Happens in the Background is Ignored in the Background

Crucially, I use Pushover to alert me about problems with background tasks. Modern web apps include lots of always-running or periodic asynchronous behavior, and because failures there don't directly result in user feedback or a big, loud error page, mistakes, bottlenecks, and bugs often go unnoticed or unaccounted for.

Pushover solves those issues. It's trivial to write code that checks for bad behavior or that catches difficult-to-reach-but-critical bugs and just sends off a notification.
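
As a sketch of the idea (using the hypothetical notify() helper from above and a stand-in task name, not code from any of my actual apps), wrapping a background job looks something like this:

def rebuild_search_index():
    # Stand-in for any long-running or periodic background task.
    ...

def run_nightly_index_rebuild():
    try:
        rebuild_search_index()
    except Exception as error:
        # notify() is the Pushover helper sketched earlier.
        notify("Nightly index rebuild failed", f"The job raised: {error!r}")
        raise  # re-raise so the scheduler still records the failure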

I used to use email for this sort of thing, and while email is still a good solution, the setup is actually more involved. Most VPSs aren't allowed to send emails directly anymore (due to concerns over spam), and configuring an email provider is at least as much work as using Pushover, if not more. In some cases email is more flexible and might be better for larger teams, but I almost always reach for Pushover these days instead of email.

It's just that good.

Crossing The Wording Threshold

A few years ago, I wrote a post about the number of words this blog contained. Well, that was then; this is now, and those counts have changed pretty drastically in the intervening time.

Using the same method as before, I can now report that this blog contains just over 100,000 words spread across 270 posts! A pretty significant achievement.

$ find archive/ -name "*.md"|xargs -I {} cat {} | wc -w
  101042

I also remade the previous graph for comparison.

A histogram of the binned words per post

It seems like I've written more 300-400-word posts than before, which explains the smoothing out in the middle section of the chart.

For those keeping score at home, the new longest post is this one from 2017.

A Terrifying Realization

It occurred to me that my last post about this topic was back in 2017—5 years ago!—and that means I'm very quickly approaching my 10 year blogging anniversary.1 It's hard to believe it's been so long, but I guess it has.

The first post on this site went up back in December of 2012, which means the anniversary is just months away. I'll save the reminiscing for the retrospective post, but for now I'll just admit that it's a lovely coincidence that I reached 100,000 words on my 10 year anniversary.

1 If you count my first two blogs (which are now gone from the web) it has already been ten years, but let's not do that.

That Time I Lost Control Of A Server

Good security hygiene is essential for software developers. The thing is: we tell ourselves that, but rarely do we actually experience the effects of bad security hygiene. While prevention is the whole point of good hygiene, it's helpful to walk through the real-world consequences of bad hygiene and not just talk about the theoretical side of things.

So let's talk about the time one of my servers was hijacked.

Disclaimer
Right off the bat I need to note that the server in this story was not connected to any product or service that my company, SkyRocket Software, runs. This was a personal toy server that had no connection with anything I make or sell.

The Story

I've run servers for lots of projects over the years. I manage servers for Pine.blog, Nine9.cloud, d20.photos, and Adventurer's Codex. I also run several servers for client projects, this blog, and for toy projects. One of those toy projects is a server for an annual holiday Minecraft extravaganza with some friends of mine.

Once long ago, I was going through the process of setting up a new Minecraft server (this was before Linode provided easy-to-deploy Minecraft servers). I wanted to set up the server before going out that evening, and so I was in a bit of a rush. Being in a rush, I didn't bother to set up the server according to Linode's excellent guide on Securing Your Server. Instead, I just set a short, trivially guessable root password, logged in as root, and got to installing Minecraft.

About halfway through the process I needed to leave, so I disconnected from the server and went to dinner.

When I returned home that night, I found an email in my inbox from Linode telling me that my server had been forcibly shut down because they'd determined that it was being used to send spam emails and help DDOS another site.

I had been gone only about three hours, but it had taken less than one hour from when my server had been instantiated to when it was compromised. It had happened shortly after I'd logged out.

Immediately I felt terrible for falling victim to such a simple, brute-force attack, and the experience is one of those that makes me appreciate the often painful security hoops we need to jump through as developers.

Lessons Learned

Having fallen prey to what was likely a simple brute-force attack on my root account, I promised myself that I would never again fall prey to such an attack. Now, whenever I set up a server I always set aside plenty of time to do so, use long and complex passwords, disable root logins over SSH, and follow that Linode guide I mentioned earlier. Other experiences, like those with spambots, have made me more cautious and careful about the functionality my sites expose and how they expose it (Pine.blog doesn't offer free blogging & image uploads for a reason).

Keychain can generate long passwords easily

Keychain can generate long passwords easily, though I wish it could make even longer ones.

All in all, I count myself unlucky that I had to learn my lesson this way, but I count myself very lucky that I learned it by losing control of an unimportant and trivially replaceable server.

The internet is a very hostile place to those who aren't prepared for it. This is true on a societal level, and on a technical one. If you ever need a reminder of just how dangerous it is, try having Fail2Ban email you whenever it blocks a hostile IP address or watch your access logs for bots trying to break into your site (usually using maliciously crafted WordPress/Drupal URLs). Things like that happen all day, every day; we just don't usually see them.

Anyway, that's the story. Hopefully everyone reading this takes it to heart so that this story remains but a cautionary tale and nothing more.

Hacks Can Be Good Code Too

Writing code is, like everything in life, all about making tradeoffs. Code can be quick to write, but at the same time unreadable; it can be fast, but hard to maintain; and it can be flexible, but overly complex. Each of these factors is worth considering when writing Good Code. Complicating this is the fact that what constitutes Good Code in one situation may not be ideal in another.

Good Code is not universally so.

It is incredibly difficult to explain why one set of tradeoffs is worth pursuing in one case but not in another, and oftentimes reasonable people will disagree on the value of certain tradeoffs over others. Perhaps a snippet of hacky string parsing is good in one place, but not in another. Oftentimes, the most significant cost of solving a problem "The Right Way" is time.

When deciding whether to do something The Right Way or to cheat and simply hack something together, I often try to consider the exposure the given code will have. Consider these questions:

  • Do other systems touch this code?
  • How many developers will need to interact with it over time?
  • How much work would be involved in building out the correct approach?
  • How much work would be involved in building out the bad approach?
  • How valuable is the intended feature?
  • How much additional maintenance does the bad solution require?

Each of these answers helps me decide what kind of code I should write. These questions neglect multiple other factors (e.g. performance, readability), but they are a good starting point.

In a recent example, I needed to modify the blog engine that powers this site as well as a few others. I wanted a simple feature that would count the number of articles on the site as well as the total number of words in every blog post, and display those values on the home page. As I've said before, the blog engine for this site is very old and has been rewritten several times. It's well beyond needing a massive rewrite, but that's not something I really want to do right now.

The blog engine is written in Python, provides a command-line interface, and uses Git Hooks both client and server-side to build and deploy itself.

I originally considered writing this feature in Python: counting the number of words in each article, adding a new context variable to the template rendering process, and then rendering the pages as normal. But that would require touching substantial pieces of the codebase (some of which I no longer understand). It would probably take me all evening to dive into the code, understand it, make the change, and test it. To be honest, this feature was not worth wasting an evening on. So I decided to just hack something.

As I said, I use Git to deploy the site. So I just added a new line to the HTML template:


<p>
    This site contains {+ARTICLE_COUNT+}
    different writings and {+WORD_COUNT+}
    total words. That's about {+PAGE_COUNT+}
    pages!
</p>

And then I added a new step to the pre-commit hook that runs after the template rendering process, but before the changes are committed and the site is deployed.


WPP=320
WORDS_N="$(find archive/ -name "*.md"|xargs -I {} cat {} | wc -w)"
WORDS=`printf "%'d" $WORDS_N`
ARTICLES=`printf "%'d" $(find archive/ -name "*.md" | wc -l)`
PAGES="$(( WORDS_N / WPP ))"

TMP_HOME=`mktemp`
cp ./index.html $TMP_HOME
cat $TMP_HOME |
    sed "s/{+ARTICLE_COUNT+}/$ARTICLES/" |
    sed "s/{+PAGE_COUNT+}/$PAGES/" |
    sed "s/{+WORD_COUNT+}/$WORDS/" > ./index.html

Let's check in and see how this hack fit my criteria above:

  • Do other systems touch this code? No
  • # of Developers? 1
  • Time for Correct Approach? 2-3 hours
  • Time for Bad Approach? 10 minutes
  • How Valuable is the Feature? Very
  • Additional Maintenance Burden? Not much

Is this elegant? Absolutely not. Did it take basically zero time? Yes. Have I thought about it since? Not until writing this post. Would I have done this on a team project or a commercial product? Absolutely not. It's a feature for my personal blog engine and a feature that is specific to one particular low-value site that I run.

In this case, a hack is an example of Good Code. That's because Good Code is a relative construct.

At Vs. On: A Story Of Semantic Data Modeling

As most good software developers eventually learn: time is hard. Time-based bugs are incredibly common and are sometimes difficult to solve. There are a ton of misconceptions about how time and dates work in the real world and the simple solution is rarely correct for any significant length of time. Performing calculations based on times and dates can get messy, but so can simply storing them. There's a lot to be wary of when building out a data model with timestamps involved, and as always, a lack of consistent naming can cause a ton of problems.

Over the years I've come to use a specific terminology for dates and time in my data models. In general, I prefer not to use data types in variable names, and I prefer my code to read as passable English where possible. This means I tend to avoid names like date_created or published_ts, which contain the data type in the name, and I avoid names like created, which give me absolutely no indication of the type or what the field is used for.

Instead, I prefer to take cues from the English language. For timestamps or any data type that represents a precise moment in time, I use the suffix at. For dates or times that represent more abstract things like wall time or calendar dates, I use the suffix on.

As an example let's say I have the following data model:

class BlogPost:

    # ... other fields ...

    created_at = TimestampField()
    updated_at = TimestampField()

    posted_on = DateField()

This convention tells me that I should expect the posted_on field to contain a date or time but not both, and that it represents an abstract notion of time, whereas both the created_at and updated_at fields represent a specific moment.

I arrived at this convention through asking myself questions about the data in plain English. Consider the following questions:

  1. Q: When was this post published?
    A: It was published on the 25th of January.
  2. Q: When was the post record created?
    A: It was created at 12:00 PM on January 24th.

Disclaimer
This convention doesn't always work because usually people would use at to describe any time (e.g. "I arrived at noon"). But once I settled on the convention, it wasn't confusing. It just doesn't always read nicely.

Knowing when to use a timestamp vs. a calendar date or wall-clock time is another issue (and a complicated one), but at least with this convention, I know which one I'm dealing with.

Now that I think about it, it might make sense to name timestamps with an aton suffix since question #2 technically uses both at and on.

Generating Deterministic, Procedural Artwork With Pdraw

I've been messing with procedural artwork lately, and I've decided to discuss the fruits of my labor. Behold pdraw: a script that generates cool line art from arbitrary text!

Pdraw.py

About two weeks ago, Numberphile released a new video about visualizing the digits of Pi using procedural artwork. I was taken by the idea and decided that I would, for fun, simply replicate their technique in Python and use it to plot random sections of Pi. It was a sort of goof-off project to occupy an evening. At the end of that evening, I had finished my script (through several iterations) and plotted various sections of Pi. I could now plot any stream of numbers. Then it hit me: all digital data can be represented as a base 10 string of numbers. I could draw anything.

The next evening, I set about making it so that my script could accept various command-line configurations and convert any text into its base 10 equivalent (with a small tweak for artistic reasons). Once I had that, it was time to start plotting anything I could think of. I've tried drawing binaries, zip files, websites, and a lot more.
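
For the curious, here's a minimal sketch of the general digit-walk technique as I understand it from the video. To be clear, this is not pdraw's actual code: the digit mapping, step size, and function names are just illustrative, and the plotting itself is left out.

import math

def text_to_digits(text):
    """Convert arbitrary text to a base-10 digit stream via its byte values."""
    return [int(d) for byte in text.encode("utf-8") for d in str(byte)]

def walk(digits, step=1.0):
    """Yield (x, y) points, treating each digit as one of ten headings."""
    x = y = 0.0
    yield (x, y)
    for digit in digits:
        angle = digit * (2 * math.pi / 10)  # 36 degrees per digit value
        x += step * math.cos(angle)
        y += step * math.sin(angle)
        yield (x, y)

points = list(walk(text_to_digits("hello, world")))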

I've open sourced the code for pdraw — which uses no dependencies because Python is cool like that — so you, the technically inclined reader, can draw your very own text streams and see if they produce anything interesting.

I've found that the cooler drawings tend to arise from text containing between 250 and 2,000 characters, although larger files can be cool too. What's especially interesting is that you can almost see the structure of the data in the drawing as you watch it draw. For example, my blog archive page is basically a blob of header information, then a long and repetitive HTML table, and then another blob. This structure appears in the drawing when you plot the HTML of the page: the table rows appear as little loops followed by lines in random directions.

$ curl https://brianschrader.com/archive/ | ./pdraw -e

Check it out, and let me know what you do with it. I'd love to know if anyone finds more cool things to do with pdraw, or if you generate some particularly cool drawings with it.

I Solved The Same Bug Twice And Didn't Know It

Human memory is incredibly lossy; the brain an imperfect storage medium.

I've written before about how I include links in my code comments to resources that helped me find unintuitive or convoluted solutions. These links are essentially the footnotes of my systems; both documentation and a debugging paper-trail for poor, future souls to follow.

However, I am an imperfect soul, and so not all of my hackish solutions are cited. This fact bit me the other day when I discovered a strange bug with a new feature in Adventurer's Codex.

The Technical Details

The actual issue was fairly nuanced and understanding it depends heavily on the details of our infrastructure, but the simple version goes like this:

We have two endpoints: one that returns a resource and one that returns the schema for our entire API. For whatever reason, in this particular case, requesting the resource changed the output of the schema the first time the resource was requested. This is obviously unexpected. The structure of an API shouldn't change when you call it, right? Well, that's the thing. Technically the API wasn't actually changing, but the schema was ever so slightly different: the format of the schema for that resource had changed.

A funny joke

The schema is a hierarchical description of all of our API endpoints, with paths to describe how each resource is related. In this case, the schema would output two different paths to the same resource, but the data at both paths were identical.

For technical reasons, these paths matter to us. They must remain the same.

Following the Breadcrumb Trail

After some DuckDuckGo-fu failed to turn up any useful results, I turned to the rest of the codebase. The problematic endpoint was very simple. Surely we'd implemented other similar functionality which didn't cause this behavior. Sure enough, I found two such cases, both of which contained the same strange, seemingly useless line of code.

Once I had added that seemingly useless code, the schema no longer changed. I had fixed the bug! Or rather, I had discovered how to fix it, but not why the fix worked. I could have stopped there. Someone else might have, but I needed to go further. I needed to find out how this unrelated line solved my problem.

For now, I had found a clue.

Being the primary author of this particular codebase, I knew that such a strange implementation would probably have at least a code comment, or a comment in the implementing commit, that explained the weirdness. Unfortunately, there was no code comment, and neither was there one in the commit. However, by diving through the history of that particular file I found that there had been a code comment, right where I would have expected it to be, when the code was written; but it had been removed in another commit a few months back—by me.

The Fateful Commit

Unfortunately, the comment didn't link to an answer on the internet and subsequent searches have turned up nothing of use.

For now, I had found another clue—and a big one at that. I had also discovered something scarier: I had already found, fought, and beaten this bug before, and it was I that deleted the vital clue.

I had discovered that I was not just the detective but the victim and the murderer, and that the case had been closed three years prior.

Answers Lost to the Mists of Time

As strange as it may seem, I find that this kind of thing happens more often than we may like to admit. I am in a situation where I maintain most of the code I write, and so I get to live with my mistakes for years—going on a decade now. When I wrote the offending code and the comment that explained it, I was deep in the bowels of DRF building out the Adventurer's Codex backend from whole cloth. Since then I've moved on to building other things. That code has now sat for years untouched, working as designed; and some things which should not have been forgotten were lost.

At the time of writing, this mystery is still unsolved, the clues leading to an end I cannot see. However, I now know that at one point in the past I did know the cause of this issue and how to solve it. The solution lives on, but the cause is lost: a coder's Greek fire.

The answer may be lost to time, as even three years is enough time for some links to rot and trails to run cold. In what now feels like another lifetime, a past version of myself held the answers I now seek. Perhaps one day a Future Me will know what Past Me had found.

Take A Break, Script Something

Lately I've been putting in a fair amount of work improving Adventurer's Codex, and we have some very exciting updates coming soon (no spoilers 🤫). A lot of the changes required modifying our existing single-page app or API backend, but one involved the creation of a totally new repository and a new set of public-facing pages. It's those pages that I want to talk about.

The problem was fairly simple: given a set of data that changes very infrequently, create a set of web pages that display the details of each record in the dataset. Simple right?

Now, there are two ways I could have built out this functionality:

  1. Dynamic pages rendered from templates and served by Django
  2. Static pages built once and served by Nginx

Additional Context

I wanted this functionality to be separate from our main project, so the first solution would involve setting up a new Django app and database as well as building a system to import the data from JSON files.

I elected to go with the second option because of its simplicity and because the data changes so rarely. When all was said and done, I had three scripts — two bash scripts, and one Python script — and a set of Jinja templates. Running the main build script would download the dataset if it didn't already exist, parse the data, and build the pages. The whole process takes about 10 seconds to download and generate over 1200 pages. After it was done, I even set up a Docker container to build and serve the pages with Nginx. In total the project is 295 lines of code (1k if you include the HTML templates), and basically never needs to be updated again; if the data does ever change, we could simply re-run the script or rebuild the container.
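
For a sense of scale, the core of that kind of build step can be surprisingly small. Here's a rough Jinja sketch of the idea; the file names and fields are made up for illustration, and this isn't the actual Adventurer's Codex script.

import json
from pathlib import Path

from jinja2 import Environment, FileSystemLoader

# Hypothetical layout: templates/record.html and data/records.json.
env = Environment(loader=FileSystemLoader("templates"))
template = env.get_template("record.html")

records = json.loads(Path("data/records.json").read_text())
output_dir = Path("build")
output_dir.mkdir(exist_ok=True)

for record in records:
    # Each record becomes its own static page, served later by Nginx.
    page = template.render(record=record)
    (output_dir / f"{record['slug']}.html").write_text(page)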

Scripting is a Much-Needed Break

This is one thing I love about scripting as opposed to application development. In app development, you construct a codebase and then you need to live with it long term, adding new functionality and deprecating old functionality. Scripts, on the other hand, are low risk, high reward: you write them once to solve a specific problem and then rarely touch them again. I use a lot of simple scripts on a daily basis, some of which I wrote nearly a decade ago and haven't touched since.

Scripts are programming candy whereas app development is the real meat and potatoes. In a script you can take shortcuts, be a bit messy, and forgo worrying about the complexities of large software. Once the script works, there's not much else to do: just ship it.

Whenever I get the chance to take a break from developing large apps and just do some quick scripting, I leap at the opportunity.

The Road To Glass & Stone

I don't think I've ever talked about this here before, but I play in a band. And that band, The Fourth Section, just released its first EP, 'Glass & Stone'; and it's available now.

It's been almost two years since we formed, just before lockdown in March of 2020, so technically this is our pandemic EP. I'm super happy that we've finally gotten a release out there and available for everyone.

Glass & Stone

I've released music on Bandcamp before, but this is the first time that I'd ever gone through the process of getting music onto the big platforms. It's a time-consuming process, and yet it's still incredible that a bunch of indie artists, with no label backing, can simply fill out some paperwork and then distribute music worldwide. We often lose sight of it, but the internet is really magical sometimes.

The EP was recorded, mixed, and mastered at Cacho Studio in Tijuana, and they did a great job. My thanks to everyone at the studio for their hard work.

A Virtual and Untrodden Road

Since we formed right before the pandemic lockdowns hit the U.S., we couldn't practice at a studio in-person at first. For nearly a year we rehearsed virtually or acoustically—as well as masked and socially distanced—at a local park. The three of us used Jamkazam for our virtual practices, and it works well enough for that purpose, but it's no substitute for actual, plugged-in rehearsal at a practice studio.

We're back to in-person rehearsal now, and it feels great. Lots of new stuff is in the works and hopefully there will be more to show in the coming months.

Having gone through the process of writing, composing, and distributing music once, we now have a pretty good idea of what it takes to make an EP (or an album) happen, and we're really stoked to do this whole process again (and soon). As with most things, the first try is usually the hardest. There's always the initial, one-time setup that has to be done, and there's just so much you don't know the first time. That is all past now. We know what it takes, and the setup is done. Releasing more stuff is a lot easier now.

And that is what we plan to do.