Going Indie

A complete guide to becoming an independent software developer

From starting a company and staying motivated, to designing, building, and launching your first product and beyond.

Coming in 2020!
Sign up for the newsletter to stay up to date!

House Judiciary Committee recommends Interoperable Social Media

Today the House Judiciary Committee released its report detailing the numerous anti-competitive practices employed by the big tech firms: Apple, Amazon, Facebook, and Google. The report details why these firms are under investigation, what role they play in their respective markets, and whether they have achieved monopoly status in those markets (spoiler: it says they have). The report looks at the web search, web advertising, social media, e-commerce, and mobile software distribution markets, their history, and their future. It's a long read, but you should check it out, at least through the Executive Summary section.

Importantly, the authors also make a number of recommendations aimed at fixing the problems they identified in the report. This is where things get interesting. The report recommends a lot of what open web folks (like myself) have been wanting for years. Here are a few of the most relevant recommendations (emphasis mine):

  • Structural separations and prohibitions of certain dominant platforms from operating in adjacent lines of business;
  • Nondiscrimination requirements, prohibiting dominant platforms from engaging in self-preferencing, and requiring them to offer equal terms for equal products and services;
  • Interoperability and data portability, requiring dominant platforms to make their services compatible with various networks and to make content and information easily portable between them;
  • Safe harbor for news publishers in order to safeguard a free and diverse press;
  • Prohibitions on abuses of superior bargaining power, proscribing dominant platforms from engaging in contracting practices that derive from their dominant market position, and requiring due process protections for individuals and businesses dependent on the dominant platforms;
  • Strengthening private enforcement, through eliminating obstacles such as forced arbitration clauses, limits on class action formation, judicially created standards constraining what constitutes an antitrust injury, and unduly high pleading standards.

pp. 20-21

This is great news! These reforms would, in my opinion, do a lot to level a playing field that currently tilts in favor of these large incumbents. Each and every one of these companies benefitted from the power of the Open Web in their early days, and most still do in some form, but they contribute nothing back and actively work to undermine the things that make the Web and the Internet great. The report explicitly calls out Facebook's lack of interoperability and recommends that social media companies be forced to interoperate and provide data portability in the same way that phone carriers are currently required to do.

As a result, these markets are no longer contestable by new entrants, and the competitive process shifts from “competition in the market to competition for the market.”

This dynamic is particularly evident in the social networking market...

In response to these concerns, Subcommittee staff recommends that Congress consider data interoperability and portability to encourage competition by lowering entry barriers for competitors and switching costs by consumers. These reforms would complement vigorous antitrust enforcement by spurring competitive entry.

a. Interoperability

Interoperability is fundamental to the open internet. It is present in email, which is an open, interoperable protocol for communicating online regardless of a person’s email service or the type of the device they use to send the email.

An interoperability requirement would allow competing social networking platforms to interconnect with dominant firms to ensure that users can communicate across services. Foremost, interoperability “breaks the power of network effects”...

p. 384

Open Web folks won't be surprised by any of these recommendations. We've wanted them for years, but it appears that Congress is finally paying attention. There's a lot more in this report than just social media market reforms, but in my opinion these reforms are the most exciting and the most impactful to our discourse on the Web. Hopefully, now that the wheels of government are turning, they'll move to enact some of these long-awaited and way overdue reforms and give us back the Open Web we want.

Assumptions and Variable Names

As developers, we make a lot of assumptions about the world. We have to. The world is messy, unorganized, unsorted, and chaotic, and so is the data that this world generates. It's nigh impossible to process data in an orderly fashion if you can't organize it and make meaningful distinctions between different categories. Consider how much more difficult it would be for a music service to recommend titles if we didn't group music into genres, or how utterly meaningless it would be to say that COVID case counts were rising or falling if you couldn't say where or when. Developers are one of many groups of people whose job is largely to categorize and process data. We employ different methods than other disciplines, but the principle is the same.

The problem is that almost any attempt to categorize the real world is fraught with peril. The world doesn't fit nicely into groups. It feeds back into itself in knotted and tangled ways. Few natural categories exist, and this means that in order for us to categorize the world, we need to construct those categories ourselves. These categories are built on assumptions about the world, but they're only assumptions. They can and will be broken, and when our assumptions no longer hold, they cause bugs.

What does this have to do with code?

A lot actually. When we write code we give names to various data points. We call one bit of memory a username and another an email_address. Sometimes, as with more fundamental computer-science concepts, we can mathematically or physically guarantee that certain data is what it claims to be. Other times, we simply define a byte as 8 bits or a given variable as an int and not a string. Importantly, these definitions are assumptions. They assume that the hardware the code runs on works a certain way, or that the system will do what the OS claims it will, but that's a topic for another time.

Many bugs are the result of failed assumptions. Some languages try to reduce the number of assumptions that a developer needs to make by guaranteeing that variables defined as a certain type will always hold data of that type, but fundamentally, there are much bigger problems plaguing software than type checking. For example, type checking can guarantee that a given variable called html_string contains a string value and always will, but it can't guarantee that the string is actually HTML. It could be an email address, or it could just be invalid HTML. Both are strings, sure, but that's not the whole story.
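To make that concrete, here's a small sketch (the names are hypothetical, not from any real codebase). A type checker happily accepts both calls below, because both arguments are strings:

def render_preview(html_string: str) -> str:
    # The type checker can verify that html_string is a str, but it
    # has no idea whether the contents are actually valid HTML.
    return html_string[:100]

render_preview("<p>Hello, world!</p>")  # valid HTML; type checks
render_preview("bob@example.com")       # not HTML at all; type checks too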

We often make the mistake of asserting more certainty in our code than is rightfully there. When we accept data from a user, we can't guarantee what the data is until we've validated it. When parsing batch data or data gathered from the Web, the situation is the same. Pine.blog encounters this a lot. As a feed reader, Pine.blog must parse feeds from the Web at large, but RSS and Atom feeds in the wild are notorious for being malformed and invalid (and sometimes just plain wrong). I've even come across a site that returned a PDF when requesting its RSS feed. Until the data is validated, you can only assume what the data contains. Years ago, I started coming up with ways to help me identify when I'm making assumptions in my code in an effort to reduce bugs, improve clarity, and minimize assumptions.

In the Pine.blog source code, there are quite a few examples of this explicit assumption-making process, especially in my variable names. When Pine.blog first receives data from a request, it needs to try to parse that data, but it can't do that until it knows what kind of feed it is. To do this, I have a series of functions that use a bunch of heuristics to check the data and determine what it contains.

def is_probably_an_rss_feed(tree):
    pass

def is_probably_an_atom_feed(tree):
    pass

def is_probably_a_json_feed(tree):
    pass

The important thing here is the word probably. These functions don't attempt to actually parse the data, so they don't know for sure. By explicitly qualifying what these functions do I, as the programmer, understand the assumptions I'm making when I act on that information.
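As a sketch of what one of these heuristics might look like (Pine.blog's actual checks are more involved; this version assumes an ElementTree- or lxml-style tree):

def is_probably_an_rss_feed(tree):
    # An RSS 2.0 document has <rss> as its root element. Checking the
    # root tag is a cheap hint, but it doesn't prove the feed will
    # actually parse, hence the word "probably".
    root = tree.getroot() if hasattr(tree, "getroot") else tree
    return root.tag == "rss"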

I do this a lot actually. It's common for my variables to contain the words probably or approximate if I'm not 100% sure that the data is valid or correct. Variables that contain these words immediately raise concern and force me to think about the potential failure modes whenever I attempt to manipulate them. If something says that it is an html_string then you don't usually think to second-guess that fact, but until you know it for sure, you may want to name your variable probably_an_html_string to better reflect your knowledge at that given point in your process.
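In practice, this naming convention pairs naturally with validation. Something like the following, where url, fetch_remote_content, and is_valid_html are hypothetical stand-ins for whatever fetching and validation your project uses:

# Hypothetical helpers stand in for real fetching/validation code.
probably_an_html_string = fetch_remote_content(url)
if is_valid_html(probably_an_html_string):
    # Only after validation does the name get to assert certainty.
    html_string = probably_an_html_string
else:
    raise ValueError("The response body wasn't valid HTML.")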

Pine.blog Approximate Update Frequency

Handling Approximations

As a guide to users, Pine.blog tries to determine how frequently a given feed publishes new items. Twitter users may be familiar with Unladen Follow, which does the same thing for Twitter accounts, and the podcast app Overcast surfaces similar information. This feature lets potential followers know how often a given feed will have new posts. This value is generally pretty simple to calculate, but because it's something determined by Pine.blog and not set by the site owner, this value is descriptive, not prescriptive. It describes the likely update frequency based on past publishing habits. This measure can't completely predict a site's future behavior; it's just a guess. To reflect that, my code calls this variable approximate_update_frequency, because it's just that: approximate. Some would probably prefer the word estimated, which is certainly clearer, but the point is the same. The variable name conveys as much confidence as possible without giving other developers (including future me) the false impression that the data is any more certain or guaranteed than it actually is.
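A minimal sketch of how an estimate like that could be computed (this isn't Pine.blog's actual implementation, just the general idea of averaging the gaps between recent posts):

from datetime import timedelta

def approximate_update_frequency(post_dates):
    """Estimate how often a feed publishes, given its past post datetimes."""
    # Assumes at least two posts; a real version would need to handle
    # new or sparse feeds gracefully.
    dates = sorted(post_dates, reverse=True)
    gaps = [newer - older for newer, older in zip(dates, dates[1:])]
    return sum(gaps, timedelta()) / len(gaps)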

Developers like guarantees. We like to know that data won't change on us without warning and that things are what they claim to be. This is why so many developers care deeply about variable naming. No one likes variables that are outright incorrect. If you saw a variable in a codebase called bank_account_number, but upon inspection you saw that it contained a user's first and last name, you would be understandably confused and irritated. The original developer of that code either didn't account for a certain case, assigned the data to the wrong variable, or simply lied to you. The same is true when we name a given variable html_string, but it turns out to contain invalid data. The variable name lied to us. By naming variables you're making assumptions, and you're making promises to yourself and to later developers about what the variable contains. If you're not sure what the data is, or can't guarantee that fact, then you should probably say so.

The Indie Dev Life Podcast

Today I'm excited to announce my new podcast. Indie Dev Life is a show about the ins and outs of indie software development, and episode 1 is out today in all the right places.

I've wanted to make a podcast for years, but I've never found a topic or theme that I felt I could adequately discuss. Luckily, that changed when I finished writing my upcoming book: Going Indie. There was so much that didn't make it into the final draft, and a podcast is the perfect place to expand and explore the more complex, technical and nuanced topics I didn't get to in the book.

The first episode is an attempt to dispel any myths about Indie Development and help convince you to go independent yourself.

I'd love any feedback on the show, the audio quality, or the format, and I'd appreciate any topic suggestions. I hope you'll all give Indie Dev Life a listen. If you like the show, please subscribe and give it a review on Apple Podcasts.

Git Hooks for Fun and Profit

I love Git hooks. For those who aren't aware, Git hooks allow you to specify actions that are automatically taken whenever certain Git commands start or complete. Git hooks are great for simple, easily forgotten, automatable tasks. In most projects, I use Git hooks to automatically run preflight checks before I'm allowed to commit any changes to a codebase. Usually this means checking that the codebase is properly formatted, that unused imports are removed, and that basic style checks and tests pass. If these checks don't pass, the commit fails.
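As an example, a pared-down pre-commit hook might look something like this (the exact tools vary by project; these commands are just illustrative):

#!/bin/sh
# .git/hooks/pre-commit
# Any command that exits non-zero aborts the commit.
set -e
black --check .  # formatting
flake8 .         # style checks, including unused imports
pytest -q        # tests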

That said, Git hooks can do so much more. As I've mentioned many times, this site, GoingIndie.tech, and IndieDevLife.fm are all static sites. They're just files served by Apache. Because of that, these sites can't take advantage of a lot of really cool blog-ecosystem features like ping change notifications. These notifications are typically sent from blogging systems to search engines or news aggregators to let those services know that the site's content has been updated (i.e. a new post was just published). These notifications help services more quickly discover and disseminate new content to users. Pine.blog supports this feature, and Wordpress blogs automatically send these notifications to Google, but my simple static sites couldn't.

Then I realized that Git hooks can solve this problem!

These sites are just Git repos that use a post-receive hook to check out the latest version into a directory served by Apache. I commit a new set of changes, push those changes to the remote repo on my server, and that hook runs and copies the new version to wherever Apache expects it. All I need to do is add a little snippet of code to that same hook to send Pine.blog a notification, because by definition: whenever a new commit is received, the site has changed.
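For context, the deployment half of that hook amounts to a one-liner (the path and branch here are illustrative, not my actual setup):

#!/bin/sh
# hooks/post-receive (in the bare repo on the server)
# Check out the newly-pushed version into the directory Apache serves.
GIT_WORK_TREE=/var/www/example.com git checkout -f master

And here's the snippet that sends the notification: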

# Send an XML-RPC extendedPing notification to Pine.blog
echo "<methodCall>
    <methodName>weblogUpdates.extendedPing</methodName>
    <params>
        <param><value><string>brianschrader.com</string></value></param>
        <param><value><string>https://brianschrader.com/</string></value></param>
    </params>
</methodCall>
" | curl -H "Content-Type: application/xml" -X POST -d @- \
    https://pine.blog/api/xml-rpc/ping

Adding this simple curl script to my post-receive hook did the trick! Now my blog posts will more quickly appear on Pine.blog! Git hooks for the win.

The Little Engine that Could

I originally wrote the blog engine for this site in 2014. I've added a few little features and fixed a couple of bugs over the years, but most of the code hasn't been touched or improved since it was originally written. Over the past few weeks though, I've improved the engine dramatically. I've fixed a number of long-standing bugs, improved some of the functionality, and added multi-site and podcasting support. That said, most of the code is still identical to how it was in 2014. It's crazy to me just how much value I've gotten out of that code. Not only did it teach me how to make blogging software and help me get a handle on Python, it has also powered every blog post I've written since.

After nearly 7 years, the site recently needed an overhaul. I wanted to set up a new site for my book at goingindie.tech and I originally considered just using Jekyll, or even hand-coding a single HTML page, but I eventually settled on adapting my existing blog engine to support multiple sites using a YAML configuration file. A lot of the site-wide variables were just hard-coded at the top of one of the Python files, so moving them to a YAML config was easy. After a few other fixes were in place, everything just sort of came together. I had two sites working on one blog engine.
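The config itself doesn't need to be fancy. Something shaped roughly like this is all it takes (the keys here are invented for illustration; my actual config differs):

# sites.yaml
sites:
  - name: brianschrader.com
    url: https://brianschrader.com
    posts_dir: posts/
  - name: goingindie.tech
    url: https://goingindie.tech
    posts_dir: going-indie/posts/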

I love seeing how code evolves over time, and how old code changes us in turn. After nearly 7 years, I'm still using the same old blogging engine to write posts on this site. I try not to embark on refactors very often, mostly because I don't think they're valuable most of the time. But that means that, aside from a few modernizations and improvements, the work I did in 2014 is still paying off.

On Uber, Lyft, and Labor Law

A storm has been brewing in California. No, not the Coronavirus pandemic or the massive fires, though both are incredibly important and widespread. California is trying to rein in a few powerful tech industry players. What we're witnessing now may become either a cautionary tale or a key example of just how these battles can be waged in the future against even bigger and more powerful giants.

Uber and Lyft have both circulated the idea that they will soon halt operations in California after a state judge forced them to comply with A.B. 5, the California law that requires businesses like Uber and Lyft to classify certain workers as employees instead of contractors. The law, which went into effect in January and was debated for over a year in the state, would force Uber and Lyft to classify most of their drivers as employees. This change would ensure that those drivers received a minimum wage, health benefits, and other protections under state law, none of which are available to contractors.

After the law went into effect, Uber and Lyft sued and have been both pursuing a legal case and supporting a ballot measure that explicitly excludes ride-share companies from A.B. 5. According to the San Diego Union-Tribune, in early August a state judge "ordered the companies to classify their drivers as employees rather than independent contractors," when they'd prefer to wait until the fate of their ballot measure is decided in November. They also argue that they don't have enough time to comply, even though they had months before the law went into effect and eight more months afterward.

This legal battle is unlike the one waged over the California Consumer Privacy Act (CCPA), which also went into effect recently and was primarily targeted at data-brokers like Facebook, Google, and others. CCPA, which merely requires a few key, common-sense measures, did not directly hinder the operations of Facebook, Google, or others. It simply made their practices more transparent to users and was slightly annoying for them to implement. A.B. 5 is different. The law represents a fundamental threat to Uber and Lyft's current business model. Both rideshare companies, to varying degrees, rely on huge investor subsidies and loopholes in labor laws to make their business viable. Uber alone loses over $1.5 billion each quarter. Let that sink in. Both companies are growing, but to do so they require investors to subsidize rates and they rely on underpaid drivers to balance their revenue model. What neither company wants to say, but what is abundantly clear from their reactions, is that they could not exist as multi-billion dollar companies if they had to comply with California's labor laws, and they can't attract massive amounts of venture capital if they can't grow at current rates. To be fair, I'm sure that ExxonMobil, Walmart, Google, and Apple would be far more profitable if they could ignore labor laws too. Paying people a living wage is expensive, as is giving them health care, so companies don't want to do it, but that's why we have these laws.

During the initial debate of A.B. 5, Uber and Lyft, as well as many other rideshare and delivery apps, made their case to the voters in California and to the legislators that passed the bill, but they lost. Now both companies are threatening to take their ball and go home rather than accept that perhaps their entire business model is flawed and should be fixed. Uber and Lyft could reclassify their workers and still be enormous companies, just not as enormous as they are today, or they can pout and tell their users that it's all or nothing. I don't want to see Uber and Lyft leave California or disappear (even though Uber's corporate culture is often disgraceful and cause for separate concern). They offer a useful service. I've used both companies a lot over the years. I've also used Uber Eats, Postmates, Doordash, and other delivery companies to get a burrito or to satisfy a craving for Saag Paneer at 2 AM. But that doesn't mean that I think their service is so valuable that they should be immune from laws that other companies are subject to. Taxi companies and delivery drivers have been around for a long time. Those endeavors can be profitable, and they can be mutually beneficial for both the company and the workers. This, however, isn't the framing that Uber and Lyft are building around this debate. In their eyes, either they get a pass on obeying labor law, or they go away. But it's important to remember that that isn't the only choice they have. It's not the only path they could take. It is, however, the one they've chosen.

I'll just say this: if your company can only exist if it violates civil rights or labor law, then I don't think you should exist. - my post on Pine.blog

Two is Better than One

It finally happened. After 6 years (!) of blogging on this site, I finally felt the need to add a blogroll and sidebar. Changes like this come slowly. For one, I had to update the custom code that runs the site. But they also come slowly for another reason: it wasn't broken, so why would I fix it? This site has worked fine with a one-column layout for years. It was only when I wanted to shove more into the navbar than would comfortably fit that I felt I needed to make this change.

Behind the scenes is the real magic. I now have the ability to feature my posts automatically and publish hidden 🤫 posts that don't appear on the feed, the archive, or the home page. I've wanted that feature for a while and I've basically been hacking something similar together for years to support my about page. Keep watching for more developments.

Novels and Insurmountable Tasks

I've wanted to write epic fantasy for years. In college I fleshed out a pretty substantial world, a unique magic system, and an overarching conflict, but I never got deep into the characters, their journey, or their individual story arcs. I discovered that I like building worlds and systems, but crafting plots and characters is a lot more difficult. The world, and the magic system especially, has stuck with me, and although I'd make significant changes to it now, I still think that the idea is solid at its core. I'd love to write it someday.

Writing fiction, especially fantasy, just seems like such a monumental task. So many of the sci-fi and fantasy novels that I've read (especially from authors like Brandon Sanderson) are filled with epic stories told across multiple viewpoints over decades of each character's life, spanning thousands of pages. Putting something like that together, even with years of time, seems like something I just don't have the strength of will to even dream of accomplishing.

In building and launching my own apps and services like Pine.blog, Nine9s, d20.photos, and even Adventurer's Codex, I've learned that while I can build and maintain large projects, they take a long time to come to fruition. One of the great things about software, though, is that you can launch with a subset of features and improve it as you go. This means that even for huge projects like Pine.blog, I could see results, launch features, get feedback, and share my work-in-progress, all while improving it and staying motivated to continue. The idea of spending years writing a manuscript thousands of pages long, editing it, tweaking it, and being unable to publish it until every box was checked is just not something I think I could force myself to do. I'd get bored or burn out.

But thousand-page novels are just one form of the genre. There's another, oft-overlooked form: novellas.

The Wizard of Earthsea Trilogy

I love the Wizard of Earthsea trilogy. I first discovered the books in high school and eventually re-read them after college. They're a series of short, kid-friendly books about a wizard who goes to a wizard school to learn magic and his adventures in the island world of Earthsea.1 Each of them is less than 200 pages long and clocks in at around 50,000 words. Compare that to Brandon Sanderson's Way of Kings, which clocks in at 398,460 words across more than 1,280 pages, and Earthsea looks tiny. To me though, it also starts to look attainable.

Writing fantasy isn't really comparable to writing non-fiction, but at an average of ~1,000 words per day, a novella of similar length to Earthsea would take about 50 working days to write: just about two months working full-time. That's completely possible. Even part-time, a first draft could be completed in a few months and be ready for feedback, editing, review, and possibly publication.

I don't want to give off the impression that I'm going to become a fantasy writer any time soon (I'm not), but this realization did rekindle my interest in the prospect quite a bit. Sure, the structure of a series of novellas is different from that of a single monumental tome, but if one would be impossible and the other attainable, then it's not really a hard decision.

So much of a hobby or side project is about staying motivated through to completion, and if a 1,000-page novel is out of the question, maybe it's not the novel part that needs rethinking.

1 These books predate Harry Potter by over two decades and inspired the magic system in Christopher Paolini's Inheritance Series.

The Fall of Civilizations Podcast

I love history. I grew up watching the History Channel (back when they actually had real history shows) and PBS documentaries. These days those kinds of things don't really hit the spot anymore, so I've turned to podcasts instead. There are a ton of great history podcasts out there, and I've written before about my love of the History of Rome podcast, its successor The History of Byzantium podcast, the British History Podcast, and the fantastic Revolutions podcast. But there's another show that I've really fallen in love with recently: The Fall of Civilizations Podcast.

Each episode of TFoC is a narrative re-telling of the fall of a particular civilization. The episodes, typically 2-3 hours long, start with an overview of the culture, exposing you to how it felt to live at that time and in that place, and the second half covers how the civilization fell. Each show has voice actors, music, excerpts from songs or stories shared at the time, and is chock-full of well-researched, excruciatingly detailed history.

That's not why I love the show though. I love it because every episode leaves me with some degree of the same feeling: a mixture of wonder, sadness, hope, and intense loss for the cultures of the past who watched their world come to an end. Cheery stuff. That may not sound very... good, and perhaps it isn't, but it reminds me that history is full of tragedy and loss, but also full of hope and positive change. It reminds me that our world has not become what it is by accident or without pain and suffering. I just finished the episode about the fall of the Aztec Empire, a tale I'd studied in school but never truly understood at the level I feel I do now. As with every episode, I know how it ends, but it's always the journey that matters.

This podcast, as well as all of the others mentioned in this post, comes highly recommended. Each of these shows is so captivating in its own right that I've, on many occasions, just sat with a cup of tea, coffee, or a cocktail and listened, as if to a radio play, for hours. It's almost as if these kinds of narrative histories are like the oral tales that ancient tribes would tell around a fire: the stories of cultures, ancient and great, and the tale of how they fell.

d20.photos: A Public Domain D&D Image Repository

I bought the domain d20.photos on Nov 23rd, 2018 with the goal of building a free-to-use, public-domain image hosting service for D&D, Pathfinder, and other fantasy RPGs. Today, that goal is realized.

d20.photos Logo

Finding images for your D&D campaigns is really difficult, especially if you're looking to sell your campaign. Most artwork isn't licensed in a way that makes it easy for low-budget creators to use, and often there's no way to easily find images for settings or places in your game. d20.photos aims to change that by providing a free, community-driven, human-curated image hosting service for D&D/Pathfinder-related images: a one-stop shop for all your image needs. Since all images on the site are released into the Public Domain, you can be sure that you're free to use, re-use, and modify them, and even include them in your paid campaign or story.

I've been collecting images for years (over 100 of them so far) with the goal of eventually adding them to a service like this. I have a lot more to upload, and anyone in the D&D community can do the same.

Abstract Images

One common problem for campaign or story writers is that while there is a plethora of photos on the Web that they can use in their games, pictures of the real world are often too real. I know I will almost always choose a painting or other artwork over a photo, even if they're harder to find. d20.photos tries to solve this problem as well.

Whenever a new photo is approved, a computer-generated version of the image is created by a wonderful open-source library called Primitive by Michael Fogleman. The library uses primitive shapes (in this case triangles) of various colors and sizes to reproduce the original image. These primitive, or abstract, versions are often really beautiful and have a certain fantasy air about them. Adventurer's Codex actually uses these primitive images on its landing pages too.
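If you want to experiment with it yourself, the library ships with a command-line tool. An invocation like this one (file names are placeholders) reproduces an image using 100 triangles:

# -n sets the number of shapes; -m 1 restricts them to triangles.
primitive -i photo.jpg -o abstract.png -n 100 -m 1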

I find that these abstract versions "feel" more appropriate for a fantasy game and while Primitive does struggle with images with a ton of fine detail (like photos with lots of trees), it's certainly better than nothing.

The Environment (Again)

Like Nine9s.cloud, d20.photos is hosted in London in a datacenter powered by 92% clean and renewable energy. d20.photos runs on the same size server as Nine9s, so I didn't need to recalculate the environmental impact. Unless I just can't make it work, I think I'm going to be hosting most of my software in the Linode U.K. datacenter from now on, or at least until one of their U.S. partners commits to using renewable energy in the same way. It's not a big thing, but it's a thing I can do.

I hope d20.photos is useful to you, and if it is, I'd love to hear about it. The site is donation based, so if you like what you see, please consider supporting it. If enough people do, it'll be a lot easier to justify improving it in the future.
