BiteofanApple
by Brian Schrader

Changing Tides

Posted on Sun, 09 Jun 2019 at 02:43 AM

It's a big day: is now free to use! has been out for over a year now and it's been getting better and better over that time. However, although quite a few people have signed up, most have stopped short of signing up for a premium subscription. At first I thought that adding a free trial to would help, but that doesn't seem to do much to encourage signups.

I want to be useful to as many people as possible because I think and a lot of other Open Web tools (i.e., Mastodon, etc) represent what social networking should be. In order to reach a wide audience, needs to let people know what it is and then convince them to use it. I've made pretty good progress on that first goal:'s traffic numbers keep going up and responses are generally positive, it's just that those numbers don't really translate into subscriptions.

In that light, no longer requires a subscription to use: just create a free account and you're good to go! If you want to organize your feeds into multiple timelines or start a blog* you'll still need a premium subscription, and don't worry, you still get a free trial.

At time of writing, the newest version of the iOS app is "Waiting for App Review", so you'll have to sign up on the website to start a free account, but you can use the app once you've created your account.

*Coming soon!

The Hidden Cost of Cheap Hardware

Posted on Wed, 23 Jan 2019 at 06:08 PM

Most times, when developers debate code-level optimizations, someone will eventually bring up the classic platitude:

Developer-time is more expensive than Compute-time.

And it's true: paying a developer to optimize code is generally more expensive than adding additional hardware to run the existing, slow code faster. Servers and storage are so cheap these days that most developers don't need to know or care about the actual hardware that runs the code they write. Hardware is fast, cheap, and available in huge surplus, but this overabundance of cheap computing power has caused this throw-more-hardware-at-it mindset to proliferate into other aspects of development, namely how systems are designed.

Let's face it, modern web stacks are complex

A typical web stack contains a lot of co-dependent software, and Developers, Admins, and DevOps will each have their own tools to improve, manage, and add some semblance of comprehensibility to a running system. But, each additional proxy, app instance, and physical or virtual server adds multiple possible points of failure to your production system and lots of additional complexity to development.

Over time, developers add more and more layers of software and tooling to their project until the hardware can't handle it anymore, and then instead of reevaluating their tools, they make the fateful decision to break the system out into smaller, isolated pieces, turning their simple website into a complex distributed system, because after all, adding hardware is cheaper, right?

"Complexity is a bug-lamp for smart people. We're just drawn to it."

Maciej Cegłowski, The Website Obesity Crisis (2015)

The hardware is cheap, yes, but the developer-time needed to design a system whose pieces communicate over dozens of network connections and physical or virtual machines often far, far exceeds the cost of keeping things simpler from the get-go. Most small and medium systems can run on very little hardware, and that, in turn, keeps the project design much simpler than spreading less efficient work over dozens of machines.

None of this is to say that an unoptimized codebase is the same as a large or possibly over-engineered system, but there are parallels between them. Both are possible because we have cheap and abundant access to powerful hardware that can run inefficient code and all of the layers of abstraction that slow down, and sometimes overcomplicate, modern software. It may be, then, that developers working in these large systems, like some developers working on inefficient code, don't realize just how powerful their hardware really is.

Empathy for the machine

Today's hardware is fast, really fast, and we can use that power to our advantage, but only if we, as developers, have an intuitive sense of just how fast it is.

In a totally related, I promise, anecdote: An old coworker of mine was complaining one day that a Perl script he'd written took too long to run, and he didn't know why. I asked him how long it took, and he said, "About 2 seconds, but it should be instant." At first I thought it was silly that he was spending so much time optimizing for 2 seconds, but what I didn't know was that this script was only processing a few hundred kilobytes of testing data. Eventually, it would need to process a few hundred gigabytes. We had a High Performance Computing Cluster he could run his analysis on, but he didn't want to use it because, as he put it, "This analysis isn't complicated, it should be able to run on my machine." He also didn't want to move his work to the cluster because he'd have to add a lot of code to ensure it would run correctly in a distributed environment that was harder to debug. Eventually he found and fixed the issue: processing the test data took an imperceptible amount of time, and he was able to run the entire analysis on his 6-core workstation in a little under an hour.

Without that kind of intuitive understanding of how much time a task "should take" it's extremely difficult to know when, or if, something is wrong. You might just assume that your analysis really does take 2 seconds, or that your webapp really does take 3 seconds to send a simple response, and that you need to use more powerful hardware to get it done. What's worse is that developing that intuition is harder and harder the further you are from the actual hardware. With layers of virtualization and tooling between developers and their hardware, it's difficult to perform controlled experiments, or do any sort of real comparisons between revisions. Your gut instinct is your only gauge.
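One cheap way to build that intuition is to actually time your work and compare the number against what you think it "should" be. Here's a minimal sketch; the `measure` helper is hypothetical, not from any post on this site:

```swift
import Foundation

// A tiny, hypothetical helper for timing a block of work, so you can
// check a task against how long you believe it "should" take.
func measure<T>(_ label: String, _ work: () -> T) -> T {
    let start = DispatchTime.now()
    let result = work()
    let elapsedMs = Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000
    print("\(label): \(String(format: "%.2f", elapsedMs)) ms")
    return result
}

// Summing a million integers should be near-instant on modern hardware.
// If it isn't, something is wrong with the code, not the machine.
let sum = measure("sum 1...1_000_000") {
    (1...1_000_000).reduce(0, +)
}
```

Running a check like this before and after a change is a crude but honest controlled experiment, which is exactly what gut instinct alone can't give you.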

We need some sort of anchor

Defining Deviancy Down

Armed with cheap hardware and the conventional wisdom that adding more servers is cheaper and easier than optimizing what we already have, we've arguably made our systems slower, more complicated, and over-engineered for the problems they claim to solve. It's time we all take a look at the systems we build and ask ourselves if they need to be as complex as we're making them. We, as a community, tend to want to copy what the big companies do, but those enormous companies have different needs than the rest of us. Facebook, Netflix, and Google have different problems than the vast majority of sites. We don't need to use their tooling, apply their designs, or live with their compromises, but we often do exactly that.

What we need is some sort of test, one we can apply to our systems to anchor our thinking about what hardware our systems need day-to-day. I've half-joked several times that any website with fewer than a hundred concurrent users should be able to run on my spare Raspberry Pi over my apartment's internet connection. If you're building a small-business site or internal tool, that same half-joke applies to your site too.1 Such a small, cheap system-on-a-chip is way too fragile for any real production use, but it's more than powerful enough to be a good testing rig. Get one, deploy your production system on it, and see if you can serve a hundred users quickly and efficiently. Older systems have done more with a lot less.

We're not building a space ship here, just a website.2

1 My Raspberry Pi B+ (Quad-Core 800MHz CPU & 1 GB of RAM) is hooked up to a 150x15 Mbps connection and runs a standard build of Debian Linux. If you can't host your website on that, then you're either building a fairly complex site with lots of computing demands, or you have some pretty inefficient code.
2 Disregard this message if you actually are building space ships or otherwise very complex software that for obvious reasons cannot be run on a $35 SOC meant for teaching children. Web Devs building CRUD apps: you're not excluded.

I Love NSOperation

Posted on Sun, 23 Dec 2018 at 10:29 PM

I've talked before about how much I like using Apple's Grand Central Dispatch API for multithreading on iOS, but over the last year I've become a huge fan of NSOperation, and it's now my preferred way to do multitasking on iOS over bare-bones GCD.

NSOperation (or just Operation in Swift) can be used as a layer of abstraction over GCD that provides built-in dependency tracking and task isolation. When combined with NSOperationQueue (OperationQueue in Swift) you also get powerful throttling APIs and more. Typically I've used Operations for background networking and processing, but the API is designed to be used for any set of discrete tasks, including UI workflows and more.
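To make the dependency tracking and throttling concrete, here's a minimal, self-contained sketch (the operations are illustrative stand-ins, not code from a real app):

```swift
import Foundation

// An OperationQueue runs Operations for you, honoring their dependencies.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2  // throttle: at most two operations run at once

let download = BlockOperation { print("downloading…") }
let parse = BlockOperation { print("parsing…") }

// `parse` won't start until `download` has finished,
// even on a concurrent queue.
parse.addDependency(download)

queue.addOperations([download, parse], waitUntilFinished: true)
```

That one `addDependency(_:)` call replaces the nested completion handlers or dispatch groups you'd otherwise need with plain GCD.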

As a Networking Layer

My most common use case for NSOperation is in doing networking. In for example I need to always ensure that a user's OAuth Access Token is valid before making a resource request, say for their timeline. That code looks something like this:

func updateTimeline() {
    // This task must always happen first. It ensures that the OAuth token
    // is going to be valid when I request it, or it attempts to refresh the token.
    let reauthorize = TokenReauthorizationOperation()

    // Now we attempt to fetch the user's timeline and add ourselves as a delegate
    // so the Operation will tell us when new data is available. We also set the
    // reauthorization operation as a dependency of the FetchTimelineOperation.
    let fetchTimeline = FetchTimelineOperation()
    fetchTimeline.delegate = self
    fetchTimeline.addDependency(reauthorize)

    // Add the tasks to the queue and process them asynchronously.
    // The custom delegate will be alerted when new data is available.
    queue.addOperations(
        [reauthorize, fetchTimeline],
        waitUntilFinished: false
    )
}
What I've really liked about my NSOperation-based networking is that, from the ViewController's perspective, it doesn't matter what these tasks do or how; the controller is simply notified when results arrive. I've finally stashed my networking code away in its own little corner of the codebase, rather than in a custom controller or nestled inside the ViewController where it just gets in the way.

The FetchTimelineOperation takes care of fetching the JSON from the API and creating Core Data Managed Objects. Then my ViewController's FetchedResultsController just worries about displaying the changes to the user. It's simple, clean, and there's a clear separation between the ViewController and the Networking Stack.


If there's one thing that frustrates my iOS development it's that Core Data Contexts aren't thread-safe. Originally, I thought that just meant that I couldn't write to the same Core Data store from another thread, but that's simply not the full story. Never read from or write to Core Data objects from a thread or context other than the one they came from. Better yet: do all your Core Data writing inside a performAndWait() {} block.

Keep in mind, these aren't so much issues with NSOperation as they are overall tips for using Core Data.

The Bad Way

When it comes to my Operations, that means resisting the temptation to write something like this:

class MarkPostAsRead: Operation {
    var post: Post

    init(post: Post) {
        self.post = post
    }

    override func main() {
        let context = getManagedObjectContext()
        context.performAndWait {
   = true
            do {
            } catch {
                NSLog("Failed to save post read status for Post: \(")
            }
        }
    }
}
You should never do this. You're violating a number of Core Data's assumptions and you'll get a crash.

The Good Way

The best way I've found to do Core Data work in a Background Operation is something like this:

class MarkPostAsRead: Operation {
    var id: NSManagedObjectID

    init(postWith id: NSManagedObjectID) { = id
    }

    override func main() {
        let context = getBackgroundManagedObjectContext()
        context.performAndWait {
            // Get the post from Core Data
            var post: Post!
            do {
                post = try context.existingObject(with: id) as? Post
            } catch {
                NSLog("Unable to mark post as read because it doesn't exist.")
                return
            }

            // Mark it as read
   = true

            // Save the context
            do {
            } catch {
                NSLog("Failed to save post read status for Post: \(id)")
            }
        }
    }
}
This method ensures that you're never passing managed objects between threads and you're only modifying that object within the background context you created for that purpose.

Keep in mind, though, that any FetchedResultsControllers you've made won't be immediately notified of the changes, because they happened in a background context instead of the View Context the controllers are using. To fix this, add something like the following to your Core Data stack code:

func initializeCoreDataStack() {
    // ... Do startup work...

    // Listen for background context changes
        self,
        selector: #selector(contextDidSave),
        name: .NSManagedObjectContextDidSave,
        object: nil
    )
}

@objc func contextDidSave(notification: Notification) {
    guard let sender = notification.object as? NSManagedObjectContext else {
        // Not a managed object context. Just leave it alone.
        return
    }

    // Don't listen for changes to the view context.
    let viewContext = DataController.persistentContainer.viewContext
    if sender != viewContext {
        // ensureMainThread is a small custom helper that hops to the main queue.
        ensureMainThread {
            viewContext.mergeChanges(fromContextDidSave: notification)
        }
    }
}
Now the View Context will automatically merge changes from the background contexts whenever one of them saves.
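As an aside (this alternative isn't from the original post): if you're on iOS 10 / macOS 10.12 or later and using NSPersistentContainer, the framework can do this merging for you when you opt in. A sketch, assuming a model named "Model":

```swift
import CoreData

// With this flag set, the view context automatically merges saves made by
// background contexts created via newBackgroundContext() or
// performBackgroundTask(_:), with no notification observer needed.
let container = NSPersistentContainer(name: "Model")
container.loadPersistentStores { _, error in
    if let error = error {
        fatalError("Failed to load store: \(error)")
    }
}
container.viewContext.automaticallyMergesChangesFromParent = true
```

The manual observer approach above is still useful if you need to filter which saves get merged, or you're supporting older OS versions.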

Dispatching Concurrent Groups of Operations

In some cases your app will need to dispatch an operation that itself spawns multiple concurrent suboperations. In that case I've found it really helpful to wrap the group of asynchronous operations inside a synchronous operation that simply waits for them all to complete.

class LotsOfConcurrentRequests: Operation {
    var urls: [URL]
    var results: [JSONObject]? = nil

    init(responsesFrom urls: [URL]) {
        self.urls = urls
    }

    override func main() {
        let suboperations = { url in
            return AsyncFetchURLOperation(url: url)
        }

        // Add the tasks to the queue and wait until they're all done. Easy.
        queue.addOperations(
            suboperations,
            waitUntilFinished: true
        )

        // Gather the results
        results = { $0.result }
    }
}
And that's pretty much it. NSOperation has basically replaced GCD for me in all but a few niche use-cases, since it lets you define complex workflows in a simple, clear way that you can invoke and control from anywhere in your app, and it neatly separates your networking code from the other parts of the system.

Version 1.3: Multiple Timelines 🎉

Posted on Wed, 19 Dec 2018 at 07:23 PM

has come a long way since launch, and today I'm announcing the next big step: Version 1.3, which is available now. This version is jam-packed with features and improvements, but the most notable addition is the ability to organize the sites you follow using timelines.

Multiple Timelines

has had a clean, easy-to-read timeline since the beginning, but once you start following lots of sites, it can be pretty cumbersome to have so many new posts in one timeline. I've wanted to support multiple timelines for a while now, and it's finally here. Most feed readers have a method of organizing subscriptions, but traditional "folder" methods don't allow a feed to be in multiple lists at once. With, a feed can be in multiple timelines at once, so you can organize your timelines however you like.

doesn't use algorithms to decide what you see in your timeline. Instead it shows you every post from the sites you follow. Traditionally, companies use those algorithms for two main reasons: to "increase engagement" (shudder) and because users follow so many things that they get overwhelmed by the sheer volume of content in their news feed.

A look at the new timeline.

But people are smart. If you give them the tools they need, they will use them. And multiple timelines are just one of a number of upcoming features that are coming soon to help users organize and find things they want to follow.

If you're a fan of the new changes to the timelines, get in touch. I'd love to hear from you.

Other Changes

In addition to multiple timeline support, version 1.3 also has a ton of new features for both iOS and the web. The web changes have been rolled out slowly over the last few days so you may have already seen them, but here's a pretty exhaustive list.

  • Web UI Refresh: The timeline, likes, and sites pages have all been updated with a cleaner, more modern look.
  • Improved Search Results (iOS/Web): You should see much more relevant search results and better support for advanced search syntax (i.e., using AND, OR, and parentheses).
  • Swipe to favorite (iOS): You can now swipe right on a post in the timeline to favorite it, or swipe right on a post you've already liked to unfavorite it.
  • Better Featured Images (iOS): Small images (usually share buttons) will no longer show up in the featured images gallery.
  • Follow any feed from a site (iOS/Web): As mentioned above, you can now follow any feed for a site from the app or the web, not just the site's "Main Feed".
  • Improved Timeline Preview (iOS): The content of posts in the timeline supports inline links and markup.
  • Lots of bug fixes and enhancements.

Maintainer Steps Away

Posted on Thu, 15 Nov 2018 at 09:20 PM

Michael Morris (11/9/2018 - Textual Newsletter):

It is with a heavy heart that I must announce that I will be stepping down as the only full time maintainer of Textual. Textual isn't as profitable as it used to be... [and] I do not see it recovering to the point I can continue doing it full time...

I also must step down because I am burnt out from doing the same thing for the past 8+ years... Textual will not disappear from the Earth completely. I still have plans to do infrequent small improvements though I can't make any promises when and what those will contain.

I'm really bummed to hear that Michael is stepping away from Textual, and even more so because it's partially for financial reasons. I've been a Textual customer for years now and I love the app; I plan to continue using it as long as it runs on my Mac.

Developing indie-software is difficult and 8+ years is a really good run. I wish him the best with his next project.

Textual IRC →

Mastodon and Microblogging

Posted on Sat, 10 Nov 2018 at 12:04 AM

Manton Reece:

We’re launching 2 major features today:

  • can now cross-post to a Mastodon user account, in the same way we cross-post to Twitter, Facebook, Medium, and LinkedIn. This takes a copy of your blog posts and sends them to a specified Mastodon account.

  • Your custom domain on can now be ActivityPub-compatible, so that you can follow and reply to Mastodon users directly on This also means someone can follow your blog posts by adding on Mastodon. (This username is configurable. Mine is

Really excited to see Mastodon integrations in and congrats to Manton on launching such a huge feature. His attention to detail is really appreciated. Here's just one example of it in action:

Muting in has been expanded to support muting individual Mastodon users, or entire Mastodon instances based on their domain name. We have also preloaded a common list of Mastodon instances that are muted automatically because of code of conduct violations.

Manton is very careful and deliberate about the design of features and this is, of course, no exception.

Coincidentally, I've had Mastodon integration on the list of features for a long time and I can't wait to get there.

Code Lasts Whether You Know it or Not

Posted on Sun, 04 Nov 2018 at 06:52 PM

When I first wrote the code to generate this site, and the 4 other times I've rewritten it before settling on the current implementation, I don't know if I thought I'd still be blogging, let alone still relying on that code, over six years later. To its credit, the code still works well; the only times I've touched it recently were to upgrade to Python 3 in 2016 for full Unicode support 🎉 and, back in July, to fix a bug with JSONFeed dates. But in 2018 it's definitely showing its age.

A pile of mostly undocumented bash and Python scripts and a bunch of fragile Python path hacks have allowed me to write these words and so many more over the past 6 years. To this day the site doesn't have a real archive page where posts are collected by year or month; it's just a giant, single-page list of articles. Back when I wrote it, I didn't think I'd have enough posts to ever need that, or that if I did, I'd cross that bridge then. I didn't. I've swept it under the rug as a nice-to-have feature for years, and honestly, if it became an issue, I'd probably just move to a real system like Jekyll or Wordpress; it'd be so much easier.

I'm reminded of something I saw on Twitter the other day:

An example of some old code that lives on

The code we write exists for as long as it's being used.

Recommendations, Echo Chambers, and

Posted on Sun, 04 Nov 2018 at 06:16 PM

In my last post I laid out three main problems that the blogging ecosystem has when competing with social networking sites. I also mentioned that aims to solve all three of them at once.

  • has a chronological, Twitter-like, unified timeline of posts from sites you follow.
  • You can easily connect your Wordpress blog with and post to your site from within the app or the website.
  • The directory makes it easy to browse and search for other sites to follow, and is free for anyone to use regardless of whether they use or not.

In addition to search, most social networks have some sort of recommendation system that gives users suggestions for new people, sites, or channels to follow. Recommendation systems are notoriously difficult to make well, and even "good" ones are now being heavily scrutinized for causing the isolated echo chambers you find on most social networks. If blogging and feed readers are to make a comeback, then they have to have an answer to the search and recommendation systems that all social networks have. has one of those: Search.

Traditionally, social networks rely on a recommendation system where some sort of machine learning algorithm looks at your interests and recommends things to you, but recommendation engines are often the source of the echo chamber trap that most users find themselves in. Apps and services like Overcast use a pretty simple recommendation engine that simply shows you podcasts and episodes that your Twitter friends have recommended. While is probably the most conservative about shelling out recommendations: their discover page is manually curated according to their community guidelines. This has the added benefit of being able to really control what kind of stuff gets promoted on the site, but it can be difficult to scale and it can't easily give users personalized recommendations.

How prominent the recommendations are also changes their effectiveness. Overcast and strategically place their recommendations in spots you'd only see if you were already looking for new things to follow, rather than omnipresently in the home feed or in banners along the side.

All of those systems have problems; all systems do. I don't want to have yet another echo chamber system, and I also want to promote oft-neglected forms of content like local news outlets and investigative journalism. This leaves me with a hybrid approach between editorial curation and Overcast's friend-based recommendations. I'm pretty far off from building this system now, but when I do get to it, I want to make sure I've thought about the consequences first.

Blogging has an Image Problem

Posted on Sat, 03 Nov 2018 at 12:02 AM

I've asked a few people recently about the differences between services like Facebook, Twitter, and Instagram and traditional blogs. The answers are almost entirely conventional, not technical, and this leads me to what I think is a big reason blogging has receded in recent years: blogging has an image problem. It's supposed to be for everyone, but lots of people, some of whom are extremely active on social media sites, are hesitant to start a blog.

From a technical standpoint, the combination of a blog (one that supports WebMentions and/or ActivityPub) and a good Feed Reader can provide nearly all of the features people require from a modern social network (a combination that aims to provide). The technology exists, but getting started with blogging is still too complex. On Facebook or Twitter you can sign up and be posting in minutes, you can easily find other things to follow, and you can easily see what others are saying. Typical Feed Readers solve only one of these problems: you can follow other sites, but you can't post to your own, and you can't easily discover new sites to follow. Having a blog then means you have to switch from reading a post in one app to posting to your own blog in another, which makes blogging feel arcane and clunky. And even with both a good reader and a blog, it's still fairly difficult to find interesting things to follow. is my attempt to solve all three of these issues at once and make reading blogs and blogging on your own site just as easy as browsing your timeline and posting to Twitter.

Feed Readers and Local News

Posted on Fri, 19 Oct 2018 at 06:58 PM

Interest in local news outlets has been declining over time. Bigger, flashier news outlets with more resources can attract more users and more traffic. But local news provides so much valuable information to the people living in the communities it serves, and arguably this news generally affects readers' lives more than national news.

I've been thinking a lot lately about how can help promote local news. Adding individual sections for each "locale" is obviously untenable since the directory is manually curated, but simply tagging a site as a "reputable local news outlet" might be enough. The issue then is deciding what "reputable", "local", and "news outlet" mean from the outside.



Creative Commons License