BiteofanApple
by Brian Schrader

A New Adventure

Posted on Sun, 18 Aug 2019 at 01:25 AM

Last week, I bought a camera. I've wanted one for a while, but I'd never really looked around for something that would fit my needs. A few of my friends and family are actual photographers, and I knew that I didn't want to do what they do. I'd be carelessly jumping into the deep end to swim with reel big fish.

I haven't owned a dedicated camera in over a decade. Ever since 2010, the only camera I've owned has come with an iPhone attached to it. However, while the iPhone X's camera is great, it's just not enough for a myriad of situations, and in the last year I've found myself ever more limited by what my phone's camera could and couldn't do.

There's an old adage: "the best camera is the one you have with you", and if I was going to have this camera with me even a fraction of the time that I have my phone with me, it had to meet some pretty strict qualifications: it had to be really versatile (interchangeable lenses were a hard requirement), it had to be compact enough to take almost anywhere, and it had to be a camera that could grow with me as I delve into this newfangled world. So I talked to some people, read a bunch of articles, watched a bunch of reviews, and got this:

I've heard from quite a few people that Sony's α6000 and α6400 are really good for people who want a great camera but aren't willing to dive into DSLRs. I don't know much about that second bit, but that description matched me to a T. As I've only had it a week so far, my opinions are still riddled with New Toy Syndrome. That said, I really like this camera. It's small and light enough to fit in the spare pocket of my daily carry bag, and it uses Sony's E-Mount, which means there are tons of compatible lenses, making it really versatile.

I ended up getting the α6000 mostly because of the price. The α6400 was more than I wanted to spend and anything below the α6000 didn't have a viewfinder, which was a dealbreaker. I'm still rocking the kit lens for now, but I plan on getting a 50/55mm Prime lens pretty soon. Before that though, I need to get a much more intuitive grasp of how to properly use what I have, and it'll probably take me a while to get the basics down.

A Checkpoint in Time

While I'll probably be posting most of the good shots I take to my photoblog, below are a few of my favorite shots from the last week. I'm posting them here so that they serve as a record of my skills as a photographer right now. Hopefully, as time goes on and my skills improve, I'll look back on these pictures like I look back on old code I've written: with fond memories, and mild disgust.

Don't Ban Infinite Scrolling, License Engineers Instead

Posted on Wed, 31 Jul 2019 at 05:46 PM

Yesterday, the Verge reported that Sen. Josh Hawley sponsored a bill that aims to reduce the tech industry’s use of "addictive design" practices by putting a ban on features like infinite scrolling and autoplaying video. While I applaud Congress for focusing on this important issue, the proposed solution is both naive and unproductive. When a surgeon is accused of malpractice, the appropriate solution is not to ban their use of the scalpel, it's to revoke their license.

I have long argued that Software Engineers (especially those at large companies that affect the lives of millions of people) should be licensed. Requiring a license to practice Software Engineering would finally place software in the realm of the other classical engineering fields and require practitioners to use their skills ethically and for the benefit of society. It would also help educate engineers on the risks and trade-offs with the decisions they make and give the government a lever to pull when trying to encourage ethical practices across the industry. Likewise, licensed engineers are able to refuse to implement unethical or unsafe designs when they fear that they could lose their license.

One common argument against this point is that the tech industry is filled with lots of independent developers and others who operate relatively small businesses, and that licensing them would put a damper on overall innovation in the industry. However, while we often require Civil Engineers to have a license, we don't apply the same requirement to carpenters, and we can apply the same kind of distinction to Software Engineering licensure.

We've required licenses for those practicing classical Engineering, Medicine, and Law for a long time¹, and those licenses have helped governments and the licensed individuals themselves steer industry practice away from things that can be considered unethical or unsafe. Instead of trying to take powerful tools away from Engineers, we should focus on enabling and educating them to make ethical and societally beneficial decisions. The Internet is part of the infrastructure of our modern world, so let's ensure that the people who build and maintain it have society's best interests at heart.

¹ As an aside, we even license hairdressers and locksmiths. If unlocking cars or cutting hair requires a license, why doesn't building software that impacts 20% of the world's population require one?

The Social Web

Posted on Sun, 28 Jul 2019 at 06:35 AM

Things have been quiet here recently, and that's largely because I've been really busy working on lots of cool new stuff for Pine.blog. While the last few months have been full of steady progress, there's still a ways to go before the next set of features are ready for the world. If everything goes according to plan, the next major release should allow premium users to start their own blog on the site.

So far, Pine.blog has been mostly focused on being a good feed reader, and while I think it is one, there's so much more that can be done with feeds and blogs than just reading them.

Feed Readers are the First Step

I've wanted Pine.blog to support custom blogs since the beginning, and that's largely because I think that by making it easier to both read and write on the open web, we can give people a viable alternative to more traditional social networks and offer them an escape from the problems those platforms have.

Using tools and technologies that have existed for years, the web itself can be a social network, but in order for people to embrace that idea, there have to be easy-to-use tools that are new and powerful but also familiar and approachable. Just as importantly, there have to be people to talk to; the web has to feel like a social network.

Pushing Forward

One of the things I've come to strongly believe is that as developers (or as anyone who makes a thing) we can work to push the world toward what we think it should be, and I think that the Web can and should be our shared social network.

This is why I'm so excited by what I see as the future for Pine.blog. If we want people to move off of the platforms we think are the cause of so many problems in our world, we need to give those people a place to go. Most people aren't going to start a blog and use a feed reader unless we, as the people with the skills to do so, make the Social Web better than Social Media.

Updates on using NSOperation

Posted on Sat, 29 Jun 2019 at 07:31 PM

Last time I talked about NSOperation, I mentioned that I really liked using it to do things like networking and asynchronous operations in my app. Well, I've made a few tweaks to my BackgroundNetworkingController recently (alas a new name was not among them) and I think the new version is even more readable and expressive.

For those who don't know, NSOperation (or just Operation in Swift) is a class that, when paired with (NS)OperationQueue, allows you to order and track long-running and often asynchronous tasks in your apps. You can use it to do API calls, animate transitions, walk a user through a process, and a lot more.

In my last blog post I demoed this piece of code as an example of how I use Operations in the Pine.blog app.

func updateTimeline() {
    let reauthorize = TokenReauthorizationOperation()
    let fetchTimeline = FetchTimelineOperation()
    fetchTimeline.delegate = self
    fetchTimeline.addDependency(reauthorize)
    BackgroundQueueController.queue?.addOperations(
        [reauthorize, fetchTimeline],
        waitUntilFinished: false
    )
}

And while that example is pretty straightforward, I knew it could be better. There are so many times when I just need to tell my app: do these things in the background, in this order, and tell me when you're done (and let me know if something comes up). Operations allow me to do that, but setting up delegates and adding dependencies feels like cruft that I don't really want to care about.

With that in mind, I added a new set of methods to my queue controller: performInSeries(operations:) and performInSeries(operations:with:). Using those instead, the above example becomes:

BackgroundNetworkingController.performInSeries(
    operations: [
        TokenReauthorizationOperation(),
        FetchTimelineOperation(),
    ],
    with: self
)

This code sets up each operation with the previous operation as a dependency and assigns their delegates to self. Reading this, it's pretty clear that these operations run in the order I specify and that they'll alert me if anything comes up, which is exactly what I want.
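For the curious, here's a minimal sketch of what performInSeries(operations:with:) might look like under the hood. The OperationDelegate protocol and DelegatedOperation base class are simplified stand-ins for my real operation classes, not Pine.blog's actual code:

import Foundation

// Simplified stand-ins for the delegate machinery the real operations use.
protocol OperationDelegate: AnyObject {}

class DelegatedOperation: Operation {
    weak var delegate: OperationDelegate?
}

class BackgroundNetworkingController {
    static let queue = OperationQueue()

    static func performInSeries(
        operations: [Operation],
        with delegate: OperationDelegate? = nil
    ) {
        for (index, operation) in operations.enumerated() {
            // Hand the delegate to any operation that supports one.
            (operation as? DelegatedOperation)?.delegate = delegate
            // Each operation depends on its predecessor, so the queue
            // runs them strictly in the order they were given.
            if index > 0 {
                operation.addDependency(operations[index - 1])
            }
        }
        queue.addOperations(operations, waitUntilFinished: false)
    }
}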

Even the most complex uses in all of the Pine.blog app still retain their readability (aside from array concatenation). This example reauthorizes the user's token, then fetches the most recent posts in each of the user's timelines, finds any new posts that the user has liked, and finally calls the given completionHandler to let the app know that it has completed updating its data.

BackgroundNetworkingController.performInSeries(
    operations:
        [ ReauthorizationTask() ]
        + timelines.map { FetchNewestPostsInTimeline(timeline: $0) }
        + [ FetchLikesOperation(), BlockOperation { completionHandler?(self.fetchResult) }],
    with: self
)

There's still a bit further I can go with this, but I'm really happy with the readability and clarity improvement that these changes have made to my codebase.

Lots of Little Things

Posted on Sat, 29 Jun 2019 at 04:02 AM

Recently I've been focused on finishing the next set of major features for Pine.blog, but as happens from time to time, I got a bit distracted and ended up knocking out a whole lot of little features that I've wanted to build for a long time. Today's update was entirely server-side, and I'll have a new update for the iOS app coming in a few days.

WordPress Enhancements

In addition to making a lot of things just plain faster, I've also added a new feature for WordPress users: when they post on their blog, WordPress will automatically let Pine.blog know, so their posts show up much more quickly in their (and other users') timelines.

For me, this feature involved yet another foray into undocumented XML-RPC APIs from over a decade ago. Much of the information just isn't easy to find anymore, so building features that use these APIs is more archaeology than software development.
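The ping mechanism here is the old weblogUpdates.ping XML-RPC call, dating back to the early 2000s. A minimal ping body looks roughly like this (the blog name and URL below are placeholders):

<?xml version="1.0"?>
<methodCall>
  <methodName>weblogUpdates.ping</methodName>
  <params>
    <param><value>My Example Blog</value></param>
    <param><value>https://example.com/</value></param>
  </params>
</methodCall>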

API Keys and Webhooks

Users who want to write scripts using Pine.blog (or build custom applications) can now quickly and easily get an API key that lets them access the full range of Pine.blog APIs.

And for those who want to dive even deeper into their data, users can now add a Webhook URL to receive updates from Pine.blog whenever a feed they're subscribed to changes.

API Documentation Revamp

I've completely redone the Pine.blog API documentation. Hopefully this makes it much easier for developers to discover and use the Pine.blog/Feed Directory API. The new documentation includes much more detailed information about throttling limits, what to expect from each endpoint, and much, much more. If you're looking to use the Pine.blog/Feed Directory API in your app or service, you can find out more in the documentation here.

A lot of these developer-focused features are test runs for much broader features that I have cooking in the background. Meanwhile I'm trying to keep the majority of what I do with Pine.blog a lot more user-focused.

Changing Tides

Posted on Sun, 09 Jun 2019 at 02:43 AM

It's a big day: Pine.blog is now free to use!

Pine.blog has been out for over a year now and it's been getting better and better over that time. However, although quite a few people have signed up, most have stopped short of signing up for a premium subscription. At first I thought that adding a free trial to Pine.blog would help, but that doesn't seem to do much to encourage signups.

I want Pine.blog to be useful to as many people as possible because I think Pine.blog and a lot of other Open Web tools (i.e. Micro.blog, Mastodon, etc) represent what social networking should be. In order to reach a wide audience, Pine.blog needs to let people know what it is and then convince them to use it. I've made pretty good progress on that first goal: Pine.blog's traffic numbers keep going up and responses are generally positive, it's just that those numbers don't really translate into subscriptions.

In that light, Pine.blog no longer requires a subscription to use: just create a free account and you're good to go! If you want to organize your feeds into multiple timelines or start a blog* you'll still need a premium subscription, and don't worry, you still get a free trial.


At the time of writing, the newest version of the iOS app is "Waiting for App Review", so you'll have to sign up on the website to start a free account, but you can use the app once you've created your account.

*Coming soon!

The Hidden Cost of Cheap Hardware

Posted on Wed, 23 Jan 2019 at 06:08 PM

Most times, when developers debate code-level optimizations, someone will eventually bring up the classic platitude:

Developer-time is more expensive than Compute-time.

And it's true: paying a developer to optimize code is generally more expensive than adding additional hardware to run the existing, slow code faster. Servers and storage are so cheap these days that most developers don't need to know or care about the actual hardware that runs the code they write. Hardware is fast, cheap, and available in huge surplus, but this overabundance of cheap computing power has caused this throw-more-hardware-at-it mindset to proliferate into other aspects of development, namely how systems are designed.

Let's face it, modern web stacks are complex

A typical web stack contains a lot of co-dependent software, and Developers, Admins, and DevOps will each have their own tools to improve, manage, and add some semblance of comprehensibility to a running system. But, each additional proxy, app instance, and physical or virtual server adds multiple possible points of failure to your production system and lots of additional complexity to development.

Over time, developers add more and more layers of software and tooling to their project until the hardware can't handle it anymore, and then instead of reevaluating their tools, they make the fateful decision to break the system out into smaller, isolated pieces, turning their simple website into a complex distributed system, because after all, adding hardware is cheaper, right?

"Complexity is a bug-lamp for smart people. We're just drawn to it."

Maciej Cegłowski, The Website Obesity Crisis (2015)

The hardware is cheap, yes, but the developer-time needed to design a system whose pieces communicate over dozens of network connections and physical or virtual machines often far, far exceeds the cost of keeping things simpler from the get-go. Most small and medium systems can run on very little hardware, and that, in turn, keeps the project design much simpler than spreading less efficient work over dozens of machines.

None of this is to say that an unoptimized codebase is the same as a large or possibly over-engineered system, but there are parallels between them. Both are made possible by the cheap, abundant access we have to powerful hardware, which can run inefficient code and all of the layers of abstraction that slow down and sometimes overcomplicate modern software. It might be, then, that developers working in these large systems, like some developers working on inefficient code, don't realize just how powerful their hardware really is.

Empathy for the machine

Today's hardware is fast, really fast, and we can use that power to our advantage, but only if we, as developers, have an intuitive sense of just how fast it is.

In a totally related, I promise, anecdote: an old coworker of mine was complaining one day that a Perl script he'd written took too long to run, and he didn't know why. I asked him how long it took, and he said, "About 2 seconds, but it should be instant." At first I thought it was silly to spend so much time optimizing away 2 seconds, but what I didn't know was that the script was only processing a few hundred kilobytes of testing data. Eventually, it would need to process a few hundred gigabytes. We had a High Performance Computing Cluster he could have run his analysis on, but he didn't want to use it because, as he put it, "This analysis isn't complicated, it should be able to run on my machine", and because he'd have to add a lot of code to ensure it would run correctly in a distributed environment that was harder to debug. After he found and fixed the issue, processing the test data took an imperceptible amount of time, and he was able to run the entire analysis on his 6-core workstation in a little under an hour.

Without that kind of intuitive understanding of how long a task "should take", it's extremely difficult to know when, or if, something is wrong. You might just assume that your analysis really does take 2 seconds, or that your webapp really does take 3 seconds to send a simple response, and that you need more powerful hardware to get it done. What's worse, developing that intuition gets harder and harder the further you are from the actual hardware. With layers of virtualization and tooling between developers and their hardware, it's difficult to perform controlled experiments or do any sort of real comparison between revisions. Your gut instinct is your only gauge.

We need some sort of an anchor


Defining Deviancy Down

Armed with cheap hardware and the conventional wisdom that adding more servers is cheaper and easier than optimizing what we already have, we've arguably made our systems slower, more complicated, and over-engineered for the problems they claim to solve. It's time we all take a look at the systems we build and ask ourselves if they need to be as complex as we're making them. We, as a community, tend to want to copy what the big companies do, but those enormous companies have different needs than the rest of us. Facebook, Netflix, and Google have different problems than the vast majority of sites. We don't need to use their tooling, apply their designs, or live with their compromises, but we often do exactly that.

What we need is some sort of a test, one we can apply to our systems to anchor our thinking about what hardware our systems need day-to-day. I've half-joked several times that any website that has less than a hundred concurrent users should be able to run on my spare Raspberry Pi on my apartment's internet. If you're building a small-business site or internal tool, that same half-joke applies to your site too.¹ Such a small, cheap system-on-a-chip is way too fragile for any real, production use, but it's more than powerful enough to be a good testing rig. Get one, deploy your production system on it, and see if you can serve a hundred users quickly and efficiently. Older systems have done more with a lot less.
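If you want to make that test concrete, a load-testing tool like ApacheBench can simulate the hundred-concurrent-user scenario against whatever you've deployed (the URL here is a placeholder):

ab -n 2000 -c 100 http://your-raspberry-pi.local/

That sends 2,000 requests, 100 at a time, and reports how quickly your little server kept up.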

We're not building a space ship here, just a website.²

¹ My Raspberry Pi B+ (Quad-Core 800MHz CPU & 1 GB of RAM) is hooked up to a 150x15 Mbps connection, and runs a standard build of Debian Linux. If you can't host your website on that, then you're either building a fairly complex site with lots of computing demands, or have some pretty inefficient code.
² Disregard this message if you actually are building space ships or otherwise very complex software that for obvious reasons cannot be run on a $35 SOC meant for teaching children. Web Devs building CRUD apps: you're not excluded.

I Love NSOperation

Posted on Sun, 23 Dec 2018 at 10:29 PM

I've talked before about how much I like using Apple's Grand Central Dispatch API for multithreading on iOS, but over the last year I've become a huge fan of NSOperation and it's become my preferred way to do multitasking on iOS over bare-bones GCD.

NSOperation (or just Operation in Swift) can be used as a layer of abstraction over GCD that provides built-in dependency tracking and task isolation. When combined with NSOperationQueue (OperationQueue in Swift) you also get powerful throttling APIs and more. Typically I've used Operations for background networking and processing, but the API is designed to be used for any set of discrete tasks, including UI workflows and more.
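As a tiny taste of the API (a toy example, not code from Pine.blog): two BlockOperations, one dependent on the other, running on a queue throttled to one operation at a time.

import Foundation

let queue = OperationQueue()
// Throttle the queue so only one operation runs at a time.
queue.maxConcurrentOperationCount = 1

let fetch = BlockOperation { print("fetching…") }
let process = BlockOperation { print("processing…") }
// process won't start until fetch has finished.
process.addDependency(fetch)

queue.addOperations([fetch, process], waitUntilFinished: false)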

As a Networking Layer

My most common use case for NSOperation is networking. In Pine.blog, for example, I always need to ensure that a user's OAuth Access Token is valid before making a resource request, say for their timeline. That code looks something like this:

func updateTimeline() {
    // This task must always happen first. It ensures that the OAuth token
    // is going to be valid when I request it, or it attempts to refresh the token.
    let reauthorize = TokenReauthorizationOperation()
    // Now we attempt to fetch the user's timeline and add ourselves as a delegate
    // so the Operation will tell us when new data is available. We also set the
    // reauthorization operation as a dependency of the FetchTimelineOperation
    let fetchTimeline = FetchTimelineOperation()
    fetchTimeline.delegate = self
    fetchTimeline.addDependency(reauthorize)
    // Add the tasks to the queue and process them asynchronously.
    // The custom delegate will be alerted when new data is available.
    BackgroundQueueController.queue?.addOperations(
        [reauthorize, fetchTimeline],
        waitUntilFinished: false
    )
}

What I've really liked about my NSOperation-based networking is that, from the ViewController's perspective, it doesn't matter what these tasks do or how; the ViewController is just notified when results arrive. I've finally stashed my networking code away in its own little corner of the codebase, rather than in a custom controller or nestled inside the ViewController where it just gets in the way.

The FetchTimelineOperation takes care of fetching the JSON from the API and creating Core Data Managed Objects. Then my ViewController's FetchedResultsController just worries about displaying the changes to the user. It's simple, clean, and there's a clear separation between the ViewController and the Networking Stack.

Gotchas

If there's one thing that frustrates my iOS development, it's that Core Data Contexts aren't thread-safe. Originally, I thought that just meant I couldn't write to the same Core Data store from another thread, but that's not the full story. Never read from or write to Core Data objects from a thread or context other than the one they came from. Better yet: do all your Core Data writing inside a performAndWait {} block.

Keep in mind, these aren't so much issues with NSOperation as they are overall tips for using Core Data.

The Bad Way

When it comes to my Operations, that means that although you might be tempted to write something like this:

class MarkPostAsRead: Operation {
    var post: Post
    init(post: Post) {
        self.post = post
    }
    override func main() {
        let context = getManagedObjectContext()
        context.performAndWait {
            self.post.read = true
            do {
                try context.save()
            } catch {
                NSLog("Failed to save post read status for Post: \(self.post.objectID)")
            }
        }
    }
}

You should never do this. The post was fetched on some other context (likely the view context on the main thread), so reading and writing it here violates Core Data's threading assumptions, and sooner or later you'll get a crash.

The Good Way

The best way I've found to do Core Data work in a Background Operation is something like this:

class MarkPostAsRead: Operation {
    var id: NSManagedObjectID
    init(postWith id: NSManagedObjectID) {
        self.id = id
    }
    override func main() {
        let context = getBackgroundManagedObjectContext()
        context.performAndWait {
            // Get the post from Core Data
            var post: Post!
            do {
                post = try context.existingObject(with: id) as? Post
            } catch {
                NSLog("Unable to mark post as read because it doesn't exist.")
                return
            }
            // Mark it as read
            post.read = true
            // Save the context
            do {
                try context.save()
            } catch {
                NSLog("Failed to save post read status for Post: \(id)")
            }
        }
    }
}

This method ensures that you're never passing managed objects between threads and you're only modifying that object within the background context you created for that purpose.
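Dispatching the operation then looks something like this, handing over the post's objectID rather than the post itself:

BackgroundQueueController.queue?.addOperation(
    MarkPostAsRead(postWith: post.objectID)
)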

Keep in mind though, any FetchedResultsControllers you've made won't be immediately notified of these changes, because they happened in a background context instead of the View Context the controllers are using. To fix this, add something like this to your Core Data stack code:

    func initializeCoreDataStack() {
        // ... Do startup work...
        // Listen for background context changes
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(contextDidSave),
            name: .NSManagedObjectContextDidSave,
            object: nil
        )
    }
    @objc func contextDidSave(notification: Notification) {
        guard let sender = notification.object as? NSManagedObjectContext else {
            // Not a managed object context. Just leave it alone.
            return
        }
        // Don't listen for changes to the view context.
        let viewContext = DataController.persistentContainer.viewContext
        if sender != viewContext {
            ensureMainThread {
                viewContext.mergeChanges(fromContextDidSave: notification)
            }
        }
    }

Now the View Context will automatically merge changes from the background contexts whenever you call save() on one of them.

Dispatching Concurrent Groups of Operations

In some cases your app will need to dispatch an operation that itself needs to dispatch multiple, concurrent suboperations. In these cases I've found it really helpful to wrap the group of asynchronous operations inside a synchronous operation that simply waits for them to complete. (One caveat: the wrapper shouldn't wait on the same queue it's running on unless that queue allows enough concurrency, otherwise the wait can deadlock.)

class LotsOfConcurrentRequests: Operation {
    var urls: [URL]
    var results: [JSONObject]? = nil
    init(responsesFrom urls: [URL]) {
        self.urls = urls
    }
    override func main() {
        let suboperations = urls.map { url in
            return AsyncFetchURLOperation(url: url)
        }
        // Add the tasks to the queue and wait until they're all done. Easy.
        BackgroundQueueController.queue?.addOperations(
            suboperations,
            waitUntilFinished: true
        )
        // Gather the results, skipping any requests that failed.
        results = suboperations.compactMap { $0.result }
    }
}

And that's pretty much it. NSOperation has basically replaced GCD for me in all but a few niche use cases. It lets you define complex workflows in a simple, clear way that you can invoke and control from any part of your app, and it nicely separates your networking code from the other parts of the system.

Pine.blog Version 1.3: Multiple Timelines 🎉

Posted on Wed, 19 Dec 2018 at 07:23 PM

A look at the new Pine.blog

Pine.blog has come a long way since launch and today I'm announcing the next big step: Version 1.3, which is available now. This version is pretty jam-packed with features and improvements, but the most notable addition is the ability to organize the sites you follow using timelines.

Multiple Timelines

Pine.blog has had a clean, easy to read timeline since the beginning, but after you start following lots of sites, it can be pretty cumbersome to have so many new posts in one timeline. I've wanted to support multiple timelines for a while now, and it's finally here. Most feed readers have a method of organizing subscriptions, but traditional "folder" methods don't allow for a feed to be in multiple lists at once. With Pine.blog, a feed can be in multiple timelines at once so you can organize your timelines however you like.

Pine.blog doesn't use algorithms to decide what you see in your timeline. Instead Pine.blog shows you every post from the sites you follow. Traditionally, companies use those algorithms for two main reasons: to "increase engagement" (shudder) and because users follow so many things that they get overwhelmed by the sheer volume of content they see in their news feed.

A look at the new Pine.blog timeline.

But people are smart. If you give them the tools they need, they will use them. And multiple timelines are just one of a number of upcoming features that are coming soon to help users organize and find things they want to follow.

If you're a fan of the new changes to the timelines, get in touch. I'd love to hear from you.

Other Changes

In addition to multiple timeline support, version 1.3 also has a ton of new features for both iOS and the web. The web changes have been rolled out slowly over the last few days so you may have already seen them, but here's a pretty exhaustive list.

  • Web UI Refresh: The timeline, likes, and sites pages have all been updated with a cleaner, more modern look.
  • Improved Search Results (iOS/Web): You should see much more relevant search results and better support for advanced search syntax (e.g. using AND, OR, and parentheses).
  • Swipe to favorite (iOS): You can now swipe right on a post in the timeline to favorite it, or swipe right on a post you've already liked to unfavorite it.
  • Better Featured Images (iOS): Small images (usually share buttons) will no longer show up in the featured images gallery.
  • Follow any feed from a site (iOS/Web): As mentioned above, you can now follow any feed for a site from the app or the web, not just the site's "Main Feed".
  • Improved Timeline Preview (iOS): The content of posts in the timeline supports inline links and markup.
  • Lots of bug fixes and enhancements.

Textual.app Maintainer Steps Away

Posted on Thu, 15 Nov 2018 at 09:20 PM

Michael Morris (11/9/2018 - Textual Newsletter):

It is with a heavy heart that I must announce that I will be stepping down as the only full time maintainer of Textual. Textual isn't as profitable as it used to be... [and] I do not see it recovering to the point I can continue doing it full time...

I also must step down because I am burnt out from doing the same thing for the past 8+ years... Textual will not disappear from the Earth completely. I still have plans to do infrequent small improvements though I can't make any promises when and what those will contain.

I'm really bummed to hear that Michael is stepping away from Textual, even more so because it's partially for financial reasons. I've been a Textual customer for years now and I love the app; I plan to continue using it as long as it runs on my Mac.

Developing indie-software is difficult and 8+ years is a really good run. I wish him the best with his next project.

Textual IRC →
