BiteofanApple
by Brian Schrader

Software Correctness and Software Engineering

Posted on Mon, 05 Oct 2015 at 04:50 PM

David R. MacIver:

You have probably never written a significant piece of correct software.

...the chances of having written whole programs which are [bug free] are tantamount to zero.

It’s not because we don’t know how to write correct software. We’ve known how to write software that is more or less correct (or at least vastly closer to correct than the norm) for a while now. If you look at the NASA development process they’re pretty much doing it.

The problem is not that we don’t know how to write correct software. The problem is that correct software is too expensive.

David's post covers quite a few very important points, and while I've covered this issue before, I think it bears diving back into.

Software Engineering isn't like any other form of engineering. It has nowhere near the maturity of the classical engineering fields, and it's poisoned by a "shipping culture" wherein getting software into the user's hands matters more than that software actually working properly. This isn't to say that product companies and start-ups are wrong, but the culture has soured our mental model of how engineering works. Engineering is the process of designing, developing, maintaining, and predicting the behavior of some device or structure. According to the American Engineers' Council for Professional Development, the definition of engineering is (emphasis mine):

The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation or safety to life and property.

That last part is important when it comes to Software Engineering. Forecasting the behavior of a system, and being able to say it respects safety to life and property, is something most developers don't consider when building their software. Why?

The rest of us aren’t writing safety critical software, and as a result people aren’t willing to pay for that level of correctness.

This is in stark contrast to, for example, the way that NASA develops software. At NASA, software is treated as just another branch of engineering. Its design goes through the same rigorous review that their rocket engines and safety harnesses do. Why? Because a failure in software could cost billions of dollars, and possibly kill people. These are stakes that software like Twitter and iTunes will never have to face, thankfully (iTunes would kill us all).

David links to an article, originally from the mid-'90s, detailing NASA's software development process. Their process is extremely boring and carries tons of overhead. The code is designed, line by line, in pseudo-code before ever being typed into an editor. Engineers then write the code exactly as it's outlined in 3,000+ line blueprints.

Charles Fishman:

That's the culture: the on-board shuttle group produces grown-up software, and the way they do it is by being grown-ups. It may not be sexy, it may not be a coding ego-trip — but it is the future of software. When you're ready to take the next step — when you have to write perfect software instead of software that's just good enough — then it's time to grow up.

It's the process that allows them to live normal lives, to set deadlines they actually meet, to stay on budget, to deliver software that does exactly what it promises. It's the process that defines what these coders in the flat plains of southeast suburban Houston know that everyone else in the software world is still groping for. It's the process that offers a template for any creative enterprise that's looking for a method to produce consistent - and consistently improving — quality.

This sounds, frankly, crazy. No software-focused company would want to adopt a system like this, and I can't blame them. It does not sound like fun, but it does sound like the code will be correct and largely error-free. Obviously this method doesn't work for a lot of use cases in the world of Software Development, and that's ok. One of the bonuses of writing non-critical software is that it doesn't have to be 100% correct; it can ship with bugs and fix them over time. I think one of the best moments for a piece of software, though, is when, after it becomes fairly successful, it decides to grow up and start focusing on stability, consistency, and correctness. We see these waves of new features and lulls of stability releases in a lot of consumer software these days; Mac OS X is a notable example.

In my mind though, developers should take a page out of NASA's book and take their products more seriously. Engineering software is a time-consuming, precise operation, and it should be given the respect and care it deserves.

NASA was able to send a probe to Pluto on a nine-year journey, collect the first-ever close-up pictures of the dwarf planet, and send them back to Earth automatically with code written a decade ago that hasn't needed to be updated since launch. That's damn near perfect software; that's real Software Engineering.

The economics of software correctness →

They Write the Right Stuff →

How to save your team from the evil testing demons

Posted on Thu, 24 Sep 2015 at 09:54 AM

So your team has succumbed to the evil testing overlords. They constantly talk about Unit Tests, Continuous Integration, and Code Coverage. How is programming supposed to be fun if the code does the same thing every time? Where's the sense of adventure? Fear not, the art of cowboy coding is not dead. You can save your teammates from the testing demons with these tips.

  • The first mistake people make when trying to rid their teams of the evil testing demons is being too hasty. You have to destroy the tests from the inside.

  • Write tests for all of your modules, but make it so that the tests only pass in very specific use cases. This will cause confusion and plant that crucial seed of doubt.

  • Make test functions appear to test one thing, then actually test something completely different. The easiest way to do this is to label the test function incorrectly. That is, the test for do_get should be called test_do_post.

  • Write integration tests in place of unit tests. This will cause the unit tests to get really slow over time and make your coworkers think twice about running them constantly. This is important because once your coworkers are free from constantly testing, they'll start to question the utility of the tests as a whole.

  • Write functions that don't return anything. Instead have them modify internal state. Write functions that should return something, but instead put the return value in one of the parameters.

  • This one is key: write monolithic functions that accomplish a lot at once. Did you know that function calls are computationally expensive? Don't use them.

  • Write functions that take complex objects as parameters, whose values have to be configured very particularly. The function should return the same object.

  • On the same note, pass complicated variables around into other functions. That way the tests for those functions will have to mock the complex object.

  • Use application state. Lots of it. Write code so that functions depend on very particular global state being configured. Not only does this make them harder to test, the tests will need to mock this state, which causes them to run slower, thus reinforcing the idea that the tests aren't helping.

  • Did you know that functions that don't have parameters and that don't return anything are really difficult to test? Write lots of those. Remember, you're fighting for the future of programming. Down with the suites! Long live the cowboys and cowgirls!

  • Constantly blame the tests for not finding new bugs. Explain that it's impossible to predict user behavior. How can you test something that's impossible to predict?

  • On build days, commit code with failing tests that prevent the CI from auto-deploying the new build, then blame the stupid CI for not mocking your test cases properly. Assert that something is wrong internally with the CI system.

  • Testing requires a lot of tools and setup. Assert that you can't be 'agile' (a good buzzword) if you have to set up all of this stuff. How can you possibly keep up with Grunt, Travis, Mock, Mocha, Istanbul, Karma, and more?

  • Make the case, "I can't write tests if I don't know what the app is going to do yet." Everyone knows that it's impossible to think through the code before writing it. Code is art and you're an inspired artist.

  • Constantly remind everyone that they aren't doing real test driven development. Tell them, "You know, real TDD is where you write the tests first. Why are we doing this halfway?" When they complain that they don't like writing the tests first, tell them that they may as well not do tests at all if they aren't that committed.

  • When your team members write tests, explain that, according to the rules of TDD, if they aren't writing tests that fail at first, then they don't really know if the tests are valid. Ask them if they write failing tests first. If they say no, then tell them that their tests are essentially meaningless.

  • Whenever you write new code, explain that you don't have time to write tests. You're on a deadline. If the people that have time to write tests want to do it then fine, but you're trying to get actual work done.

  • Insist that the tests need to be run against real data, and that generating data or storing fixtures will never be adequate. Testing, as an idea, is fundamentally broken.

  • Make environment checks throughout your code to ensure that it will only run in production, and can only be tested in production. Then make sure to exit early if any other environment is detected with no error codes.

  • Make any magic variable settings into database values.

  • Test the core language features; you can’t be sure your iterator variable will increment unless you write a test that validates you can add one to a variable. Doing this will slow your test suite down even more, adding to the irritation.

  • Defeat the testing demons from the inside. Whenever you're writing superfluous tests, use lots of mocks and test every tiny piece of your code (e.g. when making a settings dictionary, mock out everything except the dict creation and test just that one bit). This way you'll write tons of mocks for each test. Changing anything will break lots of the mocks and make half the test suite fail with import path errors unrelated to the change.

  • Whenever your team members get too proud of their service to the testing devils, ask them if they test their tests. If they don't, how do they know that the tests work properly? They will quickly realize the paradox of testing and quit.
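For the visual learners, here's a hypothetical sketch of a few of these techniques in action, in Python. Every name here is invented for the occasion; any resemblance to production code is purely aspirational.

```python
# Demonstrates the "mislabeled test" and "complex object parameter"
# techniques. All names are hypothetical.

class RequestContext:
    """A complex object whose fields must be configured just so."""
    def __init__(self):
        self.method = None
        self.retries = 0
        self.flags = {}

def do_get(ctx):
    # Only behaves if the caller has configured the context exactly right.
    if ctx.method != "GET" or ctx.flags.get("mode") != "legacy":
        return ctx  # silently hand the object back, untouched
    ctx.retries += 1
    return ctx  # "returns" its result by mutating the input

# Labeled as a POST test, actually exercises do_get. Confusion guaranteed.
def test_do_post():
    ctx = RequestContext()
    ctx.method = "GET"
    ctx.flags["mode"] = "legacy"
    assert do_get(ctx).retries == 1
```

Note how the test only passes in one very specific configuration, and how changing any field of `RequestContext` will break it for reasons nobody will be able to explain.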

Remember, you're trying to show your team that testing is a failed idea. You're trying to bring back the good ol' days when programmers roamed free, not caged in the predictable, safe confines of test driven development. You're doing this for your team; someday they'll thank you.

Thanks to @AdamAndDevOps, @macromicah, @TheDudestMonk, and @Tanyxp for their additions.

A full size mirror

Posted on Sun, 20 Sep 2015 at 05:28 PM

Well, it is done. A fully functioning mirror of this site is now available, and it's hosted by GitHub Pages, so it's really fast.

Getting the mirror going was simple because, as I mentioned, I already use Git for managing this site. Simply adding a new remote to publish to:

git remote add mirror <url>

and adding that destination to my deployment script was all I needed to do. Yay Git!
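For the curious, the whole setup looks roughly like this. The remote name and URL are illustrative, and the sketch works in a throwaway repo so it's safe to run:

```shell
# Work in a throwaway repo so this sketch is self-contained
tmp=$(mktemp -d) && cd "$tmp" && git init -q

# Add the mirror as a second remote (the name "mirror" is arbitrary)
git remote add mirror https://github.com/example/site-mirror.git

# The deployment script then pushes to both remotes, e.g.:
#   git push origin master
#   git push mirror master
```

Once the remote exists, mirroring is just one extra `git push` in the deploy script.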

Thanks Manton for the awesome suggestion. It's important that we, as a community, try to persist our work. The web right now is ephemeral, so individuals need to take steps to make sure their work is preserved.

A mirror for posterity

Posted on Sun, 20 Sep 2015 at 03:47 PM

Manton Reece:

The default outcome for any site that isn’t maintained — including the one you’re reading right now — is for it to vanish. Permanence doesn’t exist on the web.

Only 2 companies keep coming to mind: and GitHub. I believe both will last for decades, maybe even 100 years, and both embrace the open web in a way that most other centralized web sites do not.

Even though I self-host this weblog on WordPress, I’ve chosen to mirror to GitHub because of their focus on simple, static publishing via GitHub Pages. It has the best chance of running for a long time without intervention.

This is a really cool idea. I've never thought about mirroring my site, and since I already use Git to push updates, adding a mirror is just adding another remote.

Complete mirror of this blog →


Subscribe to the RSS Feed. Check out my code on GitHub
Creative Commons License
BiteofanApple is licensed under a Creative Commons Attribution 4.0 International License.