

Industry Ideals Disconnect

There’s an alarmingly large disconnect between what our industry preaches as ideals and what it actually produces. Ultimately, I believe this stems from developers wanting, or needing, to just get shit done. Working code is seemingly good enough, and anything more is an unknown. So we carry on.

I haven’t been quite so lucky. My first project finally shut down last month after being in production for about ten years. My second project still requires the occasional support request, also ten years later. Since those initial projects, I’ve worked on hundreds of projects of various sizes, some failing, and some slipping past deadlines endlessly.

It’s a tough job. There is an infinite number of things to know, and you have to balance all of them against your unique constraints. When we seek help, the available resources offer shallow solutions. Tutorials aim to get you started, frameworks aim to accelerate you, but nobody can paint you a full picture. We navigate an impossible maze of decisions, balancing doing things the academically correct way, doing things quickly, and actually solving the business problems we were brought on to solve.

Unfortunately, what has happened is that we tend to favor publishing and reading content that shows us how to get shit done. Anything more is silly, and perhaps reserved for the book authors and consultants who have clearly lost touch with reality and want to take advantage of us. Even those of us who do accept some of these ideals cannot always practice them.

Ideals like…

  • Accurately model the problem domain
  • Separate responsibilities, exposing clean interfaces
  • Layer applications with clear boundaries
  • Unit test every line of code or every logic branch
  • Business tests for every use case or interaction
  • Version every change cohesively and articulately
  • Support multiple communication channels (HTTP, CLI, etc.)
  • Communicate over those channels in various formats (JSON, XML, etc.)
  • Minimize cost of utilization (battery, memory, disk, bandwidth, etc.)
  • Support interaction in different locales
  • Protect users and their data from harm
  • Be simple enough to allow adoption and educate users
  • (searching for “things programmers should know” should yield millions more)

Gaps like…

  • The vast majority of developers do not write tests.
  • The vast majority of developers lean on frameworks for design guidance.
  • Few of us build applications from the inside out, testing the logic that matters first.
  • Many of the above ideals only come up once they have become problematic.
  • RESTful web services that promote discoverability are rare, and no popular framework or client I’ve seen supports them.
  • We reach for poorly designed ORMs long before persistence is actually a concern, and SQL isn’t a problem until much later on.

Let’s start asking how we can do things better, rather than how we can do things faster. For 2014, my goal is to dive deeper into these topics, and to try and be more mindful of what I produce. Every snippet, article or piece of code I publish will be focused on raising awareness, rather than a cheap trick to save a buck.

These things can’t be afterthoughts.


Preparing for the last 10 percent

Most developers have experienced it before… the dreaded last 10%. At the start, you blast through the initial planning… most entities and processes are seemingly well defined. The middle goes the usual speed, with some ups and downs. Then, the end is near, and you just can’t seem to get there. Things start to slide, and that last 10% ends up taking the majority of the time.

Even with good tests to handle the technical problems, it’s easy to paint yourself into a corner: a nearly finished application with a few major problems. Why does this happen?

We begin mapping our domain onto a simplified model, using tools like whiteboards or some other easily moldable medium. Then, as soon as we are feeling comfortable, we begin to convert that mental model into something our application can use. This may be database schemas, ORM mappings, or perhaps just plain code. We figure that we have enough of the model figured out that we can begin to write our application. So we do. We write layers and layers on top of that incomplete and probably incorrect model. We build our dependencies in the wrong order.

Let’s look at the classic Domain-Driven Design structure as an example, which is essentially composed of four layers: User Interface, Application, Domain, and Infrastructure. The domain layer is our business logic. The application layer optionally wraps or packages the domain layer and acts as a more thorough service layer. The user interface translates input and output, allowing those services to be called from another medium. The infrastructure layer implements or assists the domain layer with access to external dependencies. Even ignoring DDD for a moment, this structure is common, though perhaps your boundaries aren’t quite as obvious. Making boundaries obvious really helps keep responsibilities and dependencies in the right places.
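As a rough sketch of how those four layers might map onto code (all the names here are hypothetical, and Python is just a stand-in for whatever language you use):

```python
# Domain layer: pure business logic, no framework or I/O dependencies.
class Order:
    def __init__(self, order_id):
        self.order_id = order_id
        self.lines = []

    def add_line(self, sku, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self.lines.append((sku, quantity))


# Infrastructure layer: provides the domain with persistence.
class InMemoryOrderRepository:
    def __init__(self):
        self._orders = {}

    def save(self, order):
        self._orders[order.order_id] = order

    def get(self, order_id):
        return self._orders[order_id]


# Application layer: wraps the domain as a service, one method per use case.
class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def add_line_to_order(self, order_id, sku, quantity):
        order = self.repository.get(order_id)
        order.add_line(sku, quantity)
        self.repository.save(order)


# User interface layer: translates raw input (here, a dict of strings)
# into application calls; it knows nothing about the domain rules.
def handle_add_line(service, args):
    service.add_line_to_order(args["order_id"], args["sku"], int(args["quantity"]))
```

Notice the dependency direction: the UI knows the application layer, the application layer knows the domain, and the infrastructure exists to serve the domain, never the other way around.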

With an application built on incorrect assumptions and incorrect data, it’s very easy to fall into the trap of fixing problems one at a time. Identify a problem, identify a solution, implement the changes across the entire system. Rinse and repeat as the incorrect model is slowly chipped away at. Not only is this feedback loop extremely slow (hours? days?), it’s like trying to plug holes in a leaky boat. Building a watertight boat is much easier before it’s in the water. And often, fixing issues this way leads to new issues, because you’re just reacting to changes rather than driving them.

How can this be avoided?

Write your domain layer, in its entirety, first. Don’t think about databases or even HTTP until it is done and thoroughly tested, both in terms of code coverage and business cases. Define and create all of your use cases, and make sure your domain model holds up. The first user interface you build should programmatically run through all of your use case objects. Tools like Cucumber or Behat are great for this.
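To make that concrete, here is a minimal sketch (hypothetical names, and plain Python asserts rather than Cucumber/Behat) of a use case exercised directly as your “first user interface” — no HTTP, no database:

```python
# A use case exercised directly, before any real UI or database exists.
# All names here are hypothetical.

class RegisterUser:
    """Use case: register a user with a unique email address."""

    def __init__(self, users):
        # Any dict-like store works; an in-memory dict stands in for
        # the database until the domain model has proven itself.
        self.users = users

    def execute(self, email):
        if email in self.users:
            raise ValueError("email already registered")
        self.users[email] = {"email": email}
        return self.users[email]


def duplicate_registration_is_rejected():
    """A business-case check, runnable long before any real UI exists."""
    use_case = RegisterUser({})
    use_case.execute("a@example.com")
    try:
        use_case.execute("a@example.com")
    except ValueError:
        return True
    return False
```

A suite of checks like `duplicate_registration_is_rejected` is the earliest, cheapest place to discover that your model is wrong.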

One complementary architectural pattern here is Entity-Boundary-Interactor (EBI) from Uncle Bob. EBI essentially states that each use case is implemented as an Interactor, which works with business objects to perform some task. The interactor has a dedicated request object and response object, relevant to that interaction. Each user interface is connected through Boundaries. The boundary could be HTTP or the command line; it doesn’t matter. For an HTTP interface, its job would be to convert HTTP requests to business requests, and business responses back to HTTP responses. On the other end, things like database access are also implemented behind boundaries, translating requests for information back into business objects.
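The EBI shape described above might look something like this sketch (hypothetical names; the “HTTP request” is just a dict standing in for whatever your web layer provides):

```python
from dataclasses import dataclass


# Request and response objects, dedicated to one interaction.
@dataclass
class GreetRequest:
    name: str


@dataclass
class GreetResponse:
    message: str


# The interactor implements one use case in business terms only;
# it has no idea whether it is driven by HTTP, a CLI, or a test.
class GreetInteractor:
    def handle(self, request: GreetRequest) -> GreetResponse:
        return GreetResponse(message=f"Hello, {request.name}!")


# An HTTP-flavored boundary: converts a raw "HTTP" request into a
# business request, and the business response back into an "HTTP" one.
def http_greet_boundary(interactor, http_request: dict) -> dict:
    request = GreetRequest(name=http_request["query"]["name"])
    response = interactor.handle(request)
    return {"status": 200, "body": response.message}
```

Calling `http_greet_boundary(GreetInteractor(), {"query": {"name": "Ada"}})` returns `{"status": 200, "body": "Hello, Ada!"}`; a CLI boundary would wrap the very same interactor.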

With this in place, implementing a real user interface and a real database should be relatively easy and predictable, giving you a linear path all the way to the finish line. Until every use case is implemented and fully tested, your model might as well be wrong.


Interface Bandwidth Utilization

How efficient are you at utilizing your interface’s communication bandwidth? As programmers, we communicate our intentions through our keyboard/mouse and into a text editor. Similar to writers, we primarily use the keyboard, and have 10 fingers to rapidly convert our thoughts to text. Traditional writing is typically very linear: write what’s on your mind, and go back and edit it afterward.

Programming is non-linear: it branches, and it’s highly iterative. We rapidly jump between lines, files, and tests, go in and out of a call stack, and are often looking at several things at once. This is very demanding physically. As soon as we reach for the mouse, we’ve introduced lag between what we want and what we’re doing. We use a mouse because it can get us from A to B very quickly, offering far higher resolution (accuracy) while also capturing things like speed and acceleration. That is invaluable when drawing or when playing video games, but not when editing text. Those few extra dots per inch do not matter when the text is probably only 10pt anyway.

With a modern mouse and a full keyboard, there is an enormous amount of input bandwidth. Professional gamers work several hours per day to try and maximize (and sustain) their utilization. Typically, they anchor their left hand on the left side of the keyboard, dedicated to movement and hotkeys, with the right hand planted on the mouse for perspective and precision. It can be very efficient, and we see some of them hitting hundreds of actions per minute. They have mastered their interface for insanely high speed and accuracy. As programmers, we should strive to reach a similar level of mastery, but are we even set up correctly?

In an effort to minimize the cost of switching to the mouse, many have switched to using the trackpad on their laptop (or rather, not using an external keyboard). While this helps, it sacrifices most of the benefit of using a mouse at all: the high-bandwidth input. If you are still using an external mouse, here’s an experiment: move it another 12–18 inches further away, and you’ll start to notice how awkward that transition is.

Okay, so that’s input, how about output? Are we actually making good use of the rich GUI that our IDE provides? The typical setup is either a full screen of code (text), perhaps with splits, or one primary pane and one or two small sidebars for a file tree or code outline. Arguably, we’d collapse them when we don’t want them, but that’s just too much work. But, basically, we’re still just looking at columns of text.

What is the alternative? A crappy terminal, and a text editor that is 30+ years old (or at least a modern updated version of it). Really.

By working in a program designed for low interface bandwidth, you can utilize a lot more of it without taking your hands off the home row. I like to compare this with first learning to type: when you are pecking, it’s very hard to be consistently fast. You may be able to peck certain combinations rapidly, but it’s not sustainable and it’s not very efficient overall. By learning to use the entire keyboard properly, you can move up to a consistent 80wpm+ if you work at it.

When I use a graphical text editor, I can really feel this difference. I’m typing really fast, things are going great, and then I’m stopped in my tracks when I have to go back and change something. It’s not natural, but we are used to it because we’ve been trained to type that way. It’s very disruptive to achieving flow (if you believe in that sort of thing). Worse, it can deter us from wanting to make changes because we’re either in the middle of something, or because making proper changes is physically awkward.

In an editor like vim, you can use movements that are more natural and contextual (jump to the next function, delete an argument, change a line, etc.) rather than having to manipulate raw text manually. And you can make these changes as you go, just as fluidly as if you had never stopped typing. It feels very natural. You can also repeat, combine, or even script these actions together, and ultimately craft your own editor. Mouse movement is replaced with keyboard movement, which is much more predictable and reliable. Getting up to the same level of efficiency may take a few days or a few weeks, but you’ll get there, and you’ll be glad you did.

The output bandwidth isn’t so different either. At an average font size of, say, 10pt, you can fit nearly the same amount of information on the screen. You can differentiate syntax or interface elements with 256 colors, and use a variety of symbols to create boundaries, draw tables, or highlight information. You lose a little bit of space on the drawing side, but that loss is easily gained back many times over by how dynamic it all is.

However, the even cooler thing is you can put whatever you want on the screen. You’re not limited by your editor’s plugins; you can create or use any scripts wherever you want, and the barrier to entry for writing your own is much lower. You can jump between workspaces or views, just as you can in something like Eclipse, but without the enormous CPU and memory footprint. You can configure and save these, essentially creating a responsive IDE that you can bend to your will with just the keyboard. You can manage all of those splits, information panes, or windows on the fly, instead of having them always wasting screen real estate.

Now, because that interface bandwidth is so low, we can easily pipe it through the network. You can perform your work over SSH on a low-powered device, over the air. If you are working on the go, this will greatly increase your battery life, or at the very least free up your device’s resources for more important things. You can also set up pair programming or screen sharing without any special tools (just SSH), which makes it much faster than RDP or VNC. You can also package and version your unique setup and use it on any machine. This way, it’s an investment that you can keep refining.

I understand that working from a terminal is not for everyone, but I think everyone should give it one serious try. It’s healthier for your wrists, and at least for me, has been much more efficient and more conducive to maintaining flow when programming. It has also given me a much deeper appreciation and understanding of how Linux works. That alone has made it worth it.