Lessons from years of software engineering
Early in my career, when someone asked me in a job interview what I liked about the practice of software engineering, I would say something like “solving problems”, and I know many others say that as well. The older I’ve grown, the more I’ve found that it’s not really about the problems but the people who have them. Nowadays what I like about the practice is helping users solve their problems more efficiently with better software.
See, many students of computer science believe, as I once did, that software development is an exact science. It’s not fuzzy like the social sciences; it’s more like math. Well, if you’re still a student holding this belief, I’ve got news for you: it’s actually very fuzzy, and you will need to learn to embrace that fuzziness. Software is developed by people, according to specifications created by people, to be used by people, and to exist in an environment regulated by people.
Back at university we had a software development course with a long group project, where each week we’d get an assignment to add new features. I was frustrated: why wouldn’t they just tell us all the requirements in advance? It would’ve been so much easier to accommodate future requirements if we had known about all of them ahead of time! News flash: it turned out to be the best course for simulating how real software development actually works. You never get all the requirements beforehand, and even when you think you do, they will change on the fly. This is an example of the fuzziness I mentioned.
I was reminded of these discoveries when watching the Cocaine and Conway’s Law presentation by Greg Wilson. He’s quite a big name in the field: an educator, author, and academic who has also worked in industry. Conway’s Law states that organizations produce software that mirrors their own structure. If teams A and B never talk, then the parts of the system representing or implemented by A and B never talk either. Wilson argues that when a new piece of software meets resistance on deployment, it’s largely because the people using that software instinctively, if perhaps subconsciously, realize that the organization would also need to change, which is of course always anxiety-inducing. Wilson also advocates for the “humanist” tradition to be included as the fourth cornerstone of computer science, the commonly recognized ones being the mathematical, engineering, and scientific traditions.
The obvious danger here, however, is that students will stop listening at this point, when something fuzzy is presented to them, because that’s not what they thought they signed up for. Wilson has an example student, “Jay”, who does not want to be lectured about DEI or any social aspects of the work, but who enjoys coding as problem solving. A proposed solution is to start with something that looks technical but ultimately leads to the human side of the business. Jay would also need to meet the users and other stakeholders of his piece of software to start caring about them, and this could happen naturally within project courses, for example.
As a corollary to the fact that software exists in a world of people, I’ve also been thinking that computer science is a funny discipline in that it’s almost nothing in and of itself. Software has no natural domain; it inherits the messiness of whatever domain it’s applied to. When you study computer science, you may land a job in almost any industry: finance, logistics, gaming, retail, telecommunications, nonprofits, chemistry, and so on, and it’s of course fascinating to learn about these new domains as a programmer.
So, these have been my biggest revelations over the past 15–20 years. Software development is not an exact science, and it only really exists in the context of other industries. And the problems themselves aren’t the point; it’s how solving them helps people.