
The Wandering Guru Phase

It seems to me that when there's a new field of practice, it evolves through a few steps. By field of practice, I mean something very broad: Everything from Medicine to Astronomy to Swordsmanship to Software Engineering. I really mean anything which is both something you do and something you study in order to do it better.

The Individual Phase

This is the stage where the field is newly opened up, and there are no schools or how-to books. Practitioners are either figuring it out as they go, or their techniques are only learned by being passed from master to apprentice. Knowledge is only contained within small, isolated communities.

The Wandering Guru Phase

There are going to be practitioners who, from years of practice, both have extraordinary success and develop an intuition for how the whole field fits together based on their experience. If the field has become large enough, there will be opportunities for these individuals to garner a reputation beyond just their local communities. They will write books, gather students, and found schools of thought. At this stage, there is literature surrounding the practice, but the evidence for what works and what doesn't is more anecdotal than scientific. Another mark of this stage is that there are many gurus. Each one will promulgate different sets of techniques and different intuitions for what works best, some of which are mutually exclusive.

The Scientific Phase

If there are enough people using the techniques of the different gurus, it can become possible to gather experimental data on their results. Studies can show which of the gurus' intuitions are accurate and which ones might give good results, but for the wrong reasons. They can determine which of their techniques work no matter who performs them, and which techniques may produce good results for the guru, but don't seem to work for other people.

It's that in-between phase, the Wandering Guru Phase, which is of greatest interest to me.

Now, you might think that Wandering Gurus sound like something from an older era, when people traveled by horse rather than car. We are scientific people now with our iPhones and digital watches, and we no longer have need of such things. That's certainly not the case. There are some fields which are very hard to study with scientific rigor. It appears to me that Medicine only really graduated to the scientific phase in the late 19th century. Psychology only made the move within the last 50 years. Earlier psychologists, like Freud and Jung, seem much more like wandering gurus. Nutrition still seems to be in the Guru stage. Notice how new techniques in nutrition are usually promulgated by popular books with names like "Lose Weight with Dr. Somesuch's Diet", rather than spreading as a quiet consensus in scientific journals. Many of the most important fields, such as Art, Philosophy, and Child Rearing, probably will always be in the Guru phase. It seems that any field where outcomes are hard to measure and variables are hard to isolate will struggle to move beyond the Guru stage.

Now, computer science is one of the most numbers-based fields that can possibly exist. It centers around glorified adding machines. It's remarkable to me, then, that just about every aspect of best practice that isn't mathematically provable is firmly set in the Wandering Guru Phase. This applies to project management and coding practice alike. It includes Clean Coding practices like Object Orientation, Test-Driven Development, small modular functions, SOLID, and YAGNI, as well as processes like Scrum and Kanban, daily standups, and code reviews. The scientific evidence that any of these work ranges from sparse to nonexistent: It's almost entirely based on anecdotal evidence and assertions by respected authorities.

This doesn't mean any of these techniques are wrong. It's both very difficult and very expensive to run any sort of experimental studies on programming techniques. Running a single-variable experiment with a proper control group is next to impossible. You can never run the same project twice with an identical team. Gathering many teams doing identical projects, enough for meaningful statistical analysis, is prohibitively expensive.

I think if there's a conclusion we should draw from this, it's to keep a healthy skepticism and a sense of humility where software engineering methodologies are concerned. Consider, for example, peer code review: It's one of the best-respected software engineering practices in existence, with a relatively firm body of experimental evidence indicating that it produces positive results. Yet I think any person who has been on a few different teams practicing code review will notice that its effectiveness depends very much on the details of how it is implemented. With a poor implementation, it can have little effect on defect prevention, and instead turn into a power game among the senior developers, or one more tool for micromanagement.

In other words, the evidence for any of these practices is still sparse enough that "Everyone is doing it" or "It's widely acknowledged as a best practice" is not sufficient justification to continue using it. It doesn't absolve you from the responsibility of determining whether the practice is effective in your specific case. If there's a practice which isn't working for your team, even if it's something as well respected as unit testing or daily standups, it ought to be reexamined.
