
I'm head of software engineering at Red Gate. I'm a big fan of functional languages, particularly Haskell.

Deliberate Practice

Published 26 October 2012 8:11 am

It’s easy to assume, as software engineers, that there is little need to “practice” writing code. After all, we write code all day long! Just by writing a little each day, we’re constantly learning and getting better, right? Unfortunately, that’s just not true.

Of course, developers do improve with experience. Each time we encounter a problem we’re more likely to avoid it next time. If we’re in a team that deploys software early and often, we hone and improve the deployment process each time we practice it.

However, not all practice makes perfect. To develop true expertise requires a particular type of practice, deliberate practice, the only goal of which is to make us better programmers. Everyday software development has other constraints and goals, not least the pressure to deliver. We rarely get the chance in the course of a “sprint” to experiment with potential solutions that are outside our current comfort zone. However, if we believe that software is a craft, then it’s our duty to strive continuously to raise the standard of software development. This requires specific and sustained efforts to get better at something we currently can’t do well (from Harvard Business Review July/August 2007).

One interesting way to introduce deliberate practice, in a sustainable way, is the code kata. The term kata derives from martial arts and refers to a set of movements practiced either solo or in pairs. One of the better-known examples is the Bowling Game kata by Bob Martin, the goal of which is simply to write some code to do the scoring for 10-pin bowling. It sounds too easy, right? What could we possibly learn from such a simple example?
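To give a flavour of what a solution might look like, here is a minimal Haskell sketch of a bowling-scoring function (the representation of a game as a flat list of rolls, and the function names, are my own assumptions rather than part of Bob Martin's kata):

```haskell
-- A game is a flat list of rolls (pins knocked down per throw).
-- The score is computed frame by frame, handling strikes and spares.
score :: [Int] -> Int
score = go 1
  where
    go :: Int -> [Int] -> Int
    go frame _ | frame > 10 = 0                -- bonus rolls are already counted
    go frame (10 : rest@(a : b : _)) =         -- strike: 10 plus the next two rolls
      10 + a + b + go (frame + 1) rest
    go frame (a : b : c : rest)
      | a + b == 10 = 10 + c + go (frame + 1) (c : rest)  -- spare: 10 plus the next roll
      | otherwise   = a + b + go (frame + 1) (c : rest)   -- open frame
    go _ rolls = sum rolls                     -- final open frame or incomplete game
```

A perfect game of twelve strikes scores 300 (`score (replicate 12 10)`) and a gutter game scores 0; deciding how the function should behave for a half-finished game is exactly the kind of edge case the kata invites you to explore.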

Trust me, though, that it’s not as simple as five minutes of typing and a solution. Of course, we can reach a solution in a short time, but the important thing about code katas is that we explore each technique fully and in a controlled way. We tackle the same problem multiple times, using different techniques and making different decisions, understanding the ramifications of each one, and exploring edge cases. The short feedback loop optimizes opportunities to learn.

Another good example is Conway’s Game of Life. It’s a simple problem to solve, but try solving it in a functional style. If you’re used to mutability, solving the problem without mutating state will push you outside of your comfort zone. Similarly, if you try to solve it with the focus of “tell-don’t-ask”, how will the responsibilities of each object change?
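To make that concrete, here is one possible functional sketch in Haskell: the board is just a set of live cells, and each generation is a brand-new value computed from the previous one, with no mutation anywhere (the names and the set-based representation are my own choices, not the only way to do it):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Cell = (Int, Int)

-- One generation of Conway's Game of Life as a pure function:
-- the next board is a new set computed from the old one.
step :: Set.Set Cell -> Set.Set Cell
step alive = Set.filter rule candidates
  where
    -- For every cell adjacent to a live cell, count its live neighbours.
    neighbourCounts :: Map.Map Cell Int
    neighbourCounts =
      Map.fromListWith (+) [ (n, 1) | c <- Set.toList alive, n <- neighbours c ]

    candidates = Map.keysSet neighbourCounts `Set.union` alive

    rule c =
      let n = Map.findWithDefault 0 c neighbourCounts
      in  n == 3 || (n == 2 && Set.member c alive)

    neighbours (x, y) =
      [ (x + dx, y + dy) | dx <- [-1, 0, 1], dy <- [-1, 0, 1], (dx, dy) /= (0, 0) ]
```

A blinker oscillates as expected: `step (Set.fromList [(0,1),(1,1),(2,1)])` yields the vertical triple of cells at x = 1.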

As software engineers, we don’t get enough opportunities to explore new ideas. In the middle of a development cycle, we can’t suddenly start experimenting on the team’s code base. Code katas offer an opportunity to explore new techniques in a safe environment.

If you’re still skeptical, my challenge to you is simply to try it out. Convince a willing colleague to pair with you and work through a kata or two. It only takes an hour and I’m willing to bet you learn a few new things each time. The next step is to make it a sustainable team practice. Start with an hour every Friday afternoon (after all, who wants to commit code to production just before they leave for the weekend?) for a month and see how that works out.

Finally, consider signing up for the Global Day of Code Retreat. It’s like a daylong code kata; it’s on December 8th, and there’s probably an event in your area!

10 Responses to “Deliberate Practice”

  1. Timothy A Wiseman says:

    Excellent post. We all develop faster by deliberately stepping outside our comfort zones, and code katas are one way of doing it.

    Project Euler is another excellent way to practice techniques that are outside most people’s comfort zones, especially if you focus on solving some of them in multiple ways, trying for speed or unusual techniques as well as just arriving at a correct solution.
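    For instance, the very first Project Euler problem asks for the sum of the multiples of 3 or 5 below 1000. Here is a quick Haskell sketch of two approaches, one brute force and one closed form (the function names are just illustrative):

    ```haskell
    -- Project Euler problem 1: sum of the multiples of 3 or 5 below 1000.

    -- Brute force: filter the whole range.
    euler1Naive :: Int
    euler1Naive = sum [ n | n <- [1 .. 999], n `mod` 3 == 0 || n `mod` 5 == 0 ]

    -- Closed form via inclusion-exclusion: no iteration at all.
    euler1Closed :: Int
    euler1Closed = sumMultiples 3 + sumMultiples 5 - sumMultiples 15
      where
        sumMultiples k = let m = 999 `div` k in k * m * (m + 1) `div` 2
    ```

    Both give the same answer, but arriving there two different ways teaches you more than either one alone.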

    Personally, I also find it helps to make notes of interesting things, especially ones that surprised me.

  2. Robert Young says:

    While honing one’s skills is a Good Thing, beware of the Flavour of The Month trap. Progress requires change, yet change is not necessarily progress. Given that normal machines today are inexpensively multi-processor/core and SSD stored, one might ponder why it is that [non|un]normalized schemas still predominate? Why is NoSql such a meme within the coder community? And so forth.

    Better to ponder what might be a more coherent approach to application development, than to do etudes to olde ways of doing things. This is, after all, a site devoted to Dr. Codd’s vision (although perhaps not always explicitly), not to coding gymnastics: Codd showed us how to replace lots of code with a little data.

    • Jeff Foster says:

      My point is that making a conscious effort to improve your skills through deliberate practice is a good thing.

      As an example, why not explore the NoSQL meme? A simple kata to explore that might be to write the data model for a blog in both a relational database and a NoSQL store. What are the design constraints in each model? How will the design change if you need to add threaded comments? How will the performance change as more comments are added? I don’t have the answers to this, but I’m sure that going through such a kata would help me understand it just a little bit more.
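      Purely as a sketch of what I mean (the types and field names below are just illustration, not a recommendation), the two models might start out roughly like this, with threading handled by a parent reference in one case and by nesting in the other:

      ```haskell
      -- Relational-style model: flat rows linked by keys; threaded comments
      -- become a nullable parent reference plus a recursive query.
      data PostRow    = PostRow    { postId :: Int, postTitle :: String, postBody :: String }
      data CommentRow = CommentRow { commentId :: Int, commentPostId :: Int
                                   , parentCommentId :: Maybe Int, commentBody :: String }

      -- Document-style model: the post and its whole comment tree stored
      -- and fetched as a single nested value.
      data PostDoc    = PostDoc    { docTitle :: String, docBody :: String, docComments :: [CommentDoc] }
      data CommentDoc = CommentDoc { commentText :: String, replies :: [CommentDoc] }
      ```

      Even at this size you can see the trade-off the kata would surface: the relational model makes each comment independently addressable, while the document model makes reading a whole thread a single fetch.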

      • Robert Young says:

        “As an example, why not explore the NoSQL meme?”

        A particular perversion of datastores. No different, modulo some syntax, from your granddaddy’s COBOL/VSAM client driven paradigm. Even your granddaddy had CICS to manage the transaction; NoSql leaves you on your own. Why learn what’s been shown, by Dr. Codd and others, to be reactionary? The only justification I can think of is to burnish a CV with the FotM.

        If one’s point of view is through coder’s glasses, then, sure. If, on the other hand, it’s through datastore glasses, not so much.

    • Timothy A Wiseman says:

      Since you bring up SSDs, I took a look at their impact on performance on my blog a little bit ago. They certainly help performance a great deal, but not as much as optimizing the code. This is especially true since they aren’t exactly cheap yet and are more likely to serve as a cache in a more complex storage scheme rather than hold all the data. So, selective denormalization for performance can still make sense in certain cases (though for other reasons I think that should be a last resort and normalization should be the default.) Of course, a lot of non-normalization is caused just by lack of knowledge about the benefits of normalization and the techniques to achieve it rather than through conscious choice.

      But that doesn’t go to the point of this post: the best way to improve is to focus on things somewhat outside of your comfort zone and deliberately work on them, sometimes solving the same problems from different perspectives. That is how you learn the things you don’t know, compare techniques directly, and improve your skills.

      • Robert Young says:

        From my reading of your post, you didn’t refactor (or factor) the databases to 3/5 NF. The point of pursuing multi-processor/core/SSD machines is more about DB footprint shrinking and DRI increases (due to NF improvement). Simply moving an [un|de]normalized flatfile image-in-DB to SSD doesn’t play to this machine’s strength; dinosaur-sized flatfiles won’t be supported in NAND-based SSD (there are other NV storage technologies which may, in the future), so it makes little sense to prove the worth of SSD using such datastores. Yes, it is more work than just moving files from the HDD to the SSD.

        Each particular SSD implementation (SLC vs. MLC vs. stupid controller vs. smart controller vs. SATA vs. PCIe vs. etc.) has a sweet spot. For high-NF databases, random R/W is what matters, as is the OS and DB engine’s capability to synthesize the joins which normalization imposes. The trade-off among small data footprint, reliance on DRI/sp/triggers rather than (client) code, and row synthesization performance is what matters.

        It does take most folks out of their comfort zone. The fundamental question is whether it even makes sense to pursue client-side coding paradigms, rather than server-side data structure. Engines support, some more than others, regular languages (C, Java, and COBOL typically), and thus reinforce the RBAR mentality we’re (aren’t we?) trying to overcome.

  3. Keith Rowley says:

    I love this idea. Another way to use these katas could be in learning new programming languages: try solving the same problem in multiple languages to see how they differ and to stretch your coding skills in a programming language other than your primary one.

  4. paschott says:

    I’m also intrigued by this idea. I hadn’t considered just regularly sharpening the axe by solving a problem in a new way. I’ve looked up a couple of these sites for a little practice.

  5. [...] previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting [...]
