TL;DR: Finding your weaknesses as a software developer involves leveraging your analytical tendencies. It’s one thing to think about the content you want to learn. It’s quite another to think about how your study process is going, and how it could go better.
After finding out about metacognition (thinking about thinking), I decided to put it to the test. I need to up my game with DynamoDB at work. So how could I evaluate not only my expertise with DynamoDB, but also how effective my learning process was?
Longer version – understanding metacognition
I’ve worked with enough idiots to appreciate the Dunning–Kruger effect: those who are capable tend to think they aren’t, and sadly those who aren’t… well, they tend to be over-confident.
What does this have to do with being a good developer? Well, I’m a keen fan of the principle of inversion, whereby you take an idea and flip it on its head. In fact, I used this idea when I wrote my book for onboarding new starters. In that instance, I figured that if the onboarding process was so shambolic, it was an opportunity. It meant there were probably patterns to find and fix.
So, with metacognition, if people who were over-confident couldn’t see their shortcomings, what about those of us with imposter syndrome (i.e. almost all developers I’ve met)? Maybe we could leverage that weakness and make it a strength? Maybe our own introspection and low confidence could be used to nail our training?
And then I wondered how this could apply to the workplace, and what practical strategies there were. How could I find what’s working and what isn’t, and pivot quickly when needed? I wanted something systematic, but at the same time not overly formal. So I started my research.
Metacognition – the key to finding your software developer blind spots
So I did a bit of googling on this idea and there’s a name for it – metacognition. Thinking about thinking. This article from BrainFacts summed up the ideas quite nicely. The key thing is the separation of what you think you should do vs actually carrying it out. The article cited an experiment (the details aren’t especially important) where execution and self-assessment varied wildly:
Comparing people’s answers to their actual performance revealed that although all the volunteers performed pretty evenly on the primary task of identifying the brighter patches, there was a lot of variation between individuals in terms of how accurately they assessed their own performance – or perhaps how well they knew their own minds.
– ‘Killer Questions’, Science, 2010
Clearly it’s one thing to execute a skill, but how do we assess our execution? How do we assess our judgement, our accuracy, our competence? It turns out there’s more than one place where you need to do this, which I’ll cover next.