Are there useful levels?
The maps I’ve built for lectures and short courses I’ve given have taken a specific form I learned from my sister, who is a high school science teacher, though she kind of cringes at the crudeness of what I do. My steps are:
- List out what I want the students to know and be able to do after finishing.
- Order that list so dependencies come first, then by how important I think it is that students learn each item.
- Write down specific tasks and criteria for each item in the list that operationalize it.
- Set up a table with each item in the list as a row and columns for successive levels of achievement.
- Organize the specific tasks and criteria for each list item into the successive levels, limiting the amount per level.
- Throw away many of the tasks and criteria, since they won’t fit into any of the levels in their row.
- Accept that there’s only time to teach the first column.
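The steps above amount to building a small table, so they can be sketched in code. This is a hypothetical illustration only: the item names, criteria, level names, and per-level cap below are all invented, not taken from any actual course map.

```python
# Hypothetical sketch of the course-map table described above: a row per
# thing students should know or be able to do, a column per level of
# achievement. All names and numbers here are invented for illustration.

MAX_PER_LEVEL = 2  # step 5: limit how many tasks/criteria go into each level

# Steps 1-2: list what students should be able to do, ordered so
# dependencies come first, then by priority.
items = [
    "read and trace existing code",
    "write small, correct functions",
]

# Step 3: specific tasks and criteria that operationalize each item.
criteria = {
    "read and trace existing code": [
        "predict the output of a 20-line script",
        "explain a function's contract from its body",
        "find where a given value is mutated",
        "diagram the call flow of a small program",
        "spot an off-by-one error in a loop",
    ],
    "write small, correct functions": [
        "implement a function from a plain-English spec",
        "write tests covering the edge cases",
        "refactor duplicated code into a helper",
    ],
}

def build_table(items, criteria, levels=("level 1", "level 2")):
    """Steps 4-6: a row per item, a column per level, overflow discarded."""
    table = {}
    for item in items:
        pool = list(criteria.get(item, ()))
        row = {}
        for level in levels:
            row[level] = pool[:MAX_PER_LEVEL]
            pool = pool[MAX_PER_LEVEL:]
        # whatever is left in `pool` didn't fit any level and is thrown away
        table[item] = row
    return table

table = build_table(items, criteria)
# Step 7: accept that there's only time to teach the first column.
first_column = {item: row["level 1"] for item, row in table.items()}
```

The throwing-away in step 6 falls out naturally: anything past `levels × MAX_PER_LEVEL` criteria for a row simply never lands in the table.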
If I were teaching a class or organizing a degree program, I’d try to produce a giant table like this. Large companies often generate a vague version of such a table as part of their leveling and promotion system, using the rungs of the promotion ladder as levels of proficiency and the rows as reasons to avoid promoting people and giving them raises.
I am not convinced that there are useful levels. A degree program, in setting up a sequence of required courses and topics, has to impose levels, and the whole thing is chosen to plot a single course that will hopefully provide the most value to the majority of students. Companies use a sequence as a tool for trying to make their promotion ladder navigable for employees, though even there, at the higher levels, there’s a tacit acceptance that the scheme breaks down and people start talking about archetypes for staff+ engineers.
But set aside rationalizing power structures and salary bands, and set aside figuring out what to spend sparse classroom time on. If we have a young software engineer with their career stretching out ahead of them, are there useful levels? Is there some more principled foundation for them?
If we look to psychology for a possible basis, we immediately come across the Dreyfus model of skill acquisition, which characterizes a person using a skill along four qualities: recollection of the rules governing the skill, recognition of aspects of the situation that decide which rules to use, decision making based on what is recognized, and whether they must monitor their own performance. In this model, learners begin as novices who recall specific rules that don’t vary by situation, recognize specific aspects of a situation, make decisions by formally applying those rules to that situation, and continuously check themselves to see if they’re doing this correctly. With training, their rules become more adaptive to the situation and their recognition of situations becomes more nuanced, and they reach competence. Then their grasp of the decision making rules becomes ingrained, they no longer have to reason formally, and they reach proficiency. Finally, their performance becomes largely habitual, they cease to have to monitor their behavior against the framework of their rules, and they become experts.
The Dreyfus model isn’t particularly helpful to us. First, it probably isn’t particularly accurate. It assumes the presence of rules that learners are given and follow as novices; even in the model’s own field, that’s questionable. Nor does what it describes match the understanding of learning that has developed in applied behavior analysis, where limited behaviors are learned by operant conditioning and steadily refined, generalized, and chained together.
Second, it’s talking about acquiring a specific skill, not the whole mass of skills an engineer accumulates over years. There isn’t a single skill with rules describing it that we can call “software engineering.”
Another possibility is Ericsson’s work on mastery, drawn from studying elite athletes and students at music conservatories. That doesn’t give us much besides deliberate practice for thousands of hours, and it turns out deliberate practice only applies to what are called “kind” problems, that is, problems where you get immediate, accurate feedback on how you’ve done that carries over to the next attempt. It can make your performance worse on “wicked” problems (see Epstein’s book Range).
I’m inclined to think that there isn’t a set of levels that makes sense across whatever categories we choose for guidance. At best we can produce a taxonomy that is helpful, plus some guidance on what is out there in each part of it.