IT training resources & rants

Code as documentation and assessment

Collecting useful, accurate and compliant evidence to assess complex tasks such as “Install and configure a server” can be challenging.

One tried and true method is “Screenshot into Word”: students collect a sequence of screenshots as they perform each stage of the task, and later assemble those screenshots, along with captions and other relevant material (tables, diagrams), into a nicely formatted word-processor document (e.g. ODT, DOC, PDF). The result is a kind of “howto” document that is an accurate enough record of what actually occurred.

Problems with this method:

- Much of the student’s effort goes into word-processing and document assembly rather than the IT task itself.
- Captions and explanatory prose place a heavy English-literacy load on students, disadvantaging some regardless of technical skill.
- The resulting documents are tedious to mark and not amenable to automated assessment.
- Screenshot-and-caption documents bear little resemblance to how procedures are actually recorded in industry.

IMHO, one promising avenue to avoid these problems and open up the possibility of more authentic, equitable and professionally relevant assessment is to require that, to the greatest extent possible, lab procedures be documented and submitted in the form of scripts or code. In other words, I am proposing a “Show Me the Code” approach to assessment.

There is no doubt that scripted automation is fast becoming the norm in the IT industry, and a plethora of tools is now in common use for it. These include scripting languages such as Python, Bash and PowerShell, and configuration engines such as Ansible and DSC. The more we can get our students automating tasks in code, the better we prepare them for a future career in IT.

If students captured procedures in code rather than in word-processor documents and submitted that code for assessment, it would succinctly, accurately (and literally) detail every step performed in a complex sequence. The code would of course require plentiful comments (an authentic industry practice), but rather than captioning a screenshot, students would be commenting on the how and why of a particular command or function in the script.
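As a sketch of what such a submission might look like (the lab task, hostname and paths here are entirely hypothetical), note how the comments explain *why* each step is done, doing the job that screenshot captions used to do:

```python
#!/usr/bin/env python3
"""Hypothetical lab submission: set up a static web site's document root.

Comments record the *why* of each step, not a narration of screenshots.
"""
from pathlib import Path


def configure_site(doc_root: Path, hostname: str) -> Path:
    # Create the document root first: the web server will refuse to
    # start if its configured root directory does not exist.
    doc_root.mkdir(parents=True, exist_ok=True)

    # A minimal landing page proves the server is serving *our* content
    # rather than a distribution default page.
    index = doc_root / "index.html"
    index.write_text(f"<h1>{hostname} is up</h1>\n")
    return index


if __name__ == "__main__":
    page = configure_site(Path("/tmp/demo-site"), "lab3.example.edu")
    print(page)
```

The same procedure could equally be captured in Bash, PowerShell or an Ansible playbook; the point is that the artefact submitted is the procedure itself.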

Such scripts would be amenable to automated assessment (again, using scripts!) that could check for the presence of required elements. This could be complemented with another type of assessment script that runs a series of tests against a running student machine to determine whether it has in fact been configured correctly.
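A marker-side checker of the first kind might look something like this minimal sketch, where the rubric items and patterns are purely illustrative assumptions, not a real marking scheme:

```python
#!/usr/bin/env python3
"""Sketch of an automated marking script: scan a submitted script for
required evidence (commands the task demands, a minimum comment count).
Rubric items and regex patterns are hypothetical examples."""
import re

REQUIRED_PATTERNS = {
    "creates the document root": r"mkdir",
    "writes an index page": r"index\.html",
}


def check_submission(source: str) -> dict:
    """Return a pass/fail result for each rubric item."""
    results = {name: bool(re.search(pattern, source))
               for name, pattern in REQUIRED_PATTERNS.items()}
    # Comments are part of the evidence: count '#' lines, excluding
    # the shebang, and require at least two.
    comments = [line for line in source.splitlines()
                if line.lstrip().startswith("#")
                and not line.startswith("#!")]
    results["adequately commented"] = len(comments) >= 2
    return results


if __name__ == "__main__":
    sample = ("#!/bin/sh\n"
              "# make the web root\n"
              "mkdir -p /srv/www\n"
              "# landing page\n"
              "echo hi > /srv/www/index.html\n")
    for item, passed in check_submission(sample).items():
        print(f"{'PASS' if passed else 'FAIL'}: {item}")
```

The second kind of script, testing a live machine, would follow the same shape but probe real state (open ports, running services, file contents) instead of the submitted source.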

This is not to say that students should not initially learn to hand-configure individual systems: of course they must, especially if they are entirely new to the task. Along the way, they will also learn how time-consuming and error-prone manual configuration can be, and why automation is a good thing.

Having students first perform a task manually and then convert that process into an automated script not only reflects authentic industry best practice, but probably also goes a long way towards satisfying the assessment requirement for sufficiency (doing a task more than once in different contexts).

Another common performance criterion in many TAFE IT units is the test-debug-modify cycle. Having procedures embodied in a script is a huge benefit in this regard. Scripts can easily be modified incrementally to debug and refine specific procedures, without students having to remember and repeat an entire sequence of steps (error-prone and time-wasting).
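One way to make re-running painless is to write each step so it checks current state before acting, as configuration engines like Ansible do. A minimal sketch (file path and config line are hypothetical): after fixing one step, the whole script can simply be run again without redoing or breaking the earlier steps.

```python
#!/usr/bin/env python3
"""Sketch of a re-runnable (idempotent) configuration step, supporting
the test-debug-modify cycle. Path and config line are hypothetical."""
from pathlib import Path


def ensure_line(path: Path, line: str) -> bool:
    """Append `line` to `path` only if it is not already there.

    Returns True if the file was changed, False if it was already done,
    so re-running the script is always safe.
    """
    text = path.read_text() if path.exists() else ""
    if line in text.splitlines():
        return False  # already configured: nothing to redo
    path.write_text(text + line + "\n")
    return True


if __name__ == "__main__":
    cfg = Path("/tmp/demo.conf")
    cfg.unlink(missing_ok=True)  # start clean for the demo
    first = ensure_line(cfg, "max_clients = 50")
    second = ensure_line(cfg, "max_clients = 50")  # re-run is a no-op
    print(first, second)  # True False
```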

Assessing scripts for content and structure greatly reduces (almost completely eliminates?) any English-literacy effects. Code is code is code, regardless of the language of the coder. Every student is on a level playing field when all that matters is what is in the code.