Evaluating the Modules

Eugene suggested an exercise at the last Module Team Leaders Meeting that is aimed at:

  • helping to sharpen our “definitions” of the modules

  • informing the development of some “measures” of the modules for the purposes of later evaluating the modules

The exercise involves developing some simple Likert-style scales that describe what a high-level performer looks like in a given skill area, and what a poor performer looks like.

For example:

Timing in closing defensive gaps

1 2 3 4 5 6 7

(1 = Fails to see gap; 4 = Slow to move / react; 7 = Anticipates well)


Imagery Ability

1 2 3 4 5 6 7

(1 = Reports no imagery; 4 = Only somatic imagery; 7 = Full recreation)
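If it helps to think about these scales concretely, here is a minimal sketch of one as a small data structure; the class name and field names are illustrative assumptions, not part of the exercise itself.

```python
from dataclasses import dataclass

@dataclass
class ConstructScale:
    """Hypothetical sketch: one 7-point rating scale for a construct,
    with verbal anchors attached to the points they describe."""
    name: str
    anchors: dict[int, str]

# The first example scale from above, expressed in this form
timing = ConstructScale(
    name="Timing in closing defensive gaps",
    anchors={
        1: "Fails to see gap",
        4: "Slow to move / react",
        7: "Anticipates well",
    },
)

print(timing.anchors[7])  # prints "Anticipates well"
```

Writing a scale out this way forces the same decision the exercise asks for: which anchor sits at which point, and what the intermediate points mean.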

The key question is: in your view, what is the best way to conceptualise the module, and what is it developing / measuring?

A nuance here is a further question: is poor performance in a skill area typically seen as erring on both sides of a good performance (a pendulum, where performance may be too fast or too slow on either side of “spot on”), or do you see it as a “hill”, with a range from poor through adequate to good at the top end (as in the examples above)?
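The pendulum / hill distinction can be sketched as two different ways of turning a 1–7 rating into a quality score. This is only an illustrative sketch; the function names, and the assumption that the pendulum's "spot on" point is 4, are mine, not part of the exercise.

```python
def hill_quality(rating: int) -> float:
    """'Hill' construct: quality rises steadily from 1 (poor) to 7 (good)."""
    return (rating - 1) / 6  # 0.0 at rating 1, 1.0 at rating 7

def pendulum_quality(rating: int) -> float:
    """'Pendulum' construct: 4 is 'spot on'; 1 (e.g. too slow) and 7
    (e.g. too fast) are equally poor, so quality falls off with
    distance from the midpoint."""
    return 1 - abs(rating - 4) / 3  # 1.0 at rating 4, 0.0 at 1 or 7

print(hill_quality(7))      # best performance sits at the top of the scale
print(pendulum_quality(4))  # best performance sits at the midpoint
```

For a hill construct the high scorer / low scorer continuum runs end to end; for a pendulum construct both extremes describe poor performers, and the scale anchors need to say so.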

There is a model that we can use here to perhaps make this a bit easier, or at least to clarify the concept.

About 15 years ago I produced the “Stress Profile” concept in response to an international client’s program interests. It has never been formally published, but certainly discussed and debated in a number of settings. You have seen reference to it already in the Blogs and will find it here.

This example profile is that of a poorer performer.

I have added some further resources to the site under the menu "Crampton 2", including a discussion paper that provides some further background on the breakdown of the component skills. These are better referred to as "constructs".

The document presents the high scorer / low scorer continuum with some additional definitions. You will see that the terminology is mine, and I am open to debate and discussion. Those involved in the Delphi will see where I have made commentary there relating to these ideas.

So how did this rating sheet / set of constructs get to this point? Read the Stress Profile Background paper, and have a look through the Stress Profile material, and you will see it goes back to the "how do you eat an elephant?" question - one bite at a time.

The challenge is to conceptualise each “construct” as a measurable set of characteristics. You will recall that our original discussions about evaluating the modules separately morphed into evaluating the combined project. We can now specify that more clearly as evaluating the take-up and effectiveness of the combination of the website and app.

Look for this document in Crampton 2.

Go to Crampton 2 to download a Word document that will allow you to create a “Stress Profile-like” version of each of our 4 modules. There is space to provide an overview of the module in one measure, and space to consider the CF2 sub-factors that load on each module.

You may choose to present the exercise to your team as an interactive project, or work through it yourself. The first target is obviously to complete the exercise for your own module, but it would be very useful to then work through the other modules so that you have formed an opinion of each area before its module leader is asked to discuss their perspective.

We will discuss this on an ongoing basis through the Module Leaders Workshops, and attempt to come to a consensus set of rating scales that will inform our next steps.
