A scoring rubric allows your team to assess objects against the same criteria and share those scores, enabling fast, transparent decision-making.
Step 1. Determine the criteria for assessment.
For each object (organization, country, person), agree on up to 5 criteria. For example, when scoring a country you might allocate 30 points to market size, 30 points to GDP growth, 20 points to competition and 20 points to political stability.
Step 2. Weight the criteria.
On the HolonIQ platform, 100 points are allocated across up to 5 criteria, so you can weight each criterion according to how important it is to you.
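The weighting mechanic above can be sketched in a few lines. This is a hypothetical illustration, not the HolonIQ platform's actual implementation; the criteria names and 0–10 rating scale are assumptions borrowed from the country example in Step 1.

```python
# Sketch of rubric weighting: 100 points allocated across up to 5
# criteria, combined with 0-10 ratings into a single 0-100 score.
def weighted_score(weights: dict[str, int], ratings: dict[str, float]) -> float:
    """Combine 0-10 ratings into one 0-100 score using the point weights."""
    if sum(weights.values()) != 100:
        raise ValueError("Weights must total 100 points")
    return sum(weights[c] * ratings[c] / 10 for c in weights)

# Weights from the country-scoring example in Step 1; ratings are made up.
weights = {"market size": 30, "gdp growth": 30,
           "competition": 20, "political stability": 20}
ratings = {"market size": 8, "gdp growth": 6,
           "competition": 5, "political stability": 7}
print(weighted_score(weights, ratings))  # 66.0
```

Because the weights must total 100, the combined score stays on a fixed 0–100 scale, which is what makes scores comparable across objects and across team members.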
Step 3. Develop indicators against the rating scale.
The team needs a consistent methodology and a common understanding of the criteria.
Absolute indicators. These are assessed independently against a set standard, e.g. 'The team has over 10 years of experience each'. An absolute indicator does not depend on any other item being assessed.
Relative indicators. These are assessed in comparison with the other items being scored, e.g. 'In the top quartile of companies we invest in'.
In all cases, scoring is more useful when it is conducted across the full range of the rating scale. A large cluster of companies or people at 6, 7 and 8 does not make the most of your team's insight to differentiate between them.
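One way to force scores across the full range is to make the indicator relative by construction, as in the 'top quartile' example above. The sketch below is a hypothetical illustration of that idea: each company's metric is mapped to a 1–4 band by the quartile it falls into, so the scores spread out by definition.

```python
import statistics

def quartile_scores(values: list[float]) -> list[int]:
    """Map each value to a 1-4 band by quartile (4 = top quartile)."""
    # statistics.quantiles with n=4 returns the three quartile cut points.
    q1, q2, q3 = statistics.quantiles(values, n=4)
    # Each comparison adds 1 for every cut point the value exceeds.
    return [1 + (v > q1) + (v > q2) + (v > q3) for v in values]

# Made-up growth figures for eight companies being scored.
growth = [2.0, 5.5, 9.1, 3.2, 7.8, 4.4, 6.0, 8.3]
print(quartile_scores(growth))
```

Unlike an absolute indicator, the band a company lands in here changes whenever the comparison set changes, which is exactly the trade-off between the two indicator types described above.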
Testing your Rubric
The following questions can help determine if the rubric is effective:
Are the characteristics of each performance level clear? Will teammates be able to self-assess using the descriptors?
Does the rubric adequately reflect the range of levels at which [x] may actually perform given tasks?
Are the criteria at each level defined clearly enough to ensure that scoring is accurate, unbiased, and consistent? Could several team members use the rubric and score the same performance within the same range?
Does the rubric reflect both process and product?
Are all criteria equally important, or is one variable stronger than the others?
Is the language descriptive enough for other users to determine what is being measured, in both qualitative and quantitative terms?
Additional considerations related to rubrics are listed below:
Rubrics need to be piloted, or field tested, to ensure they are measuring the variable intended by the designer.
Rubrics should be discussed across the team to create an understanding of expectations.
Rubrics help ensure that scoring is accurate, unbiased, and consistent.
Rubrics list expectations of performance that are aligned with the conceptual approach.