OBE ASSESSMENT

By davidloubser, 3 August, 2012

South African industry has suffered approximately 18 years of outcomes-based education (OBE) and its approach to assessment. Experience has left me wondering whether we have learned anything of note about assessing learners. What follows are some thought-provokers relating to assessment. I will leave you to answer that question for yourself….

Thought 1: Competence or Perfection

Are we confusing competence with perfection? Even the most modest definition of competence reveals that competence is about meeting a set standard determined by specific criteria. It follows that various standards could exist and that, therefore, one could attain varying levels of competence. A pilot once explained to me that an aeroplane could be landed quite safely in any of three areas of the runway with the pilot still being declared competent. A competent performer does not need to be a perfect performer. As assessors, we should be concerned that learners attain the minimum level prescribed for the practice in question. It follows that we cannot limit ourselves to checklists (tick-lists) with only two columns: one for competent (perfect) and one for not yet competent (imperfect). What happens to the levels of performance in between?

Thought 2: Competence and Percentages

While we are on the subject, I would be wealthy if I had one rand for each time I have heard someone say that in OBE we deal only with competent and not yet competent, as opposed to percentages. Well, I beg to differ! Why can't we develop rubrics and attach scores to various levels of performance? Indeed, why can't we use those scores to reflect learner grades? 50% could easily be used to indicate a competent learner, provided the minimum criteria have been met. How else are we going to differentiate between learners? Oh yes, I forgot: they are either all perfect or not perfect! Some time ago, I came across a very large industry apprentice-training provider who had been requested by senior management to bring back the grades. Apart from this being a very logical request for a number of reasons, including the need to diagnose learning difficulties, healthy competition can be very rewarding for learners. I dare say it might even contribute towards a learning culture, which is so elusive!
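The idea that a score and a competence judgement can coexist is easy to demonstrate. Here is a minimal sketch in Python; the criteria, the 0–3 scale, the "safety" minimum and the 50% threshold are all hypothetical, chosen purely for illustration:

```python
# Hypothetical rubric: each criterion scored 0-3 (0 = not attempted, 3 = exemplary).
RUBRIC_SCORES = {
    "safety procedure followed": 3,
    "correct tool selection": 2,
    "quality of finished work": 2,
    "time management": 1,
}
MAX_PER_CRITERION = 3
# Criteria that must score at least 2 regardless of the overall percentage.
MINIMUM_CRITERIA = {"safety procedure followed"}

def grade(scores):
    """Return (percentage, competent): a grade AND a competence decision."""
    percentage = 100 * sum(scores.values()) / (MAX_PER_CRITERION * len(scores))
    minimums_met = all(scores[c] >= 2 for c in MINIMUM_CRITERIA)
    competent = percentage >= 50 and minimums_met
    return round(percentage), competent

print(grade(RUBRIC_SCORES))  # -> (67, True)
```

Two learners can both be competent yet earn 55% and 90% respectively, which is exactly the differentiation the tick-list approach throws away.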

Thought 3: Theory and Practical

If I had one rand for each time I have heard assessors discussing "theory assessment" and "practical assessment", I would not just be wealthy, but a millionaire! So please enlighten me: precisely what are we referring to when we assess theory on one side and practical on the other? Is theory about facts and practical about "observable" things? Oh dear; then where does foundational understanding fit, and how does that get assessed? Or do we just ignore it because we are assessing outcomes? Is it even healthy to separate them, and what happened to SAQA's favourite term, integration? I can't help thinking that we developed an NQF to support the integration of theory (academic) and practical (training) learning, and now we are separating them, or trying to separate them again, in occupational qualifications. No wonder we battle with the distinction ourselves.

Thought 4: Continuous Assessment

Continuous assessment is another popular principle in assessment, but what does it refer to? Does it refer to assessments which never end, or to projects which are ongoing? I can say that a theory test on one side and a practical observation on the other is an example of what continuous assessment is not! Continuous assessment should surely include multiple types of assessment instruments, such as case studies, projects, essays and presentations. My experience has shown these to be conspicuous by their absence from assessment plans. It is ironic that it is these very instruments that are helpful in testing foundational understanding, the aspect which, as we discovered earlier, is missing. What continuous assessment does mean is that learners should be assessed in the workplace too, not just in the classroom. That would probably be very healthy, or what do you think?

Thought 5: Formative and Summative Assessment

Maybe one of the most controversial statements in SAQA's policy documents is the one on formative assessment, which says it can be used as summative evidence to prevent assessing outcomes twice. What I do understand about formative assessment is that it should occur during training and that it should look different from summative assessment. Why should it look different? Because its purpose is different! What I also know is that we should not be confusing formative assessment with actual training activities; this seems to be a favourite out there. A formative assessment does support instruction, but it should stand out as an assessment because a formal evaluation is made and a corresponding judgment recorded. I also know that in industry we are generally limited in the amount of time we spend with learners. We cannot afford to spend more class time assessing learners than training them, which appears to be the experience of many learners I have had the pleasure of meeting over the years. By the way, trainers also engage in formative evaluation when they revise lessons or learning material based on information from their previous use, so the term formative is also associated with programme evaluation. Do you think we understand this term….

 

So, are you any closer to answering the question I posed at the beginning? I have probably just touched the tip of the iceberg. I will leave the rest for the next article, if I may….

Take care

Dave - 0845563103

 

