Item Reliability


In statistics, reliability is the consistency of a set of measurements used in an assessment. It indicates whether the items of an instrument give, or are likely to give, the same measurement upon multiple attempts.

In 2011, Applied Measurement Associates of Tuscaloosa, Alabama was commissioned to calculate reliability coefficients for the questions/items in SmarterMeasure. Cronbach Alpha reliability coefficient values between .70 and .95 are generally expected to indicate a reliable assessment.

Scale                    Cronbach Alpha Reliability   Scale Type   Number of Items   Sample Size
Learning Styles          .81                          0,1,2        21                873
Learning Styles          .81                          0,1,2        35                28,056
Individual Attributes    .80                          1,2,3,4      24                29,989
Life Factors             .76                          1,2,3,4      20                30,004
Technical Knowledge      .75                          0,1          23                29,992
Technical Competency     .38                          0,1          10                30,001

A Cronbach Alpha Reliability Coefficient of .80 indicates that 80% of the score can be consistently reproduced using the assessment items.
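To illustrate how such coefficients are computed, the following is a minimal sketch of the standard Cronbach Alpha formula applied to a matrix of item scores. The function and the sample data are illustrative only; they are not the actual SmarterMeasure items or the calculations performed by Applied Measurement Associates.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up example: 5 respondents answering 4 items on a 1-4 scale
data = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
    [3, 3, 4, 4],
])
print(round(cronbach_alpha(data), 3))  # -> 0.915
```

Note that alpha rises when items vary together across respondents, which is why a dichotomous (0,1) scale, with its restricted answer variability, tends to yield lower coefficients.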

It should be noted that the areas of SmarterMeasure which showed a lower reliability coefficient used the 0,1 scale type. This scale type results in lower variability among the possible answers, which reduces the measured reliability.

One of the useful features of SmarterMeasure is that school leaders (faculty and/or administrators) can view SmarterMeasure scores through a dashboard, allowing them to identify at a glance students who, based on their scores, might be at risk of not doing well in an online or technology-rich course. The school can then provide remediation and support as appropriate. This serves as a valuable student service that can increase retention rates among online learners.

Because the student population of each school is unique, SmarterMeasure allows schools to set the grading thresholds that determine what level of SmarterMeasure scores should classify their students as "failed," "questionable," or "passed." In July 2008, an analysis was conducted based on the 108,423 students who had taken SmarterMeasure in the prior twelve months. Based on this analysis, recommendations were made regarding the setting of the grading threshold values in the administrative dashboard of SmarterMeasure. Click Here to view a copy of this report.
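The threshold mechanism described above can be sketched as follows. The cutoff values shown are hypothetical examples of what a school might configure, not SmarterMeasure's defaults; only the three category labels come from the text.

```python
# Hypothetical sketch of configurable grading thresholds. The labels
# "failed" / "questionable" / "passed" come from the product description;
# the numeric cutoffs below are illustrative only.
def classify(score: float, failed_below: float, passed_at: float) -> str:
    if score < failed_below:
        return "failed"
    if score < passed_at:
        return "questionable"
    return "passed"

# A school might set different thresholds per scale:
thresholds = {
    "Individual Attributes": (65, 80),
    "Technical Knowledge":   (50, 65),
}
lo, hi = thresholds["Individual Attributes"]
print(classify(72, lo, hi))  # -> questionable
```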

This analysis revealed the following distributions of SmarterMeasure scores:

 

Scale                     Mean    Std. Dev.   N
Individual Attributes     79.8    7.877       28,863
Reading Recall            72.65   18.322      26,694
Technical Knowledge       58.11   9.457       20,827
Overall Technical Comp.   94.32   9.082       21,330
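One way a school might use these distribution figures when choosing thresholds is to estimate what share of students would fall below a candidate cutoff. The sketch below uses the means and standard deviations reported above under a normal-distribution assumption (the actual score distributions may be skewed), and the cutoff of 60 is purely illustrative.

```python
from statistics import NormalDist

# Means and standard deviations from the July 2008 analysis above.
scales = {
    "Individual Attributes":   (79.8, 7.877),
    "Reading Recall":          (72.65, 18.322),
    "Technical Knowledge":     (58.11, 9.457),
    "Overall Technical Comp.": (94.32, 9.082),
}

cutoff = 60  # hypothetical "at risk" cutoff, assuming normality
for name, (mean, sd) in scales.items():
    pct_below = NormalDist(mean, sd).cdf(cutoff) * 100
    print(f"{name}: ~{pct_below:.1f}% of students below {cutoff}")
```

Under this assumption, the same cutoff flags very different proportions of students on different scales, which is one reason per-scale thresholds matter.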

 

In November 2010, an analysis was conducted using only data from secondary-level students to determine the appropriate readiness range settings for the secondary version of SmarterMeasure. Click Here to view a copy of this report.