
Item Analysis

Item analysis provides statistics on overall test performance and individual test questions. This data can help instructors recognize questions that might be poor discriminators of student performance. You can use this information to improve questions for future tests or to adjust credit on current attempts.

Run an Item Analysis on a Test

Note: You can run item analysis on tests that include single or multiple attempts, question sets, random blocks, auto-graded question types, and questions that need manual grading. For tests with manually graded questions that have not yet been assigned scores, statistics are generated only for the scored questions. After you manually grade questions, run the item analysis again. 

  1. Go to one of the following locations to access item analysis:
    • A test deployed in a content area.
    • A deployed test listed on the Tests page.
    • A Grade Center column for a test.
  2. Access the test's contextual menu.
  3. Select Item Analysis.
  4. In the Select Test drop-down list, select a test. Only deployed tests are listed.
  5. Click Run.
  6. View the item analysis by clicking the new report's link under the Available Analysis heading or by clicking View Analysis in the status receipt at the top of the page.

Read the Test Summary on the Item Analysis Page

Note: The Test Summary is located at the top of the Item Analysis page and provides data on the test as a whole.

  1. Edit Test provides access to the test questions in the Test Canvas.
  2. The Test Summary provides statistics on the test, including:
    • Possible Points: The total number of points for the test.
    • Possible Questions: The total number of questions in the test.
    • In Progress Attempts: The number of students currently taking the test who have not yet submitted it.
    • Completed Attempts: The number of submitted tests.
    • Average Score: Scores denoted with an * indicate that some attempts are not graded and that the average score might change after all attempts are graded. The score displayed here is the average score reported for the test in the Grade Center.
    • Average Time: The average completion time for all submitted attempts.
    • Discrimination: This area shows the number of questions that fall into the Good (greater than 0.3), Fair (between 0.1 and 0.3), and Poor (less than 0.1) categories. A discrimination value is listed as Cannot Calculate when the question's difficulty is 100% or when all students receive the same score on a question. Questions with discrimination values in the Good and Fair categories are better at differentiating between students with higher and lower levels of knowledge. Questions in the Poor category are recommended for review.
    • Difficulty: This area shows the number of questions that fall into the Easy (greater than 80%), Medium (between 30% and 80%), and Hard (less than 30%) categories. Difficulty is the percentage of students who answered the question correctly. Questions in the Easy or Hard categories are recommended for review and are indicated with a red circle. A short categorization sketch appears at the end of this summary.

Only graded attempts are used in item analysis calculations. If there are attempts in progress, those attempts are ignored until they are submitted and you run the item analysis report again.
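
As a concrete illustration of the discrimination and difficulty categories described above, the following Python sketch applies the same thresholds to hypothetical per-question values. It is illustrative only and is not Blackboard code; the function names and example values are invented.

def difficulty_category(pct_correct):
    """Classify difficulty (the percentage of students who answered correctly)."""
    if pct_correct > 80:
        return "Easy"    # flagged for review
    if pct_correct < 30:
        return "Hard"    # flagged for review
    return "Medium"

def discrimination_category(value):
    """Classify a discrimination value; None means it cannot be calculated."""
    if value is None:
        return "Cannot Calculate"
    if value > 0.3:
        return "Good"
    if value >= 0.1:
        return "Fair"
    return "Poor"        # flagged for review

# Example: a question answered correctly by 85% of students with a
# discrimination value of 0.05 falls into both review categories.
print(difficulty_category(85), discrimination_category(0.05))  # Easy Poor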


Read the Question Statistics Table on the Item Analysis Page

Note: The question statistics table provides item analysis statistics for each question in the test. Questions that are recommended for your review are indicated with red circles so that you can quickly scan for questions that might need revision.

In general, good questions have:

  • Medium (30% to 80%) difficulty.
  • Good or Fair (greater than 0.1) discrimination values.

Questions that are recommended for review are indicated with red circles. They may be of low quality or scored incorrectly. In general, questions recommended for review have:

  • Easy (greater than 80%) or Hard (less than 30%) difficulty.
  • Poor (less than 0.1) discrimination values.

  1. Filter the question table by question type, discrimination category, and difficulty category.
  2. Investigate a specific question by clicking its title and reviewing its Question Details page.
  3. Statistics for each question are displayed in the table, including:
    • Discrimination: Indicates how well a question differentiates between students who know the subject matter and those who do not. A question is a good discriminator when students who answer the question correctly also do well on the test. Values can range from -1.0 to +1.0. Questions are flagged for review if their discrimination value is less than 0.1 or is negative. Discrimination values cannot be calculated when the question's difficulty score is 100% or when all students receive the same score on a question. A worked sketch of these calculations follows this list.
      For more information on how Blackboard calculates discrimination values, see Blackboard's Item Analysis ».
    • Difficulty: The percentage of students who answered the question correctly. Difficulty values can range from 0% to 100%, with a high percentage indicating that the question was easy. Questions in the Easy (greater than 80%) or Hard (less than 30%) categories are flagged for review.
      Difficulty levels slightly above the midpoint between the chance score and a perfect score do a better job of differentiating students who know the tested material from those who do not. Note that a high difficulty value does not guarantee a high level of discrimination.
    • Graded Attempts: The number of question attempts for which grading is complete. Higher numbers of graded attempts produce more reliable statistics.
    • Average Score: Scores denoted with an * indicate that some attempts are not graded and that the average score might change after all attempts are graded. The score displayed here is the average score reported for the test in the Grade Center.
    • Standard Deviation: Measure of how far the scores deviate from the average score. If the scores are tightly grouped, with most of the values being close to the average, the standard deviation is small. If the data set is widely dispersed, with values far from the average, the standard deviation is larger.
    • Standard Error: An estimate of the amount of variability in a student's score due to chance. The smaller the standard error of measurement, the more accurate the measurement provided by the test question.
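
To make the Discrimination and Difficulty definitions concrete, here is a minimal Python sketch of the textbook calculations commonly used for these columns: point-biserial correlation (the Pearson correlation between each question's scores and the total test scores) and percent correct. The data is invented and Blackboard's exact formulas may differ; see Blackboard's Item Analysis » for the authoritative definitions.

from statistics import mean, pstdev

def discrimination(question_scores, total_scores):
    """Pearson correlation between per-question scores and total test scores
    (the classic point-biserial approach). Returns None when the value cannot
    be calculated, e.g. when every student earned the same question score."""
    if pstdev(question_scores) == 0 or pstdev(total_scores) == 0:
        return None
    qm, tm = mean(question_scores), mean(total_scores)
    cov = mean((q - qm) * (t - tm) for q, t in zip(question_scores, total_scores))
    return cov / (pstdev(question_scores) * pstdev(total_scores))

def difficulty(question_scores, points_possible):
    """Percentage of graded attempts that earned full credit on the question."""
    correct = sum(1 for q in question_scores if q == points_possible)
    return 100 * correct / len(question_scores)

# Five graded attempts on a 1-point question, paired with each student's total test score.
question = [1, 1, 0, 1, 0]
totals = [95, 88, 52, 74, 60]
print(round(discrimination(question, totals), 2))  # about 0.9 -> Good
print(difficulty(question, 1))                     # 60.0 -> Medium
print(round(pstdev(question), 2))                  # standard deviation of the question scores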

Additional Resources from Blackboard:

For more information about Item Analysis, visit Blackboard's Item Analysis ».


Source: Blackboard Help »


