The BITS-SF was developed as a shorter measure of individuals’ computer self-efficacy (CSE) that provides a single total score representing an individual’s overall level of CSE. It consists of the six strongest items on the BITS: two from each of the three skill levels (Novice, Advanced, and Expert), with one item from each of the six domains of computer skills (hardware, networking, operating system, software, internet, troubleshooting).
To date, the psychometric properties of the BITS-SF have been established only for the computerized version (Weigold & Weigold, 2021a). However, studies are currently being developed to assess paper-and-pencil and interview versions, and, based on previous research (Weigold et al., 2013, 2018), they are expected to produce similar results.
The BITS-SF and its manual are free to download and use non-commercially as long as the author is credited and the author’s copyright notice is included. Its content should not be modified without the author’s permission, but its format can (and should) be adapted to the medium or software package in which it is administered.
Scoring and Single Score
Participants respond to each item with Yes or No. Yes responses are summed, so higher totals indicate higher levels of CSE. Total scores can range from 0 (all No responses) to 6 (all Yes responses). A score of 0 indicates negligible CSE; scores of 1-2 indicate CSE at the novice computer skill level, 3 at the novice-to-advanced level, 4 at the advanced level, 5 at the advanced-to-expert level, and 6 at the expert level. Only the total score should be used.
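The scoring rule above can be sketched in a few lines of code. This is an illustrative sketch only, not an official implementation: the function names and the assumption that responses are recorded as "Yes"/"No" strings are mine, and the interpretation bands simply transcribe the description above.

```python
# Illustrative sketch of BITS-SF scoring (not an official implementation).
# Assumes each of the six item responses is recorded as the string "Yes" or "No".

def score_bits_sf(responses):
    """Return the total score: the count of Yes responses (0-6)."""
    if len(responses) != 6:
        raise ValueError("The BITS-SF has exactly six items")
    return sum(1 for r in responses if r.strip().lower() == "yes")

# Interpretation bands, transcribed from the manual's description above.
BANDS = {
    0: "negligible CSE",
    1: "novice",
    2: "novice",
    3: "novice-to-advanced",
    4: "advanced",
    5: "advanced-to-expert",
    6: "expert",
}

def interpret(total):
    """Map a total score (0-6) to its skill-level interpretation."""
    return BANDS[total]
```

For example, a respondent answering Yes to four of the six items would receive a total of 4, interpreted at the advanced level.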
Latent class analysis in a sample of Mechanical Turk workers and college students indicated the presence of three classes underlying the BITS-SF, corresponding to the novice, advanced, and expert dimensions assessed by the BITS (Weigold & Weigold, 2021a; see also Weigold & Weigold, 2021b). These three classes had significantly different mean scores across a variety of CSE measures, with those in the novice class generally having the lowest scores and those in the expert class the highest. Overlap in scores across classes also provided evidence for the novice-to-advanced and advanced-to-expert interpretations. The BITS-SF showed evidence of convergent and discriminant validity in college students across 21 measures of similar (e.g., CSE) and dissimilar (e.g., personality) constructs. Finally, college students differed significantly in their BITS-SF scores based on their self-rated computer skill (e.g., novice, advanced) and major (e.g., education, engineering). See Weigold and Weigold (2021a) for details.
If you have any questions, you can contact Arne Weigold, Ph.D.
Weigold, A., & Weigold, I. K. (2021a). Measuring confidence engaging in computer activities at different skill levels: Development and validation of the Brief Inventory of Technology Self-Efficacy (BITS). Computers & Education. https://doi.org/10.1016/j.compedu.2021.104210
Weigold, A., & Weigold, I. K. (2021b). Traditional and modern convenience samples: An investigation of college student, Mechanical Turk, and Mechanical Turk college student samples. Social Science Computer Review. https://doi.org/10.1177/08944393211006847
Weigold, A., Weigold, I. K., & Natera, S. N. (2018). Mean scores for self-report surveys completed using paper-and-pencil and computers: A meta-analytic test of equivalence. Computers in Human Behavior, 86, 153-164. https://doi.org/10.1016/j.chb.2018.04.038
Weigold, A., Weigold, I. K., & Russell, E. J. (2013). Examination of the equivalence of self-report survey-based paper-and-pencil and Internet data collection methods. Psychological Methods, 18(1), 53-70. https://doi.org/10.1037/a0031607