The BITS provides information about individuals’ computer self-efficacy (CSE) for three levels of computer skills: Novice (basic computer use), Advanced (skills beyond basic use that do not typically require specialized knowledge), and Expert (skills typically requiring specific training). Each level corresponds to one subscale consisting of six items covering the same six domains of computer skills (hardware, networking, operating system, software, internet, troubleshooting).


To date, the psychometric properties of the BITS have been established only for the computerized version (Weigold & Weigold, 2021a; Weigold et al., 2023). However, studies are currently being developed to assess paper-and-pencil and interview versions, based on previous research (Weigold et al., 2013, 2018), with the expectation that they will produce similar results.

The BITS and its manual are free to download and use non-commercially as long as the author is credited and the author’s copyright notice is included. Its content should not be modified without the author’s permission, but its format can (and should) be adapted to the demands of the medium or software package in which it is administered.

Scoring and Three Dimensions

Participants respond to the 18 items using a six-point Likert scale ranging from Not at all Confident to Completely Confident. The six items corresponding to each of the three subscales are then averaged, with higher scores indicating greater confidence in completing novice, advanced, or expert computer skills. The three subscale scores should be examined separately, and a total score should not be calculated; respondents who score highly at a more advanced level are typically also highly confident in their ability to perform lower-level skills. If the goal is to obtain a single total CSE score, the BITS-SF should be used instead of the BITS.
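The scoring procedure above can be sketched in code. This is a minimal illustration, not an official scoring script: it assumes a particular item ordering (items 1–6 = Novice, 7–12 = Advanced, 13–18 = Expert), which may not match the actual instrument — consult the BITS manual for the true item-to-subscale mapping.

```python
def score_bits(responses):
    """Average each six-item subscale of an 18-item BITS response.

    responses: list of 18 integers on the six-point scale (1-6).
    Returns a dict of the three subscale means. No total score is
    computed, per the scoring instructions. Assumed (hypothetical)
    item grouping: items 1-6 Novice, 7-12 Advanced, 13-18 Expert.
    """
    if len(responses) != 18:
        raise ValueError("BITS has exactly 18 items")
    if any(not 1 <= r <= 6 for r in responses):
        raise ValueError("responses must be on the 1-6 scale")
    subscales = ("Novice", "Advanced", "Expert")
    return {
        name: sum(responses[i * 6:(i + 1) * 6]) / 6
        for i, name in enumerate(subscales)
    }

# Example: a respondent highly confident on basic skills, less so on expert ones
scores = score_bits([6, 6, 5, 6, 6, 5,   # assumed Novice items
                     4, 5, 4, 4, 3, 4,   # assumed Advanced items
                     2, 1, 2, 2, 1, 2])  # assumed Expert items
```

Keeping the three means separate (rather than summing to one score) mirrors the instruction that the subscales be examined individually.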

Psychometric Properties

The three-factor structure of the BITS was assessed using Mechanical Turk workers and confirmed in college students (Weigold & Weigold, 2021a; see also Weigold & Weigold, 2021b). The BITS showed evidence of convergent and discriminant validity in college students across 21 measures of similar (e.g., CSE) and dissimilar (e.g., personality) constructs. Additionally, college students differed significantly in their scores on the Advanced and Expert levels (but not Novice) based on their self-rated computer skill (e.g., novice, advanced) and major (e.g., education, engineering). Finally, the BITS showed evidence of strong test-retest reliability for up to eight weeks in Mechanical Turk workers. See Weigold and Weigold (2021a) for details.

The Simplified Chinese and Traditional Chinese versions of the BITS showed similarly strong evidence of convergent and discriminant validity. See Weigold et al. (2023) for details.


If you have any questions, you can contact Arne Weigold, Ph.D.


Weigold, A., Weigold, I. K., Zhang, X., Tang, N., & Chong, Y. K. (2023). Translation and validation of the Brief Inventory of Technology Self-Efficacy (BITS): Simplified and Traditional Chinese versions. Social Science Computer Review. Advance online publication.

Weigold, A., & Weigold, I. K. (2021a). Measuring confidence engaging in computer activities at different skill levels: Development and validation of the Brief Inventory of Technology Self-Efficacy (BITS). Computers & Education, 169, 104210.

Weigold, A., & Weigold, I. K. (2021b). Traditional and modern convenience samples: An investigation of college student, Mechanical Turk, and Mechanical Turk college student samples. Social Science Computer Review, 40(5), 1302-1322.

Weigold, A., Weigold, I. K., & Natera, S. N. (2018). Mean scores for self-report surveys completed using paper-and-pencil and computers: A meta-analytic test of equivalence. Computers in Human Behavior, 86, 153-164.

Weigold, A., Weigold, I. K., & Russell, E. J. (2013). Examination of the equivalence of self-report survey-based paper-and-pencil and Internet data collection methods. Psychological Methods, 18(1), 53-70.