Journal Article Details
PeerJ
The effects of gamelike features and test location on cognitive test performance and participant enjoyment
article
Jim Lumsden [1], Andy Skinner [1], Andy T. Woods [2], Natalia S. Lawrence [3], Marcus Munafò [1]
[1] MRC Integrative Epidemiology Unit (IEU), University of Bristol; [2] School of Experimental Psychology, University of Bristol; [3] School of Psychology, College of Life and Environmental Sciences, University of Exeter
Keywords: Gamification; Go-no-go; Mechanical Turk; Points; Engagement; Cognition; Assessment; Game; Cognitive test; Internet-based testing
DOI: 10.7717/peerj.2184
Subject classification: Social Sciences, Humanities and Arts (General)
Source: Inra
【 Abstract 】

Computerised cognitive assessments are a vital tool in the behavioural sciences, but participants often view them as effortful and unengaging. One potential solution is to add gamelike elements to these tasks in order to make them more intrinsically enjoyable, and some researchers have posited that a more engaging task might produce higher-quality data. This assumption, however, remains largely untested. We investigated the effects of gamelike features and test location on the data and enjoyment ratings from a simple cognitive task. We tested three gamified variants of the Go-No-Go task, delivered both in the laboratory and online. In the first version of the task, participants were rewarded with points for performing optimally. The second version of the task was framed as a cowboy shootout. The third version was a standard Go-No-Go task, used as a control condition. We compared reaction time, accuracy, and subjective measures of enjoyment and engagement between task variants and study locations. We found points to be a highly suitable game mechanic for gamified cognitive testing because they did not disrupt the validity of the data collected but increased participant enjoyment. However, we found no evidence that gamelike features could increase engagement to the point where participant performance improved. We also found that while participants enjoyed the cowboy-themed task, the difficulty of categorising the gamelike stimuli adversely affected participant performance, increasing No-Go error rates by 28% compared to the non-game control. Responses collected online had slightly longer reaction times than those collected in the laboratory but were otherwise very similar, supporting other findings that online crowdsourcing is an acceptable method of data collection for this type of research.

【 License 】

CC BY   

【 Preview 】
Attachment list
Files Size Format View
RO202307100015124ZK.pdf 4834KB PDF download
Document metrics
Downloads: 18  Views: 4