Journal Article Details
Frontiers in Psychology
Proof of Concept of a Gamified DEvelopmental Assessment on an E-Platform (DEEP) Tool to Measure Cognitive Development in Rural Indian Preschool Children
article
Debarati Mukherjee1  Vikram Patel1  Supriya Bhavnani1  Akshay Swaminathan3  Deepali Verma2  Dhanya Parameshwaran4  Gauri Divan2  Jayashree Dasgupta2  Kamalkant Sharma2  Tara C. Thiagarajan4 
[1] Centre for Chronic Conditions and Injuries, Public Health Foundation of India; [2] Child Development Group; [3] Department of Global Health and Social Medicine, Harvard Medical School, United States; [4] Sapien Labs, United States
Keywords: serious game; cognitive development; LMIC; digital assessment; mHealth; machine learning; scalable; preschool children
DOI: 10.3389/fpsyg.2020.01202
Subject classification: Social Sciences, Humanities and Arts (General)
Source: Frontiers
【 Abstract 】

Over 250 million children in developing countries are at risk of not achieving their developmental potential and are unlikely to receive timely interventions, because the existing developmental assessments that help identify faltering children are prohibitively resource-intensive for use in low-resource contexts. To bridge this "detection gap," we developed a tablet-based, gamified cognitive assessment tool named DEvelopmental assessment on an E-Platform (DEEP), which is feasible for delivery by non-specialists in rural Indian households and acceptable to all end-users. Here we provide proof-of-concept of using a supervised machine learning (ML) approach, benchmarked to the cognitive scale of the Bayley Scales of Infant and Toddler Development, Third Edition (BSID-III), to predict a child's cognitive development from metrics derived from gameplay on DEEP. Two hundred children aged 34–40 months recruited from rural Haryana, India were concurrently assessed using DEEP and BSID-III. Seventy percent of the sample was used to train the ML algorithms using a 10-fold cross-validation approach and ensemble modeling, while 30% was assigned to the "test" dataset to evaluate the algorithm's accuracy on novel data. Of the 522 features that computationally described children's performance on DEEP, 31 features, which together represented all nine games of DEEP, were selected in the final model. The predicted DEEP scores were in good agreement (ICC [2,1] > 0.6) and positively correlated (Pearson's r = 0.67) with BSID-cognitive scores, and model performance metrics were highly comparable between the training and test datasets. Importantly, the mean absolute prediction error was less than three points (<10% error) on a possible range of 31 points on the BSID-cognitive scale in both the training and test datasets.
Leveraging the power of ML, which allows iterative improvements as more diverse data become available for training, DEEP, pending further validation, holds promise as an acceptable and feasible cognitive assessment tool to bridge the detection gap and support optimal child development.
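The evaluation design described in the abstract (70/30 train/test split, 10-fold cross-validation with an ensemble regressor, and accuracy reported as mean absolute error and Pearson's r against BSID-cognitive scores) can be sketched as follows. This is a minimal illustration on synthetic data, assuming scikit-learn; the specific models, feature values, and outcome scores below are hypothetical stand-ins, not the paper's actual 522 gameplay features or its final 31-feature model.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_children, n_features = 200, 522   # sample size and feature count from the abstract

# Synthetic gameplay features and synthetic "BSID-cognitive" scores:
# only 31 of the 522 features carry signal, mirroring the selected feature count.
X = rng.normal(size=(n_children, n_features))
y = 55 + X[:, :31] @ rng.normal(size=31) + rng.normal(scale=2.0, size=n_children)

# 70/30 train/test split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)

# Ensemble of regressors; the component models here are illustrative choices only
ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("ridge", Ridge(alpha=1.0)),
])

# 10-fold cross-validation on the training set
cv_mae = -cross_val_score(ensemble, X_tr, y_tr, cv=10,
                          scoring="neg_mean_absolute_error")

# Fit on the full training set, then evaluate on the held-out test set
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
test_mae = mean_absolute_error(y_te, pred)
test_r = np.corrcoef(y_te, pred)[0, 1]
print(f"CV MAE: {cv_mae.mean():.2f}, test MAE: {test_mae:.2f}, test r: {test_r:.2f}")
```

Keeping the test split untouched until the final evaluation, as done here, is what lets the authors claim that performance metrics were "highly comparable between the training and test datasets" on genuinely novel data.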

【 License 】

CC BY   
