Computer Science Education Research

This blog curates the research work of educators in the field of Computer Science Education. In particular, it serves as a place where CS Education researchers can find links and references to work in the field.



Ethics in Computer Science Education

The spreadsheet below (see link) provides a comprehensive survey of courses and resources for teaching ethics in computer science programs.

A useful Twitter chat on the topic can also be found at the link below.

Professor Casey Fiesler is curating these resources.

14 quotes from 14 #SIGCSE2017 Papers

SIGCSE 2017 is now almost halfway through, and it’s time to get down to some of the papers. Judging from the Twitter feed, SIGCSE 2017 looked like a lot of fun, with many interesting ideas. Below, in no particular order, are my favorite stand-out quotes from the papers I read. These quotes come mainly from papers that focus on the teaching and learning of introductory programming using tools and innovative pedagogical approaches.

    1. “We found that [online coding] tutorials largely taught similar content, organized content bottom-up, and provided goal-directed practices with immediate feedback.” [1]
    2. “We found that the students who used physical manipulatives performed well in rule construction, whereas the students who engaged more with the rule editor of the programming environment had better mental simulation of the rules and understanding of the concepts.” [2]
    3. “We found that the relationship between some introductory course experiences and self-efficacy and sense of belonging was strongest among first-generation college women, which reveals the importance of considering women’s experiences in light of their additional intersectional identities.” [3]
    4. “According to our data, pencil puzzle based assignments can be effective in teaching students of varying experience levels. This indicates that pencil puzzles are indeed a leveling context for the instruction of CS topics.” [4]
    5. “This study showed the effectiveness of two-stage exams using a crossover experimental design. Students have a statistically significant learning gain on topics that were given in a group-retest based on quiz performance two weeks later. However, the effect is no longer detectable by the final exam. The benefit of two-stage exams for students from different demographic groups was analyzed, but no conclusive trends were observed.” [5]
    6. “Our results indicate that a) students of differing achievement levels approach programming tasks differently, and b) these differences can be automatically detected, opening up the possibility that they could be leveraged for pedagogical gain.” [6]
    7. “Having students re-arrange mangled code in lieu of writing their own code from scratch makes exam questions more efficient to mark, produces more consistent and reliable evaluations, and seems to preserve the relative ordering of student grades, thus indicating that it is measuring student ability as well as traditional coding questions.” [7]
    8. “Our findings show a strong tendency, by many senior students, to remain at low levels of abstraction, even after realizing abstraction in a variety of CS courses. Abstraction is not an easy cognitive task, yet CS tutors should explicitly illustrate and practice it with their students in algorithmics courses.” [8]
    9. “The results and follow-up cognitive think alouds indicate that students are generally unfamiliar with the use of variables, and harbor misconceptions about them. They also have trouble with other aspects of introductory programming such as how loops work, and how the Boolean operators work.” [9]
    10. “These results suggest that most students find TBL [team-based learning] rewarding, although there are some aspects of the pedagogy that can be frustrating and may require alteration for TBL adoption in CS.” [10]
    11. “Our work confirms that variables and assignment can be problematic for novice high school programmers. We found a pattern of deferred evaluation of variables in the programs of about a third of our students. As a possible explanation we pointed out that the students’ mistakes are consistent with a model of the notional machine exhibiting algebraic capabilities.” [11]
    12. “Our conclusion is that we were successful in achieving similar outcomes, and the benefits of context-based CS1 courses, in both the Visual Media and Science versions of the course.” [12]
    13. “With the increased emphasis on K-12 computing, it is critical that the community develop assessments so that teachers, parents, and administrators can understand what students are learning.” [13]
    14. “Kodu reasoning problems appear to be a promising tool for assessing computational thinking in young programmers.” [14]