People of ACM - Briana Morrison

September 5, 2023

What is a key challenge and an important opportunity for those working in computer science education today?

We certainly live in interesting times. I believe that Large Language Models (LLMs) or generative AI (GenAI) are both a key challenge and an important opportunity for everyone who currently teaches computer science. The challenge is clearest in the introductory programming courses. The vast majority of assignments in those classes are trivial for current LLMs or GenAI tools like ChatGPT or Copilot to solve. CS education researchers are finding it difficult to design “AI-proof” assignments that can’t be easily solved, and I’m not convinced that’s the right approach. Given that these tools are here to stay and will only improve over time, it’s just another arms race to try to find assignments that GenAI can’t easily solve, and that isn’t where I want to spend my time and energy. I would rather change what we teach students: focus more on code reading and comprehension, on designing test cases to determine whether the GenAI solution is correct, and on modifying existing code to improve or correct functionality. There really isn’t a need for a student to start with a blank editor screen for a programming assignment ever again.
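As a minimal sketch of such an assignment in Java (the method, names, and test values here are hypothetical, not drawn from any published course materials): students receive a plausible but flawed implementation, as a GenAI tool might produce, and their task is to design test cases that decide whether it is correct.

```java
// Sketch of a "test the GenAI's code" exercise. Students receive this
// candidate method (imagine it came from ChatGPT or Copilot) and must
// design test cases that decide whether it is correct.
public class VowelCountExercise {

    // Candidate solution under test. It is deliberately flawed:
    // it ignores uppercase vowels.
    static int countVowels(String s) {
        int count = 0;
        for (char c : s.toCharArray()) {
            if ("aeiou".indexOf(c) >= 0) {
                count++;
            }
        }
        return count;
    }

    // A student-designed check: prints the outcome and reports pass/fail.
    static boolean check(String input, int expected) {
        int actual = countVowels(input);
        System.out.printf("countVowels(\"%s\") = %d (expected %d)%n",
                input, actual, expected);
        return actual == expected;
    }

    public static void main(String[] args) {
        boolean allPass = true;
        allPass &= check("hello", 2);  // passes
        allPass &= check("", 0);       // edge case: passes
        allPass &= check("hEllO", 2);  // fails, exposing the uppercase bug
        System.out.println(allPass ? "All tests passed"
                                   : "At least one test failed");
    }
}
```

The credit in such an exercise goes to the tests that expose the flaw, not to authoring the method from scratch.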

Incorporating LLMs/GenAI into the CS curriculum is an important opportunity. Designing effective prompts—through a series of refinements—is an important skill that we currently aren’t teaching students. For years we have espoused that we want to teach “critical thinking skills.” What better way to train and test those skills than having students refine a GenAI prompt, review the output, and test the results until all test cases pass? This would truly demonstrate that students understand what they are being asked to do, can evaluate whether the solution is appropriate and accurate, and can modify a code base to completely solve the problem. And this can be done in any and all CS courses, not only the introductory programming courses.

I also believe that GenAI/LLMs will begin to change where students learn—and from whom. I predict a vast decrease in questions to Stack Overflow and in visits to TA office hours; students will turn to GenAI/LLMs to ask questions and seek advice. These tools will become personalized learning assistants—not quite tutors, since they don’t understand the learning objective—but able to answer questions and provide answers tailored to individual learning needs. The current problem is that learners generally have no way to verify the generalized information provided by GenAI. In this sense, the code that an LLM produces is actually more beneficial: at least a student can run and test the code to see if it works.

But when GenAI gives a “mostly” correct and believable response, and that response contains critical errors (e.g., misidentifying the runtime classification of a specific operation on a data structure), how would the student know to question the response? This is why it is important for educators to explicitly teach the foundational ideas and facts. Students must verify any factual information produced by GenAI, which is a time-consuming task.
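When the claim is about code, students can verify it empirically. Here is a hypothetical illustration in Java: suppose a GenAI asserted that inserting at the front of an ArrayList is a constant-time operation. A short timing experiment settles the question.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;

// Empirically checking a claimed runtime classification: is inserting at
// the front of an ArrayList really O(1)? An ArrayDeque, whose addFirst
// truly is amortized O(1), serves as the point of comparison.
public class FrontInsertTiming {

    static long timeArrayListFrontInserts(int n) {
        ArrayList<Integer> list = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            list.add(0, i);    // shifts every existing element: O(n) per call
        }
        return System.nanoTime() - start;
    }

    static long timeDequeFrontInserts(int n) {
        ArrayDeque<Integer> deque = new ArrayDeque<>();
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            deque.addFirst(i); // amortized O(1) per call
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // If the O(1) claim were true, doubling n would roughly double the
        // total time; the quadratic growth in the ArrayList column refutes it.
        for (int n = 10_000; n <= 80_000; n *= 2) {
            System.out.printf("n=%,7d  ArrayList: %8.1f ms   ArrayDeque: %6.2f ms%n",
                    n,
                    timeArrayListFrontInserts(n) / 1e6,
                    timeDequeFrontInserts(n) / 1e6);
        }
    }
}
```

For non-code facts, no such executable check exists, which is exactly why the foundational ideas must be taught explicitly.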

Your research interests include examining whether, how, and when we can apply broader educational psychology principles to computer science education. Generally speaking, how is the learning of computer science similar to learning in other disciplines, and how is it different?

Anyone who knows me will tell you that my mantra has always been “Computer Science is different!” We know that some educational psychology principles hold across many domains. One example is cognitive overload, which can hamper learning regardless of the domain. If you ask a student to learn a small number of simple vocabulary words in a specific domain, most undergraduates will be able to learn them even when the page is cluttered with extraneous pictures or diagrams. The relevant cognitive structures in the brain (short-term memory) are probably not being overloaded, even with the extraneous information. However, if I ask a student to come up with a sentence that uses all the vocabulary words, that will likely overload short-term memory and result in poor learning. By intertwining or linking the vocabulary words, we have dramatically raised the intrinsic load, resulting in faulty or nonexistent commitment to long-term memory.

A similar example that my research has demonstrated is subgoal labels. Subgoal labels name the individual steps needed to accomplish a goal as part of the problem-solving process. Examples of subgoals for a recipe would be “Combine dry ingredients” and “Cream together sugar and butter.” When appropriately designed and implemented, subgoal labels benefit novice learning in STEM disciplines. When students understand the steps to solving a problem, they are more likely to attempt a solution or to solve the problem correctly. The subgoals provide decomposition, schemata organization, and memory anchors or beacons for faster recall from long-term memory.
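In programming instruction, the labels typically appear as the named steps of a worked example. A minimal sketch in Java (the labels here are illustrative, not taken verbatim from our published materials):

```java
// A worked example annotated with subgoal labels. Each comment names the
// step being performed rather than restating the code.
public class AverageExample {
    public static void main(String[] args) {
        int[] scores = {88, 92, 75, 63, 97};

        // Subgoal: initialize the accumulator variables
        int sum = 0;
        int count = 0;

        // Subgoal: loop through every data element
        for (int score : scores) {
            // Subgoal: update the accumulators with the current element
            sum += score;
            count++;
        }

        // Subgoal: compute the result from the accumulated values
        double average = (double) sum / count;

        // Subgoal: report the result
        System.out.println("Average score: " + average);
    }
}
```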

However, researchers have also found that learning programming does not necessarily follow the same rules as other disciplines. Perhaps the best-known example of this is the Modality Effect. Researchers in other disciplines have found that presenting students with the same information in both a spoken and a visual modality reduces learning. In videos, students presented with identical information both as on-screen text and as narration read aloud retained less information than students who received the information in a single modality. In programming, this has not been found to be the case.

Another example is the Split Attention Effect: presenting a diagram with explanations in text before or after the diagram results in less learning than incorporating all the information into the diagram (without overloading it). Yet this does not seem to be true for learning programming. Programs with explanations placed before or after the code and programs with explanations incorporated as comments (or callouts) show no significant difference in learning outcomes. This may be because a computer program is neither “read” like text nor processed like a figure. It is something different.

We have really just started exploring the ways that learning computer programming is in fact different from learning in other disciplines, and there is still much to discover.

In your 2020 paper “Reducing Withdrawal and Failure Rates in Introductory Programming With Subgoal Labeled Worked Examples,” you and co-authors Lauren E. Margulieux and Adrienne Decker note that dropout and failure rates in introductory programming courses are regularly as high as 50%. Will you discuss an important insight of this paper as well as effective strategies for reducing dropout and failure rates for introductory CS classes?

I often refer to solving the CS1 (introductory programming) retention problem as the Holy Grail of CS education research. A great deal of research has concentrated on finding effective strategies for reducing the dropout and failure rates of the first programming class. In the paper referenced, we implemented subgoal labels in teaching introductory Java programming. We conducted a task analysis protocol on several programming concepts to develop a set of subgoal labels for each concept. Students were then taught the concepts using the subgoal labels and asked to solve problems using them. Students who learned with subgoal labels performed better on quizzes, though no statistically significant difference was found in exam performance. The most important finding, however, was that students we classify as “at-risk” (e.g., those with no prior programming experience or low self-confidence in computing) were half as likely to withdraw from or fail the course. This indicates that using subgoal labels can aid the students we most want to help.

The Center for Inclusive Computing has identified other effective strategies, which include separating students by prior knowledge. Feeling like you are behind seemingly everyone else in the class in the first week can be demoralizing and shakes confidence. A 2021 ITiCSE working group that I co-led produced “Evidence for Teaching Practices that Broaden Participation for Women in Computing,” which evaluates the research on pedagogical practices for retaining women in computing.

For those who are not familiar with the platform, why is EngageCSEdu a unique and valuable resource for computer science educators?

EngageCSEdu is an ACM Education Board special project that publishes high-quality, engaging, classroom-tested Open Educational Resources (OERs) for computer science education. We publish innovative, engaging educational assignments, as well as projects, labs, student activities, and instructional materials for a variety of computing courses (such as Introductory Programming, Data Structures, Discrete Math, Human-Computer Interaction, AI, and Ethics). Each submission (or OER) is meant to be adoptable and adaptable by other instructors for use in their own classrooms. EngageCSEdu uses a dual-anonymous review process, and each OER is reviewed by at least two computer science reviewers and one social scientist.

We have had special issues on AI assignments and HCI assignments, and later this year we will publish an issue on Responsible Computing materials. EngageCSEdu also hosts an Ethics Repository with links to news articles related to technology. The Ethics Repository is searchable and is maintained by the ACM Task Force on Ethics and Computing Education.

EngageCSEdu is unique because the materials are published in the ACM Digital Library, giving each OER a DOI and making it a standard entry in a CV; it is another publication of the author’s scholarship. The DL also tracks downloads for author statistics, similar to citations. It is unique, too, in that we do not publish solutions; instead, adopters are welcome to email the author for them. All OERs are published in an editable source format and carry a Creative Commons license for reuse. EngageCSEdu is an impactful venue for the dissemination of creative instructional materials and a great resource when searching for your next classroom assignment.


Briana B. Morrison is an Associate Professor at the University of Virginia. Her research focuses on computer science education, broadening participation in computing, and increasing K-12 access to qualified computing teachers. Morrison’s honors include a University of Nebraska Omaha Outstanding Teaching Award and a Georgia Tech College of Computing Dissertation Award.

Morrison is a member of the ACM Education Board, which oversees ACM’s curricular guidelines for university-level education. She is also Co-Editor-in-Chief of EngageCSEdu, an ACM-sponsored online repository of high-quality, classroom-tested resources for the computer science education community. She served on the ACM SIGCSE Board from 2016 to 2019.