Michael Weiner

March 5, 2021

An open letter to those who teach computer science

I love technology. Always have. I took my first true computer science (CS) course in a public school system in middle school. I took CS courses throughout my high school education and I am currently a student at a major four-year university working towards a degree in computer science (emphasis/focus within the field is still to be determined).
 
As a former student in a public school system and a current university student, I have some thoughts on what I see as flaws in how computer science is generally taught. These ideas are in no way specific to one school, instructor, or class, but rather a collection of observations from the first 10 years of my journey learning within the computer science sphere (for lack of a better term).
 
At the high school and post-secondary levels, many courses are structured similarly. Usually there are:

  • 3-4 exams & a final exam that account for 65-75% of a student's final overall grade
  • 3-5 projects that account for 10-20% of a student's final overall grade
  • Lecture participation, labs, discussions, etc. that account for 5-20% of a student's final overall grade

Again, this general structure is followed by most classes and instructors. Some do completely break this mold and flip it on its head.

Lectures, labs, and discussions are usually more guided. Open conversation is always welcome, and the use of online tutorials, compilers, command-line tools, etc. is all fair game. I have even had professors and teaching assistants (TAs) tell me to open an IDE and run some code to answer my own question(s).

It is drilled into students in labs every week that the best way to learn is to open some code and start typing. Compile it. Find some syntax errors. Find some logic errors upon running your program. Go back to the code and try it again. And again. And again. Work in groups. Talk through problems. This is good! It is a decent representation of the "real world" that can be created in a classroom environment for an hour here and there. 

Lectures, labs, discussions, and a student's own exploration outside of class might add up to 12-16 hours of work a week, maybe even more, yet account for a relatively small fraction of their overall grade. That may not sound like a huge deal if the rest of the course functioned in the same fashion, but I have found that it usually doesn't.

Those 3-5 projects usually have to be done alone. I have only ever been in a single university-level course that allowed a single partner on a single project. Does that not seem backwards to anyone else?

Industry professionals often visited the classes I was enrolled in at my high school, and I made a point to ask each and every one of them if they ever did work exclusively by themselves. The answer I got every time was "No." Each person explained that their day was usually filled with pair-programming and debugging sessions, that Slack or Microsoft Teams (or something similar) was used to bounce questions around all day, and that everyone worked as a member of a team.
 
If the classroom is meant to prepare students for the "real world," why are the tasks students are given the exact opposite of what they will find when they graduate and leave the classroom?
 
Maybe I was missing something.
 
After my first course at the university level, it came out that, in a course of roughly 500 students, 35-50 students had been caught cheating at some point throughout the semester. As I completed more courses, this issue seemed to be the single reason behind how projects are structured: catching those who cheat. It often felt that some of my instructors were more hell-bent on catching cheaters than on actually teaching those who wanted to be there.
 
Now, I do understand why any academic institution has to be on the lookout for cheaters. Truly, I do. My problem is the lack of balance. People cheat every day, in all aspects of life. If the world tried to catch every cheater, we would do nothing else. I take issue when I feel as though the person who is supposed to be teaching me is more interested in sticking it to the relatively few students who cheat. Does punishing the students who do not cheat serve as an effective measure to prevent those who want to cheat from actually cheating? No. No, it does not.
 
More frustrating to me as a student is that these projects were a truly enjoyable and rewarding experience. I have easily learned more by completing the projects I have been assigned than I have from lectures, labs, discussions, and exams combined. Hands down. These projects have typically taken me 20-25 hours each to complete. That is more time than I would spend on all of my exams combined, and yet a single project might only account for 5% of my final overall grade. That seems backwards too.

Well, okay. What about those pesky exams? The exams turn pretty much every CS course I have taken on its head. You don't type anything; it is all handwritten. That may not sound like a huge deal, but when throughout the course you have been told to compile things, try things, and type things, it is a huge change. You no longer have the ability to compile and check syntax. The physical act of writing is very different from typing. You lose the ability to try things - the very thing you have been instructed to do in the rest of the class.

Ultimately, if you want to do well on these exams you have to write perfect code. That does not exist. Anyone in the industry knows that. We are humans. We make mistakes. I also made a point to ask those in the industry if, at any point in their "real world" jobs, their boss has ever come to them, asked for code to work flawlessly in one hour, and told them that they could not get any help or use their compiler. All of them looked at me as if I were crazy and said "No." Again.
 
If we are not expecting industry professionals to do something, why are we expecting first-time learners in the field to do it?
 
It has been shown time and time again that "traditional" or "standardized" testing doesn't help students learn more. It stresses them out and does not accurately reflect what they do and do not know. So why is more and more emphasis and weight placed on these exams? I honestly couldn't tell you. I don't understand the logic - especially in this field.
 
I am also not trying to say that universities should get rid of all testing and just not test students at all. Nice try - but no. What about making the overall weight of an exam proportional to the time spent taking it? What if, instead of a "traditional" exam, courses became more project-based and students could show off a project built with knowledge from the course?
 
The line that "this is the way it has always been done" is not an answer. Why can't we get creative and try something new? Why do universities continue to teach courses built around programming languages that are no longer used, or ideas that are considered obsolete? Teaching history is important, but if my instructor tells me "You will never use this in the real world," why is it an expectation, a requirement, for students to learn it?
 
I also realize that these ideas are not so cut and dried. They are complicated. I realize that, but at the same time I think the time has come to reexamine how we teach computer science. Especially if we want to make the field more welcoming and diverse.

About Michael Weiner

Hey, visitor! I'm Michael, a software engineer based in Minnesota, USA. I am an IBMer working on IBM Cloud Kubernetes Service. Feel free to poke around some of my work at michaelweiner.org. Below are some of my personal thoughts on business and my experiences in the computer science industry. Thanks for reading!