The value of a computer science education for programmers

It’s not uncommon to see people decrying a computer science degree as not effectively preparing students to be programmers. Of course, this misses the point that it’s not intended to do so; computer science is the study of computation, not the study of how to program. Indeed, the 2015 Stack Overflow Developer Survey found that only 75% of programmers responding said they studied computer science at a university, and many of those still described themselves as self-taught.

And yet, every year another 40,000 students graduate with a bachelor’s degree in computer science, and many of them go on to work as programmers. If programming is so remote from computer science, why do so many people who want to program start with a computer science degree?

Before I try to answer that question, I should mention my background. I have bachelor’s, master’s, and PhD degrees in computer science and have worked as a software developer for the last five years. At the same time, I would also describe myself as self-taught, because while I had a couple of programming classes in college, for the most part we were just expected to pick up languages as we needed them. The focus was on what we could do with computers, rather than the mechanics of any particular language.

Why people get a computer science degree when they want to program

I think there are three main reasons why people study computer science.

Photo of a sign reading “Department of Computer Science”
“CU Computer Science” by Aberlen – Own work. Licensed under CC BY 3.0 via Commons.

The first group is people who just like computers and want to learn more about them; I fall into this group. Since we’re into computers, computer science is the natural degree path to take. Once we graduate, programming is one of the most obvious computer-related career paths to follow.

The second group is people who enjoy programming, figure they need a college degree, and settle on computer science as the most relevant degree. This doesn’t make it the wrong option; most people benefit from the well-rounded liberal arts education offered by a bachelor’s degree. However, there are also a number of people who really only want to learn about programming, and would most likely be happier in an entirely programming-focused certification program from a technical college. Lately we, as a society, have started to think that everyone needs a four-year college degree, and that’s really not true; for people who know that they want to pursue a trade, going to trade school for two years can be a much more financially sensible decision.

The third group is people who decided to get a degree in computers because they heard that it guarantees them a lot of money. Hint: it doesn’t. People in that group are most likely not reading this blog.

What a computer science degree does for you

So, given that computer science classes don’t teach you to program (at least, not past a very basic level) and getting a four-year computer science degree is so much more expensive than attending a technical college or code camp, why do people – at least, those who want to be programmers and know about the other alternatives – do it?

One reason, of course, is that it expands your options. Having the degree means you’re qualified for more types of work, and it makes it easier to get in the door with employers who will only consider people with degrees. It’s been said that the bachelor’s degree is the new high school diploma: it shows employers that you have a basic level of education.

But suppose you know you’re only interested in programming, and further, you’re confident you can get a programming job with or without a degree. Is it still worth it?

How a computer science degree makes you a better programmer

Again, let’s start with the basic assumption that you are, for the most part, going to teach yourself how to code. Getting the degree can still help you in two areas: getting a job and being better at it.

Getting the job

Being enrolled in a computer science program opens up more opportunities. It’s easier to get an internship (which, for programmers, can actually pay pretty well), which can essentially be an extended job interview; it lets you see if you’re interested in a job, while letting the company determine whether they’d like to keep you around without the overhead of hiring you as a full-time employee. I never did an internship while I was in college, because I was working as a math tutor at the time, and I ended up regretting it, since an internship is the easiest way to get experience.

Many companies also recruit for their full-time positions at selected universities – they have a relationship with schools that tend to send them good programmers and so make a habit of looking for new employees there. I’d never even heard of the company I now work for until they took my resume at the job fair for computer science students at my school.

Doing the job

Ok, so now you have the job. What next?

We’ll assume that you have decent programming skills, and you’re keeping them up to date. Many programmers like it so much, they write code in their free time as well. That still leaves the question: what kind of programmer are you going to be?

Drawing of a monkey coding.
“Code Monkey” by Len Peralta. Licensed under Creative Commons.

Are you going to be the type of programmer who’s only interested in writing code? Someone else determines what code needs to be written, and you implement it. Let’s be honest here – if you’re this type of programmer, who just totally loves to code, you’re probably going to do a better job in this kind of position than anyone else. For you, the degree may be redundant.

On the other hand, what if you’re interested in taking on a larger role? For lack of better terms, I use the word ‘programmer’ to mean someone who primarily writes code, and the phrase ‘software developer’ for someone involved in all aspects of, well, software development, including programming.

Being a software developer

As a software developer, my job involves writing code. It also involves testing other people’s code, creating designs for the code I’m going to write, critiquing other developers’ designs, and writing documentation explaining what the code does, how it should be tested, and how it should be explained to the end users. Programming is a large part of my job – maybe the most important part – but it’s not the only part, nor does it even occupy the majority of my time.

So how does having a well-rounded education help here? You need to be able to:

  • Analyze requirements to determine what actually needs to be built
  • Write clear documents that effectively communicate the intent of your code
  • When the efficiency of the code matters, determine what algorithms will work best given your unique conditions
  • Ensure that the resulting product is usable by people who are not experts on (or familiar with) your code

Some of these things – particularly algorithms – are taught in computer science classes. Others are general education requirements that you’ll have to satisfy to get the degree, even though they’re not computer-related; things like English Composition. I’ve seen more than one person complain about their college forcing them to take classes on subjects like English that they’re not interested in, but they actually are important to being a software developer. English classes teach you to communicate. Math classes, well, let’s just say I probably wouldn’t be too trusting of any program written by someone who can’t handle college math; computer science is essentially an extension of mathematics.

Always be learning

Face it, though: in ten years, you’re not going to remember most of what you learned in college. What you will remember, hopefully, is how to learn. As a full-time college student you get a lot of information thrown at you, and you have to be able to pick up the basics of multiple different subjects quickly in order to succeed. Which actually sounds a lot like a job working with computers, where you have to be constantly learning about many different technologies.

Now, about those student loans…

Overcoming Impostor Syndrome

About six months after I started my job, I mentioned to my mentor that what worried me was knowing that I was the worst developer on the team. She told me that I was definitely not.

People are often bad at judging how good they are at things. At one end of the scale we have the Dunning-Kruger effect, in which people who aren’t particularly good at something mistakenly believe their level of skill to be very high; people who are incompetent rarely recognize this. At the other end, people who are skilled at a given task tend to find it easy, assume that others also find it easy, and thus tend to underestimate their own skills; after all, they’re only good at easy tasks!

Computer programming is such a complex task, with new things constantly needing to be learned, that it’s very easy to feel like you’re falling behind. Then you look at your coworkers, who don’t appear to be having any particular difficulty, and feel like you’re not good enough to deserve the position you’re in.

Graph of actual vs perceived ability. When actual ability is much higher, we have Impostor Syndrome; when perceived ability is much higher, we have the Dunning-Kruger Effect.
Image courtesy of Imgur.

When I confessed to one of the senior developers on the team that I didn’t really feel like I knew what I was doing, he said he also felt that way during his first year, which made me feel better about my situation. After my first year, I still felt a little out of my depth, but the feeling of “I have no idea what’s going on!” had gone away.

The turning point for me was shortly after my second year with the company. The coding metrics we use showed that my performance was in line with the rest of the developers, and I got a sizable raise. This was when I started to relax. I figured, based on the raise, that my employer thought I was doing a good job, so I could probably stop worrying about getting fired because I didn’t know what I was doing.

Being a programmer means constantly failing at things; you never seem to feel fully competent (or at least, after five years, I still don’t). Of course, when you’re just starting, the feelings of inadequacy are natural; without experience, you really aren’t particularly competent yet! But when it comes to programming, those feelings may never go away because you never feel like you’re caught up.

One thing that stuck with me last year, when I heard Bob Martin speak, was his point that the number of programmers doubles every five years. That means that, even though I had very little practical experience when I started my job five years ago, I’m now more experienced than half of the developers out there! Additionally, my job is not static – I have to do new things all the time, which means constant frustration but also constant learning. A few days ago, it occurred to me to think about it mathematically: if I’ve improved my skills by 2% each month, then compounding means I’m roughly twice as good as I was three years ago (1.02 raised to the 36th power is just over 2). Of course, if it’s hard to measure how good people are at programming, it’s likely going to be even harder to measure improvement, but if you can find some sign each month that you’re a better programmer than you were the month before, you’re probably on the right track.

And if you were competent three years ago and you’re twice as good now, then you’re probably not terrible at this after all.

Unit Testing: What I learned my first year

In 2014, I decided that one of my goals in 2015 was to start doing unit testing at work. I picked up a copy of The Art of Unit Testing and read a reasonable amount of it. I was excited about the idea of being able to automatically catch bugs, making it less likely that they’d slip through to be found by my code reviewers or testers.

Book cover from The Art of Unit Testing

Unit testing refers to automated tests, each designed to exercise one “unit” – usually a function – in your code. This means checking that for any given input, the function returns the appropriate output. Unit testing comes before integration testing, in which the various units are combined to make sure they work well together.
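
To make that concrete, here’s a minimal sketch of what a unit test looks like in C#. I’m using NUnit, and the class and function names are made up for illustration; they’re not from my actual project.

```csharp
using System;
using NUnit.Framework;

// A hypothetical unit under test: one small function with a clear input/output contract.
public static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, decimal discountPercent)
    {
        if (discountPercent < 0m || discountPercent > 100m)
            throw new ArgumentOutOfRangeException(nameof(discountPercent));
        return price - (price * discountPercent / 100m);
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_TenPercentOff_ReturnsReducedPrice()
    {
        // Arrange: set up the input.
        decimal price = 100m;

        // Act: call the unit under test.
        decimal result = PriceCalculator.ApplyDiscount(price, 10m);

        // Assert: for this input, we expect this output.
        Assert.AreEqual(90m, result);
    }
}
```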

In 2015, every time I developed a new activity, I wrote C# unit tests to go with it. When I worked on existing activities, I wrote unit tests for them as well. Here’s what I learned:

1) Unit tests don’t catch bugs as much as they prevent bugs.

Long, complex functions are difficult to test; you have to ensure that you’re covering every possible path. Short, simple functions that do one thing are relatively straightforward to test. That means that to make testing easier, you end up changing the way you write code.

1a) Unit testing makes your code more modular.

Simpler code, of course, is less likely to contain bugs, which means that just writing code with unit testing in mind can improve your code quality…whether or not you ever actually write the tests!

2) Unit testing promotes the refactoring of existing code.

When you write unit tests, the test class becomes another client of the class under test, and it becomes appropriate to update the class under test to make it more testable. Writing tests for your existing code is therefore a good excuse to go back and refactor that code: if your team decides to add tests, you can use that as justification to spend developer time paying down your existing technical debt.
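
As a sketch of the kind of refactoring I mean (the names here are hypothetical, not our real code): a class that creates its own database access internally is hard to test, but once the dependency comes in through an interface, the test class can hand it a simple fake.

```csharp
// The class under test takes its dependency through an interface instead of
// constructing a concrete database class itself, which is what makes it testable.
public interface IInvoiceRepository
{
    decimal GetBalance(int customerId);
}

public class InvoiceService
{
    private readonly IInvoiceRepository _repository;

    public InvoiceService(IInvoiceRepository repository)
    {
        _repository = repository;
    }

    public bool IsOverdue(int customerId)
    {
        return _repository.GetBalance(customerId) > 0m;
    }
}

// In the test project, a fake stands in for the real database,
// so the test class becomes just another client of InvoiceService.
public class FakeInvoiceRepository : IInvoiceRepository
{
    public decimal Balance { get; set; }

    public decimal GetBalance(int customerId)
    {
        return Balance;
    }
}
```

The production code hands InvoiceService the real repository; the tests hand it the fake and set whatever balance the scenario calls for.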

3) Unit tests take time now, but they save time in the future.

I’ve heard more than one person say that doing unit testing doubles the amount of time it takes to write code – after all, you’re writing twice as much code! I don’t write THAT much test code, but I’ve heard from people who do. You get some, but not all, of that time back later in the process, because you’ve made the testing process easier (if your tests accurately describe each unit, then you know it does what it says it does) and your code easier to read.

The real time savings, however, may come when you need to make changes to the code later on. Having existing unit tests in place reduces the need to do regression testing; if you change a unit, then run the unit tests without errors, you know that you haven’t changed the expected behavior of that unit of code, and you can focus on doing integration testing for your new workflow.

4) During development, unit tests are just as likely to be wrong as the code under test.

Code is code; if you’re writing your unit tests and your code under test together and the unit tests fail, it could equally well be either set of code that’s buggy. In practice it isn’t even an even split; I find it’s usually the test that’s wrong, because I’ve missed that the code under test will only ever be called after some preconditions are met, and I haven’t set up those conditions properly in the test code. Still, assuming you’ll investigate whenever a test fails, and you’ve defined your conditions correctly, the only way to get a false negative (meaning the code is wrong, but the test didn’t catch it) is if both the test and the code under test are buggy, which is clearly less likely than only the code under test having a bug.
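
Here’s a made-up example of the sort of test bug I mean: the code under test assumes a precondition that the production code always establishes, and the test fails simply because it forgot to do the same. (NUnit again, and all of the names are illustrative.)

```csharp
using System;
using NUnit.Framework;

public class Order
{
    public bool IsValidated { get; private set; }
    public bool IsComplete { get; set; }

    public void Validate() => IsValidated = true;
}

public class OrderProcessor
{
    public void Process(Order order)
    {
        // In production, Process is only ever called after the order has been validated.
        if (!order.IsValidated)
            throw new InvalidOperationException("Order must be validated first.");
        order.IsComplete = true;
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Process_ValidatedOrder_MarksOrderComplete()
    {
        var order = new Order();
        // Arrange: establish the precondition the code under test relies on.
        // Leave this line out and the test fails even though Process is correct --
        // the bug is in the test, not in the code under test.
        order.Validate();

        new OrderProcessor().Process(order);

        Assert.IsTrue(order.IsComplete);
    }
}
```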

5) It’s good to make tests fail: the benefit of test-driven development.

When I started doing unit testing, I would write the code under test, then write unit tests to verify that code. Since I’d be calling the functions being tested from other functions I was developing, I’d generally catch bugs well before I actually added the unit tests that would capture those bugs.

I’m currently trying to move more towards test-driven development, where you write the tests first and then develop the code. In other words, first define the function signature and behavior (but don’t actually make it do anything), and then write the unit tests for that function. The tests will fail (because the function doesn’t do anything) and you can then add the actual functionality. If the tests pass, then you might have discovered a false negative and can fix the logic in your test before it trips you up running against the actual code.
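
A sketch of that red-then-green cycle, with hypothetical names:

```csharp
using NUnit.Framework;

public static class NameFormatter
{
    // Step 1: define the signature and intended behavior, but leave the body a stub.
    public static string FullName(string first, string last)
    {
        return string.Empty; // not implemented yet
    }
}

[TestFixture]
public class NameFormatterTests
{
    // Step 2: write the test and run it. It should fail against the stub;
    // if it passes now, the test itself is broken and needs fixing first.
    [Test]
    public void FullName_FirstAndLast_JoinsWithASpace()
    {
        Assert.AreEqual("Ada Lovelace", NameFormatter.FullName("Ada", "Lovelace"));
    }
}

// Step 3: replace the stub with the real implementation
// (return first + " " + last) and re-run until the test passes.
```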

6) Unit tests help you understand your code better.

This may be the largest benefit of unit testing for me. When I’m writing assertions against my code and the name of the function I’m testing doesn’t quite line up with the functionality I’m expecting, the assertions don’t quite make sense, and I know I need to rename the function (or a parameter). Additionally, unit tests are a form of documentation that is always correct; if your unit tests are all green (passing), then they should be accurately describing the functionality of the code. Unlike a comment, unit tests have to be updated when the code changes…at least, assuming you don’t allow code to be checked in with broken unit tests!

Is it worth it?

When I started writing unit tests, it was essentially because I was interested in trying them out. The company offered some training on how to write unit tests, but there was no mandate to actually use them. What I’ve found is that, while I’m not really finding bugs with my tests, they’re helping me to write cleaner code that will be easier to maintain in the future. For this reason, I’m now strongly encouraging the rest of the developers on my team to do unit testing as well.

In the past, we’ve had issues with functions and classes that get overly long and complicated. Now that we’re migrating our client code to .NET, I see unit tests as one way to encourage writing code that will be much easier to update in the future.

Project management and consistency in coding

Whenever a team leader position has opened up at work, I’ve been very clear that I’m not interested. While I have no problem with managing people – many of my jobs have involved this – I’m currently focused on improving my development skills.

Running a project, however, is something I have no issue with, and for the last few months I’ve been in charge of my team’s web migration project. This has largely consisted of planning out our timelines and helping our newer developers get up to speed on our team’s standards for web development.

"Editing a Paper" by Nic McPhee. Used under Creative Commons license.
“Editing a Paper” by Nic McPhee. Used under Creative Commons license.

I ended up sending one development log back to the developer several times with instructions to put things in the correct order. I don’t mean that the code didn’t work; I mean that I wanted the overridden methods, helper functions, and properties to be in the same order, and functions to be named the same way, as they are in the other classes. Given that the compiler doesn’t care, why do I bother?

While we’ve been transitioning our legacy code to web, I’ve been taking advantage of the rewrite to ensure that the code is easier to read and update in the future – best practices have come a long way since it was originally written! One thing that makes no difference to the computer, but makes a lot of difference to the programmer, is consistently doing things the same way from file to file. Once you’re familiar with the conventions, it’s very easy both to locate the code for a given piece of functionality and to understand how the logic works.

This is the same reason that we have naming conventions for variables and functions. In C#, if I see a variable name in camel case I know it’s a local variable, while Pascal case means it’s a property. In JavaScript, a function call in camel case is for a JavaScript function, while a call in Pascal case is for a C# DataMethod. The event handlers for the save and restore buttons might have different code in different activities, but they’re still named and called the same way. Consistency means that developers aren’t using brainpower to process trivial differences in the code, which makes it easier to focus on the functionality. At a functional level, solving the same problem the same way each time makes it easier to avoid bugs in the code, as we only need to make a particular mistake once.
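
For the C# side, the convention looks something like this (the activity and its members are invented for illustration, not taken from our codebase):

```csharp
using System;

public class ExampleActivity
{
    // Pascal case: a property, visible to other classes.
    public string ItemDescription { get; set; } = string.Empty;

    // Event handlers for the save and restore buttons are named and ordered
    // the same way in every activity, even though their bodies differ.
    public void SaveButton_Click(object sender, EventArgs e)
    {
        // Camel case: a local variable, scoped to this method.
        string trimmedDescription = ItemDescription.Trim();
        ItemDescription = trimmedDescription;
    }

    public void RestoreButton_Click(object sender, EventArgs e)
    {
        ItemDescription = string.Empty;
    }
}
```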