One of the disadvantages of doing web development (to me, anyway) is that I don’t often get to use my background in theoretical computer science, especially with the front-end stuff. It’s much more likely that I’ll be fighting with JavaScript than calculating asymptotic runtimes!
A few weeks ago was an exception. We had a number of jobs that could be scheduled, and the software needed to determine, when a user made a scheduling request, whether that request would cause either the job being scheduled or any job that depended on it to exceed the time allowed. This was a fun little problem. I solved it with a modified depth-first search to find the longest dependency chain for each job, then used the depths obtained in that preprocessing step to efficiently [O(n+m)] determine which jobs, if any, would end up running past the allotted time, so that we could warn the user before allowing the scheduling.
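For the curious, here’s a rough sketch of the idea in TypeScript. This isn’t the actual scheduling code; the `Job` shape, the `longestChains` helper, and `wouldExceedDeadline` are hypothetical names I’m using for illustration, and it assumes each job knows its duration and which jobs depend on it.

```typescript
interface Job {
  id: string;
  duration: number;     // how long this job takes
  dependents: string[]; // ids of jobs that must run after this one
}

// Memoized depth-first search: for each job, compute the total duration of the
// longest chain starting at that job (the job itself plus its longest
// dependent chain). Each job and each dependency edge is visited once, so the
// preprocessing is O(n + m).
function longestChains(jobs: Map<string, Job>): Map<string, number> {
  const memo = new Map<string, number>();

  const dfs = (id: string): number => {
    const cached = memo.get(id);
    if (cached !== undefined) return cached;

    const job = jobs.get(id)!;
    let longestDependent = 0;
    for (const dep of job.dependents) {
      longestDependent = Math.max(longestDependent, dfs(dep));
    }
    const total = job.duration + longestDependent;
    memo.set(id, total);
    return total;
  };

  for (const id of jobs.keys()) dfs(id);
  return memo;
}

// With the chain lengths precomputed, checking a proposed start time against
// the deadline is a constant-time lookup per job.
function wouldExceedDeadline(
  jobId: string,
  startTime: number,
  deadline: number,
  chains: Map<string, number>
): boolean {
  return startTime + (chains.get(jobId) ?? 0) > deadline;
}
```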
In this case, the problem instance is generally small enough that we could have just brute forced a solution and it would likely still have had a reasonable runtime, but going with an efficient solution straight off means we don’t have to worry about scaling this up in the future, and the code is just nicer to work with.
You often see knowing the standard algorithms compared to having more tools in your toolbox, and I think that’s a good comparison. I’m not the super-handyman type, but I still keep a good set of power tools around, and when something around the house needs to be fixed I usually have the right tool on hand to just go ahead and take care of it, instead of having to force a tool into a situation it’s not meant for. Similarly, some types of development may not require you to pull out the graph algorithms all that often, but when those situations do arise, they’re helpful to have around.