# Why you should learn Big-O and stop hacking your way through algorithms

Big-O is one of the single most important topics in algorithms — and perhaps the most contentious. Algorithms are hard enough to learn as it is, and then along comes this whole other layer of... complexity.

But what if I told you that “getting” Big-O will actually help you make sense of (and better appreciate why it’s important to make sense of) algorithms in general? I really think this concept, as intimidating as it seems, can be a key. Here’s why.

## Big-O is just a notation

Big-O seems a lot scarier than it is. Like algorithms in general, it's heavily couched in Math Speak, which is like an alien language to anyone who hasn't been adequately introduced to it. And it turns out, you probably haven't been even if you've received a full CS education. It's a topic most professors gloss over, leaving most of us not knowing up from down.

The truth is, Big-O is just the name of the notation used to describe how efficient an algorithm is. For example, the following code returns the first carrot it finds in a list of vegetables:


```swift
// Minimal types so the example compiles; the original assumes these exist.
class Vegetable {}
class Carrot: Vegetable {}

// Returns the first Carrot found, or nil if there isn't one.
// Worst case: every element is checked once, which is O(n).
func findFirstCarrot(in vegetables: [Vegetable]) -> Carrot? {
    for vegetable in vegetables {
        if let carrot = vegetable as? Carrot {
            return carrot
        }
    }
    return nil
}
```


In the worst case, the code might need to check every single vegetable in the list before it finds a carrot. Another way of saying the same thing is “the time complexity of this function is O(n).” The letter “O” can just be thought of as shorthand for the rough “order” of magnitude of operations it takes to run the function, while “n” represents the number of vegetables passed in. So O(n) is just a math-style way of describing roughly how long (in the worst case) a function will take in terms of the number of inputs passed in. Easy peasy.
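To see the best case and worst case side by side, here’s a small counting variant of the scan (re-declaring hypothetical Vegetable, Carrot, and Potato types so the snippet stands alone):

```swift
class Vegetable {}
class Carrot: Vegetable {}
class Potato: Vegetable {}

// Counts how many elements the scan inspects before returning.
func checksToFindCarrot(in vegetables: [Vegetable]) -> Int {
    var checks = 0
    for vegetable in vegetables {
        checks += 1
        if vegetable is Carrot { return checks }
    }
    return checks
}

let bestCase: [Vegetable] = [Carrot(), Potato(), Potato()]
let worstCase: [Vegetable] = [Potato(), Potato(), Potato()]
// bestCase takes 1 check; worstCase takes all 3. The O(n) bound describes the latter.
```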

## Efficiency is the entire point

Many engineers think of this as just added detail to memorize. They see a list of operations for some data structure and the associated time complexities, and their brains shut down. What could be more of a waste of time than being asked to memorize tables of data that can just be looked up as needed? And therein lies the problem.

The entire reason algorithms exist is to make our lives easier. They allow our code to run in a reasonable amount of time and take up a reasonable amount of space. This is their fundamental purpose; it’s why we care about algorithms at all. Learning about how they work without understanding the associated implications of time and space complexity misses the point.

For example, I could implement our carrot finder function above using binary search, which would be a lot more work (and which only works if the list is sorted). Just look at Ray Wenderlich’s binary search implementation:
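The linked implementation isn’t reproduced here, but a minimal sketch of the idea looks something like this (an illustration assuming a sorted array of Comparable values, not the linked code):

```swift
// A minimal iterative binary search sketch (illustrative, not the linked code).
// Precondition: `array` must be sorted in ascending order.
func binarySearch<T: Comparable>(_ array: [T], for key: T) -> Int? {
    var low = 0
    var high = array.count - 1
    while low <= high {
        // Check the middle element, then discard the half that can't contain the key.
        let mid = low + (high - low) / 2
        if array[mid] == key {
            return mid
        } else if array[mid] < key {
            low = mid + 1
        } else {
            high = mid - 1
        }
    }
    return nil
}
```

Each pass through the loop halves the remaining search range, which is exactly where the O(log n) running time comes from.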

That’s a hell of a lot more code than our original implementation above. So why go through all the trouble? Because it drastically reduces the amount of time the function takes to complete. If we only have a handful of vegetables, it doesn’t matter, but if we have a billion vegetables to look through, then it’s mission critical. We care about this algorithm exclusively because of the efficiency benefits it provides; that’s what gives the algorithm meaning. Otherwise, it’s just needless complexity.

## Missing the Point Makes Learning Harder

I don't bring this up because I'm trying to waste even more of your time on highfalutin academic bs. I bring this up because trying to study algorithms without reference to why we give a damn about them is next to impossible. It's like memorizing dates in Roman history without having heard of Julius Caesar. How can you be expected to retain information that has no context? You can't. It's just not how our brains work.

In other words, if you think about time and space as separate, “add-on” details, they lose their meaning. If you think about algorithms without reference to time and space (as arbitrary sets of instructions), they lose their meaning. If you think about them together, as two critical parts of the same narrative, everything becomes interesting.

Suddenly, you see that a binary search tree is structured the way it is precisely because we are trying to achieve logarithmic time. A BST that doesn’t provide O(log n) search, at the very least, is useless, meaningless, unworthy of consideration. It’s something your brain will just delete. But one that does, and one that furthermore can maintain this vastly improved time for all operations — that’s a miracle of progress. It’s something civilization is better off for having because of all the wonderful things it makes possible. And just like that, it’s something that doesn’t put you to sleep.

## Getting the Point Also Prevents Mistakes

And on top of making learning easier, understanding the point of Big-O (and algorithms in general) will seriously help you avoid mistakes in interviews.

For example, if you know the purpose of a BST is to achieve logarithmic search, then there are certain mistakes about binary search in a BST that you would never make. First of all, to say search takes logarithmic time is just a fancy way of saying you cut out half the possible options at each decision point. In a BST, you start at the root node and go left or right until you find what you’re looking for. The rule that you can only go left or right (left for values less than the current node, right for values greater than or equal to it) is exactly what guarantees you cut out half of the remaining options at each node, at least in a balanced tree. If you know that, you’d never think a BST could have nodes with more than two children or more than one parent. You’d also never think both children could be less than the current node, or vice versa. Its structure is inextricably tied to its time complexity, and each helps you remember the other.
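To make those rules concrete, here’s a minimal BST sketch (hypothetical Int-valued nodes, for illustration only; a production tree would also handle balancing):

```swift
// A minimal binary search tree sketch (illustrative; no balancing).
final class BSTNode {
    let value: Int
    var left: BSTNode?   // everything down here is < value
    var right: BSTNode?  // everything down here is >= value

    init(_ value: Int) { self.value = value }

    // The BST rule: smaller values go left, greater-or-equal values go right.
    func insert(_ newValue: Int) {
        if newValue < value {
            if let left = left { left.insert(newValue) } else { left = BSTNode(newValue) }
        } else {
            if let right = right { right.insert(newValue) } else { right = BSTNode(newValue) }
        }
    }

    // Search goes left or right at each node, discarding the other subtree entirely:
    // in a balanced tree, that is what makes it O(log n).
    func contains(_ target: Int) -> Bool {
        if target == value { return true }
        return target < value
            ? (left?.contains(target) ?? false)
            : (right?.contains(target) ?? false)
    }
}
```

Notice that every structural choice (exactly two child slots, one parent, smaller values strictly on the left) exists to make that one-subtree-per-step search possible.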

## Next Steps

So, if you want to master algorithms (and you want them to stick), learn every topic side by side with all the nuances of time and space. Not only will this content be required in interviews, but it will make every detail of the algorithm itself make sense as something in the service of that time and space complexity. The knowledge won't atrophy the next day, because it will be rooted in a narrative that breathes into it a new level of meaning and purpose. And you'll make fewer errors, because you won't be attempting to store countless unrelated facts, but rather details that are tied to a coherent purpose.

Correction: This article originally stated that the O in Big-O notation is short for order. What the author meant to convey is that it's a helpful learning technique to think of the O as order. The wording has been edited for accuracy.
