This is the second article in my series on improving your technical skills to perform better at technical interviews. The other articles can be found here:
- Data Structures
- Algorithms (you are here)
- Interview questions (in progress)
- Mobile app interviews (in progress)
- Memory management (TBD)
What is an algorithm?
An algorithm, by definition, is a specification of how to solve a specific problem. Basically, if you’ve written code to solve a problem, you’ve written an algorithm! The two most basic categories of algorithms, commonly discussed in interviews (and around lunch tables among engineers), are search and sorting algorithms, and those are what we’ll cover here.
I will introduce a handful of algorithms and explain why they are interesting to look at. I recommend diving into the links to understand them in depth.
Before we get into it, let’s establish some terminology.
Big-O notation explained
Because algorithms are about solving problems efficiently, we need a way to measure and compare them, and that measure is Big-O notation. It provides a way to estimate how fast or complex an algorithm is, and it’s what we use to compare two algorithms that solve the same task. Knowing what Big-O is and how to calculate it is important, so spend some time learning it.
In short, Big-O notation describes an algorithm’s time complexity: how long it takes to run depending on the input size. This chart provides an overview of some common Big-O complexities. The faster algorithms are the ones toward the bottom right, meaning they can process more input in less time.
The x-axis represents the number (n) of elements and the y-axis represents time (N). Image credit: Swift Algorithm Club
As you go through the algorithms below you can refer back to this chart to understand how they compare.
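To make this concrete, here is a minimal Swift sketch (the function names are my own, purely illustrative) showing three of the complexities from the chart:

// O(1): accessing an element by index (or grabbing the first one)
// takes the same time no matter how large the array is.
func firstElement(of numbers: [Int]) -> Int? {
    return numbers.first
}

// O(n): checking every element scales linearly with the input size.
func containsNegative(_ numbers: [Int]) -> Bool {
    for value in numbers where value < 0 {
        return true
    }
    return false
}

// O(n²): comparing every element with every other element
// grows quadratically with the input size.
func containsDuplicate(_ numbers: [Int]) -> Bool {
    for i in 0..<numbers.count {
        for j in (i + 1)..<numbers.count where numbers[i] == numbers[j] {
            return true
        }
    }
    return false
}

Doubling the input roughly doubles the work in containsNegative, but roughly quadruples it in containsDuplicate.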
Learn more:
Space-Time Trade-Off in Algorithms
Speed isn’t always the most important aspect when designing an algorithm. Sometimes you might be working with gigantic datasets or on a computer system with limited memory. This is where the space-time trade-off comes in: choosing the right algorithm to optimise for computation time or for memory (space). Sometimes the fastest algorithm isn’t the best choice for your application. We’ll cover both fast and space-efficient algorithms below.
The Recursive Algorithm Design Pattern
When an algorithm calls itself, we call it recursion. We use it to solve problems that depend on the results of smaller versions of themselves. Recursion is often compared to iteration, and each approach has its advantages. Here’s an example:
// Iteration:
func iteration(end: Int) {
    var x = 0
    while x < end {
        print(x)
        x += 1
    }
}
iteration(end: 5)

// Recursion:
func recursion(end: Int, current: Int = 0) {
    guard current < end else { return }
    print(current)
    recursion(end: end, current: current + 1)
}
recursion(end: 5)
These two approaches produce the same result. When the algorithm is more complicated, it can be easier and cleaner to use recursion.
Learn more: Wikipedia - Recursion
The Divide and Conquer Design Pattern
An important concept where an algorithm splits the input into smaller pieces until it reaches the smallest possible problem, and, if needed (e.g. when sorting), stitches the results back together. Several algorithms, such as Merge Sort or Binary Search, use this strategy.
Learn more: Wikipedia - Divide and Conquer
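As a tiny illustration of the pattern (the function name is my own, purely illustrative), here is one way to find the largest value in an array by splitting the problem in half and combining the two answers:

// Divide and conquer sketch: find the maximum value in an array
// by splitting it in half and combining the two sub-results.
func maximum(in numbers: [Int]) -> Int? {
    // Base cases: the smallest possible problems.
    if numbers.isEmpty { return nil }
    if numbers.count == 1 { return numbers[0] }

    // Divide: split the input into two halves.
    let middle = numbers.count / 2
    let leftMax = maximum(in: Array(numbers[..<middle]))
    let rightMax = maximum(in: Array(numbers[middle...]))

    // Combine: the answer is the larger of the two halves.
    return max(leftMax ?? Int.min, rightMax ?? Int.min)
}

maximum(in: [3, 8, 1, 7]) // Optional(8)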
Common Search Algorithms
Search algorithms can mean a lot of different things, like finding the best chess move or the best travel route, but in technical interviews it’s fairly common to be asked how to find or insert a value in a dataset or database in the most efficient way.
Linear Search Explained
This one is pretty straightforward, the name says it all, but it’s a good reference point for other search algorithms and worth mentioning. Linear search simply iterates over a dataset one element at a time, comparing values until it finds what you are searching for.
Average Big-O performance: O(n)
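A minimal sketch (the function name is my own); in real Swift code you would normally reach for the standard library’s firstIndex(of:) instead:

// Linear search: walk the array from the start until the value is found.
func linearSearch(for target: Int, in numbers: [Int]) -> Int? {
    for (index, value) in numbers.enumerated() {
        if value == target {
            return index // found it, return its position
        }
    }
    return nil // the value is not in the array
}

linearSearch(for: 7, in: [3, 8, 1, 7]) // Optional(3)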
Binary Search
I really enjoy writing a binary search algorithm; it’s a fast way to search a sorted dataset. Using the divide and conquer strategy, binary search recursively splits the sorted dataset in two, compares the middle element to the value it is searching for to decide which half could contain it, and continues until it finds the value (or runs out of elements).
Average Big-O performance: O(log n)
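Here is one possible recursive sketch (the names are my own); note that it only works on an already sorted array:

// Binary search: repeatedly halve a sorted array until the value
// is found (or the remaining range is empty).
func binarySearch(for target: Int, in sorted: [Int], range: Range<Int>? = nil) -> Int? {
    let range = range ?? (0..<sorted.count)
    guard !range.isEmpty else { return nil } // not found

    let middle = range.lowerBound + range.count / 2
    if sorted[middle] == target {
        return middle
    } else if sorted[middle] > target {
        // The target can only be in the lower half.
        return binarySearch(for: target, in: sorted, range: range.lowerBound..<middle)
    } else {
        // The target can only be in the upper half.
        return binarySearch(for: target, in: sorted, range: (middle + 1)..<range.upperBound)
    }
}

binarySearch(for: 7, in: [1, 3, 7, 8]) // Optional(2)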
Simple and Popular Sorting algorithms
There are many different sorting algorithms, and I’ve overheard many discussions between software engineers about their performance and functionality; it’s a fascinating and nerdy topic. Unless you’re going for a very senior role at a very large tech company, in my experience it’s rare to come across interview questions that ask you to implement a sorting algorithm. It is, however, quite common to get questions about how your code performs, and knowing how a sort (or search) affects your overall performance is a must: it’s not free just because you call the standard library’s sorting method. This knowledge will also help you write better and more performant algorithms in your daily work (and maybe save you a few dollars on your favorite cloud hosting platform).
Bubble Sort
Bubble sort is a simple but slow sorting algorithm. It iterates through your input comparing adjacent pairs of elements, swapping them if they are out of order, and repeats this until everything is sorted. Since the swapping is done in place, this is a very space-efficient algorithm (there’s a small sketch after the links below).
Average Big-O performance: O(n²)
- Here’s a video using dance to explain bubble sort (YouTube)
- Swift Algorithm Club - Bubble Sort
- Wikipedia - Bubble Sort
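And here is the sketch mentioned above (a minimal version, the names are my own):

// Bubble sort (in place): repeatedly swap adjacent out-of-order
// elements until a full pass makes no swaps at all.
func bubbleSort(_ array: inout [Int]) {
    guard array.count > 1 else { return }
    var swapped = true
    while swapped {
        swapped = false
        for i in 0..<(array.count - 1) {
            if array[i] > array[i + 1] {
                array.swapAt(i, i + 1)
                swapped = true
            }
        }
    }
}

var numbers = [3, 8, 1, 7]
bubbleSort(&numbers) // numbers is now [1, 3, 7, 8]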
Merge Sort
Merge Sort, in its basic form, uses both the divide and conquer and recursion patterns. It first splits your input into smaller and smaller pieces, then merges the pieces back together in sorted order.
Compared to Bubble Sort, Merge Sort trades space efficiency for speed since splitting and merging arrays means allocating new arrays to store the values in.
Average Big-O performance: O(n log n)
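A minimal sketch of the idea (the function name is my own), splitting recursively and then merging the sorted halves:

// Merge sort: split the array in half, sort each half recursively,
// then merge the two sorted halves back together.
func mergeSort(_ input: [Int]) -> [Int] {
    guard input.count > 1 else { return input }

    // Divide: split the input in the middle and sort each half.
    let middle = input.count / 2
    let left = mergeSort(Array(input[..<middle]))
    let right = mergeSort(Array(input[middle...]))

    // Combine: merge the two sorted halves into a new array.
    var merged: [Int] = []
    merged.reserveCapacity(input.count)
    var i = 0, j = 0
    while i < left.count && j < right.count {
        if left[i] <= right[j] {
            merged.append(left[i])
            i += 1
        } else {
            merged.append(right[j])
            j += 1
        }
    }
    merged.append(contentsOf: left[i...])  // whatever is left over
    merged.append(contentsOf: right[j...]) // is already sorted
    return merged
}

mergeSort([3, 8, 1, 7]) // [1, 3, 7, 8]

Notice the new arrays being allocated on every call; that is the space cost mentioned above.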
Quicksort
One of the most well-known and fastest sorting algorithms. Like Merge Sort, it uses divide and conquer and recursion: it picks a pivot element, partitions the rest of the input around it, and recursively sorts the partitions. A fast in-place Quicksort has a lot of subtle details and I don’t suggest learning them all, but knowing about it, what strategy it uses and how it performs is a good start.
Average Big-O performance: O(n log n)
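If you just want a feel for the strategy, here is a deliberately simple sketch (not the fast in-place version, and the function name is my own) that partitions around a pivot:

// Quicksort sketch (not in place): pick a pivot, partition the
// remaining elements around it, and recursively sort each partition.
func quicksort(_ input: [Int]) -> [Int] {
    guard input.count > 1 else { return input }
    let pivot = input[input.count / 2]
    let smaller = input.filter { $0 < pivot }
    let equal = input.filter { $0 == pivot }
    let larger = input.filter { $0 > pivot }
    return quicksort(smaller) + equal + quicksort(larger)
}

quicksort([3, 8, 1, 7]) // [1, 3, 7, 8]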
Insertion Sort
Insertion sort, as its name hints, takes your input objects one at a time and inserts them into a new array, keeping it sorted at every step. I included it because I once received an interview question where it would have helped me produce a faster algorithm. Keeping a dataset sorted at all times can be very helpful when you often need to add or remove items.
Average Big-O performance: O(n²)
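A minimal sketch (the function name is my own) that builds up a sorted array one insertion at a time:

// Insertion sort: take the input one element at a time and insert
// each one at the position that keeps the result sorted.
func insertionSort(_ input: [Int]) -> [Int] {
    var sorted: [Int] = []
    for value in input {
        // Find the first element larger than `value` and insert
        // just before it (or append at the end if there is none).
        let index = sorted.firstIndex { $0 > value } ?? sorted.count
        sorted.insert(value, at: index)
    }
    return sorted
}

insertionSort([3, 8, 1, 7]) // [1, 3, 7, 8]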
Make sure to subscribe to the newsletter below to stay up-to-date when I release new articles.