Git Workflows and Principles

People use Git in a myriad of ways and many develop their own personal workflows. Flexibility is one of Git’s strengths, but that can also make collaboration harder. Establishing some principles or best practices, for yourself or your team, will increase productivity, save time and avoid losing work.

I’ve been using Git for almost two decades and have learned its ins and outs, using it in both small and large (100+ developer) teams. Along the way I discovered that many teams don’t have established workflows, which can really wreak havoc. Over time I’ve seen what works, and I want to share my workflow to make collaboration easier.

I distilled this workflow down to 4 principles and 6 practical tips, and if you follow them Git is a delight, especially when you’re working on a team.

I hope you find them useful, let me know in the comments below or tell me what your flow looks like.

Here’s what we’ll be covering.


  1. Always work on a feature branch
  2. Take deliberate actions
  3. Never force push
  4. Squash your git history clean

How-tos and tips

  1. Simplify your commands
  2. Progressive git adding (patch add)
  3. Amending and fixing commits
  4. Keep a clean history with interactive rebasing
  5. Save your hard work with git stash
  6. Get out of sticky situations with cherry-pick

In the examples below I use the command line Git tool but these concepts should work with most graphical interfaces. Otherwise, now is a great time to learn these commands.


Always work on a feature branch

Pushing directly to the main branch should be both discouraged and disabled. If you push straight to main, you open yourself up to accidental bugs or, worse, malicious changes.

When it’s time to work on a new feature, first make sure you have the latest upstream changes, then check out a new branch. Having a routine like this minimises the risk of errors when working on your changes.

Here’s how I start work on a new feature:

# First make sure we're up-to-date
git checkout main
git fetch
git rebase origin/main
# Then create a branch
git checkout -b acr/feature-name

I always prefix my branches with a personal identifier or username. This makes it easier for me to find my old branches and easier for colleagues to know who owns it.

In an ideal world we always start working from the main branch, like in the example above. I’ve forgotten to fetch the latest changes before starting new work many times, and if you do, it’s not hard to correct: use git stash to save your changes, follow the example above, then apply your changes with git stash pop.
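Here’s a runnable sketch of that recovery routine in a throwaway repo (the branch and file names are just examples, and the fetch/rebase step is skipped since the sketch has no remote):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name "You"
git commit -q --allow-empty -m "initial commit"

# Oops: started editing before creating a feature branch
echo "new feature" > feature.txt

git stash -u                     # save the work (-u includes the new, untracked file)
# ...here you would run: git checkout main && git fetch && git rebase origin/main
git checkout -q -b acr/feature-name
git stash pop                    # re-apply the saved work on the fresh branch
cat feature.txt                  # prints: new feature
```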

Github flow goes into further detail about this process if you want to learn more.

Take deliberate actions

Working with Git, you always want your actions to be deliberate to avoid errors and bad states. Git has convenience commands like pull, which performs the fetch and merge steps in one go. But what happens if you have local changes, or you’re on the wrong branch? These are simple examples, but it’s easy to end up in more complicated situations.

That’s why we want to be deliberate about our actions. Be as specific as possible when working with Git; then if something goes wrong, there’s only one thing you need to fix. I can’t count how many times being deliberate has saved me from complicated merges and rebases.

A couple of practical tips on being more deliberate.

Prefer fetch over pull

The fetch command only fetches the remote changes and stores them in the local database. The pull command first does a fetch then merges your local changes with the remote.

Using fetch instead of pull puts you in control over when and where you want those changes merged, like merging a different branch into yours.

Always using fetch is muscle memory for me, and it means I don’t have to worry about how the merge will go, even with local changes.

Prefer merge over rebase

If you’re working on a feature branch for a longer period of time changes are likely to occur on the main branch. To keep your feature branch up-to-date, fetch and merge them.

From your feature branch:

git fetch
git merge origin/main

It’s faster and easier than checking out main, pulling, and rebasing the feature branch onto main. It’s also much clearer what’s happening. Deliberate.

Using merge means we only have to deal with one potential merge conflict. It’s easier than having many small conflicts, or addressing the same conflict several times. It also avoids rewriting the git tree like a rebase does.

I only use rebase to catch up on the main or a shared branch where we want to preserve the git tree.

If you’re worried this leads to more typing, stay to the end where I share my shortcuts to make this fast and easy.

Never force push

It really surprised me to find out that a lot of people use force push. There are very few legitimate reasons for using force push but the risks are very high. You don’t really need it! Thanks to the principles and tips in this article I can’t even remember the last time I needed it.

Using force push opens you up to a range of possible problems:

  • it’s easy to mess up the git history and cause huge problems for collaborators
  • you risk overwriting an unexpected change or conflict, causing bugs
  • it makes collaboration and receiving help from coworkers harder
  • code review tools have trouble with it, especially partial reviews

As with always using fetch, making it a rule helps.

If you never force push, you never get comfortable using it!

Reserve force pushing for emergency situations, like removing sensitive information or correcting errors.

A common reason to force push seems to be rebasing the feature branch onto the latest main branch. To avoid this, use merge instead of rebase, and pushing will work without force.

Squash your git history clean

Keeping a clean git tree helps with release management, reverting changes, finding bugs and much more. It’s one of the reasons why rebase is so common, but here’s where squash merging comes in.

When we squash merge, we clean up the commit history by merging all commits on a branch into a single commit on main. No traces of “fix this” or “attempt that”. With squash merge we get one commit per feature branch, making it easy to revert if there’s a problem. Delete feature branches after merging to main to keep the git tree looking nice and clean.

I typically use Github to squash a branch through a pull request, but here’s how you do it on the command line:

git checkout main
git merge --squash {feature-branch}
git commit -m "{Commit message explaining the feature}"
git push
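To see what those commands do, here’s a throwaway-repo sketch (branch and commit names are examples) where a messy feature branch collapses into a single commit on main:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q && git checkout -q -b main
git config user.email you@example.com && git config user.name "You"
git commit -q --allow-empty -m "initial commit"

# A feature branch with a messy history
git checkout -q -b acr/feature-name
for msg in "wip" "fix this" "attempt that"; do
  echo "$msg" >> feature.txt
  git add feature.txt && git commit -q -m "$msg"
done

# Squash the whole branch into one commit on main
git checkout -q main
git merge --squash acr/feature-name >/dev/null
git commit -q -m "Add the feature"
git log --oneline                # just two commits: initial + the feature
```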

How to put the principles to use

These principles might seem limiting at first, but your work will be easier and you’ll spend less time fixing issues. I rarely have complicated conflicts, branching issues or git history errors.

To help you get out of sticky situations, maintain a clean history and avoid losing work, here are some tips.

Simplify your commands with aliases

Since I use the command line and Git is one of my core tools, I want things to be quick and easy. To save a few characters every time I invoke git, I shortened it down to just g.

Depending on which shell you use, edit your ~/.profile, ~/.zshrc, or ~/.bashrc file and add:

alias g='git'

I also have aliases for common operations like:

alias gs='git status'
alias gd='git diff'
alias ga='git add'
alias gc='git commit'
alias gp='git push'
alias grom='git rebase origin/main'

Finally, to view my git tree/history I have a command I call “graph”:

git log --all --decorate --graph --pretty=format:'%C(yellow)%h %C(red)%ad %C(blue)%an%C(green)%d %C(reset)%s' --date=short --abbrev-commit

I call this git graph (or g for short) in ~/.gitconfig, and I have a terminal command alias gg in ~/.profile.
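For reference, here’s roughly how those two pieces fit together (paths as in my setup):

```
# In ~/.gitconfig:
[alias]
    g = log --all --decorate --graph --pretty=format:'%C(yellow)%h %C(red)%ad %C(blue)%an%C(green)%d %C(reset)%s' --date=short --abbrev-commit

# In ~/.profile:
alias gg='git g'
```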

I’m so used to using g, I found it hard to type out git throughout this article!

At the bottom of this article you’ll find all these aliases in one place.

Progressive git adding (patch add)

To make sure I only add the changes I want I always add my changes in chunks. Git calls this patching, which I find confusing, so I call it “adding progressively”, as in “steadily in stages”, or gap for short. Adding your changes this way also forces you to look over all the changes, performing a self-review as you go.

git add --patch

This will start stepping through your changes patch-by-patch and allow you to add or ignore them. You can also split a patch into smaller ones, or pick specific lines you wish to add. Use the built-in help command once you start.

Patching will skip untracked (new) files, so don’t forget to check that you have all your changes staged before committing.

I shorten this command in the ~/.gitconfig to git ap, and I even have a command line alias for that:

alias gap='git add --patch'

Amending and fixing commits

If you noticed a typo or forgot a file after committing there’s a nice trick to fix that while avoiding a second commit.

First stage your changes through git add then run:

git commit --amend

If you don’t need to change the original commit message use:

git commit --amend --no-edit

I aliased this to git fix in ~/.gitconfig.


  1. This only works on the most recent commit.

  2. DO NOT use it if you already pushed your code to the server. Your local history will diverge from the remote and you’ll have to use force push.

If you can’t amend or fix your commits, remember to squash merge when merging your feature branch back to main.
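The amend flow above can be sketched in a throwaway repo (file names are examples):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com && git config user.name "You"

echo "hello" > app.txt
git add app.txt && git commit -q -m "Add the app"

# Forgot a file: stage it and fold it into the previous commit
echo "debug=false" > app.conf
git add app.conf
git commit -q --amend --no-edit

git rev-list --count HEAD        # still just one commit
git show --stat --format=%s HEAD # ...and it now contains both files
```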

Improve messages and history with interactive rebasing

Interactive rebase enables you to edit old commit messages, or merge commits into one.

git rebase -i HEAD~4

Where 4 is the number of commits you want to list for editing. This will bring up an editor where you can choose to edit or merge those last four commits.

Save your hard work with stash

Mistakes happen all the time, even after years of practice. What’s important is knowing how to get yourself out of a sticky situation.

Enter git stash, a life saver for git mistakes. Stash saves all your local changes (except untracked files, unless you pass --include-untracked) and removes them from your working copy. This allows you to change your git history, switch branches and more without losing that work.

To save your work type:

git stash

To recover your work there are two options:

git stash apply

Which re-applies your saved changes and keeps the saved copy for later use (git stash drop removes it, and git stash clear removes all saved stashes).

Then we have:

git stash pop

Which applies the changes and removes the saved copy in one go.

But there is actually a third and valuable way of recovering a stashed change. Stash saves your changes as commits on a special ref. You can find these commits in the git tree to recover the changes. Use the git graph command (from earlier) to locate the entry named (refs/stash). Once you’ve found the commit, grab the SHA, inspect it with git show {COMMIT SHA}, and apply the changes with git cherry-pick -m 1 {COMMIT SHA} (stash entries are merge commits, so cherry-pick needs -m 1 to know which parent to diff against).

Get out of sticky situations with cherry-pick

It’s surprisingly easy to mess up a branch, like committing a change to the wrong one. If you do, cherry-pick is another life saver to remember.

Let’s say I started some work on a new feature, on the wrong branch… The first idea might be to stash, rebase, and re-apply the stashed work on the correct branch. But stashing can sometimes be finicky across branches or cause confusing conflicts. If you run into problems, try cherry-picking instead:

  1. First check out a new temporary branch
  2. Commit the changes
  3. Switch to the correct branch
  4. Cherry-pick the commit to recover it.
  5. If needed, reset the previous branch to its original state.

git checkout -b temp-branch
git add --patch
git commit -m "Temp stash"
# Copy the commit SHA from the commit output
git checkout other-branch
git cherry-pick {commit SHA}
# Assuming you made the error on main:
git checkout main
git reset --hard origin/main

Bonus: My Git config

I’ve been collecting helpful git commands for a long time and thought I would end this post by sharing them. They’re also available through my configuration repository on Github.

My favorite aliases from ~/.gitconfig, they are used with the git (or g in my case) command, e.g. g aa:

a = add
c = commit
f = fetch
r = rebase
w = show
d = diff
p = push
s = status
t = stash
b = branch
l = log
g = log --all --decorate --graph --pretty=format:'%C(yellow)%h %C(red)%ad %C(blue)%an%C(green)%d %C(reset)%s' --date=short --abbrev-commit
aa = add --all
ap = add --patch
amend = commit --amend
fix = commit --amend -C HEAD
cp = cherry-pick
what = whatchanged
dom = diff origin/master
graph = log --all --decorate --graph --pretty=oneline --abbrev-commit
co = checkout
pr = pull --rebase
rom = rebase origin/master
riom = rebase -i origin/master

My favorite Git related aliases from ~/.profile

alias g='git'
alias gs='git s'
alias ga='git add'
alias gap='git ap'
alias gadd='git add'
alias gc='git commit'
alias gco='git checkout'
alias gw='git show'
alias gd='git d'
alias gl="git l"
alias gf='git f'
alias gdc='git dc'
alias gdo='git do'
alias gg='git g'
alias gp='git p'
alias gb='git symbolic-ref HEAD'
alias gr='git rebase'
alias gri='git rebase -i'
alias grom='git rebase origin/main'


Work In Progress (WIP) limits

Too many work in progress items is bad for productivity. It delays delivery, feedback and return on investment.

Imagine you have a backlog of three projects or tasks of similar complexity. So you go ahead and schedule each project one at a time. Your schedule will look like this:

WIP Limit - Simple project timeline.jpg

Let’s assume a stakeholder or customer shows up and asks about the progress of their project: “Why haven’t you started this other thing?” or something along those lines. To be helpful and to avoid conflicts or cancellations, you start working on the next project, and then the next one after that. Seems simple enough; we establish with the stakeholder that delivery will take a little longer. Now your plan looks like this:

WIP Limit - Fragmented project timeline.jpg

This might give you and your stakeholders a feeling that we’re making progress on several fronts: we’re getting more done! Besides, this is similar to how it works on a factory floor, where each station has a specific purpose in a delivery line.

Well, we didn’t consider the cost of context switching. Every time you switch between projects or tasks you need to reset your brain and learn a new context, dig out the materials or code, and so on. This takes time. If we add context switching between tasks, our timeline looks like this:

WIP Limit - Timelines with context switching.jpg

Now it’s super clear that having several items in progress at the same time is slower than completing them one at a time.

What if we have a team and we want to minimise context switching but deliver on several projects at the same time? Let’s split the team and create several work streams!

Two work streams in progress:

WIP Limit - Two lanes.jpg

Three work streams:

WIP Limit - Three lanes.jpg

We eliminated the context switching! But now you don’t have one team, you have two or three, and each group will take two to three times longer to deliver anything. Every team member will feel less connected and have less insight into each other’s work. They will have less opportunity to support each other or share the burden of responsibilities. A project might now depend on one person, who feels less connected and may fall ill or leave the company. This won’t create a happy, strong, cooperative or agile team. On top of all this, we’re also creating more overhead work!

What if we are OK with the tradeoffs we’ve talked about so far? Stakeholders accept later delivery. The cost of context switching is acceptable. Team fragmentation and morale are ignored.

Then let’s talk about our return on investment and feedback! Every project has some sort of delivery value like cash payment or added user growth. The longer you wait to deliver, the longer you wait for that return, or feedback. If the return compounds over time like interest rate or user growth, shipping early means gaining traction earlier.

ROI for projects in consecutive order

Now we start collecting new users or money after the first third of the total time, as soon as the first project is delivered.

WIP Limit - ROI consecutive projects.jpg

ROI when switching between projects

We won’t collect new users or money until the end of all three projects.

WIP Limit - ROI switching projects.jpg

ROI with multiple work streams

We won’t collect anything until everything is completed.

WIP Limit - ROI three lanes.jpg

In summary

Of course the real world has many factors to consider when planning. If you have a big team, it doesn’t make sense to put all of them on a small feature or bug fix at the same time. I’ve seen and made this mistake myself in the past; try to avoid it, or at least keep a low work-in-progress limit for your team.

If you limit your or your team’s WIP you will:

  • Reduce context switching
  • Improve productivity
  • Deliver faster
  • Get your return on investment or feedback faster
  • Build a more synchronised and supportive team, making it stronger and happier



Learn Algorithms to Succeed at Technical Interviews

This is the second article in my series on improving your technical skills to perform better at technical interviews. The other articles can be found here:

  1. Data Structures
  2. Algorithms (you are here)
  3. Interview questions (in progress)
  4. Mobile app interviews (in progress)
  5. Memory management (TBD)

What is an algorithm?

An algorithm by definition is a specification of how to solve a specific problem. Basically, if you’ve written code to solve a problem, you’ve written an algorithm! Probably the two most basic categories of algorithms, commonly discussed in interviews (and around lunch tables among engineers), are search and sorting algorithms, which is what we’ll cover here.

I will introduce a handful of algorithms and explain why they are interesting. I recommend diving into the links to understand them in depth.

Before we get into it, let’s establish some terminology.

Big-O notation explained

Because algorithms are ways to solve problems efficiently, we need a way to measure and compare them, and that is what Big-O notation is for. It provides a way to estimate how fast or complex an algorithm is, and it’s what we use to compare two algorithms solving the same task. Knowing what Big-O is and how to calculate it is important, so spend some time learning it.

In short, Big-O notation describes the time complexity of an algorithm: how long it takes to run as a function of input size. This chart provides an overview of some common Big-O complexities. The faster algorithms are down and to the right, meaning they can process more input in less time.

Chart visualizing common Big-O complexities

The x-axis represents the number (n) of elements and the y-axis represents time (N). Image credit: Swift Algorithm Club

As you go through the algorithms below you can reference back to this chart to understand how they compare.


Space-Time Trade-Off in Algorithms

Speed isn’t always the most important aspect when designing an algorithm. Sometimes you might be working with gigantic datasets or on a computer system with limited memory. This is where space-time trade-off comes in; choosing the right algorithm to optimise for computation time or memory (space). Sometimes the fastest algorithm isn’t the best choice for your application. We’ll cover both fast and space efficient algorithms below.

The Recursive Algorithm Design Pattern

When an algorithm calls itself, we call it recursion. We use it to solve problems which depend on the results of smaller versions of themselves. Recursion is often compared to iteration, and each approach has its advantages. Here’s an example:

// Iteration:
func iteration(end: Int) {
  var x = 0
  while x < end {
    x += 1
  }
}

iteration(end: 5)

// Recursion
func recursion(end: Int, current: Int = 0) {
  guard current < end else { return }
  recursion(end: end, current: current + 1)
}

recursion(end: 5)

These two approaches will produce the same result. If the algorithm is more complicated, it can be easier and cleaner to use recursion.

Learn more: Wikipedia - Recursion

The Divide and Conquer Design Pattern

An important concept where an algorithm splits the input into smaller pieces to solve the smallest possible problem, and if needed (e.g. when sorting) stitching the result back together. Several algorithms such as Merge Sort or Binary Search use this strategy.

Learn more: Wikipedia - Divide and Conquer

Common Search Algorithms

Search algorithms can mean a lot of different things like finding the best chess play or best travel route, but for technical interviews it’s fairly common to be asked how to find or insert a value into a dataset or database in the most efficient way.

Linear Search Explained

This one is pretty straightforward; the name says it all, but it’s a good reference point for the other search algorithms and worth mentioning. Linear search simply iterates over a dataset one element at a time, comparing values until it finds what you are searching for.

Average Big-O performance: O(n)

Binary Search Explained

I really enjoy writing a binary search algorithm; it’s a fast way to search a sorted dataset. Using a divide and conquer strategy, binary search recursively splits the sorted dataset in two, determines which half contains the value, and continues until it finds the searched-for value (if it’s there at all).

Average Big-O performance: O(log n)
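As a sketch of the idea (my own example, in Swift), a recursive binary search over a sorted array can look like this:

```swift
// Recursive binary search over a sorted array.
// Returns the index of `target`, or nil if it's not present.
func binarySearch<T: Comparable>(_ array: [T], for target: T,
                                 range: Range<Int>? = nil) -> Int? {
    let range = range ?? array.startIndex..<array.endIndex
    guard !range.isEmpty else { return nil }          // nothing left to search
    let mid = range.lowerBound + range.count / 2
    if array[mid] == target {
        return mid
    } else if array[mid] < target {
        // Target can only be in the upper half
        return binarySearch(array, for: target, range: (mid + 1)..<range.upperBound)
    } else {
        // Target can only be in the lower half
        return binarySearch(array, for: target, range: range.lowerBound..<mid)
    }
}

binarySearch([1, 3, 5, 8, 13], for: 8)   // -> 3
```

Note how each call halves the search range, which is where the O(log n) behaviour comes from.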

Common Sorting Algorithms

There are many different sorting algorithms, and I’ve overheard many discussions between software engineers about their performance and functionality; it’s a fascinating and nerdy topic. Unless you’re going for a very senior role at a very large tech company, in my experience it’s rare to come across interview questions which ask you to implement sorting algorithms. It is, however, quite common to get questions about how your code performs, and knowing how a sort (or search) affects your overall performance is a must: it’s not free just because you call the standard library sorting method. This knowledge will also help you write better and more performant algorithms in your daily work (and maybe save you a few dollars on your favorite cloud hosting platform).

Bubble Sort

Bubble sort is a simple but slow sorting algorithm. It iterates through your input comparing two adjacent elements, swapping them if needed, and repeating this until everything is sorted. Since the swapping can be done in place this is a very space-efficient algorithm.

Average Big-O performance: O(n²)
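A minimal Swift sketch of the idea (my own example, not a reference implementation):

```swift
// Bubble sort: repeatedly swap adjacent out-of-order elements
// until a full pass makes no swaps. Sorts in place, so no extra
// storage is needed.
func bubbleSort<T: Comparable>(_ array: inout [T]) {
    guard array.count > 1 else { return }
    var swapped = true
    while swapped {
        swapped = false
        for i in 1..<array.count {
            if array[i - 1] > array[i] {
                array.swapAt(i - 1, i)
                swapped = true
            }
        }
    }
}

var numbers = [5, 1, 4, 2]
bubbleSort(&numbers)       // numbers is now [1, 2, 4, 5]
```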

Merge sort

Merge sort, in its basic form, uses the divide and conquer and recursion patterns. It first splits your input into smaller pieces, then merges the pieces back together in sorted order.

Compared to Bubble Sort, Merge Sort trades space efficiency for speed since splitting and merging arrays means allocating new arrays to store the values in.

Average Big-O performance: O(n log n)


Quicksort

One of the most well-known and fastest sorting algorithms. Similarly to Merge Sort, it uses divide and conquer and recursion, and Quicksort gained much of its fame through its use of recursion. Quicksort is quite complicated and I don’t suggest learning all the details, but knowing about it, which strategy it uses and how it performs, is a good start.

Average Big-O performance: O(n log n)

Insertion Sort

Insertion sort, as its name hints, takes your input objects one at a time and inserts them into a new array, always keeping it sorted. I included it because I once received an interview question where it would have helped me produce a faster algorithm. Keeping a dataset sorted at all times can be very helpful when you need to add or remove items often.

Average Big-O performance: O(n²)
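A small Swift sketch of the idea (my own example):

```swift
// Insertion sort: take elements one at a time and insert each
// into its sorted position in the output array.
func insertionSort<T: Comparable>(_ input: [T]) -> [T] {
    var sorted: [T] = []
    for element in input {
        // Find the first position where `element` belongs,
        // or append at the end if it's the largest so far.
        let index = sorted.firstIndex { $0 > element } ?? sorted.endIndex
        sorted.insert(element, at: index)
    }
    return sorted
}

insertionSort([3, 1, 2])   // -> [1, 2, 3]
```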

Make sure to subscribe to the newsletter below to stay up-to-date when I release new articles.


Learn Data Structures to Succeed at Technical Interviews

This is the first article in my series on improving your technical skills to perform better at technical interviews. The other articles can be found here:

  1. Data Structures (you are here)
  2. Algorithms
  3. Interview questions (in progress)
  4. Mobile app interviews (in progress)
  5. Memory management (TBD)

Why should you learn Data Structures?

Solving engineering challenges efficiently requires knowing your data needs: how to store, query and update your datasets. Sometimes the trustworthy Array, Set or Hash Table/Dictionary are not the right tools for the job. Luckily, other data structures are available to you which can solve the problem in a simpler or more efficient way.

It’s quite common to get interview questions on how to implement these data structures so let’s dive into them and understand how they work and when they should be applied.

How a Hash Table works

Before we jump into the more complicated data structures I want to visit one of the most common ones.

Depending on your language of choice the Hash Table might be called a Dictionary or Associative Array; and although there is more than one way to implement them, they solve the same problem, and work similarly.

var myHashTable: [String: String] = [:]
myHashTable["hello"] = "world"
print(myHashTable["hello"]!) // -> world

Because of how memory allocation works, hash tables store their data in what looks like arrays! They hash (compute) an array index from the hash key (hello above) and use that to decide where in the array to store the data. This typically means that hash tables are not very memory efficient since they allocate a range of memory for the storage array.

It’s really fun to learn how hash tables are implemented as it’s something we use every day in our work, and it’s quite simple!


What Linked List are good for

Linked lists are sequences of data. They sound a lot like arrays but work very differently. Unlike arrays, linked lists are stored as separate objects with pointers to each other, whereas arrays are stored in one contiguous block of memory. This allows linked lists to grow and shrink easily, but adds some memory overhead for the pointers and scattered storage. Also unlike an array, it’s harder to jump to the n-th item, since you need to traverse the linked objects to find the one you’re looking for. Linked lists are often used to implement Stacks and Queues, which we’ll cover next. They can also be used to implement sorted lists, like a music playlist.


Stacks and Queues for data processing

Stacks and queues are two similar data structures commonly used for collecting and processing data in a specific order. A stack is a Last-In-First-Out (LIFO) collection: the last object to go in is the first object pulled out. A queue is, as the name suggests, First-In-First-Out (FIFO). Stacks and queues can be implemented on top of arrays, but I suggest you learn how to build them with both arrays and linked lists. They are great for processing work or requests in a certain order.
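As a sketch (my own minimal example in Swift), both can be wrapped around an array:

```swift
// Minimal array-backed stack (LIFO).
struct Stack<T> {
    private var storage: [T] = []
    mutating func push(_ value: T) { storage.append(value) }
    mutating func pop() -> T? { storage.popLast() }
}

// Minimal array-backed queue (FIFO). Note that removeFirst on an
// array is O(n); a linked-list-backed queue avoids that cost.
struct Queue<T> {
    private var storage: [T] = []
    mutating func enqueue(_ value: T) { storage.append(value) }
    mutating func dequeue() -> T? { storage.isEmpty ? nil : storage.removeFirst() }
}

var stack = Stack<Int>()
stack.push(1); stack.push(2)
stack.pop()                // -> 2 (last in, first out)

var queue = Queue<Int>()
queue.enqueue(1); queue.enqueue(2)
queue.dequeue()            // -> 1 (first in, first out)
```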


Trees are good for relational data

Trees are used to represent hierarchical relationships between objects. Trees are built up of nodes, and each node typically has a parent node and multiple child nodes. Unlike trees in nature, the root node, which has no parent node, is drawn at the top. Tree structures can save a lot of time, which we’ll cover in the questions and quizzes article. Trees are great for relational data, where pieces of data have relationships to each other, for example an auto-complete feature.


Honorary mention, the Binary Search Tree (BST)

It’s not unheard of that companies ask you questions about BST but I’ve not personally come across them. A binary search tree is a type of tree where each node has at most two children, and insertions and deletions are always performed in such a way that the tree is sorted. This data structure is very complex, and from what I hear, it’s one of the harder things you learn in an algorithms class. I recommend knowing what they do and how they work, but I don’t expect these to be common interview questions, unless perhaps you’re interviewing for very senior engineering roles.


Make sure to subscribe to the newsletter below to stay up-to-date when I release new articles.

Read the next article in this series on Algorithms.


Improving at technical interviews without a computer science degree

I originally wrote this series in 2017, while mentoring a friend on how to succeed at technical interviews. It’s also a big reason why I started this blog. A BIG thank you to my friend Paul, who taught me and helped me understand all of this, and who read drafts of this series.

Being Self-Taught and Scared of Computer Science

As a self-taught developer, whenever I’m faced with a problem, I take a pragmatic approach to solve it. This may involve using some old tricks I learned, or simply to ask Google for some help along the way. It’s been an approach which served me well in the past and has led to a career I’m very happy with, working for startups as well as larger, established businesses.

However, I always felt a chip on my shoulder for not knowing computer science basics, and I’ve been in interviews and everyday conversations where I felt lost because of it. On top of that, I always felt the threshold to learn these basics (funny, since they’re called basics) was really high, and I never took the time to approach the subject. I was scared of it and always put it off with excuses: being a good app developer doesn’t require this knowledge, or my time was better invested in other areas like design and user experience. Both of which, I think, are still mostly true, but it prevented me from being a better programmer.

Learning Computer Science

In 2017, when it was time for me to take a new step in my career I had my goal set high. I wanted to join a top-tier tech company in Silicon Valley. It quickly became clear that to succeed with the questions asked at these companies I needed to step up my interviewing game and read up on those basics I’d been avoiding.

Once I started digging into the subject, I realized that learning about data structures and algorithms wasn’t as scary as I had painted it out to be. Even if you rarely need to write a linked list or a sort algorithm (since there’s probably a very optimized solution available for you), understanding how they work and where they can be applied is important for any software engineer. In the end, knowing them will make you a better developer and better prepared for technical interviews.

Quizzes at Technical Interviews

A lot of people say these types of interview questions are not the best way to gauge a candidate’s knowledge, and I agree: there are better ways. Since the first draft of this article, I’ve held hundreds of technical interviews myself and I never ask these types of questions. There are better ways of getting to know the candidate and their skillset, but that might be the subject of another article. Still, having this knowledge will make you better at your day-to-day job, and being prepared for these types of questions is helpful, because they are very common.

The goal of this series is to share what I learned to improve at those interviews and to hopefully help people in a similar situation to mine. Maybe we can make those pesky interview questions obsolete (aiming high!).

If you still need to be convinced why you should learn about data structures and algorithms, read this introduction: Why Algorithms - Swift Algorithms Club. I’ll be linking a lot to Swift Algorithm Club because I find their guides and code easy to follow.

Getting started

In the first two articles I’ll cover common data structures and algorithms which are good to know. I’ll give them a quick introduction and provide some links I found helpful for studying them further. In future articles I will cover common and mobile-specific interview questions (focusing on iOS due to my background), and maybe we’ll dip our toes in memory management.

  1. Data Structures
  2. Algorithms
  3. Interview questions (in progress)
  4. Mobile app interviews (in progress)
  5. Memory management (TBD)

Make sure to subscribe to my newsletter below to be notified when the new articles are published.

Why Mobile App Development is Important

Why is mobile application development important, what are the opportunities and challenges when building mobile apps, and how do you get started?

These are some of the questions I cover in this presentation which serves as an introduction to developers or stakeholders who are new to mobile app development.

Why Mobile App Development is Important (transcript)

This transcript is an attempt to recreate the presentation I gave, based on my notes. I hope it helps give clarity to the slides and speaker notes.

My background

I’ve been an engineer for roughly 17 years. I started with web development, then learned backend, becoming a full-stack developer. About eight years ago I found native mobile development and just fell in love with it. I also held some CTO roles along the way.

Highlighted merits include:

  • Building a top 10 ranked photo app around 2013 (a very competitive market).
  • Co-founded and was CTO for a startup where I built an app from start to over $30k/mo in revenue.
  • Worked on Instacart’s Logistics team, where over 200,000 shoppers earn their income via the app. When I started we were only two iOS developers, and I helped grow the team to 12 before I left.
  • Currently I’m a consultant at Spotify, which has hundreds of app developers. I work on car mode, which has millions of users.

Why should you build native mobile apps?

  • On average we spend over four hours on our mobile phones every day.
  • About two of those hours are spent in social media apps.
  • More than 80% of mobile minutes are spent in apps.
  • Digital media consumption on mobile passed TV this year.
  • 1/3 of video consumption is on mobile.
  • Over 70% of digital media consumption is spent on mobile (growing 7-9% year-over-year).
  • Editors at Aftonbladet (one of Sweden’s leading newspapers) only use mobile previews. Media companies with 50+ year old audiences do the same.

How much does a mobile engineer earn?

According to a 2017 report, companies have increased their demand for mobile developers while fewer people are searching for mobile roles.

This may lead to an increase in salaries for mobile developers. Some reports indicate that mobile engineers make more money than general software engineers. I have not been able to verify this, and anecdotally I haven’t noticed much difference. Some numbers indicate $72,000 a year on average for a mobile engineer compared to $58,000 for a web developer.

Why native apps over web apps?

Native apps generally have better performance and a more responsive user experience. They are installed on the customer’s phone, giving easy and quick access, though this has the downside that native apps actually need to be installed to be used. It’s easier to create immersive experiences, like Augmented Reality. You can make full use of platform technologies, like the camera, notifications and location services (GPS), and it’s easy to integrate with other apps or system-provided features, like push notifications, calendar and contacts.

Native apps can be offline

Native apps can, and should, have offline support! Apps that act like online-only web apps are bad for the user experience.

Powerful hardware

Mobile devices are almost as powerful as laptop computers today, allowing us to perform advanced tasks like processing images with Machine Learning (ML) models or playing desktop-quality games. A 2019 iPhone 11 Pro has comparable performance to a 2017 MacBook Pro, and ML accelerators can perform 1 trillion operations per second on that iPhone.

More device capabilities

  • Powerful platform features such as Augmented Reality (AR), speech integration through Siri or Google Assistant, ML, camera/photo library access, GPS, 2D and 3D games, and HealthKit integrations are all possible.
  • Can send push notifications.
  • Integrate with Calendar and Contacts.
  • Charge money through subscriptions or wallet integrations (Google Wallet/Apple Pay).

Native mobile development is fun!

I really enjoy building native apps! You possess so much power in a tiny format. It’s a strange and fantastic feeling having your users interact with your work by the touch of a hand. You have unique and interesting limitations compared to backend and web development. And, nerding out a bit: native, statically typed, compiled languages are fun to work in.

Challenges when building native mobile apps

Battery consumption

A long-lasting battery is important to users. I’m sure you’ve pulled your phone out of your pocket and realized it’s dead. You don’t want to be the app that drains people’s batteries.

This means you need to be aware of a few things in particular:

  • Excessive CPU/GPU processing
  • Screen activity
  • Antenna usage, network requests and GPS usage

In fact, this is a classic interview question for mobile engineering roles.

To avoid these issues, offload heavy tasks to a backend and avoid unnecessary network activity (use caching and batching). Also ask yourself: do you need the highest-fidelity GPS position, or is “within 100 meters” enough?
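
The caching-and-batching advice can be sketched as a small request cache. This is a minimal TypeScript sketch, not any specific library’s API: the class name, the TTL, and the fetchFn parameter are all illustrative. Fresh responses are reused, and concurrent requests for the same key share a single network call, so the antenna wakes up less often.

```typescript
type Fetcher<T> = () => Promise<T>;

class RequestCache<T> {
  private cache = new Map<string, { value: T; expires: number }>();
  private inFlight = new Map<string, Promise<T>>();

  constructor(private ttlMs: number) {}

  async get(key: string, fetchFn: Fetcher<T>): Promise<T> {
    const hit = this.cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh: skip the network

    const pending = this.inFlight.get(key);
    if (pending) return pending; // batch: reuse the request already in flight

    const promise = fetchFn().then((value) => {
      this.cache.set(key, { value, expires: Date.now() + this.ttlMs });
      this.inFlight.delete(key);
      return value;
    });
    this.inFlight.set(key, promise);
    return promise;
  }
}
```

The same idea applies regardless of platform; on iOS or Android you would reach for the platform’s URL cache or an HTTP client with built-in caching instead of rolling your own.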

Short attention span and User Experience on mobile

It’s important to consider user attention when designing mobile and wearable apps. Statistics show longer user sessions in media apps, social media apps and games. Other apps mostly see short, task-oriented sessions, which means we face the challenge of short attention spans. This is even more true on wearables, where sessions are measured in seconds, not minutes.

Another concern to keep in mind is that apps can, and commonly do, get killed by the OS, so make it easy for users to recover a task they had started. Background tasks are complicated, with limited time to complete.

Dealing with bad connectivity

Just like offline is an opportunity, it’s also a challenge: your users can often be in areas with bad connectivity. Cell phone signals are actually pretty bad outside of urban areas, and even urban areas can have spots with poor connection, like inside large grocery stores. This is something we worked hard to work around at Instacart.

Offline support isn’t just an opportunity, it’s a necessity.

Since it’s a common problem, we have a few tools at our disposal: use request caching or a local database (SQLite is a common choice). Synchronize data to local storage to ensure you provide some value even when the user has a bad connection.
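
As a rough illustration of the sync-to-local-storage idea, here is a hedged TypeScript sketch. Store, InMemoryStore and loadWithOfflineFallback are hypothetical names; in a real app the Store would be backed by SQLite or another on-device database. Try the network first and persist the result; when the network fails, serve the last synced copy.

```typescript
interface Store {
  read(key: string): string | undefined;
  write(key: string, value: string): void;
}

// Stand-in for an on-device database such as SQLite.
class InMemoryStore implements Store {
  private data = new Map<string, string>();
  read(key: string) { return this.data.get(key); }
  write(key: string, value: string) { this.data.set(key, value); }
}

async function loadWithOfflineFallback(
  key: string,
  fetchFn: () => Promise<string>,
  store: Store,
): Promise<string> {
  try {
    const fresh = await fetchFn();
    store.write(key, fresh); // sync to local storage for later offline use
    return fresh;
  } catch {
    const cached = store.read(key);
    if (cached !== undefined) return cached; // offline: serve stale data
    throw new Error(`no network and no local copy for ${key}`);
  }
}
```

This only covers reads; offline writes need a queue-and-sync strategy, which is a much bigger topic.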

Mobile App Distribution challenges

Since you install apps directly on people’s devices, distribution and update cycles are very different from what you might be used to in the web and backend world.

People control their own devices, making it harder to force them to upgrade, and upgrade cycles can typically take days or weeks. Dormant versions can stay on a device for months or years. Have a plan for how to force upgrades and drop legacy support, and remember to include your backend endpoints, as they will need to serve old clients for a long time.
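
One common way to implement a forced-upgrade plan is a backend-provided minimum supported version: on startup, the app compares its own version against the oldest version the backend still supports. A minimal TypeScript sketch, assuming dotted numeric version strings like "2.10.3" (all names here are illustrative):

```typescript
// Compare dotted numeric versions: -1 if a < b, 0 if equal, 1 if a > b.
function compareVersions(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  const len = Math.max(pa.length, pb.length);
  for (let i = 0; i < len; i++) {
    const diff = (pa[i] ?? 0) - (pb[i] ?? 0); // missing segments count as 0
    if (diff !== 0) return Math.sign(diff);
  }
  return 0;
}

// The backend returns the oldest version it still supports; anything older
// must upgrade before it can keep talking to your endpoints.
function mustForceUpgrade(installed: string, minimumSupported: string): boolean {
  return compareVersions(installed, minimumSupported) < 0;
}
```

For example, mustForceUpgrade("2.9.0", "2.10.0") is true, while "2.10.0" against the same minimum is not.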

Releasing a new version of your app can be slow due to platform review processes, and you might need to wait several days to get an update out to your users. Today reviews mostly complete within a few hours on Android and within a day on iOS, but don’t depend on it.

Another problem when releasing is platform rules. You’re at the mercy of platform providers to approve your app, and they have strict, ever-changing rules.

Make sure you have good crash handling and reporting. Since you can’t access the device like a server, you need a tool for reporting and monitoring your crash rates and stack traces.

Since upgrade cycles are slow, you can’t fix a crash right away. Therefore it’s common to release new functionality behind feature flags. Feature flags are controlled by your backend, and mobile apps check with the backend to see what functionality should be enabled. Using these can help you disable a new feature that’s causing a crash, or synchronize the rollout of a new feature across platforms. Remember that platform providers can punish you if you hide functionality from app review.
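
The core of a client-side flag system is small. This is a hedged TypeScript sketch (FeatureFlags and the flag names are made up, not a real SDK): safe defaults ship in the binary, and a payload fetched from your flags endpoint overrides them, so a broken feature can be disabled remotely without shipping a new release.

```typescript
type FlagPayload = Record<string, boolean>;

class FeatureFlags {
  private overrides: FlagPayload = {};

  // Defaults are compiled into the app and used when the backend is unreachable.
  constructor(private defaults: FlagPayload) {}

  // Called with the payload fetched from your flags endpoint on app start.
  applyRemote(payload: FlagPayload): void {
    this.overrides = { ...payload };
  }

  isEnabled(name: string): boolean {
    if (name in this.overrides) return this.overrides[name];
    return this.defaults[name] ?? false; // unknown flags stay off
  }
}
```

Real systems add per-user targeting, percentage rollouts and persistence of the last fetched payload, but the check-defaults-then-overrides shape stays the same.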

At smaller companies it’s common to release new features as they are ready, but larger companies typically stick to a scheduled release cadence, weekly or bi-weekly are common.

Slower development speed for mobile apps

Due to all these challenges it’s typical to see somewhat slower development speed for mobile apps compared to web or backend services.

  • Avoid taking too much risk with each update.
  • It takes longer to release and distribute crash fixes.
  • Attention to detail and rigorous testing and code reviews are strongly recommended.
  • Use feature flags as much as possible!

Should you develop for Android or iOS first?

The answer might depend on the market you are trying to reach.

These global stats might help you make a choice:

                                      iOS     Android
  Market share                        25%     75%
  Number of app downloads             30%     70%
  App or subscription purchases       65%     35%
  Monthly spend on tech purchases     $101    $51

These are global stats; if you are targeting a specific geographic area, you should look up the numbers for that region. For example, in North America the share of iPhone sales is over 50%!

A couple of conclusions we can draw from this data:

  1. Android reaches a much larger audience.
  2. iOS users spend more, or are more prepared to pay, for apps and services they use on their phones.

iOS development introduction

  • Apps were originally written in Objective-C.
  • Swift is the recommended language today, Apple is pushing developers in this direction.
  • Xcode is the standard IDE, and in my opinion the best, but there are alternatives.
  • iOS provides good privacy as a platform default, Apple works hard for their users in this area.
  • Unfortunately, in part for privacy reasons, apps are very locked down, or contained in silos, which adds some development challenges.
  • Heavily controlled App Store, with tougher requirements and rules to get in; Apple relies heavily on a manual review process.
  • If you want to build apps for macOS, Apple Watch or iPad, you get that with very little extra effort.

Android development introduction

  • Apps were originally written in Java, but due to legal battles Java on Android was frozen several versions back.
  • Kotlin is the new, recommended language, preferred in most cases.
  • Android Studio, based on IntelliJ IDEA, is the standard IDE, but others exist.
  • Android has bad privacy defaults: Google’s business is built around gathering your data, for better or worse.
  • Easy to integrate with other apps and the operating system.
  • Uncomplicated marketplace, which allows for faster app reviews and an easier ruleset; this is both good and bad, as there are many more copycats.

Cross-platform native apps

So why not develop your apps with a cross-platform ecosystem?

First let’s cover some common cross-platform ecosystems.

Developing apps with Xamarin

Xamarin has been around the longest, but is less popular. Built by Microsoft, apps are written in C#/.NET on top of a framework called Mono.

Developing apps with Flutter

Flutter is the new kid on the block. Developed by Google, apps are written in Dart and run on a C++ engine called Skia. Flutter promises good testing and native integrations. It seems very interesting if you know, or are prepared to learn, Dart.

Developing apps with React-Native

React-Native is provided by Facebook and uses JavaScript. It’s probably the most established and popular choice for cross-platform development today.

Pros and cons of cross-platform native apps

Pros:
  • “Write once, deploy everywhere” (in theory).
  • Tooling is more open making it easier to setup your own systems.
  • If you already know the language it will be faster to get started.
  • Great for simpler apps like social media or apps originating from a web app.
  • Typically provides hot-reloading capabilities.


  • Even though code is shared, platforms still require native UI work and platform-specific exceptions.
  • Native support can be flaky, negatively impacting the user experience.
  • Might lack support for platform technology, like Machine Learning or Augmented Reality.
  • Limited by the community: when platform providers release new features, you have to help out or wait for the community to support them.
  • Typically you still need some native development, or people on your team with native skills.
  • Some larger companies, like Airbnb and LinkedIn, report that it’s harder to find enough qualified developers.

How do you start developing native apps?

First, wireframe!

If you have an idea for a mobile app, test a prototype with wireframes or mockups before you jump into the development phase. Due to the challenges of mobile app development, it’s valuable to verify your ideas on paper before coding.

Some tools to use are:

  • Balsamiq Mockups, great for low-fidelity mockups.
  • InVision, great for turning existing designs into clickable prototypes.
  • Figma, allows you to both design and create prototypes.
  • Framer, another tool for creating prototypes with building blocks.
  • Principle, makes it easy to work with animations and interactive prototypes.

Learning to code

There are tons of resources for learning how to develop for Android and iOS, for example Udemy or Coursera. Two of my favorites are:

Resources for native development

You can’t really manage without the default native tools provided by Apple and Google; visit their developer sites to download Xcode or Android Studio and find platform documentation.

References to data provided in the slide notes.

Strong opinions, weakly held and blogging

I’ve been meaning to start a blog for years but always struggled with what to write. Writing new, authentic and timely material is hard.

Software engineering is a constantly changing and fast-paced industry; ideas and knowledge improve quickly, and I feared that the content I wrote wouldn’t age well.

I have strong opinions held weakly

To me this means writing about topics which are accurate today, but may change tomorrow. It means expressing opinions and taking actions based on the information and knowledge you have at hand, but being open and able to adapt to changes and new information. It does NOT mean that I change opinions easily, am a pushover, or that I don’t stand by my choices. In fact, on the contrary, I require facts or much convincing to change my mind!

I think this describes me well. I have a pragmatic approach to endeavors in life and I am constantly learning and improving. That’s how I’m going to treat this blog, and I hope I will learn as much from you readers as you do from me.

I’ll mostly be writing about software engineering, with a deeper focus on mobile and iOS development – which is what I do for a living.

If you want to follow along, please subscribe below, and don’t forget to follow me on Twitter for all the soundbites: @acr.

– Andrew
