đŸŠ„ Big O Notation for Dummies

Hello friends!

Welcome to this week’s Sloth Bytes. I hope you had a goofy week 😁 

This AI Tool Might Be the Shortcut to Your First Million

What if building your million-dollar idea was as easy as sending a text message?

Indie hackers and solo founders are launching MVPs, going viral, and making money.

Their secret? They’re not building from scratch. They’re using tools like Lovable. 

Lovable is an AI-powered app builder that takes you from idea to live product fast.

No dev team, no endless setup. Just describe what you want, and it builds the front end, back end, and even has integrations with GitHub, Stripe, and Supabase.

Whether you’re technical and want to move 10x faster, or non-technical and just want your idea to exist already, Lovable is your unfair advantage.

It’s Europe’s fastest-growing AI dev platform and it’s helping people launch real, scalable businesses in weeks.

Will AI take your job?

Maybe.

Or it might help you quit it.

Big O For Dummies

Let me be honest with you.

Big O notation will haunt you in school AND technical interviews.

I remember prepping for my first interview and thinking I needed a math degree just to understand what the heck “logarithmic time complexity” meant.

So here’s the guide I wish I had.

Explained to you as if you and I are both half-asleep in class and pretending to be productive.

What Is Big O Actually?

Big O notation is just a way to describe how your code performs as the input gets bigger.

Using this notation, we can describe an algorithm’s best case, average case, and worst case.

It’s not meant to be precise, it’s meant to give us a general idea.

Why Can’t We Just Time It?

“Sloth, this seems a little extra. Why can’t we just do this?”

import time

start_time = time.time()
fake_function()  # whatever function you want to measure
end_time = time.time()

# Calculate elapsed time
elapsed_time = end_time - start_time
print(f"Function took {elapsed_time:.4f} seconds to execute.")

Here’s why timing alone sucks for analysis:

  • Different hardware = different results: Your code might run in 0.5s on your super fast rtx 8090 machine, but take 30s on my potato machine.

  • The programming language: If I created the same program in C and JavaScript, C would most likely be faster. That doesn’t really help us.

  • System noise: Background processes, memory usage, whether Spotify is playing sad lofi in the background. This can all skew timing.

With big O we can minimize these problems because it focuses on the algorithm itself, not the machine running it.
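One way to see the difference: instead of timing the code, count the steps it takes. The function below is a made-up sketch (the name is mine, not from any library), but it shows the idea that a linear algorithm does one step per element no matter what hardware it runs on.

```python
def count_operations(items):
    """Linear scan: the number of steps grows with the input size."""
    operations = 0
    for _ in items:
        operations += 1  # one step per element
    return operations

# Same algorithm, any machine: 10 items -> 10 steps, 1000 items -> 1000 steps.
print(count_operations(range(10)))    # 10
print(count_operations(range(1000)))  # 1000
```

Double the input, double the steps. That “10x input = 10x steps” relationship is what O(n) means, and it holds on your RTX 8090 and my potato alike.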

📊 Big O Cheat Sheet (All the important ones)

These are sorted from best to worst:

| Name | Big O | Example |
| --- | --- | --- |
| Constant (The best!) | O(1) | Accessing an array item |
| Logarithmic | O(log n) | Binary search |
| Linear | O(n) | Loop through an array |
| Linearithmic | O(n log n) | Efficient sorts (merge, quick) |
| Quadratic | O(nÂČ) | Nested loops |
| Cubic | O(nÂł) | Triple nested loops đŸ€ź |
| Exponential | O(2ⁿ) | Recursive problems (bad ones) |
| Factorial (Pick a different career buddy) | O(n!) | Permutations of everything |
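Here’s what a few rows of that table look like as actual code. These are tiny illustrative functions I made up, not anything from a library:

```python
def constant(items):
    # O(1): one array access, no matter how big items is
    return items[0]

def linear(items):
    # O(n): touch every element exactly once
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    # O(n^2): compare every element with every other element
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs

def logarithmic(n):
    # O(log n): cut the problem in half each step
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```

Notice `quadratic` on 3 items builds 9 pairs, and `logarithmic(8)` only takes 3 steps. That gap is the whole point of the table.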

Big O notation is used for two things: Time and space.

Time Complexity

The time required to execute your algorithm.

Example: If your function loops over a list of 1000 items.

That’s a time complexity of O(n).
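As a concrete sketch (function name is mine), a membership check has to look at every item in the worst case:

```python
def contains(items, target):
    # Worst case (target missing): checks all n items -> O(n) time
    for item in items:
        if item == target:
            return True
    return False
```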

Space Complexity

How much memory your algorithm uses.

Example: If you store a new array of n items.

That’s a space complexity of O(n).
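To make the contrast concrete, here are two made-up functions: one allocates a new list that grows with the input, one doesn’t.

```python
def doubled(items):
    # Builds a brand-new list of n items -> O(n) extra space
    return [x * 2 for x in items]

def total(items):
    # One accumulator variable, no matter how big items is -> O(1) extra space
    s = 0
    for x in items:
        s += x
    return s
```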

In interviews, they could ask for both, but usually they only ask for time complexity.

Interviewers ask you this because sometimes you can speed things up by using more memory. But you need to be able to explain that tradeoff.

How to Figure Out Big O (Without Crying)

Here’s a quick and dirty checklist to help you figure out the time complexity of code:

1. Count the loops

  • A single loop over n items? That’s O(n).

  • A loop inside another loop? That’s O(nÂČ).

  • Triple nested loops? Now you’re in O(nÂł) territory (also: please stop).

2. Drop the constants

  • If you’re doing n things twice (like two separate loops), that’s still O(n), not O(2n).

  • Big O doesn’t care about your constants, only how it scales.

3. Watch for logarithmic operations

  • If you’re cutting the problem in half each time (like binary search or merge sort), that’s O(log n).

  • Anything “divide and conquer” usually lands here.

4. Recursive calls = stack

  • If a function calls itself multiple times per call (like in Fibonacci), you might end up with O(2ⁿ).

  • If it calls itself once per call (like factorial), that’s just O(n).
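The checklist above can be sketched in code. These are small illustrative functions I wrote for this, one per rule:

```python
# Rule 1: a loop inside a loop -> O(n^2)
def has_duplicate(items):
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

# Rule 3: halving the search range each step -> O(log n)
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

# Rule 4: two recursive calls per call -> O(2^n)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Try walking through `fib(5)` on paper; watching the calls double at each level is the fastest way to internalize why naive recursion blows up.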

Pro tip: Break your code down into chunks.

Analyze each one, then combine them just like LEGOs. Ugly, nested, brain-melting LEGOs.

You’re a nerd and wanna learn more?

Check this article out:

Thanks for the feedback!

Thanks to everyone who submitted!

New challenge format!

Instead of having the entire challenge here, I’m going to link an easy question and a hard question from LeetCode/other coding platforms.

This way the emails will be shorter and people have more options 😁 

Let me know what you think of this format.

How To Submit Answers

Reply with a solution to either problem or even both đŸ€Ż :

  • A link to your solution(s) (github, twitter, personal blog, portfolio, replit, etc)

  • If you’re on the web version, leave a comment with the link

  • If you want to be mentioned here, I’d prefer if you send a GitHub link

That’s all from me!

Have a great week, be safe, make good choices, and have fun coding.

If I made a mistake or you have any questions, feel free to comment below or reply to the email!

See you all next week.

What'd you think of today's email?


Want to advertise in Sloth Bytes?

If your company is interested in reaching an audience of developers and programming enthusiasts, you may want to advertise with us here.
