9 minute read

Warning: This post is NSFW. In this series, we are going to build a really, really simple database management system that you should by no means use in a production work environment.

Here's the experiment:

1. Start with a naive implementation of a database: read and write from a local CSV file.
2. Reach for all the normal features we use every day: basic CRUD, aggregate queries, complex joins.
3. Realize that our basic implementation falls short.
4. Look at how these problems are solved in modern systems.

Think of the whole series as one giant experiment in Cunningham's Law: "the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."

94 minute read

In my last post, I wrote about steps you can take to make writing complicated queries more manageable. One aspect that I didn't cover in that post is how to set up sample data to work with during the writing process. Assuming you're not working directly in your production database as you test out new queries (right? right?? right???), you need some way to work on your new ideas. In this post, I want to share some tips and tricks for creating reliable, reproducible test data to help you develop new ideas in SQL.
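As a taste of the idea (a minimal sketch of mine, not code from the post): many dialects let you inline a small, deterministic dataset as a VALUES list inside a CTE, so a query can be run and shared without touching real tables. The table and column names below are made up, and the VALUES-in-a-CTE syntax shown is Postgres-style; the exact form varies by dialect.

    -- Hypothetical inline test data (Postgres-style; adjust per dialect).
    WITH sample_orders (order_id, customer, total) AS (
      VALUES
        (1, 'alice', 12.50),
        (2, 'bob',    7.25),
        (3, 'alice',  3.00)
    )
    SELECT customer, SUM(total) AS lifetime_total
    FROM sample_orders
    GROUP BY customer;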

21 minute read

How many times have you started off building a complicated analytical SQL query like this?

    SELECT … uh???
    SELECT * FROM …
    SELECT …

And you get stuck trying to figure out exactly what you want to select. You're thinking about averages, GROUP BYs, the order of your results, or some change you want to see over time, and the query editor is just sitting there, taunting you, because in SQL, you have to know up front what you want to select into your final results.
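One common way to sidestep that blank-SELECT problem (a sketch of the general technique, not necessarily the post's exact approach; the table and column names are mine) is to name your intermediate steps with CTEs and keep the outer query a plain SELECT * until you know the final shape you want:

    -- Sketch: build the pieces first, decide what to select last.
    WITH daily AS (
      SELECT
        user_id,
        CAST(created_at AS DATE) AS day,
        COUNT(*) AS events
      FROM events                       -- hypothetical table
      GROUP BY user_id, CAST(created_at AS DATE)
    )
    SELECT *                            -- refine this only at the end
    FROM daily
    ORDER BY day;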

8 minute read

If you are a software engineer and you have just enough SQL to write queries that count, sum, average, join, and maybe sub-select, then I'm writing this post for you. If, when you need more complicated analysis or computation, you pull your query results into Excel or your favorite scripting language to do more processing, then I have some good news.

There’s a whole lot more you can do right in SQL and it’s not too bad to learn how to do it.

In this post, I'm going to cover a few concepts that have recently helped me do more computation, better analysis, and, it turns out, more efficient querying…and I hope they help you too.
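Window functions are a canonical example of this kind of in-database computation; whether they are among the post's concepts is my assumption, and the table and column names below are made up for illustration:

    -- A per-customer running total, computed in one pass
    -- with a window function (illustrative names).
    SELECT
      customer,
      order_date,
      total,
      SUM(total) OVER (
        PARTITION BY customer
        ORDER BY order_date
      ) AS running_total
    FROM orders;                        -- hypothetical table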

4 minute read

The best upgrade I’ve made to my workflow in the past year was to start keeping an engineering notebook.

Whenever I start a new project, the first thing I do is create a new section in my engineering notebook. It’s really simple. With a tiny script, I generate a dedicated folder for the new project, plus three sub-folders and a README:

    some_new_work
        |__ notes/
        |__ data/
        |__ scripts/
        |__ README.md

As simple as this structure seems, it has had a tremendous impact on my work. In this post, I want to try to unpack why and how I think that’s working.

5 minute read

EDIT: After I posted this initially, I got some great feedback, so I wrote a follow-up post here.

In this post, we are going to explore a strategy for collecting the Top N results per Group over a mixed dataset, all in a single query.

I stumbled onto this solution the other day, mostly driven by the fear that I was re-scanning my BigQuery data too often. At the time, the only way I knew how to look at a Top 10 list of a subset of the data was to add a WHERE clause limiting the whole data set to a single group and combine with ORDER BY and LIMIT clauses.

For each group, I would just modify the WHERE clause, rescan all the data, and get new results. I thought there had to be an easier way to get the same ordered subset for any particular group in the data, all at once.
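Concretely, that one-group-at-a-time pattern looks something like this (the table and column names are mine, for illustration):

    -- Naive: one full scan per group, editing the WHERE clause each time.
    SELECT name, score
    FROM players                        -- hypothetical table
    WHERE team = 'red'                  -- change this and re-run per group
    ORDER BY score DESC
    LIMIT 10;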

It turns out, there is a much more efficient way to solve this problem.
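The standard single-pass technique, and presumably roughly where the post lands, is to rank rows within each group using a window function and then filter on the rank (illustrative names again):

    -- One scan: rank within each group, then keep the top 10 per group.
    WITH ranked AS (
      SELECT
        team,
        name,
        score,
        ROW_NUMBER() OVER (PARTITION BY team ORDER BY score DESC) AS rn
      FROM players                      -- hypothetical table
    )
    SELECT team, name, score
    FROM ranked
    WHERE rn <= 10;

ROW_NUMBER() counts 1, 2, 3, … within each partition, so the outer filter keeps the ten best-scoring rows for every team at once, in a single scan.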

5 minute read

It's easy to blow your BigQuery budget when you are exploring a new data set. Because you're billed for the amount of data scanned, not for the size of the result set, you can end up with wasteful queries when you don't know what you're looking for.

In this post, I’m going to share some tips for more efficiently scanning data in BigQuery when you don’t quite know what you need.
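One such tip, as a taste (my sketch, not necessarily from the post's list): BigQuery's columnar storage bills you for the columns a query references, and a LIMIT clause does not reduce the bytes scanned, so naming only the columns you need is often the single biggest saver. The dataset and column names below are made up:

    -- Scans every column in the table; LIMIT does not lower the bill.
    SELECT * FROM my_dataset.events LIMIT 10;

    -- Scans only two columns: often dramatically cheaper.
    SELECT user_id, event_type
    FROM my_dataset.events
    LIMIT 10;

One caveat: partitioned and clustered tables change the math, since filters on the partitioning column do prune the data that gets scanned.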