There are several different ways to take and store screenshots on the web. The methods described below are intermediate and advanced approaches aimed at people who care about owning their content and want ways to slice and dice their history. My own tooling took the following turns over the years.
I started with a third-party Mac application that I paid a monthly fee for. Eventually the company raised prices and blocked exports, and I vowed never to trust something so important to an outside service.
I moved to Linux and had fewer options for off-the-shelf services. I decided uploading screenshots wasn't a very hard problem and wrote some simple bash scripts to upload files to a Google Cloud Storage bucket. This is what I'd recommend for most people.
My final solution (the museum) is a TypeScript-based system that stores screenshots in a Xata database to make everything searchable. I run all files through Google's Vision API for enhanced abilities like OCR, tagging, and color extraction.
You don’t need to read this blog to find a third-party screenshot app, especially on a Mac. Most designers I know go with this first option and call it a day. We’ll instead spend time with options 2 and 3.
Set up a watch folder with bash scripting
A minimal setup only takes a weekend of tinkering and requires the following:
A place to store your files, like Amazon S3 or Google Cloud Storage. My examples will use Google Cloud Storage.
A small shell script that can watch a folder on your desktop and shuttle files to your bucket.
For many years before the museum I used a simple shell script that moved files from a watched folder on my desktop to a Google Cloud Storage bucket using the gsutil command-line tool. The Linux services inotify and notify-send gave me an "app-like" experience: a notification popped up and the resulting URL was automatically copied to my clipboard. If you're using S3 or another service, you can swap in rclone for the gsutil parts.
The URL can have whatever structure you want, but the below code will produce something like https://snid.es/2023OCT/d5RZnmlvWlioophL.png assuming you’ve set up your bucket for public access and attached a domain.
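Here's a minimal sketch of such a script, assuming inotify-tools, gsutil, xclip, and notify-send are installed; the bucket name and domain are placeholders you'd swap for your own.

```bash
#!/usr/bin/env bash
# Minimal watch-folder uploader (sketch). Bucket and domain are placeholders.
WATCH_DIR="$HOME/Desktop/screenshots"
BUCKET="gs://your-bucket-name"
PUBLIC_URL="https://snid.es"

inotifywait -m -e close_write -e moved_to --format '%f' "$WATCH_DIR" | while read -r FILE; do
  # Build a path like 2023OCT/d5RZnmlvWlioophL.png: a month folder
  # plus a random name so URLs stay hard to guess.
  EXT="${FILE##*.}"
  DIR="$(date +%Y%b | tr '[:lower:]' '[:upper:]')"
  NAME="$DIR/$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16).$EXT"

  # Ship the file off and remove the local copy once it lands
  gsutil cp "$WATCH_DIR/$FILE" "$BUCKET/$NAME" && rm "$WATCH_DIR/$FILE"

  URL="$PUBLIC_URL/$NAME"
  printf '%s' "$URL" | xclip -selection clipboard
  notify-send "Screenshot uploaded" "$URL"
done
```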
For testing, it’s easy enough to run this script from your terminal and watch the log as files move in and out. You’ll need to run chmod +x script.sh to make your script executable.
Run the bash script on startup in GNOME
Likely you want to run that script every time your computer starts. Setting this up is going to be different for every desktop environment, but since I run GNOME, it's as simple as making a new file named ~/.config/autostart/screenshots.desktop.
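A sketch of that file, assuming your script lives at /home/you/scripts/screenshots.sh:

```ini
[Desktop Entry]
Type=Application
Name=Screenshot watcher
# .desktop files don't expand ~, so use the absolute path to your script
Exec=/home/you/scripts/screenshots.sh
X-GNOME-Autostart-enabled=true
```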
Once that file is created, restart your user session and the script should start running. If everything goes well, you can now save any file to your watch folder and a notification will pop up with its location in your Google bucket.
NNN plugin to quickly upload files
Because the script we set up watches for any file, not just screenshots, we can drop videos (or anything else) in there as well. This partially removes my need for services like Dropbox, which I'd used primarily to transport files to others. I like using a watch folder versus working with the files directly because I can chain other programs onto that concept fairly easily. For example, I use a small NNN plugin, sketched below, to automatically copy files to the watch folder, where they get shipped off. If you've never messed with NNN before and don't understand why it's cool, I made a video tutorial on the tool a few years ago.
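A sketch of such a plugin, assuming the same watch folder as above. NNN passes the hovered file to plugins as the first argument, and plugins live in ~/.config/nnn/plugins.

```bash
#!/usr/bin/env bash
# Hypothetical NNN plugin: save as ~/.config/nnn/plugins/upload and
# chmod +x it. NNN passes the hovered file as $1; copying it into the
# watch folder triggers the uploader above.
WATCH_DIR="$HOME/Desktop/screenshots"
[ -n "$1" ] && cp -- "$1" "$WATCH_DIR/"
```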
This simple setup works great and is easy to manage. You don’t get a frontend like the one on this site, but not everyone needs that kind of depth. If you’re happy with this solution, you might not need to read further.
A more powerful solution using Google Cloud Vision APIs and Xata
While the above system worked reliably for many years, it had a couple of drawbacks. Because I renamed every file to a hashed string, it was nearly impossible to distinguish one URL from another. If I lost a particular URL, I wasn't going to dig through folders on Google Storage to find the file; I'd instead recapture the content I wanted to share. This is why it often looks like there are duplicates in the museum.
What I really wanted was something searchable, and a way to visually navigate and delete old uploads in a protected fashion. This came with its own challenges:
Image content is not searchable, so I’d need an OCR system to read the content and store that as meta information.
I’d need a database to organize the files and store that metadata alongside.
I’d also need a search mechanism like Elasticsearch to provide fuzzy search against the records.
Lastly, for a frontend, I’d need a way to transform images on the fly to smaller sizes.
Xata, a database platform where I happen to work, provides most of those features by default. Xata lets me store the images in a database directly, then run image transformations on the fly. It also comes with search and aggregation endpoints built on top of Elasticsearch, so I can build fun charts of my activity over the years. Because Xata is ultimately just a Postgres database, I can also add other columns to manage the public visibility of certain files.
One thing Xata does not provide is a way to read the text in my screenshots. I decided to use Google Vision AI for this part, primarily because I already had images on Google Storage, and because their API will process 1,000 images each month for free. In a good month I upload around 200 files, so I only needed to cover the cost of scanning my back catalog, which ended up being around $50. Using Google + Xata, this entire project can run for free assuming normal use patterns.
Google Vision also provides services for auto-tagging and breaking down the color properties of images. While the tagging service isn't very useful for screenshots (lots of generic technology and software tags), it works extremely well on photography (check how it correctly spots hats). I additionally use the image properties API to determine which colors are most used in any particular image. Right now this generates the ornamental color band component on individual image pages, but I have plans to offer search by color distance soon.
Everything Google gives me I store as simple JSON blobs in a Xata JSON column type. This makes them searchable through Xata's native full-text search abilities and lets me run direct filters on the JSON documents if I want to target a specific property.
The above flow likely sounds complicated, but it is all handled in a tidy 200-line TypeScript file. Both Xata and Google provide TypeScript clients so it made sense to move to that ecosystem for my new watch scripts rather than Bash. This means the code is also a little more portable and should run easily on Mac or Windows machines. Here’s the gist of what my new script does.
Similar to my earlier shell script, I watch a single folder, this time using chokidar instead of inotify.
Files dropped there are shuffled up to my Google Cloud bucket (now serving as a backup), this time using Node's crypto module to generate the file names.
The file is then uploaded to Xata in a new files table of my davesnider-dot-com database. This creates a new record using the same ID that I used for Google.
I hit the Google Vision API to provide OCR text, tagging and color properties.
I move the JSON blobs Google provides to the new Xata database record.
I trigger a notification and copy the URL of the file to my clipboard.
Set up your project
First, make a new directory to store your script and install the dependencies. Make sure you sign up for a Xata account before beginning.
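Something like the following should work; the exact package list is an educated guess at what this pipeline needs (chokidar for watching, the Google and Xata clients, and tsx to run TypeScript directly):

```bash
mkdir screenshot-watcher && cd screenshot-watcher
npm init -y
npm install chokidar @google-cloud/storage @google-cloud/vision @xata.io/client dotenv
npm install -D typescript tsx @types/node
```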
When initializing a project, Xata allows us to start with a predefined schema. Ours gives us columns for all the Google Vision fields we'll want to add, as well as a way to hide and favorite certain images.
You'll need a .env file in the project folder that holds all your secrets.
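A sketch of that file, with illustrative variable names (GOOGLE_APPLICATION_CREDENTIALS is the standard Google client variable; the rest just need to match whatever your script reads):

```bash
# .env (sketch; variable names besides the first are illustrative)
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
GCS_BUCKET=your-bucket-name
XATA_API_KEY=xau_your_key_here
XATA_DATABASE_URL=https://your-workspace.us-east-1.xata.sh/db/your-db
```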
For the Xata database, I use the schema below. When creating a new Xata database, you'd initialize it with the command xata init --schema /path/to/this/schema.json.
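Here's a sketch of what that schema could look like. The column names (ocrText, labels, colors, hidden, favorite) are illustrative stand-ins rather than my exact production schema, but the watcher sketch later in this post uses the same names.

```json
{
  "tables": [
    {
      "name": "files",
      "columns": [
        { "name": "file", "type": "file" },
        { "name": "ocrText", "type": "text" },
        { "name": "labels", "type": "json" },
        { "name": "colors", "type": "json" },
        { "name": "hidden", "type": "bool" },
        { "name": "favorite", "type": "bool" }
      ]
    }
  ]
}
```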
Create a watch script in TypeScript
I use a single TypeScript file to watch for files, upload them, and scan them with Google Vision. With the project and dependencies from the previous section in place, the whole thing fits in one script.
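Here's a condensed sketch of what that file could look like. It leans on the hypothetical env vars and column names from the setup section and trims error handling for brevity; treat it as a starting point rather than my exact production script.

```typescript
// watch.ts: a condensed sketch of the watcher under the assumptions above.
import "dotenv/config";
import { randomBytes } from "node:crypto";
import { readFile } from "node:fs/promises";
import { extname } from "node:path";
import { execFile } from "node:child_process";
import chokidar from "chokidar";
import { Storage } from "@google-cloud/storage";
import vision from "@google-cloud/vision";
import { BaseClient } from "@xata.io/client";

const WATCH_DIR = `${process.env.HOME}/Desktop/screenshots`;
const BUCKET = process.env.GCS_BUCKET!;

const storage = new Storage();
const annotator = new vision.ImageAnnotatorClient();
const xata = new BaseClient({
  databaseURL: process.env.XATA_DATABASE_URL,
  apiKey: process.env.XATA_API_KEY,
});

chokidar.watch(WATCH_DIR, { ignoreInitial: true }).on("add", async (path) => {
  // 1. Hash the name so URLs stay hard to guess
  const id = randomBytes(12).toString("base64url");
  const name = `${id}${extname(path)}`;

  // 2. Ship the original to Google Cloud Storage as a backup
  await storage.bucket(BUCKET).upload(path, { destination: name });

  // 3. One Vision call covers OCR, labels and dominant colors
  const [result] = await annotator.annotateImage({
    image: { source: { imageUri: `gs://${BUCKET}/${name}` } },
    features: [
      { type: "TEXT_DETECTION" },
      { type: "LABEL_DETECTION" },
      { type: "IMAGE_PROPERTIES" },
    ],
  });

  // 4. Create the Xata record with the file plus the Vision blobs
  const data = await readFile(path);
  await xata.db.files.create(id, {
    // mediaType assumes PNG screenshots; detect it properly in real use
    file: { name, mediaType: "image/png", base64Content: data.toString("base64") },
    ocrText: result.fullTextAnnotation?.text ?? "",
    labels: JSON.stringify(result.labelAnnotations ?? []),
    colors: JSON.stringify(result.imagePropertiesAnnotation ?? {}),
  });

  // 5. Copy the public URL to the clipboard and fire a notification
  const url = `https://snid.es/${name}`;
  execFile("bash", ["-c", `printf %s "${url}" | xclip -selection clipboard`]);
  execFile("notify-send", ["Uploaded", url]);
});
```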
Run the TypeScript file on startup in GNOME
Similar to our bash solution, we'll need to add an autostart script to GNOME so that our watch folder runs on startup. Because TypeScript is difficult to run natively on its own, we need a small wrapper script in bash so the service runs with all its dependencies.
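Something like this, assuming the project directory from earlier and tsx as the runner:

```bash
#!/usr/bin/env bash
# Hypothetical wrapper (e.g. ~/screenshot-watcher/watch.sh) so GNOME can
# launch the watcher with the project's local dependencies available.
cd "$HOME/screenshot-watcher" || exit 1
exec npx tsx watch.ts
```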
Then we need to point GNOME's autostart system at this wrapper. Create a new file at ~/.config/autostart/screenshots.desktop.
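Mirroring the earlier bash setup, with Exec pointing at the wrapper instead:

```ini
[Desktop Entry]
Type=Application
Name=Screenshot watcher (TypeScript)
Exec=/home/you/screenshot-watcher/watch.sh
X-GNOME-Autostart-enabled=true
```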
Once that file is created, restart your user session and it should start running. If everything goes well, you can now save any file to your watch folder and a notification should pop up. The files will now exist in Google Storage (as a backup) and Xata (as our primary database). At this point we have everything we need to build a nice frontend to view the files. Here's what everything looks like in its final form.
Building a frontend to view files
The source code for the museum is available on GitHub if you want it. Going in depth on how to build out a JavaScript frontend would require a much longer writeup, but I'll briefly summarize how my own museum code works. Svelte does most of the heavy lifting, querying the Xata database. At a high level, here is how easy Xata makes it to work with our database in any JavaScript system.
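A sketch under the same assumptions as before (illustrative table and column names; the real schema lives in the museum repo):

```typescript
// Querying the files table from any JavaScript frontend (sketch).
import { BaseClient } from "@xata.io/client";

const xata = new BaseClient({
  databaseURL: process.env.XATA_DATABASE_URL,
  apiKey: process.env.XATA_API_KEY,
});

// Fuzzy full-text search across the OCR text and Vision metadata
const results = await xata.db.files.search("error message", { fuzziness: 1 });

// Or a filtered, paginated view: visible files only, newest first
const page = await xata.db.files
  .filter({ hidden: false })
  .sort("xata.createdAt", "desc")
  .getPaginated({ pagination: { size: 50 } });
```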
This is just the tip of the iceberg of things we can do now that our files are in the database. To see the full flow in action, the video I posted along with my museum blog post covers the highlights.