The Paper Key — Decoupling Your Digital Legacy from the Platform
by Lari Huttunen
We spend our careers building “secure” systems. We lock down IAM roles, rotate keys, implement multi-factor authentication, and place our data inside the “golden cages” of cloud providers. We pride ourselves on uptime, availability, and the complexity of our defense-in-depth strategies.
But recently, I’ve been troubled by a simple, somewhat morbid question:
If I am gone tomorrow, does my data die with me?
Most of us have a “Bus Factor” of one. Our digital legacies—photos, documents, code, private archives—are often protected so well that they are effectively indestructible and inaccessible. If the primary provider locks the account, or if the complex web of asymmetric keys and 2FA tokens cannot be navigated by a grieving heir, the data is lost.
We often confuse security (keeping bad actors out) with continuity (letting the right people in). In our quest to secure the perimeter, we have built vaults that seal shut the moment we stop turning the wheel.
True disaster recovery is not just about servers going down. It is about the creator disappearing.
I recently began reshaping a set of personal shell scripts into a public operational toolset called restic-ops. The goal wasn’t to build another backup product, but to prove a specific philosophical point about Data Sovereignty.
The Trap of “Golden Handcuffs” #
We currently live in an era of vendor lock-in disguised as convenience. If your backup strategy relies on AWS Snapshots, Google Cloud Storage buckets with intricate IAM policies, or proprietary SaaS backup agents, you are not just banking on the integrity of your data. You are banking on:
- The continued existence of your cloud account.
- The validity of your credit card on file.
- The accessibility of the proprietary tooling required to read those snapshots.
If the platform is the only thing that can read your data, you don’t own the data—you are leasing access to it.
Consider the “Golden Cage” of modern authentication. We are taught to use hardware tokens (YubiKeys), authenticator apps, and biometric scanners. These are excellent for protecting active infrastructure. But in the context of a digital legacy, they are catastrophic points of failure. If your heir does not have your fingerprint, your face, or the PIN to your YubiKey, the “Break Glass” procedure fails before it begins.
To secure a digital legacy, we must decouple the data from the platform. The Operating System should be irrelevant—Linux, BSD, macOS or Windows. The storage provider (S3, B2, MinIO, or a physical USB drive) should be a swappable commodity. The only thing that matters is the encrypted blob and the ability to read it.
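To make the swappability concrete, here is a minimal sketch of how restic treats the storage provider as a commodity. The repository URLs follow restic's documented syntax, but the bucket and host names are made up for illustration:

```shell
# The same restic invocation works against any backend; only the
# repository URL changes. Names below are illustrative, not real.
REPO_LOCAL="/mnt/usb/restic-repo"                    # physical USB drive
REPO_S3="s3:s3.amazonaws.com/example-bucket/restic"  # AWS S3
REPO_B2="b2:example-bucket:restic"                   # Backblaze B2
REPO_MINIO="s3:https://minio.example.org/backups"    # self-hosted MinIO

# Swapping providers is a one-variable change:
export RESTIC_REPOSITORY="$REPO_LOCAL"
echo "Backing up to: $RESTIC_REPOSITORY"
```

Nothing in the backup or restore commands changes when the backend does; that is the whole point.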
The Scope of Sovereignty: Archives vs. Feeds #
It is important to draw a distinction between “content” and “legacy.” Although I frame this methodology as a way to save one’s personal digital legacy, I am not currently interested in decoupling content locked away in the tech giants’ social platforms.
I have largely opted out of that ecosystem, actively residing only on Bluesky. Platforms like LinkedIn, Facebook, or Instagram are out of scope for me because they are “feeds,” not archives. In the past, I fell prey to “portfolio” services such as 500px and Youpic. These platforms promised to showcase photography but ended up locking the high-resolution originals behind paywalls or deteriorating terms of service. They turned my data into their content. That is why I created a self-hosted site for my photography portfolio.
This experience has solidified my stance.
The Prerequisite: The Sovereign Inventory #
This brings us to a critical, often overlooked step in disaster recovery: The Inventory. To be effective with restic-ops, or indeed any sovereign backup strategy, you must first audit your digital existence. You cannot safeguard what you cannot touch. Too many of us assume our data is safe because we can see it in a web browser. But if that data exists only as rows in a proprietary cloud database, or as metadata in a SaaS dashboard, it is not ready for long-term storage.
The first step is not installing the tool; it is migration. You must move your data from “accessible via API” to “accessible via filesystem.”
- Can you dump the database to a standard SQL file?
- Can you export the Google Doc to a Markdown or PDF file?
- Can you pull the original RAW/JPEG files from the photo service to a local directory?
Only when your data exists as tangible files—entities that can be listed, read, and moved without special permission from a vendor—can it truly be saved. restic-ops is the armored truck, but you must first pack the boxes.
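A minimal "pack the boxes" sketch might look like the following. The directory layout and names are my own illustration, not something restic-ops prescribes; the vendor-specific export commands are commented out because they require live services to run against:

```shell
# One directory per data source, each holding plain files that need
# no vendor tooling to read. Layout and names are illustrative.
ARCHIVE="$(mktemp -d)/sovereign-archive"
mkdir -p "$ARCHIVE/databases" "$ARCHIVE/documents" "$ARCHIVE/photos"

# Rows in a proprietary database become a standard SQL file
# (uncomment where a PostgreSQL instance is actually reachable):
# pg_dump --format=plain mydb > "$ARCHIVE/databases/mydb.sql"

# Exported docs and original images are simply copied into place:
# cp ~/exports/*.md "$ARCHIVE/documents/"
# cp ~/originals/*  "$ARCHIVE/photos/"

ls "$ARCHIVE"
```

Everything under that directory can now be listed, read, and moved by any tool on any OS, which is exactly the property the backup layer depends on.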
The “Piece of Paper” Standard #
When designing restic-ops, I established a strict constraint for the restoration process. I call it the “Piece of Paper” standard.
Imagine the worst-case scenario:
- I am not available to type in a password.
- My laptop is destroyed or encrypted.
- The primary cloud credentials are lost.
- My phone (and its 2FA tokens) is wiped.
The heirs to this legacy are left with a single sheet of paper stored in a physical safe. On it, there are secrets—endpoints, repository paths, and passphrases.
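What might that sheet of paper contain? Here is a hypothetical layout; the repository URL, file name, and commands are illustrative placeholders, not the actual contents of my safe:

```text
=== DIGITAL LEGACY RECOVERY (keep in the safe) ===
Repository:   s3:s3.example.com/family-archive/restic
Passphrase:   (written out in full, exactly as typed)
Secrets file: restic-secrets.env.gpg
  To unlock:  gpg --decrypt restic-secrets.env.gpg
              (it will prompt for the passphrase above)
To restore:   restic -r <repository> restore latest --target ./recovered
```

Everything needed fits on one page, and every command on it works on a fresh machine with only restic and gpg installed.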
For this to work, the recovery mechanism cannot be “smart.” It cannot rely on cloud-tethered Key Management Services (KMS) that revoke access if a bill is unpaid. It cannot rely on hardware-bound keys locked inside a specific physical device that might be lost in the same fire that took the laptop. It cannot rely on complex identity workflows (OIDC, SAML) that require a living, breathing user to click “Approve.”
These technologies are the ultimate “Golden Cages”—secure while you are here, but impenetrable hurdles when you are gone.
This is why restic-ops explicitly prioritizes symmetric encryption.
In the security world, we often view symmetric keys (a simple password) as less sophisticated than asymmetric (public/private key) architecture. But complexity is the enemy of execution. If a private key is lost, or if the passphrase for that private key is forgotten, the game is over.
In my model, knowing the passphrase is all that is needed to unlock the secrets. There is no “keyring” to import. There is no “web of trust” to validate. If you can read the paper and type the characters into a terminal, you can recover the data. It is crude, it is simple, and it works when everything else has failed.
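The crude simplicity is easy to demonstrate with GnuPG's symmetric mode. This is a round-trip sketch, not restic-ops itself; the file name and passphrase are illustrative (the `--pinentry-mode loopback` flag is needed on modern GnuPG to accept a passphrase non-interactively):

```shell
# A passphrase-only round trip: no keyring, no web of trust.
cd "$(mktemp -d)"
printf 'RESTIC_REPOSITORY=s3:example/repo\nRESTIC_PASSWORD=example\n' \
    > restic-secrets.env

# Encrypt symmetrically (AES256) with a typed passphrase:
gpg --batch --pinentry-mode loopback --symmetric --cipher-algo AES256 \
    --passphrase 'correct horse battery staple' \
    --output restic-secrets.env.gpg restic-secrets.env

# Recovery: anyone who can read the paper and type the passphrase
# gets the secrets back, on any OS with gpg installed.
gpg --batch --quiet --pinentry-mode loopback --decrypt \
    --passphrase 'correct horse battery staple' \
    restic-secrets.env.gpg
```

The decrypted output is the plain secrets file, ready to be sourced into a shell.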
Half the Equation: Data vs. Infrastructure #
It is important to draw a line in the sand regarding what this project is—and what it is not.
restic-ops is laser-focused on Data Recovery, not Service Restoration.
There is a distinct difference that engineers often conflate. Service Restoration is the job of tools like Ansible, Terraform, or Kubernetes. They build the house. They install the packages, configure the firewall, and start the daemons.
restic-ops saves the furniture.
I am not trying to automate the spinning up of a new PostgreSQL server or the configuration of a web host. That is a different problem for a different day, and one that often requires knowledge of the current state of technology. (Who knows if Ansible will be the standard in 10 years?)
My goal is to ensure that when the “house” burns down, and the architect is gone, the blueprint and the assets still exist.
- We recover the SQL dump, not the running database.
- We recover the folder of JPEGs, not the running gallery software.
- We recover the Markdown files, not the generated website.
If we have the raw data, we can always rebuild, even if we have to use completely different tools to do so. Without the data, all the automation scripts in the world are useless.
Introducing restic-ops #
This project is my attempt to standardize this workflow using robust, open-source concepts: restic, gpg, and a philosophy of simplicity.
I chose restic because it is a single binary. It does not require a complex installation. It speaks the language of encryption natively. It deduplicates data to save space and bandwidth, making frequent backups viable.
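For readers new to restic, this is the plumbing that restic-ops wraps, shown end to end against a local directory backend. The paths and password are illustrative, and this sketch assumes restic is on the PATH:

```shell
# Minimal restic lifecycle: init, backup, restore.
WORK="$(mktemp -d)"
export RESTIC_REPOSITORY="$WORK/repo"
export RESTIC_PASSWORD='from-the-piece-of-paper'

mkdir -p "$WORK/data"
echo 'hello legacy' > "$WORK/data/note.txt"

restic init                              # create the encrypted repository
restic backup "$WORK/data"               # deduplicated, encrypted snapshot
restic snapshots                         # list what exists
restic restore latest --target "$WORK/restore"

# restic restores files under their original absolute path:
cat "$WORK/restore$WORK/data/note.txt"
```

Note that the entire recovery path needs only the repository location and the password, both of which fit on the piece of paper.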
I chose GnuPG to handle the symmetric encryption of the repository secrets, acting as the “wrapper” for the keys to the kingdom.
I am not aiming for the layman; this is expert tooling for those who understand the stakes. It assumes you are comfortable in a CLI. It assumes you understand what an environment variable is. But it also assumes that you want to leave behind a system that is transparent, readable, and logical.
I am open-sourcing the restic-ops repository to share this methodology. It is an invitation to skilled engineers to rethink how we safeguard our digital lives. We need to move away from “proprietary convenience” and back toward “sovereign simplicity.”
In the next post, I will dive into the technical architecture of the repository. We will look at:
- The Wrapper: How restic-ops abstracts connection complexity without hiding the logic.
- The Cryptography: Why specific encryption methods were chosen to balance security with recoverability.
- The Secrets: How to manage the “Master Key” so that it actually fits on that piece of paper.
The goal is simple:
To make the platform irrelevant and the data immortal.
While you wait for the next post, visit the repository, fork it, try it out, and help me make it better.
Acknowledgement: Built with AI, Verified by Human #
This project was architected by me but codified with the heavy assistance of AI (Gemini/Copilot). I treated the AI as a junior developer: I gave the instructions, I set the constraints, and—most importantly—I audited and thoroughly tested the result.
This approach allowed me to move from a philosophical concept to a working prototype in a fraction of the time. However, because this is a security tool, I have manually reviewed every line of code to ensure it adheres to the Piece of Paper standard and contains no hallucinations or insecure defaults. I invite you to do the same.