It was late at night. I was staring at the terminal, and only one word was in my head: done.
This wasn’t a single broken file. The entire workspace root got wiped.
11,583 files. Gone in 3 minutes.
What I Was Doing
I was running multiple tasks in parallel and delegated frontend work to a sub-agent for speed.
Gemini got a task under projects/manga-character-platform/frontend.
Then it executed rm -rf ./*.
The problem: it thought it had cd’d into the target directory. It hadn’t.
Timeline
00:48:57 Gemini cd’d into the target directory (or so it thought)
00:49:22 Ran rm -rf ./* (actually at workspace root)
00:50:03 Realized something was wrong, started "rebuilding" empty directories
00:51:29 Auto-sync pushed deletions to GitHub
From deletion (00:49:22) to GitHub propagation (00:51:29): just over two minutes.
I Was Also Part of the Problem
To be fair, I can’t blame Gemini alone.
I cut corners.
Project repos should never have lived under workspace. I put projects/ there for convenience, thinking .gitignore would protect me.
It didn’t.
.gitignore controls Git tracking — not rm.
Once Gemini deleted the workspace, projects/ went with it.
Yes, I restored repos from GitHub. But unpushed local changes and experiment branches were gone.
That one is on me.
Root Causes
1) cd wasn’t persistent
In OpenClaw exec flows, cd in one command doesn’t change cwd for future commands unless explicitly set.
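This failure mode is easy to reproduce outside OpenClaw. A minimal Python sketch, assuming each agent command runs as a fresh subprocess (illustrative only, not OpenClaw’s actual exec API):

```python
import subprocess

# Simulate per-command exec: each call spawns a fresh shell,
# so `cd` in one call never carries over to the next.
r1 = subprocess.run("cd /tmp && pwd", shell=True,
                    capture_output=True, text=True)
r2 = subprocess.run("pwd", shell=True,
                    capture_output=True, text=True)
# r1 ran in /tmp, but r2 runs wherever the parent process lives.

# The fix: pass the working directory explicitly, per command.
r3 = subprocess.run("pwd", shell=True,
                    capture_output=True, text=True, cwd="/tmp")
```

An agent that “remembers” a cd it never persisted will happily run rm -rf ./* wherever the parent process actually is.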
2) No guardrails before destructive commands
No forced pwd, no path allowlist, no deletion-size check before rm -rf.
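A guard like this takes only a few lines. A sketch in Python (the allowlist root and the 100-file threshold are hypothetical values, not anything OpenClaw ships with):

```python
from pathlib import Path

ALLOWLIST = [Path.home() / "projects"]  # hypothetical allowed roots
MAX_IMPACT = 100                        # deletion-size threshold

def guard_destructive(target: str) -> int:
    """Run before any rm -rf: verify the resolved path is allowlisted
    and count how many files would be impacted."""
    p = Path(target).resolve()  # the forced "pwd": no relative-path surprises
    if not any(root.resolve() in (p, *p.parents) for root in ALLOWLIST):
        raise PermissionError(f"{p} is outside the allowlist")
    impacted = sum(1 for _ in p.rglob("*"))
    if impacted > MAX_IMPACT:
        raise RuntimeError(f"{impacted} files impacted; refusing without confirmation")
    return impacted
```

The key move is resolving the target to an absolute path before checking it, so a stale “current directory” belief can’t slip through.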
3) No circuit breaker in auto-sync
Deletion got committed and pushed immediately, amplifying damage.
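The breaker itself can be a pre-push check on the pending diff. A sketch (the 50-deletion limit is an assumption; tune it to your repo):

```python
import subprocess

def safe_to_push(repo: str, limit: int = 50) -> bool:
    """Auto-sync circuit breaker: refuse to push when the working tree
    shows a suspicious number of deletions."""
    out = subprocess.run(
        ["git", "-C", repo, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Porcelain status flags deletions with "D" in the two-letter prefix.
    deletions = sum(1 for line in out.splitlines() if "D" in line[:2])
    return deletions <= limit
```

The sync loop would call safe_to_push() and halt (and page a human) instead of committing the wipeout.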
4) Bad directory boundaries
I mixed project code with workspace. That architectural shortcut raised blast radius.
Recovery
No magic. Just disciplined execution:
- Stop the bleeding — pause automation
- Rollback — restore core files from pre-incident commit
- Rehydrate assets — pull skills and projects from upstream repos
- Validate end-to-end flows — not just file presence
The most annoying part was baoyu-skills: it looked restored, but its scripts were missing and the publishing flow broke silently.
What We Changed
Implemented safeguards:
- Force explicit workdir for task execution
- Require pwd + allowlist + threshold before destructive commands
- Add auto-sync circuit breaker (suspicious mass deletion blocks push)
- Document recovery SOP
Reorganized directory layout:
~/.openclaw/workspace/ # Agent workspace (auto-synced)
├── AGENTS.md
├── MEMORY.md
├── SOUL.md
├── memory/
├── skills/
├── data/
├── config/
├── scripts/
├── branding/
├── assets/
└── articles -> Obsidian/
~/projects/ # Code repos (independent git)
├── manga-platform-nextjs/
├── agent-swarm/
├── elyfinn-website/
└── ...
~/Library/.../Obsidian/ # Knowledge base (iCloud)
├── projects/
├── articles/
└── knowledge/
Simple rule:
- workspace = the agent’s home (auto-sync enabled), no code repos
- projects = code home, isolated git repos
- Obsidian = human knowledge home, linked in when needed
Separate boundaries reduce blast radius.
Final Takeaway
I used to think backup and safeguards were “nice to have when there’s time.”
Not anymore.
Automation makes you fast. It also makes failure fast.
If you’re running agent automation in production:
- Add guardrails around destructive operations
- Add circuit breakers to sync pipelines
- Keep code repos outside the automation workspace
Don’t wait for 2 a.m. to learn this the hard way.
Appendix A: Destructive Command Safety Checklist (Copy/Paste)
Before running rm, force-move, or any bulk overwrite operation:
- Print pwd and verify the current directory
- Target path must match an allowlist (e.g. ~/projects/*)
- Never run destructive commands at workspace root
- Run dry-run (or list impacted files) first
- If impact exceeds threshold (e.g. >100 files), require second confirmation
- If abnormal deletion is detected, trigger auto-sync circuit breaker (block push)
One-line policy:
Never allow “uncertain path” and “destructive command” in the same step.
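Wired together, the checklist looks roughly like this sketch (workspace path, allowlist, and threshold are illustrative; adapt them to your layout):

```python
import shutil
from pathlib import Path

WORKSPACE_ROOT = Path.home() / ".openclaw" / "workspace"
ALLOWED_ROOTS = [Path.home() / "projects"]  # hypothetical allowlist
THRESHOLD = 100

def safe_rm(target: str, confirmed: bool = False) -> list[str]:
    """Enforce the checklist before any rm -rf equivalent."""
    p = Path(target).resolve()
    print(f"about to delete under: {p}")   # print and verify the path
    if p == WORKSPACE_ROOT.resolve():      # never at workspace root
        raise PermissionError("refusing to delete the workspace root")
    if not any(r.resolve() in (p, *p.parents) for r in ALLOWED_ROOTS):
        raise PermissionError(f"{p} is not on the allowlist")
    impacted = [str(f) for f in p.rglob("*")]  # dry-run listing first
    if len(impacted) > THRESHOLD and not confirmed:
        # above the threshold: require a second, explicit confirmation
        raise RuntimeError(f"{len(impacted)} files; re-run with confirmed=True")
    shutil.rmtree(p)
    return impacted
```

Every failure path raises before anything is touched, which is the whole point: an uncertain path and a destructive command never share a step.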
