# App Fixes Design: Transactional Safety

Design for critical fixes in the Rust app to prevent data loss.

## Fix 1: Transactional rebuild_database

### Problem

See Data Loss Risks: P-002

DELETE before INSERT → on any failure the old data is already gone.

### Solution

File: `src-tauri/src/commands/project.rs`
```rust
use rusqlite::{params, Connection};

pub fn rebuild_database_from_backlog(
    conn: &mut Connection,
    project_id: &str,
    backlog_content: &str,
) -> Result<(), String> {
    // 1. Parse FIRST (validate before touching the DB)
    let parse_result = parser::parse_backlog(backlog_content)
        .map_err(|e| format!("Parse error: {}", e))?;

    // 2. Validate structure
    validator::validate_items(&parse_result.items)
        .map_err(|e| format!("Validation error: {}", e))?;
    validator::check_cycles(&parse_result.dependencies)
        .map_err(|e| format!("Cycle detected: {}", e))?;

    // 3. Start transaction (all or nothing)
    let tx = conn.transaction()
        .map_err(|e| format!("Transaction error: {}", e))?;

    // 4. Delete old data (inside the transaction)
    tx.execute(
        "DELETE FROM backlog_items WHERE project_id = ?",
        params![project_id],
    )
    .map_err(|e| format!("Delete failed: {}", e))?;
    tx.execute(
        "DELETE FROM dependencies WHERE project_id = ?",
        params![project_id],
    )
    .map_err(|e| format!("Delete failed: {}", e))?;
    tx.execute(
        "DELETE FROM acceptance_criteria WHERE project_id = ?",
        params![project_id],
    )
    .map_err(|e| format!("Delete failed: {}", e))?;

    // 5. Insert new data (inside the transaction)
    for item in parse_result.items {
        items_repo::insert_tx(&tx, project_id, &item)
            .map_err(|e| format!("Insert failed: {}", e))?;
        // On error: automatic rollback via the Drop trait
    }
    for dep in parse_result.dependencies {
        deps_repo::insert_tx(&tx, project_id, &dep)
            .map_err(|e| format!("Insert failed: {}", e))?;
    }
    for criterion in parse_result.criteria {
        criteria_repo::insert_tx(&tx, &criterion)
            .map_err(|e| format!("Insert failed: {}", e))?;
    }

    // 6. Commit (only if ALL steps succeeded)
    tx.commit()
        .map_err(|e| format!("Commit failed: {}", e))?;

    tracing::info!("✓ Database rebuilt successfully for project {}", project_id);
    Ok(())
}
```
Important: the repository functions need `_tx` variants that operate on the transaction:
```rust
// items_repo.rs
pub fn insert_tx(
    tx: &Transaction,
    project_id: &str,
    item: &BacklogItem,
) -> Result<(), rusqlite::Error> {
    tx.execute(
        "INSERT INTO backlog_items (id, project_id, title, ...) VALUES (?, ?, ?, ...)",
        params![item.id, project_id, item.title, ...],
    )?;
    Ok(())
}
```
### Benefits

- ✅ Atomicity: all or nothing
- ✅ On error: automatic rollback (via Drop; see the sketch below)
- ✅ Old data is preserved when parsing fails
- ✅ No risk of data loss
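The rollback relies on rusqlite's drop behavior: a `Transaction` that goes out of scope without `commit()` is rolled back. A minimal sketch of that guarantee (the table name and error are placeholders, not part of the fix above):

```rust
use rusqlite::Connection;

// A Transaction dropped without commit() is rolled back by rusqlite,
// which is what makes every early `?`/return in the rebuild safe.
fn drop_rolls_back(conn: &mut Connection, fail: bool) -> Result<(), rusqlite::Error> {
    let tx = conn.transaction()?;
    tx.execute("DELETE FROM backlog_items", [])?;
    if fail {
        // `tx` is dropped here without commit -> the DELETE is rolled back.
        return Err(rusqlite::Error::InvalidQuery);
    }
    tx.commit() // the DELETE is persisted only on the success path
}
```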
## Fix 2: Hash Before Write

### Problem

See Data Loss Risks: P-003

There is a window between writing the file and storing its hash → after a crash the stored hash does not match the file.

### Solution

File: `src-tauri/src/commands/items.rs`
```rust
use tauri::State;

#[tauri::command]
pub async fn save_backlog(
    project_id: String,
    app_state: State<'_, AppState>,
) -> Result<(), String> {
    let conn = app_state.db.lock().await;

    // Fetch all items
    let items = items_repo::get_all(&conn, &project_id)?;
    let preamble = get_project_preamble(&conn, &project_id)?;

    // Render to markdown
    let content = renderer::render_backlog(&items, &preamble)?;

    // ✅ FIX: Calculate hash BEFORE the write
    let hash = calculate_hash(&content);

    // Get backlog file path
    let backlog_path = get_backlog_path(&conn, &project_id)?;

    // Write file
    std::fs::write(&backlog_path, &content)
        .map_err(|e| format!("Failed to write backlog: {}", e))?;

    // Store hash (already calculated, consistent with the written content)
    project_repo::update_hash(&conn, &project_id, &hash)?;

    tracing::info!("✓ Backlog saved: {} (hash: {})", backlog_path, &hash[..8]);
    Ok(())
}
```
Even better: update the hash inside a transaction:

```rust
// Update the hash in a transaction
let tx = conn.transaction()?;
project_repo::update_hash_tx(&tx, &project_id, &hash)?;
tx.commit()?;
```
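`calculate_hash` itself is not shown here; a minimal sketch, assuming a SHA-256 content hash via the `sha2` crate (the hex encoding is one possible choice):

```rust
use sha2::{Digest, Sha256};

// Hash the rendered markdown so external changes can be detected later.
fn calculate_hash(content: &str) -> String {
    let digest = Sha256::digest(content.as_bytes());
    // Hex-encode the 32-byte digest
    digest.iter().map(|b| format!("{:02x}", b)).collect()
}
```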
### Benefits

- ✅ The hash is always consistent with the file content
- ✅ After a crash: the hash matches the content that was supposed to be written
- ✅ No permanent conflict state
## Fix 3: Cancel AutoSave on External Change

### Problem

See Race Conditions: P-005

AutoSave can overwrite external changes.

### Solution

File: `app/src/hooks/useAutoSave.ts`
```typescript
import { useCallback, useEffect, useRef, useState } from 'react';
import { listen, type UnlistenFn } from '@tauri-apps/api/event';
import { invoke } from '@tauri-apps/api/core'; // '@tauri-apps/api/tauri' on Tauri v1

export function useAutoSave(projectId: string, enabled: boolean = true) {
  const [isSaving, setIsSaving] = useState(false);
  const saveTimerRef = useRef<ReturnType<typeof setTimeout> | null>(null);
  const unlistenRef = useRef<UnlistenFn | null>(null);

  // Cancel AutoSave on external change
  useEffect(() => {
    if (!enabled) return;

    const setupListener = async () => {
      const unlisten = await listen('backlog-file-changed', () => {
        // Cancel pending AutoSave timer
        if (saveTimerRef.current) {
          clearTimeout(saveTimerRef.current);
          saveTimerRef.current = null;
          console.log('⚠️ AutoSave cancelled (external change detected)');
        }
      });
      unlistenRef.current = unlisten;
    };
    setupListener();

    return () => {
      if (unlistenRef.current) {
        unlistenRef.current();
      }
    };
  }, [enabled]);

  const triggerSave = useCallback(() => {
    if (!enabled) return;

    // Cancel previous timer
    if (saveTimerRef.current) {
      clearTimeout(saveTimerRef.current);
    }

    // Start a new 2 s timer
    saveTimerRef.current = setTimeout(async () => {
      setIsSaving(true);
      try {
        await invoke('save_backlog', { projectId });
        console.log('✓ Auto-save successful');
      } catch (error) {
        console.error('Auto-save failed:', error);
      } finally {
        setIsSaving(false);
        saveTimerRef.current = null;
      }
    }, 2000);
  }, [enabled, projectId]);

  return { triggerSave, isSaving };
}
```
### Benefits

- ✅ AutoSave never overwrites external changes
- ✅ The user sees the conflict banner and can decide how to resolve it
- ✅ No unexpected data loss
## Fix 4: Write with File Lock

### Problem

Parallel writes (app and Claude Code at the same time) → file corruption.

### Solution

File: `src-tauri/src/utils/file_ops.rs`
```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

use fs2::FileExt; // cross-platform advisory file locks

pub fn write_with_lock(path: &Path, content: &str) -> Result<(), String> {
    // Open file with write + create + truncate
    let mut file = OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .open(path)
        .map_err(|e| format!("Failed to open file: {}", e))?;

    // Exclusive lock (fs2 maps to flock on Unix and LockFileEx on Windows)
    file.lock_exclusive()
        .map_err(|e| format!("Failed to lock file: {}", e))?;

    // Write content
    file.write_all(content.as_bytes())
        .map_err(|e| format!("Failed to write: {}", e))?;
    file.flush()
        .map_err(|e| format!("Failed to flush: {}", e))?;

    // Unlock (automatic when the file is dropped, but explicit for clarity)
    file.unlock()
        .map_err(|e| format!("Failed to unlock: {}", e))?;

    Ok(())
}
```
Dependency: Add fs2 to Cargo.toml:
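A minimal entry (the version shown is an assumption; pin whatever fs2 release the project actually uses):

```toml
[dependencies]
fs2 = "0.4"  # version is an assumption
```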
Usage:
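A sketch of how `save_backlog` from Fix 2 could call it: swap the plain `std::fs::write` call for the locked variant, keeping everything around it the same (the variables are the ones already defined there).

```rust
// Inside save_backlog() from Fix 2: write through the lock instead of std::fs::write.
let backlog_path = get_backlog_path(&conn, &project_id)?;

write_with_lock(&backlog_path, &content)?;   // was: std::fs::write(&backlog_path, &content)

// Store the hash afterwards, exactly as before
project_repo::update_hash(&conn, &project_id, &hash)?;
```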
### Benefits

- ✅ Prevents parallel writes
- ✅ Claude Code only has to wait briefly (~10 ms at most)
- ✅ No file corruption

### Drawbacks

- ⚠️ Blocking operation (but only briefly)
- ⚠️ Claude Code could get a "file locked" error
Alternative: non-blocking with retry:

```rust
use std::time::Duration;

// Retry up to three times when the file is currently locked by another writer.
for attempt in 0..3 {
    match write_with_lock(&path, content) {
        Ok(_) => return Ok(()),
        Err(e) if e.contains("locked") && attempt < 2 => {
            std::thread::sleep(Duration::from_millis(100));
            continue;
        }
        Err(e) => return Err(e),
    }
}
```
## Fix 5: modified_in_app Flag Management

### Problem

The modified_in_app flag is not reset correctly → false conflicts.

### Solution

Clear the flag after a successful save:
```rust
// In save_backlog()
pub async fn save_backlog(
    project_id: String,
    app_state: State<'_, AppState>,
) -> Result<(), String> {
    // ... existing code ...

    // Write file
    write_with_lock(&backlog_path, &content)?;

    // Store hash
    project_repo::update_hash(&conn, &project_id, &hash)?;

    // ✅ FIX: Clear modified_in_app flags
    items_repo::clear_modified_flags(&conn, &project_id)
        .map_err(|e| format!("Failed to clear flags: {}", e))?;

    Ok(())
}
```
Repository function:
```rust
// items_repo.rs
pub fn clear_modified_flags(
    conn: &Connection,
    project_id: &str,
) -> Result<(), rusqlite::Error> {
    conn.execute(
        "UPDATE backlog_items SET modified_in_app = 0 WHERE project_id = ?",
        params![project_id],
    )?;
    Ok(())
}
```
### Benefits

- ✅ After a save: no false-positive conflicts
- ✅ The flag is only set on real in-app changes (see the sketch below)
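For completeness, a minimal sketch of where the flag would be set; `update_item` and its column list are assumptions for illustration, not existing code:

```rust
// items_repo.rs (hypothetical update path): mark the item as changed in the app
// so a subsequent external file change can be flagged as a potential conflict.
pub fn update_item(
    conn: &Connection,
    project_id: &str,
    item: &BacklogItem,
) -> Result<(), rusqlite::Error> {
    conn.execute(
        "UPDATE backlog_items
         SET title = ?, modified_in_app = 1
         WHERE id = ? AND project_id = ?",
        params![item.title, item.id, project_id],
    )?;
    Ok(())
}
```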
## Testing Strategy

### Unit Tests
```rust
#[test]
fn test_transactional_rebuild_rollback() {
    let mut conn = setup_test_db();
    insert_test_items(&mut conn, "proj", 10);

    // Invalid data (constraint violation: duplicate ID)
    let invalid_content = "### [F-001] Duplicate\n### [F-001] Duplicate";
    let result = rebuild_database_from_backlog(&mut conn, "proj", invalid_content);
    assert!(result.is_err());

    // Verify: rollback succeeded, old data is intact
    let count: i32 = conn.query_row(
        "SELECT COUNT(*) FROM backlog_items WHERE project_id = ?",
        params!["proj"],
        |row| row.get(0),
    ).unwrap();
    assert_eq!(count, 10);
}

#[test]
fn test_hash_before_write() {
    // Mock the filesystem to test the timing
    let _hash = calculate_hash("content");
    // Verify the hash is calculated before the write call
}
```
### Integration Tests
```typescript
// useAutoSave.test.ts
test('AutoSave cancels on external change', async () => {
  const { result } = renderHook(() => useAutoSave('proj-1', true));

  // Trigger AutoSave
  act(() => {
    result.current.triggerSave();
  });

  // Wait 1 s (before the 2 s timer fires)
  await sleep(1000);

  // Simulate external change event
  act(() => {
    mockEmit('backlog-file-changed', {});
  });

  // Wait past the AutoSave timeout
  await sleep(2000);

  // Verify: save_backlog was NOT invoked
  expect(invokeCount).toBe(0);
});
```
## Deployment Order

1. ✅ Fix 1: Transactional rebuild_database (critical)
2. ✅ Fix 2: Hash before write (important)
3. ✅ Fix 5: modified_in_app management (important)
4. ✅ Fix 3: AutoSave cancel (medium)
5. ✅ Fix 4: Write lock (optional, only if needed)