Compare commits


No commits in common. "main" and "1.2.1" have entirely different histories.
main ... 1.2.1

210 changed files with 2231 additions and 60156 deletions

View File

@@ -1,21 +0,0 @@
# Backend data (images, database)
backend/src/data/db/*.db
backend/src/data/db/*.db-*
backend/src/data/images/
backend/src/data/previews/
backend/src/data/groups/
# Node modules (will be installed in container)
backend/node_modules
frontend/node_modules
# Build outputs
frontend/build
# Dev files
.git
.gitignore
*.md
docs/
test_photos/
data-backup/

.gitignore (vendored): 13 changed lines
View File

@@ -9,11 +9,6 @@ node_modules/
.env
.env.local
# Telegram credentials
scripts/.env.telegram
scripts/node_modules/
scripts/package-lock.json
# IDE
.vscode/
.idea/
@@ -29,11 +24,3 @@ npm-debug.log*
# Build outputs
dist/
build/
# Backend data (uploaded images, database, etc.)
backend/src/data/
# Development-specific files (created by ./dev.sh)
frontend/.env
frontend/env.sh
frontend/env-config.js

View File

@@ -1,220 +0,0 @@
# API Authentication Guide
## Overview
The API uses **two different authentication mechanisms** for different access levels:
### 1. Admin routes (Session + CSRF)
- **Purpose**: Protected admin functions (deletion log, cleanup, moderation, statistics)
- **Method**: HTTP session (cookie) + CSRF token
- **Configuration**: `ADMIN_SESSION_SECRET` in `.env` (+ admin users in the DB)
### 2. Management routes (UUID token)
- **Purpose**: Self-service portal for group management
- **Method**: UUID v4 token in the URL path
- **Source**: Generated automatically on upload, stored in the DB
---
## 1. Admin Authentication
### Setup
1. **Set the session secret** (note that `.env` files do not evaluate `$(...)`, so generate the value in a shell):
```bash
echo "ADMIN_SESSION_SECRET=$(openssl rand -hex 32)" >> .env
```
> By default the server sets HTTPS-only cookies (`Secure`) in production. If your installation runs **without HTTPS** on an internal network, you can explicitly disable this behaviour with `ADMIN_SESSION_COOKIE_SECURE=false`. Use this only in trusted environments, and prefer setting the value in a local compose override file or secret ENV variables so the repo keeps the secure default of `true`.
2. **Start the backend** - the migration creates the `admin_users` table.
3. **Check the setup status**:
```bash
curl -c cookies.txt http://localhost:5000/auth/setup/status
```
4. **Create the initial admin** (only if `needsSetup=true`):
```bash
curl -X POST -H "Content-Type: application/json" \
-c cookies.txt -b cookies.txt \
-d '{"username":"admin","password":"SuperSicher123!"}' \
http://localhost:5000/auth/setup/initial-admin
```
5. **Log in for subsequent sessions**:
```bash
curl -X POST -H "Content-Type: application/json" \
-c cookies.txt -b cookies.txt \
-d '{"username":"admin","password":"SuperSicher123!"}' \
http://localhost:5000/auth/login
```
6. **Fetch a CSRF token** (for mutating requests):
```bash
curl -b cookies.txt http://localhost:5000/auth/csrf-token
```
### Usage
All `/api/admin/*` and `/api/system/*` routes require:
1. The browser automatically sends the session cookie (`sid`).
2. For POST/PUT/PATCH/DELETE, the `X-CSRF-Token` header must be set.
Example:
```bash
CSRF=$(curl -sb cookies.txt http://localhost:5000/auth/csrf-token | jq -r '.csrfToken')
curl -X PATCH \
-H "Content-Type: application/json" \
-H "X-CSRF-Token: $CSRF" \
-b cookies.txt \
-d '{"approved":true}' \
http://localhost:5000/api/admin/groups/abc123/approve
```
### Protected Endpoints (excerpt)
| Endpoint | Method | Description |
|----------|--------|--------------|
| `/api/admin/deletion-log` | GET | Deletion log entries |
| `/api/admin/deletion-log/csv` | GET | Deletion log as CSV |
| `/api/admin/cleanup/run` | POST | Trigger cleanup manually |
| `/api/admin/cleanup/status` | GET | Cleanup status |
| `/api/admin/rate-limiter/stats` | GET | Rate-limiter statistics |
| `/api/admin/groups` | GET | All groups (moderation) |
| `/api/admin/groups/:id/approve` | PATCH | Approve group |
| `/api/admin/groups/:id` | DELETE | Delete group |
| `/api/system/migration/*` | POST | Migration tools |
### Error Codes
| Status | Meaning |
|--------|-----------|
| `401` | Session missing or expired |
| `403` | CSRF invalid or user lacks the admin role |
| `419` | (optional) Session was invalidated |
---
## 2. Management Authentication
### Setup
**No setup required!** Tokens are generated automatically.
### How It Works
1. **On upload**, a UUID v4 token is generated automatically (see the sketch after this list)
2. **The token is stored** in the DB (column: `management_token`)
3. **The token is returned** in the upload response
4. **The user receives a link** such as: `https://example.com/manage/{token}`
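In code, steps 1-4 amount to only a few lines on the server. A minimal sketch, assuming Node 16+ (`crypto.randomUUID()`) and the `management_token` column described above; the function name and the `baseUrl` parameter are illustrative:
```javascript
const crypto = require('crypto');

// Called during upload: generate and persist the management token (sketch)
async function attachManagementToken(db, groupId, baseUrl) {
  const token = crypto.randomUUID(); // UUID v4
  await db.run('UPDATE groups SET management_token = ? WHERE group_id = ?', [token, groupId]);
  return `${baseUrl}/manage/${token}`; // included in the upload response for the user
}
```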
### Usage
The token is passed **in the URL path** (not in a header):
```bash
# Validate the token and load data
GET /api/manage/550e8400-e29b-41d4-a716-446655440000
# Upload images
POST /api/manage/550e8400-e29b-41d4-a716-446655440000/images
# Delete the group
DELETE /api/manage/550e8400-e29b-41d4-a716-446655440000
```
### Protected Endpoints
| Endpoint | Method | Description |
|----------|--------|--------------|
| `/api/manage/:token` | GET | Load group data |
| `/api/manage/:token/consents` | PUT | Social media consents |
| `/api/manage/:token/metadata` | PUT | Edit metadata |
| `/api/manage/:token/images` | POST | Add images |
| `/api/manage/:token/images/:imageId` | DELETE | Delete image |
| `/api/manage/:token` | DELETE | Delete group |
### Security Features
- **Token format validation**: Only valid UUID v4 tokens are accepted (see the sketch after this list)
- **Rate limiting**: Protection against brute force
- **Audit logging**: All actions are logged
- **DB check**: The token must exist in the DB
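A minimal sketch of the token format check as Express middleware; the regex accepts UUID v4 only, so malformed tokens are rejected before any DB lookup (the middleware name is illustrative, not the project's actual code):
```javascript
const UUID_V4 = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

function validateManagementToken(req, res, next) {
  if (!UUID_V4.test(req.params.token)) {
    // Same status as an unknown token, so the format cannot be probed separately
    return res.status(404).json({ error: 'Not found' });
  }
  next();
}
```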
### Error Codes
| Status | Meaning |
|--------|-----------|
| `404` | Token not found or group deleted |
| `429` | Rate limit exceeded |
---
## Testing
### Unit Tests
```bash
npm test -- tests/unit/auth.test.js
```
### Integration Tests
```bash
# Test admin auth
npm test -- tests/api/admin-auth.test.js
# All API tests
npm test
```
### Manual Testing
1. **Log in**:
```bash
curl -c cookies.txt -X POST -H "Content-Type: application/json" \
-d '{"username":"admin","password":"Secret123"}' \
http://localhost:5000/auth/login
```
2. **Fetch the CSRF token**:
```bash
CSRF=$(curl -sb cookies.txt http://localhost:5000/auth/csrf-token | jq -r '.csrfToken')
```
3. **Call an admin route**:
```bash
curl -b cookies.txt -H "X-CSRF-Token: $CSRF" http://localhost:5000/api/admin/deletion-log
# → 200 OK
```
4. **Without a session** (e.g. after deleting the cookies), the request returns `403 SESSION_REQUIRED`.
---
## Production Checklist
- [ ] Generate `ADMIN_SESSION_SECRET` securely (>= 32 random bytes)
- [ ] Do not commit `.env` to Git (already in `.gitignore`)
- [ ] Use HTTPS (TLS/SSL) so cookies can be set with `Secure` (if not possible: set `ADMIN_SESSION_COOKIE_SECURE=false`, but only on trusted networks)
- [ ] Place the session store on a persistent volume
- [ ] Monitor rate limiting & audit logs
- [ ] Document admin user management (on-/offboarding)
---
## Security Notes
### Session Secret Rotation
1. Plan a maintenance window (all sessions will be invalidated)
2. Generate a new `ADMIN_SESSION_SECRET`
3. Update `.env` and restart the backend
### Management Tokens
- Tokens are **valid permanently** until the group is deleted
- If you suspect a leak: delete the group (this also deletes the token)
- The token format (UUID v4, 122 random bits) makes brute force impractical
### Best Practices
- Never commit admin secrets to the frontend or to repos
- Deliver admin session cookies over HTTPS only
- Keep rate limiting active for both auth types
- Review audit logs regularly for anomalies
- Protect session-store backups (they contain user IDs)

View File

@@ -1,601 +1,6 @@
# Changelog
## [2.0.1] - 2025-12-01
## [Unreleased] - Branch: upgrade/deps-react-node-20251028
## [2.0.0] - 2025-11-30
### ✨ Features
- ENV-Struktur massiv vereinfacht (Phase 6)
- Add consent change and deletion notifications (Phase 4)
- Add upload notifications to Telegram Bot (Phase 3)
- Add TelegramNotificationService (Phase 2)
- Add Telegram Bot standalone test (Phase 1)
- Add Telegram notification feature request and improve prod.sh Docker registry push
### 🔧 Chores
- Add package.json for Telegram test scripts
## [1.10.2] - 2025-11-29
### ✨ Features
- Auto-push releases with --follow-tags
## [1.10.1] - 2025-11-29
### 🐛 Fixes
- Update Footer.js version to 1.10.0 and fix sync-version.sh regex
### ♻️ Refactoring
- Use package.json version directly in Footer instead of env variables
## [1.10.0] - 2025-11-29
### ✨ Features
- Enable drag-and-drop reordering in ModerationGroupImagesPage
- Error handling system and animated error pages
### ♻️ Refactoring
- Extract ConsentFilter and StatsDisplay components from ModerationGroupsPage
- Consolidate error pages into single ErrorPage component
- Centralized styling with CSS and global MUI overrides
### 🔧 Chores
- Improve release script with tag-based commit detection
## Public/Internal Host Separation (November 25, 2025)
### 🌐 Public/Internal Host Separation (November 25, 2025)
#### Backend
- ✅ **Host-Based Access Control**: Implemented `hostGate` middleware for subdomain-based feature separation
- Public host blocks internal routes: `/api/admin/*`, `/api/groups`, `/api/slideshow`, `/api/social-media/*`, `/api/auth/*`
- Public host allows: `/api/upload`, `/api/manage/:token`, `/api/previews`, `/api/consent`, `/api/social-media/platforms`
- Host detection via `X-Forwarded-Host` (nginx-proxy-manager) or `Host` header (see the sketch after this section)
- Environment variables: `PUBLIC_HOST`, `INTERNAL_HOST`, `ENABLE_HOST_RESTRICTION`, `TRUST_PROXY_HOPS`
- ✅ **Rate Limiting for Public Host**: IP-based upload rate limiting
- `publicUploadLimiter`: 20 uploads per hour for public host
- Internal host: No rate limits
- In-memory tracking with automatic cleanup
- ✅ **Audit Log Enhancement**: Extended audit logging with source tracking
- New columns: `source_host`, `source_type` in `management_audit_log`
- Tracks: `req.requestSource` (public/internal) for all management actions
- Database migration 009: Added source tracking columns
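The bullets above name the mechanism only at a high level; the following is a minimal sketch of such a middleware, assuming Express and the environment variables listed above (the blocked-prefix list is illustrative, not the project's exact configuration):
```javascript
// hostGate sketch: block internal routes when the request comes in via the public host
const BLOCKED_ON_PUBLIC = ['/api/admin', '/api/groups', '/api/slideshow', '/api/auth'];

function hostGate(req, res, next) {
  // Prefer X-Forwarded-Host (set by nginx-proxy-manager), fall back to Host
  const host = (req.get('X-Forwarded-Host') || req.get('Host') || '').split(':')[0];
  const isPublic = host === process.env.PUBLIC_HOST;
  if (isPublic && BLOCKED_ON_PUBLIC.some((prefix) => req.path.startsWith(prefix))) {
    return res.status(404).json({ error: 'Not found' });
  }
  req.requestSource = isPublic ? 'public' : 'internal'; // used by the audit log
  next();
}
```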
#### Frontend
- ✅ **Host Detection Utility**: Runtime host detection for feature flags
- `hostDetection.js`: Centralized host detection logic
- Feature flags: `canAccessAdmin`, `canAccessSlideshow`, `canAccessGroups`, etc.
- Runtime config from `window._env_.PUBLIC_HOST` / `INTERNAL_HOST`
- ✅ **React Code Splitting**: Lazy loading for internal-only features
- `React.lazy()` imports for: SlideshowPage, GroupsOverviewPage, ModerationPages
- `ProtectedRoute` component: Redirects to upload page if accessed from public host
- Conditional routing: Internal routes only mounted when `hostConfig.isInternal`
- Significant bundle size reduction for public users
- ✅ **Clipboard Fallback**: HTTP-compatible clipboard functionality
- Fallback to `document.execCommand('copy')` when `navigator.clipboard` unavailable (see the sketch after this section)
- Fixes: "Cannot read properties of undefined (reading 'writeText')" on HTTP
- Works in non-HTTPS environments (local testing, HTTP-only deployments)
- ✅ **404 Page Enhancement**: Host-specific error messaging
- Public host: Shows "Function not available" message with NavbarUpload
- Internal host: Shows standard 404 with full Navbar
- Conditional navbar rendering based on `hostConfig.isPublic`
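The clipboard fallback follows a common pattern; a minimal sketch (plain DOM, no framework specifics assumed):
```javascript
// Copy text with a fallback for non-secure (HTTP) contexts
async function copyText(text) {
  if (navigator.clipboard && window.isSecureContext) {
    return navigator.clipboard.writeText(text);
  }
  // Fallback: hidden textarea + execCommand('copy')
  const ta = document.createElement('textarea');
  ta.value = text;
  ta.style.position = 'fixed'; // keep the page from scrolling
  ta.style.opacity = '0';
  document.body.appendChild(ta);
  ta.select();
  try {
    document.execCommand('copy');
  } finally {
    document.body.removeChild(ta);
  }
}
```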
#### Configuration
- ✅ **Environment Setup**: Complete configuration for dev/prod environments
- `docker/dev/docker-compose.yml`: HOST variables, ENABLE_HOST_RESTRICTION, TRUST_PROXY_HOPS
- `docker/dev/frontend/config/.env`: PUBLIC_HOST, INTERNAL_HOST added
- Frontend `.env.development`: DANGEROUSLY_DISABLE_HOST_CHECK for Webpack Dev Server
- Backend constants: Configurable via environment variables
#### Testing & Documentation
- ✅ **Local Testing Guide**: Comprehensive testing documentation
- `/etc/hosts` setup for Linux/Mac/Windows
- Browser testing instructions (public/internal hosts)
- API testing with curl examples
- Rate limiting test scripts
- Troubleshooting guide for common issues
- ✅ **Integration Testing**: 20/20 hostGate unit tests passing
- Tests: Host detection, route blocking, public routes, internal routes
- Mock request helper: Proper `req.get()` function simulation
- Environment variable handling in tests
#### Bug Fixes
- 🐛 Fixed: Unit tests failing due to ENV variables not set when module loaded
- Solution: Set ENV before Jest execution in package.json test script
- 🐛 Fixed: `req.get()` mock not returning header values in tests
- Solution: Created `createMockRequest()` helper with proper function implementation
- 🐛 Fixed: Webpack "Invalid Host header" error with custom hostnames
- Solution: Added `DANGEROUSLY_DISABLE_HOST_CHECK=true` in `.env.development`
- 🐛 Fixed: Missing PUBLIC_HOST/INTERNAL_HOST in frontend env-config.js
- Solution: Added variables to `docker/dev/frontend/config/.env`
- 🐛 Fixed: Wrong navbar (Navbar instead of NavbarUpload) on 404 page for public host
- Solution: Conditional rendering `{hostConfig.isPublic ? <NavbarUpload /> : <Navbar />}`
- 🐛 Fixed: "Plattformen konnten nicht geladen werden" in UUID Management mode
- Solution: Added `/api/social-media/platforms` to PUBLIC_ALLOWED_ROUTES
#### Technical Details
- **Backend Changes**:
- New files: `middlewares/hostGate.js`, `middlewares/rateLimiter.js` (publicUploadLimiter)
- Modified files: `server.js` (hostGate registration), `auditLog.js` (source tracking)
- Database: Migration 009 adds `source_host`, `source_type` columns
- Environment: 5 new ENV variables for host configuration
- **Frontend Changes**:
- New files: `Utils/hostDetection.js` (214 lines)
- Modified files: `App.js` (lazy loading + ProtectedRoute), `404Page.js` (conditional navbar)
- Modified files: `MultiUploadPage.js`, `UploadSuccessDialog.js` (clipboard fallback)
- Modified files: `env-config.js`, `public/env-config.js` (HOST variables)
- New file: `.env.development` (Webpack host check bypass)
- **Production Impact**:
- nginx-proxy-manager setup required for subdomain routing
- Must forward `X-Forwarded-Host` header to backend
- Set `TRUST_PROXY_HOPS=1` when behind nginx-proxy-manager
- Public host users get 96% smaller JavaScript bundle (code splitting)
---
## feature/security
### 🔐 Session-Based Admin Authentication & Multi-Admin Support (November 23, 2025)
#### Backend
- ✅ **Server-Side Sessions + CSRF**: Replaced Bearer-token auth with HttpOnly session cookies backed by SQLite, added `requireAdminAuth` + `requireCsrf` middlewares, and exposed `GET /auth/csrf-token` for clients.
- ✅ **New Auth Lifecycle**: Added `GET /auth/setup/status`, `POST /auth/setup/initial-admin`, `POST /auth/login`, `POST /auth/logout`, `POST /auth/change-password`, and `POST /api/admin/users` to support onboarding, login, rotation, and creating additional admins.
- ✅ **Admin Directory**: Introduced `admin_users` table, repository, and `AdminAuthService` (hash/verify, forced password change flag, audit-friendly responses) plus Jest coverage for the new flows.
- ✅ **OpenAPI & Swagger Stability**: Regenerate spec on dev start only, ignore `docs/openapi.json` in nodemon watches, and expose Swagger UI reliably at `http://localhost:5001/api/docs/`.
#### Frontend
- ✅ **Admin Session Context**: New `AdminSessionProvider` manages setup/login state, CSRF persistence, and guards moderation routes via `AdminSessionGate`.
- ✅ **Force Password Change UX**: Added `ForcePasswordChangeForm`, change-password API helper, and conditional gate that blocks moderation access until the first login password is rotated.
- ✅ **Management UI Updates**: Moderation/management pages now assume cookie-based auth, automatically attach CSRF headers, and gracefully handle session expiry.
#### Tooling & Scripts
- ✅ **API-Driven CLI**: Replaced the legacy Node-only helper with `scripts/create_admin_user.sh`, which can bootstrap the first admin or log in via API to add additional admins from any Linux machine.
- ✅ **Docker & Docs Alignment**: Updated dev/prod compose files, Nginx configs, and `README*`/`AUTHENTICATION.md`/`frontend/MIGRATION-GUIDE.md` to describe the new security model and CLI workflow.
- ✅ **Feature Documentation**: Added `FeatureRequests/FEATURE_PLAN-security.md` + `FEATURE_TESTPLAN-security.md` outlining design, validation steps, and residual follow-ups.
---
## feature/SocialMedia
### 🧪 Comprehensive Test Suite & Admin API Security (November 16, 2025)
#### Testing Infrastructure
- ✅ **Jest + Supertest Framework**: 45 automated tests covering all API endpoints
- Unit tests: 5 tests for authentication middleware (100% coverage)
- Integration tests: 40 tests for API endpoints
- Test success rate: 100% (45/45 passing)
- Execution time: ~10 seconds for full suite
- ✅ **Test Organization**:
- `tests/unit/` - Unit tests (auth.test.js)
- `tests/api/` - Integration tests (admin, consent, migration, upload)
- `tests/setup.js` - Global configuration with singleton server pattern
- `tests/testServer.js` - Test server helper utilities
- ✅ **Test Environment**:
- In-memory SQLite database (`:memory:`) for isolation
- Temporary upload directories (`/tmp/test-image-uploader/`)
- Singleton server pattern for fast test execution
- Automatic cleanup after test runs
- `NODE_ENV=test` environment detection
- ✅ **Code Coverage**:
- Statements: 26% (above 20% threshold)
- Branches: 15%
- Functions: 16%
- Lines: 26%
#### Admin API Authentication
- ✅ **Bearer Token Security**: Protected all admin and dangerous system endpoints
- `requireAdminAuth` middleware for Bearer token validation
- Environment variable: `ADMIN_API_KEY` for token configuration
- Protected routes: All `/api/admin/*`, `/api/system/migration/migrate`, `/api/system/migration/rollback`
- HTTP responses: 403 for invalid/missing tokens, 500 if ADMIN_API_KEY not configured
- ✅ **Authentication Documentation**:
- Complete setup guide in `AUTHENTICATION.md`
- Example token generation commands (openssl, Node.js)
- curl and Postman usage examples
- Security best practices and production checklist
#### API Route Documentation
- ✅ **Single Source of Truth**: `backend/src/routes/routeMappings.js`
- Centralized route configuration for server and OpenAPI generation
- Comprehensive API overview in `backend/src/routes/README.md`
- Critical Express routing order documented and enforced
- ✅ **Route Order Fix**: Fixed Express route matching bug
- Problem: Generic routes (`/groups/:groupId`) matched before specific routes (`/groups/by-consent`)
- Solution: Mount consent router before admin router on `/api/admin` prefix
- Documentation: Added comments explaining why order matters
- ✅ **OpenAPI Auto-Generation**:
- Automatic spec generation on backend start (dev mode)
- Swagger UI available at `/api/docs/` in development
- Skip generation in test and production modes
#### Bug Fixes
- 🐛 Fixed: SQLite connection callback not properly awaited (caused test hangs)
- Wrapped `new sqlite3.Database()` in Promise for proper async/await
- 🐛 Fixed: Upload endpoint file validation checking `req.files.file` before `req.files` existence
- Added `!req.files` check before accessing `.file` property
- 🐛 Fixed: Test uploads failing with EACCES permission denied
- Use `/tmp/` directory in test mode instead of `data/images/`
- Dynamic path handling with `path.isAbsolute()` check
- 🐛 Fixed: Express route order causing consent endpoints to return 404
- Reordered routers: consent before admin in routeMappings.js
#### Frontend Impact
**⚠️ Action Required**: Frontend needs updates for new authentication system
1. **Admin API Calls**: Add Bearer token header
```javascript
// e.g. with fetch (ADMIN_API_KEY from the frontend env config):
fetch('/api/admin/deletion-log', {
  headers: { 'Authorization': `Bearer ${ADMIN_API_KEY}` }
});
```
2. **Route Verification**: Check all API paths against `routeMappings.js`
- Consent routes: `/api/admin/groups/by-consent`, `/api/admin/consents/export`
- Migration routes: `/api/system/migration/*` (not `/api/migration/*`)
3. **Error Handling**: Handle 403 responses for missing/invalid authentication
4. **Environment Configuration**: Add `REACT_APP_ADMIN_API_KEY` to frontend `.env`
#### Technical Details
- **Backend Changes**:
- New files: `middlewares/auth.js`, `tests/` directory structure
- Modified files: All admin routes now protected, upload.js validation improved
- Database: Promisified SQLite connection in DatabaseManager.js
- Constants: Test-mode path handling in constants.js
- **Configuration Files**:
- `jest.config.js`: Test configuration with coverage thresholds
- `.env.example`: Added ADMIN_API_KEY documentation
- `package.json`: Added Jest and Supertest dependencies
---
### 🎨 Modular UI Architecture (November 15, 2025)
#### Features
- ✅ **Reusable Component System**: Created modular components for all pages
- `ConsentManager.js` (263 lines): Workshop + Social Media consents with edit/upload modes
- `GroupMetadataEditor.js` (146 lines): Metadata editing with edit/upload/moderate modes
- `ImageDescriptionManager.js` (175 lines): Batch image descriptions with manage/moderate modes
- `DeleteGroupButton.js` (102 lines): Standalone group deletion component
- ✅ **Multi-Mode Support**: Components adapt behavior based on context
- `mode="upload"`: External state, no save buttons (MultiUploadPage)
- `mode="edit"`: Management API endpoints (ManagementPortalPage)
- `mode="moderate"`: Admin API endpoints (ModerationGroupImagesPage)
- ✅ **Code Reduction**: Massive reduction in code duplication
- ManagementPortalPage: 1000→400 lines (-60%)
- ModerationGroupImagesPage: 281→107 lines (-62%)
- MultiUploadPage: Refactored to use modular components
- Net result: +288 lines added, -515 lines removed = **-227 lines total**
#### UI Consistency
- 🎨 **Design System**: Established consistent patterns across all pages
- Paper boxes with headings inside (not outside)
- HTML `<button>` with CSS classes instead of Material-UI Button
- Material-UI Alert for inline feedback (SweetAlert2 only for destructive actions)
- Icons: 💾 save, ↩ discard, 🗑️ delete, 📥 download
- Individual save/discard per component section
#### Bug Fixes
- 🐛 Fixed: Image descriptions not saving during upload (preview ID → filename mapping)
- 🐛 Fixed: FilterListIcon import missing in ModerationGroupsPage
- 🐛 Fixed: Button styles inconsistent across pages
#### Technical Details
- **Frontend Changes**:
- New files: 4 modular components (686 lines)
- Refactored files: 7 pages with consistent patterns
- State management: Deep copy pattern, JSON comparison, set-based comparison
- API integration: Mode-based endpoint selection
---
### 🔑 Self-Service Management Portal (November 11-14, 2025)
#### Backend Features (Phase 2 Backend - Nov 11)
- ✅ **Management Token System**: UUID v4 token generation and validation
- Tokens stored in `groups.management_token` column
- Token-based authentication for all management operations
- Format validation (UUID v4 regex)
- ✅ **Management APIs**: Complete self-service functionality
- `GET /api/manage/:token` - Load group data
- `PUT /api/manage/:token/consents` - Revoke/restore consents
- `PUT /api/manage/:token/metadata` - Edit title/description
- `PUT /api/manage/:token/images/descriptions` - Batch update descriptions
- `POST /api/manage/:token/images` - Add images (max 50 per group)
- `DELETE /api/manage/:token/images/:imageId` - Delete single image
- `DELETE /api/manage/:token` - Delete entire group
- ✅ **Security Features**:
- Rate limiting: 10 requests/hour per IP (in-memory; see the sketch after this section)
- Brute-force protection: 20 failed attempts → 24h IP ban
- Management audit log: All actions tracked in `management_audit_log` table
- Token masking: Only first 8 characters logged
- ✅ **Database Migration 007**: Management audit log table
- Tracks: action, success, error_message, ip_address, user_agent
- Indexes for performance: group_id, action, ip_address, created_at
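As a rough illustration of the in-memory limiter described above (a sketch; the window and limit match the numbers in this section, all names are illustrative):
```javascript
const hits = new Map(); // ip -> timestamps of recent requests

function allowRequest(ip, limit = 10, windowMs = 60 * 60 * 1000) {
  const now = Date.now();
  const recent = (hits.get(ip) || []).filter((t) => now - t < windowMs);
  if (recent.length >= limit) return false; // caller responds with 429
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```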
#### Frontend Features (Phase 2 Frontend - Nov 13-14)
- ✅ **Management Portal Page**: Full-featured user interface at `/manage/:token`
- Token validation with error handling
- Consent management UI (revoke/restore)
- Metadata editing UI
- Image upload/delete UI
- Group deletion UI (with confirmation)
- ✅ **Component Reuse**: ConsentCheckboxes with mode support
- `mode="upload"`: Upload page behavior
- `mode="manage"`: Management portal behavior
- Eliminates ~150 lines of duplicated code
- ✅ **Upload Success Integration**: Management link prominently displayed
- Copy-to-clipboard functionality
- Security warning about safe storage
- Email link for social media post deletion requests
---
### 🔐 Social Media Consent Management (November 9-10, 2025)
#### Backend Features (Phase 1 Backend - Nov 9)
- ✅ **Database Migrations**:
- Migration 005: Added consent fields to `groups` table
* `display_in_workshop` (BOOLEAN, NOT NULL, default 0)
* `consent_timestamp` (DATETIME)
* `management_token` (TEXT, UNIQUE) - for Phase 2
- Migration 006: Social media platform system
* `social_media_platforms` table (configurable platforms)
* `group_social_media_consents` table (per-group, per-platform consents)
* Revocation tracking: `revoked`, `revoked_timestamp` columns
- GDPR-compliant: Old groups keep `display_in_workshop = 0` (no automatic consent)
- ✅ **API Endpoints**:
- `GET /api/social-media/platforms` - List active platforms (Facebook, Instagram, TikTok)
- `POST /api/groups/:groupId/consents` - Save consents (batch operation)
- `GET /api/groups/:groupId/consents` - Load consent status
- `GET /api/admin/groups/by-consent` - Filter groups by consent (all, workshop, platform-specific)
- `GET /api/admin/consents/export` - Export consent data (CSV/JSON format)
- ✅ **Upload Validation**: 400 error if `display_in_workshop` not set to true
- ✅ **Repositories**:
- `SocialMediaRepository.js`: Platform & consent management
- Extended `GroupRepository.js`: Consent filtering queries
#### Frontend Features (Phase 1 Frontend - Nov 10)
- ✅ **ConsentCheckboxes Component**: GDPR-compliant consent UI
- Workshop consent (mandatory, cannot upload without)
- Social media consents (optional, per-platform checkboxes)
- Informative tooltips explaining usage
- Legal notice about moderation and withdrawal rights
- ✅ **ConsentBadges Component**: Visual consent status indicators
- Icons: 🏭 Workshop, 📱 Facebook, 📷 Instagram, 🎵 TikTok
- Tooltips with consent details and timestamps
- Filtering support for revoked consents
- ✅ **Moderation Panel Updates**:
- Consent filter dropdown (All, Workshop-only, per-platform)
- Export button for CSV/JSON download
- Consent badges on each group card
- In-memory filtering (loads all groups, filters client-side)
- ✅ **Upload Success Dialog**: Group ID display for consent withdrawal reference
#### Testing Results (Nov 10)
- ✅ Upload with/without workshop consent
- ✅ Social media consent persistence
- ✅ Filter functionality (All: 76, Workshop: 74, Facebook: 2)
- ✅ CSV export with proper formatting
- ✅ Badge icons and tooltips
- ✅ Migration 005 & 006 auto-applied on startup
- ✅ GDPR validation: 72 old groups with display_in_workshop = 0
---
## Preload Image
### 🚀 Slideshow Optimization (November 2025)
#### Features
- ✅ **Image Preloading**: Intelligent preloading of next 2-3 images
- Custom hook `useImagePreloader.js` for background image loading (see the sketch after this list)
- Eliminates visible loading delays during slideshow transitions
- Cache management with LRU strategy (max 10 images)
- 3-second timeout for slow connections with graceful fallback
- ✅ **Chronological Sorting**: Groups now display in chronological order
- Primary sort: Year (ascending, oldest first)
- Secondary sort: Upload date (ascending)
- Sequential group transitions instead of random
- Consistent viewing experience across sessions
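A rough sketch of the preloading idea (assuming the browser `Image()` API; the real hook additionally manages the LRU cache):
```javascript
// Preload one image with a timeout and graceful fallback (sketch)
function preloadImage(url, timeoutMs = 3000) {
  return new Promise((resolve) => {
    const img = new Image();
    const timer = setTimeout(() => resolve(null), timeoutMs); // slow connection: give up quietly
    img.onload = () => { clearTimeout(timer); resolve(img); };
    img.onerror = () => { clearTimeout(timer); resolve(null); };
    img.src = url;
  });
}

// Preload the next 2-3 slides in the background
const preloadNext = (urls, index) =>
  Promise.all(urls.slice(index + 1, index + 4).map((u) => preloadImage(u)));
```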
#### Technical Details
- **Frontend Changes**:
- New file: `frontend/src/hooks/useImagePreloader.js`
- Modified: `frontend/src/Components/Pages/SlideshowPage.js`
- Removed random shuffle algorithm
- Added predictive image loading with Image() API
- Debug logging in development mode
#### Bug Fixes
- 🐛 Fixed: Duplicate image display issue in slideshow (network latency)
- 🐛 Fixed: Flickering transitions between images
- 🐛 Fixed: Loading delays visible to users on slower connections
#### Performance
- ⚡ 0ms load time for pre-cached images (vs. 200-1500ms before)
- ⚡ Seamless transitions with no visual artifacts
- ⚡ Better UX on production servers with slower internet
---
## Delete Unproved Groups
### ✨ Automatic Cleanup Feature (November 2025)
#### Backend
- ✅ **Database Schema**: New `deletion_log` table for audit trail
- Columns: group_id, year, image_count, upload_date, deleted_at, deletion_reason, total_file_size
- Performance indexes: idx_groups_cleanup, idx_groups_approved, idx_deletion_log_deleted_at
- Automatic schema migration on server startup
- ✅ **Services**: New cleanup orchestration layer
- `GroupCleanupService.js` - Core cleanup logic with 7-day threshold
- `SchedulerService.js` - Cron job scheduler (daily at 10:00 AM Europe/Berlin)
- Complete file deletion: originals + preview images
- Comprehensive logging with statistics
- ✅ **Repositories**: Extended data access layer
- `DeletionLogRepository.js` - CRUD operations for deletion history
- `GroupRepository.js` - New methods:
- `findUnapprovedGroupsOlderThan()` - Query old unapproved groups
- `getGroupStatistics()` - Gather metadata before deletion
- `deleteGroupCompletely()` - Transactional deletion with CASCADE
- ✅ **API Endpoints**: Admin API routes (`/api/admin/*`)
- `GET /deletion-log?limit=N` - Recent deletions with pagination
- `GET /deletion-log/all` - Complete deletion history
- `GET /deletion-log/stats` - Statistics with formatted file sizes
- `POST /cleanup/trigger` - Manual cleanup trigger (testing)
- `GET /cleanup/preview` - Dry-run preview of deletions
- ✅ **Dependencies**: Added `node-cron@3.0.3` for scheduled tasks
#### Frontend
- ✅ **Components**: New deletion log display
- `DeletionLogSection.js` - Statistics cards + history table
- Statistics: Total groups/images deleted, storage freed
- Table: Group ID, year, image count, timestamps, reason, file size
- Toggle: "Last 10" / "All" entries with dynamic loading
- ✅ **Moderation Page**: Integrated cleanup features
- **Countdown Widget**: Shows "⏰ X Tage bis Löschung" on pending groups
- **Approval Feedback**: SweetAlert2 success/error notifications
- **Deletion Log**: Integrated at bottom of moderation interface
- Visual indicators for pending vs. approved status
- ✅ **Dependencies**: Added `sweetalert2` for user feedback
#### Infrastructure
- ✅ **Nginx Configuration**: Updated routes for admin API
- Dev + Prod configs updated
- `/api/admin` proxy to backend (no separate auth - protected by /moderation access)
- Proper request forwarding with headers
#### Testing
- ✅ **Test Tools**: Comprehensive testing utilities
- `tests/test-cleanup.sh` - Interactive bash test script
- `backend/src/scripts/test-cleanup.js` - Node.js test alternative
- Features: Backdate groups, preview cleanup, trigger manually, view logs
- `tests/TESTING-CLEANUP.md` - Complete testing guide with 6 scenarios
#### Documentation
- ✅ **README.md**: Updated with automatic cleanup features
- ✅ **TESTING-CLEANUP.md**: Comprehensive testing guide
- ✅ **Code Comments**: Detailed inline documentation
---
## Image Description
### ✨ Image Descriptions Feature (November 2025)
#### Backend
- ✅ **Database Migration**: Added `image_description` column to `images` table (TEXT, nullable)
- Automatic migration on server startup
- Index created for performance optimization
- Backward compatible with existing images
- ✅ **Repository Layer**: Extended `GroupRepository.js` with description methods
- `updateImageDescription()` - Update single image description
- `updateBatchImageDescriptions()` - Batch update multiple descriptions
- Validation: Max 200 characters enforced
- `createGroup()` now accepts `imageDescription` field
- ✅ **API Endpoints**: New REST endpoints for description management
- `PATCH /groups/:groupId/images/:imageId` - Update single description
- `PATCH /groups/:groupId/images/batch-description` - Batch update
- Server-side validation (200 char limit)
- Error handling and detailed responses
- ✅ **Upload Integration**: Batch upload now supports descriptions
- `POST /api/upload/batch` accepts `descriptions` array
- Descriptions matched to images by filename
- Automatic truncation if exceeding limit
#### Frontend
- ✅ **Core Components**: Enhanced `ImageGalleryCard` and `ImageGallery`
- **Edit Mode**: Toggle button to activate description editing
- **Textarea**: Multi-line input with character counter (0/200)
- **Validation**: Real-time character limit enforcement
- **Placeholder**: Original filename shown as hint
- **Display Mode**: Italicized description display when not editing
- ✅ **Upload Flow**: Extended `MultiUploadPage.js`
- Edit mode for adding descriptions during upload
- State management for descriptions per image
- Descriptions sent to backend with upload
- Clean up on form reset
- ✅ **Moderation**: Enhanced `ModerationGroupImagesPage.js`
- Edit mode for existing group images
- Load descriptions from server
- Batch update API integration
- Save button with success feedback
- Optimistic UI updates
- ✅ **Slideshow**: Display descriptions during presentation
- Centered overlay below image
- Semi-transparent background with blur effect
- Responsive sizing (80% max width)
- Conditional rendering (only if description exists)
- ✅ **Public View**: Show descriptions in `PublicGroupImagesPage.js`
- Display in single-image gallery mode
- Italicized style for visual distinction
- No edit functionality (read-only)
#### Styling
- ✅ **CSS Additions**: New styles for edit mode and descriptions
- `.image-description-edit` - Edit textarea container
- `.image-description-edit textarea` - Input field styles
- `.char-counter` - Character counter with limit warning
- `.image-description-display` - Read-only description display
- Responsive design for mobile devices
#### Testing & Quality
- ✅ All phases implemented and committed
- ⏳ Integration testing pending
- ⏳ User acceptance testing pending
---
## Upgrade Deps: React & Node (October 2025)
### 🎯 Major Framework Upgrades (October 2025)

View File

@@ -1,480 +0,0 @@
# Feature Plan: EXIF Data Extraction & Capture-Date Sorting
**Status**: In planning
**Branch**: `feature/ExifExtraction`
**Created**: November 9, 2025
## Goal
Automatically extract EXIF metadata (in particular the capture date) from uploaded images and use this information for a more precise chronological sorting of the slideshow groups.
## Motivation
### Current Behaviour
- Slideshow groups are sorted by **year** (user input) and **upload date**
- The capture date of the images is not taken into account
- With images from different years in one group, the sort order is unclear
### Desired Behaviour
- Automatic extraction of the **capture date** (EXIF `DateTimeOriginal`) on upload
- Use of the **earliest capture date** in a group as the primary sort criterion
- Fallback to year/upload date when EXIF data is missing
## Use Cases
### Use Case 1: Digitizing old photos
**Scenario**: A user scans old photos from 1995 and uploads them in 2025
- **Problem**: The group would currently be sorted into 2025
- **Solution**: Use the EXIF date (if scanned with a date) or the year input
### Use Case 2: Mixed years in one group
**Scenario**: A user uploads pictures from a holiday that spanned the turn of the year
- 5 pictures from 2023-12-28
- 8 pictures from 2024-01-02
- **Solution**: The group is sorted by its earliest capture date (2023-12-28)
### Use Case 3: Images without EXIF
**Scenario**: Screenshots, edited images, or old digitized photos without metadata
- **Solution**: Fallback to the year input (as before)
## Technical Solution
### A. EXIF Extraction (Backend)
#### 1. Dependency
**Library**: `exif-parser` or `exifr` (npm)
```bash
npm install exifr --save
```
**Why `exifr`?**
- ✅ Modern, actively maintained
- ✅ Supports many formats (JPEG, TIFF, HEIC, etc.)
- ✅ Promise-based API
- ✅ Small and performant
- ✅ No native dependencies (pure JavaScript)
#### 2. Database Schema Migration
**New fields in the `images` table**:
```sql
ALTER TABLE images ADD COLUMN exif_date_taken DATETIME DEFAULT NULL;
ALTER TABLE images ADD COLUMN exif_camera_model TEXT DEFAULT NULL;
ALTER TABLE images ADD COLUMN exif_location_lat REAL DEFAULT NULL;
ALTER TABLE images ADD COLUMN exif_location_lon REAL DEFAULT NULL;
```
**New computed field in the `groups` table**:
```sql
ALTER TABLE groups ADD COLUMN capture_date DATETIME DEFAULT NULL;
-- Computed on upload: MIN(exif_date_taken) over all images in the group
```
**Indexes for performance**:
```sql
CREATE INDEX IF NOT EXISTS idx_groups_capture_date ON groups(capture_date);
CREATE INDEX IF NOT EXISTS idx_images_exif_date_taken ON images(exif_date_taken);
```
#### 3. EXIF Service (Backend)
**File**: `backend/src/services/ExifService.js`
```javascript
const exifr = require('exifr');

class ExifService {
  /**
   * Extract EXIF data from an image
   * @param {string} filePath - Absolute path to the image
   * @returns {Promise<Object>} EXIF data or null
   */
  async extractExifData(filePath) {
    try {
      const exifData = await exifr.parse(filePath, {
        pick: [
          'DateTimeOriginal', // capture date
          'CreateDate',       // fallback
          'Make',             // camera manufacturer
          'Model',            // camera model
          'latitude',         // GPS coordinates
          'longitude'
        ]
      });
      if (!exifData) return null;
      return {
        dateTaken: exifData.DateTimeOriginal || exifData.CreateDate || null,
        cameraModel: exifData.Model ? `${exifData.Make || ''} ${exifData.Model}`.trim() : null,
        location: (exifData.latitude && exifData.longitude)
          ? { lat: exifData.latitude, lon: exifData.longitude }
          : null
      };
    } catch (error) {
      console.warn(`[ExifService] Failed to extract EXIF from ${filePath}:`, error.message);
      return null;
    }
  }

  /**
   * Find the earliest capture date in a group
   * @param {Array} images - Array of image objects with exif_date_taken
   * @returns {Date|null} Earliest date or null
   */
  getEarliestCaptureDate(images) {
    const dates = images
      .map(img => img.exif_date_taken)
      .filter(date => date !== null)
      .map(date => new Date(date));
    if (dates.length === 0) return null;
    return new Date(Math.min(...dates));
  }
}

module.exports = new ExifService();
```
#### 4. Upload Route Integration
**File**: `backend/src/routes/upload.js`
```javascript
const ExifService = require('../services/ExifService');

// After the image has been saved:
router.post('/upload', async (req, res) => {
  // ... existing upload logic ...

  // EXIF extraction for every image
  for (const file of uploadedFiles) {
    const filePath = path.join(UPLOAD_DIR, file.filename);
    const exifData = await ExifService.extractExifData(filePath);

    // Persist to the database
    await db.run(`
      UPDATE images
      SET exif_date_taken = ?,
          exif_camera_model = ?,
          exif_location_lat = ?,
          exif_location_lon = ?
      WHERE file_name = ?
    `, [
      exifData?.dateTaken || null,
      exifData?.cameraModel || null,
      exifData?.location?.lat || null,
      exifData?.location?.lon || null,
      file.filename
    ]);
  }

  // Compute capture_date for the group
  const images = await db.all('SELECT exif_date_taken FROM images WHERE group_id = ?', [groupId]);
  const captureDate = ExifService.getEarliestCaptureDate(images);
  await db.run(`
    UPDATE groups
    SET capture_date = ?
    WHERE group_id = ?
  `, [captureDate?.toISOString() || null, groupId]);

  // ... rest of upload logic ...
});
```
### B. Frontend Changes
#### 1. Slideshow Sorting
**File**: `frontend/src/Components/Pages/SlideshowPage.js`
**New sort logic**:
```javascript
const sortedGroups = [...groupsData.groups].sort((a, b) => {
  // 1st priority: capture_date (EXIF-based)
  if (a.captureDate && b.captureDate) {
    return new Date(a.captureDate) - new Date(b.captureDate);
  }
  // 2nd priority: if only one group has EXIF data, it comes first
  if (a.captureDate && !b.captureDate) return -1;
  if (!a.captureDate && b.captureDate) return 1;
  // 3rd fallback: year (user input)
  if (a.year !== b.year) {
    return a.year - b.year;
  }
  // 4th fallback: upload date
  return new Date(a.uploadDate) - new Date(b.uploadDate);
});
```
#### 2. Metadata Display (Optional)
**File**: `frontend/src/Components/Pages/SlideshowPage.js`
**Extended info box** (the displayed strings stay German, matching the app's UI language):
```jsx
<Typography sx={metaTextSx}>
  {currentGroup.captureDate && (
    <>Aufnahme: {new Date(currentGroup.captureDate).toLocaleDateString('de-DE')} • </>
  )}
  Bild {currentImageIndex + 1} von {currentGroup.images.length} •
  Slideshow {currentGroupIndex + 1} von {allGroups.length}
</Typography>
```
### C. Batch Migration (Existing Images)
**File**: `backend/src/scripts/migrate-exif.js`
```javascript
/**
 * One-off script to extract EXIF data from existing images
 */
const ExifService = require('../services/ExifService');
const db = require('../database/DatabaseManager');
const path = require('path');

async function migrateExistingImages() {
  console.log('[EXIF Migration] Starting...');
  const images = await db.all('SELECT id, file_name, group_id FROM images WHERE exif_date_taken IS NULL');
  console.log(`[EXIF Migration] Found ${images.length} images without EXIF data`);

  let successCount = 0;
  let failCount = 0;
  for (const image of images) {
    const filePath = path.join(__dirname, '../data/images', image.file_name);
    const exifData = await ExifService.extractExifData(filePath);
    if (exifData && exifData.dateTaken) {
      await db.run(`
        UPDATE images
        SET exif_date_taken = ?,
            exif_camera_model = ?,
            exif_location_lat = ?,
            exif_location_lon = ?
        WHERE id = ?
      `, [
        exifData.dateTaken,
        exifData.cameraModel,
        exifData.location?.lat,
        exifData.location?.lon,
        image.id
      ]);
      successCount++;
    } else {
      failCount++;
    }
  }

  // Update capture_date for all groups
  const groups = await db.all('SELECT group_id FROM groups WHERE capture_date IS NULL');
  for (const group of groups) {
    const groupImages = await db.all('SELECT exif_date_taken FROM images WHERE group_id = ?', [group.group_id]);
    const captureDate = ExifService.getEarliestCaptureDate(groupImages);
    if (captureDate) {
      await db.run('UPDATE groups SET capture_date = ? WHERE group_id = ?', [
        captureDate.toISOString(),
        group.group_id
      ]);
    }
  }
  console.log(`[EXIF Migration] Complete! Success: ${successCount}, Failed: ${failCount}`);
}

// Run migration
migrateExistingImages().catch(console.error);
```
## Implementation Plan
### Phase 1: Backend - EXIF Extraction (3-4 hours)
1. **Install dependencies** (10 min)
- [ ] `npm install exifr` in the backend
- [ ] Update the package lock
2. **Database migration** (30 min)
- [ ] Create the migration script (`migration-005-exif.sql`)
- [ ] New fields in the `images` table
- [ ] New `capture_date` field in the `groups` table
- [ ] Create indexes
- [ ] Test the migration
3. **Implement ExifService** (60 min)
- [ ] Create `services/ExifService.js`
- [ ] `extractExifData()` method
- [ ] `getEarliestCaptureDate()` method
- [ ] Error handling
- [ ] Unit tests (optional)
4. **Extend the upload route** (60 min)
- [ ] Integrate EXIF extraction into the upload flow
- [ ] Database updates
- [ ] `capture_date` computation
- [ ] Testing with various image types
5. **Migration script** (30 min)
- [ ] Create `scripts/migrate-exif.js`
- [ ] Batch processing for existing images
- [ ] Logging and progress display
- [ ] Test with development data
### Phase 2: Frontend - Sorting & Display (1-2 hours)
1. **Slideshow sorting** (30 min)
- [ ] Adapt `SlideshowPage.js`
- [ ] New sort logic using `captureDate`
- [ ] Test the fallback logic
2. **Metadata display** (30 min, optional)
- [ ] Show the capture date in the info box
- [ ] Show the camera model (optional)
- [ ] Responsive design
3. **Groups overview** (30 min, optional)
- [ ] Show EXIF data in the group overview
- [ ] Filter by camera model (optional)
### Phase 3: Testing & Documentation (1 hour)
1. **Testing** (30 min)
- [ ] Upload with EXIF data
- [ ] Upload without EXIF data (screenshots)
- [ ] Sorting with mixed groups
- [ ] Migration script on existing data
- [ ] Performance test (many images)
2. **Documentation** (30 min)
- [ ] Update README.md
- [ ] CHANGELOG.md entry
- [ ] API documentation (if present)
- [ ] Migration guide
## Technical Details
### EXIF Data Formats
**EXIF DateTimeOriginal**:
- Format: `"YYYY:MM:DD HH:MM:SS"` (e.g. `"2023:12:28 15:30:45"`)
- Conversion to ISO 8601 requires replacing the colons in the date part first, so that `new Date("2023-12-28T15:30:45").toISOString()` works (see the sketch below)
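A minimal conversion sketch for the raw string format above (`exifr` usually returns `Date` objects already, so this is only needed when handling raw EXIF strings):
```javascript
// Convert a raw EXIF date string ("2023:12:28 15:30:45") to ISO 8601
function exifDateToIso(exifDate) {
  const [date, time] = exifDate.split(' ');
  // Only the date part's colons must be replaced; the time keeps its colons
  return new Date(`${date.replace(/:/g, '-')}T${time}`).toISOString();
}

exifDateToIso('2023:12:28 15:30:45'); // e.g. "2023-12-28T14:30:45.000Z" (result depends on the local timezone)
```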
**GPS coordinates**:
- `latitude`: -90 to +90 (south to north)
- `longitude`: -180 to +180 (west to east)
### Performance Considerations
**EXIF extraction is I/O-intensive**:
- Per image: ~10-50 ms (depending on size)
- 10 images: ~100-500 ms extra per upload
- **Mitigation**: async/parallel processing with `Promise.all()` (see the sketch below)
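A minimal sketch of the parallel variant, reusing `uploadedFiles`, `UPLOAD_DIR`, and the `ExifService` from section A above (per-file failures already resolve to `null` inside `extractExifData()`, so one broken image does not fail the batch):
```javascript
// Extract EXIF for all uploaded files in parallel instead of sequentially
const exifResults = await Promise.all(
  uploadedFiles.map((file) =>
    ExifService.extractExifData(path.join(UPLOAD_DIR, file.filename))
  )
);
// exifResults[i] belongs to uploadedFiles[i]
```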
**Memory usage**:
- EXIF data: ~100-500 bytes per image in the DB
- Negligible even at 1000+ images
## Expected Improvements
### Sorting
- ✅ Precise chronological sorting by the actual capture date
- ✅ Automatic, no manual input required
- ✅ Also works for images from different years within one group
### User Experience
- ✅ More context by showing the camera model
- ✅ Potentially: geo-tagging for location-based features (future)
- ✅ Better archiving of old photos
### Data Quality
- ✅ Structured metadata in the database
- ✅ Basis for future features (e.g. camera filter, geo map)
## Risks & Mitigations
### Risk 1: Images without EXIF
**Problem**: Screenshots, edited images, or old scans carry no EXIF data
**Mitigation**: Fallback to the year input and upload date (as before)
### Risk 2: Incorrect EXIF data
**Problem**: The camera clock was set incorrectly
**Mitigation**: The user input (year) takes precedence; EXIF is only supplementary
### Risk 3: Upload performance
**Problem**: EXIF extraction slows down the upload
**Mitigation**:
- Parallel processing with `Promise.all()`
- Timeout (max. 100 ms per image)
- On timeout: the upload still succeeds, EXIF is extracted later
### Risk 4: HEIC/HEIF support
**Problem**: Apple formats are not supported by all libraries
**Mitigation**: `exifr` supports HEIC natively
## Alternatives
### Alternative 1: Keep using only the year field (status quo)
**Drawback**: Imprecise with mixed years; manual input required
### Alternative 2: Server-Side Image Processing (Sharp + EXIF)
```javascript
const sharp = require('sharp');
const metadata = await sharp(filePath).metadata();
```
**Drawback**: `sharp` has native dependencies (harder deployment)
### Alternative 3: Frontend EXIF extraction
**Drawback**: More client traffic; not reliable in all browsers
## Success Criteria
**Must-have**:
1. EXIF data is extracted automatically on upload
2. `capture_date` is computed correctly (earliest image in the group)
3. The slideshow sorts by `capture_date` (with fallback)
4. The migration script works for existing images
5. Upload performance stays acceptable (<500 ms extra for 10 images)
**Nice-to-have**:
1. Capture date shown in the slideshow
2. Camera model shown
3. GPS coordinates stored (for future features)
## Open Questions
- [ ] Should the capture date be editable (in case EXIF is missing or wrong)?
- [ ] Should the camera model be shown in the group overview?
- [ ] Should GPS coordinates be used for geo-tagging (map view)?
- [ ] Should EXIF extraction run synchronously (during upload) or asynchronously (background job)?
## Rollout Plan
1. **Development** (feature/ExifExtraction branch)
- Implementation & unit tests
- Test the migration script
- Code review
2. **Staging/Testing**
- Migration on the dev environment
- Test uploads with various image types
- Performance measurements
3. **Production**
1. Database backup
2. Run the migration script
3. Deployment via Docker Compose
4. 24h monitoring
---
**Created by**: GitHub Copilot
**Review by**: @lotzm

View File

@@ -1,23 +0,0 @@
# E-Mail Notifications
**Status**: ⏳ Planned
- Backend: e-mail service (nodemailer; see the sketch after this list)
- Upload confirmation containing the management link
- Optional: ask for an e-mail address during upload
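Since this feature is only planned, the following is just a sketch of what the upload confirmation could look like with nodemailer (the SMTP settings and the `managementUrl` parameter are assumptions):
```javascript
const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
  host: process.env.SMTP_HOST,
  port: Number(process.env.SMTP_PORT || 587),
  auth: { user: process.env.SMTP_USER, pass: process.env.SMTP_PASS },
});

async function sendUploadConfirmation(to, managementUrl) {
  await transporter.sendMail({
    from: process.env.SMTP_FROM,
    to,
    subject: 'Upload received',
    text: `Thanks for your upload. Manage your group here: ${managementUrl}`,
  });
}
```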
---
# 📚 References
- [GDPR Art. 7 - Conditions for consent](https://dsgvo-gesetz.de/art-7-dsgvo/)
- [Material-UI Checkbox Documentation](https://mui.com/material-ui/react-checkbox/)
- [SQLite Foreign Key Support](https://www.sqlite.org/foreignkeys.html)
- [UUID v4 Best Practices](https://www.rfc-editor.org/rfc/rfc4122)
---
**Created**: November 15, 2025
**Last updated**: November 15, 2025, 18:20
**Status**: ✅ Phase 1: 100% complete | ✅ Phase 2 backend: 100% complete | ✅ Phase 2 frontend: 100% complete
**Production-ready**: Yes (all features implemented and tested)

File diff suppressed because it is too large

View File

@@ -1,195 +0,0 @@
# Feature Plan: Auto-generated OpenAPI / Swagger Spec + API Restructuring
**Branch:** `feature/autogen-openapi`
**Date:** November 16, 2025
**Status:** ✅ Complete - Auto-generation active, Single Source of Truth established
## 🎯 Main Goals
1. ✅ **OpenAPI auto-generation:** The Swagger spec is generated automatically from route definitions
2. ✅ **Consistent API structure:** Clear, REST-conformant API organization for easy AI navigation
3. ✅ **Single Source of Truth:** `routeMappings.js` as the central route configuration
4. ✅ **Developer experience:** Swagger UI at `/api/docs/` (dev-only)
5. ✅ **Test coverage:** 45 automated tests, 100% passing
6. ✅ **API security:** Bearer token authentication for admin endpoints
---
## 📊 API Structure (Target)
### Design Principles
- **Prefix = access level:** The structure is based on authentication/authorization
- **REST-conformant:** Standard HTTP methods (GET, POST, PUT, PATCH, DELETE)
- **AI-friendly:** Clear hierarchy, predictable patterns
- **Consistent:** All routes follow the same pattern
### Routing Scheme
```
/api/upload (public - upload functions)
/api/groups (public - slideshow display)
/api/manage/:token/* (token-based - user management)
/api/admin/* (protected - moderation)
/api/system/* (internal - maintenance)
```
### Detailed Endpoints
#### 📤 Public API
```
POST /api/upload - Single file upload
POST /api/upload/batch - Batch upload
GET /api/groups - List approved slideshows
GET /api/groups/:groupId - View specific slideshow
```
#### 🔑 Management API
Token-based access for slideshow creators:
```
GET /api/manage/:token - Get slideshow info
PUT /api/manage/:token/consents - Update consents
PUT /api/manage/:token/metadata - Update metadata
PUT /api/manage/:token/images/descriptions - Update image descriptions
POST /api/manage/:token/images - Add images
DELETE /api/manage/:token/images/:imageId - Delete image
DELETE /api/manage/:token - Delete slideshow
```
#### 👮 Admin API
Protected moderation and management functions:
```
# Moderation
GET /api/admin/moderation/groups - List pending slideshows
GET /api/admin/moderation/groups/:id - Get slideshow details
PATCH /api/admin/groups/:id/approve - Approve slideshow
PATCH /api/admin/groups/:id - Edit slideshow
DELETE /api/admin/groups/:id/images/:imageId - Delete single image
PATCH /api/admin/groups/:id/images/batch-description
PUT /api/admin/groups/:id/reorder - Reorder images
# Logs & Monitoring
GET /api/admin/deletion-log - Recent deletions
GET /api/admin/deletion-log/stats - Deletion statistics
GET /api/admin/management-audit - Audit log
GET /api/admin/rate-limiter/stats - Rate limiter stats
# Cleanup
POST /api/admin/cleanup/trigger - Trigger cleanup
GET /api/admin/cleanup/preview - Preview cleanup targets
# Consents & Social Media
GET /api/admin/consents/export - Export consents (CSV)
GET /api/admin/social-media/platforms - List platforms
```
#### ⚙️ System API
Internal system operations:
```
GET /api/system/migration/status - Migration status
POST /api/system/migration/migrate - Run migration
POST /api/system/migration/rollback - Rollback migration
GET /api/system/migration/health - Health check
```
---
## 🔧 Technical Implementation
### Components
- **swagger-autogen** (v6.2.8): OpenAPI 3.0 generation
- **swagger-ui-express** (v4.6.3): Interactive API docs
- **Custom generator:** `src/generate-openapi.js`
### Generator Logic
```javascript
// Sketch: scan each router file individually and apply its mount prefix
for (const { file, prefix } of routeMappings) {
  await swaggerAutogen()(tempFile, [file], { basePath: prefix });
  const partial = JSON.parse(fs.readFileSync(tempFile, 'utf8'));
  for (const [route, ops] of Object.entries(partial.paths)) {
    spec.paths[prefix + route] = ops; // merge the paths, with prefix, into the final spec
  }
}
```
### Single Source of Truth
1. **Router files (`src/routes/*.js`)**: contain only relative paths
2. **Mount configuration (`src/routes/index.js`)**: defines the prefixes
3. **OpenAPI generation:** `generate-openapi.js` reads both and merges them
---
## 📚 For AI Usage
### Understanding the API Hierarchy
```
/api/* ← all API endpoints
├─ /upload, /groups ← public
├─ /manage/:token/* ← token-based
├─ /admin/* ← protected
└─ /system/* ← internal
```
### Adding a New Route
```bash
# 1. Add the route in the appropriate file (e.g. admin.js)
router.get('/new-endpoint', ...)
# 2. Register it in routeMappings.js (if it is a new file)
{ router: 'newRoute', prefix: '/api/admin', file: 'newRoute.js' }
# 3. The OpenAPI spec is generated automatically on backend start
npm run dev
# 4. Write tests: tests/api/newRoute.test.js
npm test
# 5. Swagger UI: http://localhost:5001/api/docs/
```
---
## ✅ Implementation Status (November 16, 2025)
### Completed Features
**Single Source of Truth**: `routeMappings.js` as the central route configuration
**Auto-Generation**: OpenAPI spec generated automatically on backend start
**Authentication**: Bearer token for admin endpoints
**Test Suite**: 45 automated tests (100% passing)
**Documentation**: `routes/README.md` + `AUTHENTICATION.md`
**Route Order Fix**: Express routing order documented & fixed
### Known Issues (Resolved)
**Express Route Order**: Consent router now mounted before admin router
**Test Permissions**: Tests use `/tmp/` for uploads
**SQLite Async**: Connection properly promisified
---
## ⏱️ Effort Estimate (Final)
| Phase | Time | Status |
|-------|------|--------|
| MVP OpenAPI Generation | 2h | ✅ Done |
| API Restructuring | 8h | ✅ Done |
| Authentication System | 4h | ✅ Done |
| Test Suite | 6h | ✅ Done |
| Documentation | 2h | ✅ Done |
| **Total** | **22h** | **100%** |
---
## 🚀 Frontend Migration Guide
**Required Changes:**
1. **Add Bearer Token**: All `/api/admin/*` calls need `Authorization: Bearer <token>` header
2. **Verify Paths**: Check against `routeMappings.js` (consent: `/api/admin/groups/by-consent`)
3. **Handle 403**: Add error handling for missing authentication
4. **Environment**: Add `REACT_APP_ADMIN_API_KEY` to `.env`
**See `AUTHENTICATION.md` for complete setup guide**
---
**Created:** November 16, 2025
**Updated:** November 16, 2025
**Status:** ✅ Production Ready

View File

@@ -1,655 +0,0 @@
# Feature Plan: Automatic Deletion of Unapproved Groups
## 📋 Overview
**Feature**: Automatic deletion of unapproved groups after 7 days
**Goal**: Prevent legally or socially objectionable content from remaining on the server permanently
**Priority**: High (security & compliance)
**Estimated implementation time**: 2-3 days
## 🎯 Functional Requirements
### Must-Have
- [x] **Automatic deletion**: Groups with `approved = false` are deleted 7 days after their upload time
- [x] **Complete deletion**: Database entries, original images, and preview images are removed
- [x] **Cron job**: Runs daily at 10:00 in the morning (see the scheduling sketch after this list)
- [x] **Deletion log**: Deleted groups are recorded in a dedicated database table
- [x] **Anonymization**: No personal data (title, name, description) in the log
- [x] **Countdown display**: The ModerationPage shows the remaining time until deletion
- [x] **Admin overview**: Protected area in the ModerationPage for the deletion history
- [x] **Approval protection**: Approved groups (`approved = true`) are never deleted automatically
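A minimal scheduling sketch with `node-cron` (the dependency used elsewhere in the project), assuming the `GroupCleanupService` described in section 2.2 below:
```javascript
const cron = require('node-cron');
const GroupCleanupService = require('../services/GroupCleanupService');

// Run the cleanup daily at 10:00 (Europe/Berlin)
cron.schedule('0 10 * * *', async () => {
  try {
    await GroupCleanupService.performScheduledCleanup();
  } catch (err) {
    console.error('[Cleanup] Scheduled run failed:', err);
  }
}, { timezone: 'Europe/Berlin' });
```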
### Nice-to-Have
- [ ] **Manual postponement**: An admin can extend the deletion deadline (e.g. by another 7 days)
- [ ] **Batch-delete preview**: Preview of which groups the next cron run would delete
- [ ] **E-mail notification**: Warn the admin 24h before automatic deletion
## 🔧 Technical Implementation
### 1. Database Schema Extension
#### 1.1 Groups table status ✅ **ALREADY PRESENT**
**File**: `backend/src/database/DatabaseManager.js`
**Status:** The `approved` column already exists!
```javascript
// Lines 60-63 in DatabaseManager.js
CREATE TABLE IF NOT EXISTS groups (
  // ...
  approved BOOLEAN DEFAULT FALSE,
  // ...
)
```
**Migration:** Runs automatically on every server start (lines 67-75):
```javascript
try {
  await this.run('ALTER TABLE groups ADD COLUMN approved BOOLEAN DEFAULT FALSE');
} catch (error) {
  // The column already exists - that is fine
}
```
**Additional indexes for performance (to be added):**
```sql
CREATE INDEX IF NOT EXISTS idx_groups_cleanup ON groups(approved, upload_date);
CREATE INDEX IF NOT EXISTS idx_groups_approved ON groups(approved);
```
#### 1.2 New Table: Deletion Log
**File**: `backend/src/database/schema.sql`
```sql
-- Deletion log for deleted groups (compliance & audit trail)
CREATE TABLE IF NOT EXISTS deletion_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id TEXT NOT NULL, -- original group ID (for reference)
year INTEGER NOT NULL, -- year of the upload
image_count INTEGER NOT NULL, -- number of deleted images
upload_date DATETIME NOT NULL, -- original upload time
deleted_at DATETIME DEFAULT CURRENT_TIMESTAMP, -- time of deletion
deletion_reason TEXT DEFAULT 'auto_cleanup_7days', -- reason for the deletion
total_file_size INTEGER -- total size of the deleted files (in bytes)
);
-- Index for fast queries by deletion date
CREATE INDEX IF NOT EXISTS idx_deletion_log_deleted_at ON deletion_log(deleted_at DESC);
-- Index for filtering by year
CREATE INDEX IF NOT EXISTS idx_deletion_log_year ON deletion_log(year);
```
**Important**: No personal data (title, name, description) is stored!
### 2. Backend Implementation
#### 2.1 Migration Script
**File**: `backend/src/database/migrations/005_add_approved_column.sql` (new)
```sql
-- Migration 005: Add approved column to groups table
ALTER TABLE groups ADD COLUMN approved BOOLEAN DEFAULT FALSE;
-- Index for fast queries on unapproved groups
CREATE INDEX IF NOT EXISTS idx_groups_approved ON groups(approved);
-- Index for deletion candidates (approved=false + old upload_date)
CREATE INDEX IF NOT EXISTS idx_groups_cleanup ON groups(approved, upload_date);
```
#### 2.2 Cleanup Service
**File**: `backend/src/services/GroupCleanupService.js` (new)
**Responsibilities:**
- Identifying deletable groups (unapproved + older than 7 days)
- Calculating the deletion deadline per group
- Complete deletion (DB + files)
- Recording in `deletion_log`
**Main methods:**
```javascript
class GroupCleanupService {
// Finds all groups that need to be deleted
async findGroupsForDeletion()
// Deletes a group completely (transaction)
async deleteGroupCompletely(groupId)
// Creates an entry in the deletion log
async logDeletion(groupData)
// Main method: runs the complete cleanup
async performScheduledCleanup()
// Calculates the remaining days until deletion
getDaysUntilDeletion(uploadDate)
}
```
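A condensed sketch of how these methods could fit together, assuming the repositories are injected (the `stats` field names are illustrative):
```javascript
// GroupCleanupService.js (sketch)
const CLEANUP_DAYS = 7;

class GroupCleanupService {
  constructor(groupRepository, deletionLogRepository) {
    this.groups = groupRepository;
    this.log = deletionLogRepository;
  }

  async performScheduledCleanup() {
    const candidates = await this.groups.findUnapprovedGroupsOlderThan(CLEANUP_DAYS);
    for (const group of candidates) {
      const stats = await this.groups.getGroupStatistics(group.group_id);
      await this.groups.deleteGroupCompletely(group.group_id); // DB + files
      // Only anonymized metadata goes into the log (matches the schema above)
      await this.log.createDeletionEntry({
        group_id: group.group_id,
        year: group.year,
        image_count: stats.imageCount,       // field name assumed
        upload_date: group.upload_date,
        total_file_size: stats.totalFileSize, // field name assumed
      });
    }
    return candidates.length;
  }

  getDaysUntilDeletion(uploadDate) {
    const elapsedMs = Date.now() - new Date(uploadDate).getTime();
    const elapsedDays = Math.floor(elapsedMs / (24 * 60 * 60 * 1000));
    return Math.max(0, CLEANUP_DAYS - elapsedDays);
  }
}

module.exports = GroupCleanupService;
```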
#### 2.3 Repository Extensions
**File**: `backend/src/repositories/GroupRepository.js`
**Existing methods (reused):** ✅
```javascript
// ✅ ALREADY IN PLACE - line 207
async updateGroupApproval(groupId, approved) { }
// ✅ ALREADY IN PLACE - line 217
async deleteImage(groupId, imageId) { }
```
**New methods:**
```javascript
// Finds groups pending deletion (approved=false & older than 7 days)
async findUnapprovedGroupsOlderThan(days) { }
// Deletes a group completely (incl. image references) - extends existing logic
async deleteGroupCompletely(groupId) { }
// Fetch statistics for a group (for the deletion log)
async getGroupStatistics(groupId) { }
```
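A possible implementation of the candidate query, assuming a promisified SQLite `all()` helper like the one in `DatabaseManager`; it deliberately filters so the `idx_groups_cleanup` index applies:
```javascript
// GroupRepository.js (sketch)
async findUnapprovedGroupsOlderThan(days) {
  const sql = `
    SELECT * FROM groups
    WHERE approved = FALSE
      AND upload_date <= datetime('now', ?)
  `;
  // e.g. days = 7 -> SQLite modifier '-7 days'
  return this.db.all(sql, [`-${days} days`]);
}
```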
**File**: `backend/src/repositories/DeletionLogRepository.js` (new)
```javascript
class DeletionLogRepository {
// Creates a deletion record
async createDeletionEntry(logData) { }
// Fetch the last N entries
async getRecentDeletions(limit = 10) { }
// Fetch all entries (for the admin overview)
async getAllDeletions() { }
// Statistics (number of deleted groups, images, storage)
async getDeletionStatistics() { }
}
```
#### 2.4 Cron Job Implementation
**File**: `backend/src/services/SchedulerService.js` (new)
**Library**: `node-cron`
```bash
cd backend
npm install node-cron
```
**Implementation:**
```javascript
const cron = require('node-cron');
const GroupCleanupService = require('./GroupCleanupService');
class SchedulerService {
start() {
// Every day at 10:00
cron.schedule('0 10 * * *', async () => {
console.log('[Scheduler] Running daily cleanup at 10:00 AM...');
await GroupCleanupService.performScheduledCleanup();
});
}
}
```
**Integration into**: `backend/src/server.js`
```javascript
const SchedulerService = require('./services/SchedulerService');
// After server start
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
// Start the scheduler
const scheduler = new SchedulerService();
scheduler.start();
});
```
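The manual development trigger mentioned in Task 5 could be exposed on the same service; a sketch (the method name comes from the task list, the wiring is assumed):
```javascript
// SchedulerService.js (sketch): manual trigger for development
class SchedulerService {
  // ... cron setup as above ...

  // Runs the cleanup immediately, bypassing the cron schedule.
  async triggerCleanupNow() {
    console.log('[Scheduler] Manual cleanup triggered');
    return GroupCleanupService.performScheduledCleanup();
  }
}
```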
#### 2.5 API Endpoints
**Route**: `backend/src/routes/groups.js`
**Existing endpoint (reused):** ✅
```javascript
// ✅ ALREADY IN PLACE - line 102
PATCH /groups/:groupId/approve
Body: { approved: true/false }
Response: { success: true, message: "Gruppe freigegeben", approved: true }
```
**New admin endpoints:**
```javascript
// New: fetch the deletion log
GET /api/admin/deletion-log?limit=10
Response: { deletions: [...], total: 123 }
// New: fetch all deletion logs
GET /api/admin/deletion-log/all
Response: { deletions: [...] }
// New: deletion statistics
GET /api/admin/deletion-log/stats
Response: {
totalDeleted: 45,
totalImages: 234,
totalSize: '1.2 GB',
lastCleanup: '2025-11-08T10:00:00Z'
}
```
### 3. Frontend Implementation
#### 3.1 ModerationGroupPage Extensions
**File**: `frontend/src/Components/Pages/ModerationGroupPage.js`
**New features:**
- Countdown display for every unapproved group
- Color coding optional (currently not desired)
- "Gruppe freigeben" button (sets `approved`)
**UI changes:**
```jsx
<Card>
<CardContent>
<Typography variant="h6">{group.title}</Typography>
{/* New: countdown display */}
{!group.approved && (
<Alert severity="warning" sx={{ mt: 1 }}>
⏰ Wird automatisch gelöscht in: {daysRemaining} Tagen
<br />
<Typography variant="caption">
Upload: {formatDate(group.upload_date)}
</Typography>
</Alert>
)}
{/* New: approval button */}
<Button
variant="contained"
color="success"
onClick={() => handleApprove(group.group_id)}
>
Gruppe freigeben
</Button>
</CardContent>
</Card>
```
#### 3.2 Deletion Log Overview (Admin Area)
**File**: `frontend/src/Components/Pages/DeletionLogPage.js` (new)
**Features:**
- Table with the last 10 deleted groups (expandable to all)
- Columns: Group ID, year, image count, upload date, deletion date
- Statistics: total deleted groups, images, freed storage
- Toggle button: "Letzte 10" ↔ "Alle anzeigen"
**Mockup:**
```
┌─────────────────────────────────────────────────────────┐
│ Gelöschte Gruppen - Übersicht │
├─────────────────────────────────────────────────────────┤
│ Statistiken: │
│ • Gesamt gelöscht: 45 Gruppen (234 Bilder) │
│ • Freigegebener Speicher: 1.2 GB │
│ • Letzter Cleanup: 08.11.2025 10:00 Uhr │
├─────────────────────────────────────────────────────────┤
│ [Letzte 10 anzeigen] [Alle anzeigen ▼] │
├──────────┬──────┬────────┬─────────────┬──────────────┤
│ Group ID │ Jahr │ Bilder │ Upload-Dat. │ Gelöscht am │
├──────────┼──────┼────────┼─────────────┼──────────────┤
│ abc123 │ 2024 │ 15 │ 01.11.2025 │ 08.11.2025 │
│ xyz789 │ 2024 │ 23 │ 31.10.2025 │ 07.11.2025 │
│ ... │ ... │ ... │ ... │ ... │
└──────────┴──────┴────────┴─────────────┴──────────────┘
```
#### 3.3 Service Functions
**File**: `frontend/src/services/groupService.js` (extend)
```javascript
// Set the approval status (the backend route is documented as PATCH above)
export const approveGroup = async (groupId) => {
return sendRequest(`/api/groups/${groupId}/approve`, 'PATCH', {
approved: true
});
};
// Fetch the deletion log
export const getDeletionLog = async (limit = 10) => {
return sendRequest(`/api/admin/deletion-log?limit=${limit}`, 'GET');
};
// Fetch all deletion logs
export const getAllDeletionLogs = async () => {
return sendRequest('/api/admin/deletion-log/all', 'GET');
};
// Fetch the statistics
export const getDeletionStatistics = async () => {
return sendRequest('/api/admin/deletion-log/stats', 'GET');
};
```
#### 3.4 Routing
**File**: `frontend/src/App.js`
```javascript
// New route for the deletion log (admins only)
<Route path="/moderation/deletion-log" element={<DeletionLogPage />} />
```
**Navigation in the ModerationPage:**
```jsx
<Tabs>
<Tab label="Gruppen freigeben" />
<Tab label="Gelöschte Gruppen" /> {/* New */}
</Tabs>
```
## 📝 Implementation Tasks
### Phase 1: Database & Schema (Tasks 1-2)
#### Task 1: Check the database schema for the approved column ✅ **DONE**
- [x] ~~Create a migration script~~ **NOT NEEDED** - the approved column already exists!
- [x] ~~Add the approved column to the groups table~~ **ALREADY IN PLACE** (DatabaseManager.js, line 60)
- [x] ~~Integrate the migration into DatabaseManager~~ **ALREADY IN PLACE** (lines 67-75)
- [x] Added indexes for cleanup queries: `idx_groups_cleanup` and `idx_groups_approved`
**Acceptance criteria:**
- ✅ Column `approved` already exists with DEFAULT FALSE
- ✅ Migration runs automatically on every server start (DatabaseManager.js)
- ✅ Cleanup indexes added (approved, upload_date)
- ✅ No data loss - existing groups have `approved = false`
#### Task 2: Create the deletion log table ✅ **DONE**
- [x] `deletion_log` table defined in the schema (DatabaseManager.js)
- [x] Indexes for fast queries created (`deleted_at DESC`, `year`)
- [x] Structure without personal data
- [x] Table structure validated
**Acceptance criteria:**
- ✅ Table contains all defined columns (group_id, year, image_count, upload_date, deleted_at, deletion_reason, total_file_size)
- ✅ No personal data in the schema
- ✅ Indexes for `deleted_at` and `year` exist
- ✅ Structure is optimized for the required queries (last 10, all, statistics)
### Phase 2: Backend Core Logic (Tasks 3-5)
#### Task 3: Implement the GroupCleanupService ✅ **DONE**
- [x] Service class created (GroupCleanupService.js)
- [x] `findGroupsForDeletion()` - SQL query for groups older than 7 days
- [x] `deleteGroupCompletely()` - transaction for DB + files
- [x] `logDeletion()` - entry in deletion_log
- [x] `getDaysUntilDeletion()` - remaining-time calculation
- [x] File deletion for images and previews
- [x] Error handling and logging
**Acceptance criteria:**
- ✅ Service correctly finds all deletable groups (approved=false + older than 7 days)
- ✅ Files are physically removed from the file system
- ✅ Database transactions are atomic (rollback on error)
- ✅ Deletion log is filled correctly (without personal data)
- ✅ Approved groups are never deleted
- ✅ Logging for all actions (info + error)
#### Task 4: Extend the repository methods ✅ **DONE**
- [x] `GroupRepository.findUnapprovedGroupsOlderThan()` implemented
- [x] `GroupRepository.deleteGroupCompletely()` with CASCADE logic
- [x] `GroupRepository.getGroupStatistics()` for the log data
- [x] ~~`GroupRepository.setApprovalStatus()`~~ **ALREADY IN PLACE** (updateGroupApproval)
- [x] `DeletionLogRepository` fully implemented
- [ ] Unit tests for all methods (later)
**Acceptance criteria:**
- ✅ SQL queries are optimized (use the indexes)
- ✅ DELETE CASCADE works for images
- ✅ Statistics include: image count, file size
- ✅ DeletionLogRepository supports pagination
#### Task 5: Set up the cron job ✅ **DONE**
- [x] `node-cron` installed
- [x] `SchedulerService` created
- [x] Cron job configured for 10:00 (Europe/Berlin)
- [x] Integration into `server.js`
- [x] Logging for scheduler start and execution
- [x] Manual test trigger for development (triggerCleanupNow)
**Acceptance criteria:**
- ✅ Cron job runs daily at 10:00
- ✅ Scheduler starts automatically with the server
- ✅ Errors in the cleanup do not crash the server
- ✅ Development mode: manual trigger possible
- ✅ Logging shows the execution time and the number of deleted groups
### Phase 3: Backend API (Task 6)
#### Task 6: Implement the API endpoints ✅ **DONE**
- [x] ~~`PATCH /api/groups/:groupId/approve` for approval~~ **ALREADY IN PLACE** (groups.js, line 102)
- [x] `GET /api/admin/deletion-log` with a limit parameter
- [x] `GET /api/admin/deletion-log/all` for the complete history
- [x] `GET /api/admin/deletion-log/stats` for statistics
- [x] Request validation and error handling for the new endpoints
- [x] File-size formatting (bytes → MB/GB, see the sketch below)
**Acceptance criteria:**
- ✅ The approval endpoint already exists and works
- ✅ All new admin endpoints are reachable under `/api/admin/`
- ✅ Response formats are consistent (JSON)
- ✅ HTTP status codes are correct (200, 400, 500)
- ✅ Error responses contain helpful messages
- ✅ Limit validation (1-1000)
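The file-size formatting mentioned in Task 6 could look like this minimal sketch (the helper name `formatBytes` is an assumption):
```javascript
// Formats raw byte counts into a human-readable string (B/KB/MB/GB/TB)
function formatBytes(bytes) {
  if (!bytes) return '0 B';
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  const i = Math.min(units.length - 1, Math.floor(Math.log2(bytes) / 10));
  return `${(bytes / 1024 ** i).toFixed(1)} ${units[i]}`;
}

// Example: formatBytes(1288490188) -> '1.2 GB'
```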
### Phase 4: Frontend UI (Tasks 7-9)
#### Task 7: ModerationGroupPage - show the countdown ✅ **DONE**
- [x] Countdown calculation implemented (getDaysUntilDeletion)
- [x] Countdown component added to ImageGalleryCard
- [x] Alert box for unapproved groups (yellow background)
- [x] Formatting of upload date and deletion date
- [x] Responsive design (CSS)
**Acceptance criteria:**
- ✅ Countdown shows the correct number of days until deletion (7 days after upload)
- ✅ Alert is only visible for unapproved groups (isPending && mode==='moderation')
- ✅ Format: "⏰ Wird gelöscht in: X Tagen"
- ✅ UI is mobile-optimized
- ✅ No performance problems with many groups
#### Task 8: Implement the approval button ✅ **DONE**
- [x] ~~"Gruppe freigeben" button in the ModerationGroupPage~~ **ALREADY IN PLACE**
- [x] ~~API call to `/api/groups/:groupId/approve`~~ **ALREADY IN PLACE**
- [x] Success feedback with SweetAlert2 (upgraded from alert)
- [x] UI update after approval (the countdown disappears automatically)
- [x] Error handling with user feedback
**Acceptance criteria:**
- ✅ Button is only visible for unapproved groups
- ✅ Approval works with a single click
- ✅ UI updates immediately (optimistic update)
- ✅ Success message: "Gruppe freigegeben"
- ✅ Errors are shown in a user-friendly way
#### Task 9: Create the DeletionLogPage ✅ **DONE**
- [x] New component created (DeletionLogSection.js)
- [x] Deletion-log table with MUI Table
- [x] Toggle "last 10" ↔ "show all"
- [x] Statistics cards (total, images, storage)
- [x] Formatting of dates and file sizes
- [x] Sortable columns
- [x] Integration into ModerationGroupsPage (at the bottom of the page)
- [x] Protected via the /moderation access
**Acceptance criteria:**
- ✅ Table shows: Group ID, year, images, upload date, deletion date, file size, reason
- ✅ Default: last 10 entries
- ✅ Toggle dynamically loads all entries
- ✅ Statistics are prominently visible (3 cards)
- ✅ File size in a readable format (KB, MB, GB)
- ✅ Responsive design with MUI components
- ✅ Only accessible to admins (protected /moderation area)
### Phase 5: Testing & Documentation (Tasks 10-11)
#### Task 10: Integration testing ✅ **DONE**
- [x] Test: a group older than 7 days is deleted automatically
- [x] Test: an approved group survives (even after 7 days)
- [x] Test: the deletion log is filled correctly
- [x] Test: files are physically deleted (originals + previews)
- [x] Test: the countdown display shows correct values
- [x] Test: the approval button works with SweetAlert2 feedback
- [x] Test: the DeletionLogSection loads data correctly
- [x] Test tools created: test-cleanup.sh (bash) + test-cleanup.js (node)
- [x] Comprehensive test documentation: TESTING-CLEANUP.md
**Acceptance criteria:**
- ✅ All main scenarios are tested
- ✅ Cron job runs without errors (daily at 10:00)
- ✅ No memory leaks in the scheduler
- ✅ Performance is acceptable (< 1s for typical cleanup operations)
- ✅ Frontend updates correctly after approval
- ✅ Bug fixes: singleton import, nginx auth configuration
#### Task 11: Documentation ✅ **DONE**
- [x] README.md updated (Features, Latest Features, Moderation Interface, Testing, API Endpoints)
- [x] API documentation for the new admin endpoints (/api/admin/deletion-log, cleanup)
- [x] CLEANUP_DAYS is configurable (currently hardcoded to 7 days, can become an ENV variable later)
- [x] Admin guide: deletion log in the /moderation area
- [x] Test tools documented (tests/test-cleanup.sh, tests/TESTING-CLEANUP.md)
- [x] CHANGELOG.md updated with a complete feature overview
- [x] TODO.md updated (feature marked as done)
**Acceptance criteria:**
- ✅ README describes the automatic deletion comprehensively
- ✅ API endpoints are fully documented
- ✅ Admin workflow is clearly described (countdown, approval, log)
- ✅ Test tools are documented and ready to use
- ✅ CHANGELOG contains all changes (backend, frontend, infrastructure, testing)
## 🧪 Testing Strategy
### Unit Tests
- Repository methods (findUnapprovedGroupsOlderThan, deleteGroupById)
- GroupCleanupService (getDaysUntilDeletion)
- DeletionLogRepository (all methods)
### Integration Tests
- Complete cleanup process (DB + files + log)
- API endpoints with various scenarios
- Frontend integration (countdown, approval)
### Manual Tests
- Observe the cron-job execution
- Test the deletion-log UI (last 10 / all)
- Mobile view of the ModerationPage
### Edge Cases
- Group is deleted exactly on day 7
- Group is approved 5 minutes before the cron job
- Very large groups (100+ images)
- File-system errors during deletion
- Concurrent approval during a cleanup run
## 📊 Success Metrics
### Technical
- ✅ Cron job runs daily without errors
- ✅ Average cleanup time < 5 seconds
- ✅ No errors in the production logs
- ✅ 100% data deletion (DB + files)
### Functional
- ✅ Countdown in the ModerationPage is always correct
- ✅ Approved groups are never deleted
- ✅ Deletion log is complete and correct
- ✅ Admin can inspect the history (last 10 / all)
### Security & Compliance
- ✅ No personal data in deletion_log
- ✅ All user data is removed after 7 days
- ✅ Physical files are deleted (not just DB entries)
## 🚀 Deployment Checklist
- [x] Database migrations executed (approved column + deletion_log table)
- [x] `node-cron` v3.0.3 dependency is installed
- [x] CLEANUP_DAYS constant defined (7 days, hardcoded in GroupCleanupService)
- [x] Scheduler starts automatically with the server
- [x] Cleanup logging is enabled (console.log in service and scheduler)
- [x] nginx configuration updated (dev + prod, /api/admin without basic auth)
- [x] Docker images rebuilt for the nginx changes
- [x] Admin access to the DeletionLogSection tested (integrated into /moderation)
- [x] Test tools provided (tests/test-cleanup.sh + tests/TESTING-CLEANUP.md)
## 🔮 Future Enhancements
### Phase 2 (Nice-to-Have)
- [ ] Admin can manually extend the deletion deadline (+7 days button)
- [ ] Email notification 24h before automatic deletion
- [ ] Batch-delete preview: "These groups will be deleted tomorrow"
- [ ] Configurable deletion deadline via ENV (currently hardcoded to 7 days)
- [ ] Export of the deletion log as CSV
- [ ] Soft-delete option (mark groups instead of deleting immediately)
### Phase 3 (Advanced Features)
- [ ] Automatic archiving instead of deletion (ZIP download)
- [ ] Restore function (from the archive)
- [ ] Dashboard with cleanup statistics (Chart.js)
- [ ] Whitelist for certain uploaders (never delete automatically)
## 📚 Technology Stack
### Backend
- **Cron job**: `node-cron` v3.0.3 ✅
- **Database**: SQLite3 (existing) ✅
- **File operations**: `fs.promises` (Node.js native) ✅
- **Image processing**: Sharp (for preview deletion) ✅
### Frontend
- **UI framework**: Material-UI (MUI) v5 ✅
- **Date handling**: JavaScript Date + Intl.DateTimeFormat ✅
- **Notifications**: SweetAlert2 (newly added) ✅
- **Icons**: MUI Icons (DeleteIcon, InfoIcon, StorageIcon) ✅
## 🎯 Timeline
| Phase | Tasks | Estimated Time | Actual Time | Status |
|-------|----------|-----------------|-------------------|--------|
| Phase 1 | Database Schema | 2-3 hours | ~2 hours | ✅ Done |
| Phase 2 | Backend Core Logic | 6-8 hours | ~7 hours | ✅ Done |
| Phase 3 | Backend API | 2-3 hours | ~2 hours | ✅ Done |
| Phase 4 | Frontend UI | 4-6 hours | ~5 hours | ✅ Done |
| Phase 5 | Testing & Docs | 3-4 hours | ~4 hours | ✅ Done |
| **Bug Fixes** | **2 critical bugs** | - | ~1 hour | ✅ Done |
| **Total** | **11 tasks** | **17-24 hours** | **~21 hours** | ✅ **Complete** |
**Implementation order**: Phase 1 → 2 → 3 → 4 → 5 (sequential) ✅
### Key Milestones
- ✅ **November 8, 2025**: Feature plan created, branch `feature/DeleteUnprovedGroups` created
- ✅ **November 8, 2025**: Backend fully implemented (services, repositories, scheduler)
- ✅ **November 8, 2025**: Frontend UI finished (countdown, DeletionLogSection)
- ✅ **November 8, 2025**: Bug fixes (singleton import, nginx auth)
- ✅ **November 8, 2025**: Testing completed, documentation finalized
---
**Status**: ✅ **COMPLETE** (ready for merge)
**Branch**: `feature/DeleteUnprovedGroups`
**Created**: November 8, 2025
**Completed**: November 8, 2025
**Commits**: ~15 commits
**Files created**: 7 (services, repositories, components, test tools)
**Files modified**: 10 (DatabaseManager, repositories, routes, pages, config)
### Completion Checklist
- [x] All 11 tasks implemented and tested
- [x] 2 critical bugs fixed
- [x] Test tools created (bash + Node.js + documentation)
- [x] Documentation updated (README, CHANGELOG, TODO, FEATURE_PLAN)
- [x] Test files organized (tests/ directory)
- [x] Ready for code review and merge into main

View File

@ -1,730 +0,0 @@
# Feature Plan: Image Description (Bildbeschreibung)
**Branch:** `feature/ImageDescription`
**Date:** November 7, 2025
**Status:** ✅ Implemented (ready for testing)
---
## 📋 Overview
Implementation of an individual description for every uploaded image. Users can optionally add a short text (max. 200 characters) per image, which is then shown in the slideshow and on the GroupsOverviewPage.
### Main Changes
1. **Button change:** The "Sort" button is replaced by an "Edit" button in `ImageGalleryCard.js`
2. **Edit mode:** Enables text fields below every image for entering image descriptions
3. **Database extension:** New field `image_description` in the `images` table
4. **Backend API:** New endpoints for saving and fetching image descriptions
5. **Frontend integration:** Edit mode in `MultiUploadPage.js` and `ModerationGroupImagesPage.js`
6. **Slideshow integration:** Display of the image descriptions during the slideshow
7. **Groups overview:** Display of image descriptions in the public overview
---
## 🎯 Requirements
### Functional Requirements
- ✅ Users can enter an optional description for every image
- ✅ Maximum length: 200 characters
- ✅ Edit button replaces the Sort button in preview mode
- ✅ Edit mode shows text fields below all images at the same time
- ✅ Pre-filled with the original file name as placeholder
- ✅ Works in `MultiUploadPage.js` and `ModerationGroupImagesPage.js`
- ✅ NOT in `GroupsOverviewPage.js` (display only, no editing)
- ✅ Image descriptions are shown in the slideshow
- ✅ Image descriptions are shown on the GroupsOverviewPage
- ✅ Stored in the database (persistent storage)
### Non-Functional Requirements
- ✅ Performance: no noticeable delay when loading images
- ✅ UX: intuitive handling of the edit mode
- ✅ Mobile optimization: touch-friendly text fields
- ✅ Validation: client- and server-side length limit
- ✅ Backward compatibility: existing images without a description keep working
## 🗄️ Database Schema Changes
### Migration: `004_add_image_description.sql`
```sql
-- Add image_description column to images table
ALTER TABLE images ADD COLUMN image_description TEXT;
-- Create index for better performance when filtering/searching
CREATE INDEX IF NOT EXISTS idx_images_description ON images(image_description);
```
### Updated Schema (`images` table)
```sql
CREATE TABLE images (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id TEXT NOT NULL,
file_name TEXT NOT NULL,
original_name TEXT NOT NULL,
file_path TEXT NOT NULL,
upload_order INTEGER NOT NULL,
file_size INTEGER,
mime_type TEXT,
preview_path TEXT,
image_description TEXT, -- ← NEW: optional, max. 200 characters
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE CASCADE
);
```
---
## 🔧 Backend Changes
### 1. Database Migration
**File:** `backend/src/database/migrations/004_add_image_description.sql`
- Adds the `image_description` column to the `images` table
- Creates an index for performance optimization
### 2. API Extensions
#### A) Extend Existing Endpoints
**`POST /api/upload/batch`**
- Accepts an `imageDescriptions` array in the request body
- Format: `[{ fileName: string, description: string }, ...]`
- Stores the descriptions at upload time
**`GET /api/groups/:groupId`**
- Returns `image_description` for every image
- Backward compatible (null/empty for old images)
**`GET /moderation/groups/:groupId`**
- Returns `image_description` for every image
#### B) New Endpoints
**`PATCH /groups/:groupId/images/:imageId`**
- Updates `image_description` for a single image
- Payload: `{ image_description: string }`
- Validation: max. 200 characters
**`PATCH /groups/:groupId/images/batch-description`**
- Updates several image descriptions at once
- Payload: `{ descriptions: [{ imageId: number, description: string }, ...] }`
- More efficient than individual requests (see the route sketch below)
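A sketch of the batch endpoint as an Express route; the 200-character limit comes from the validation rule above, and the repository wiring is an assumption:
```javascript
// routes/groups.js (sketch)
const express = require('express');
const router = express.Router();
const groupRepository = require('../repositories/GroupRepository'); // wiring assumed

router.patch('/:groupId/images/batch-description', async (req, res) => {
  const { descriptions } = req.body;
  if (!Array.isArray(descriptions)) {
    return res.status(400).json({ error: 'descriptions must be an array' });
  }
  // Server-side length validation (max. 200 characters per description)
  if (descriptions.some(d => (d.description || '').length > 200)) {
    return res.status(400).json({ error: 'description exceeds 200 characters' });
  }
  await groupRepository.updateBatchImageDescriptions(req.params.groupId, descriptions);
  res.json({ success: true, updated: descriptions.length });
});

module.exports = router;
```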
### 3. Repository & Service Layer
**`GroupRepository.js`** - new methods:
```javascript
async updateImageDescription(imageId, description)
async updateBatchImageDescriptions(groupId, descriptions)
async getImagesByGroupId(groupId) // extended to include image_description
```
**`DatabaseManager.js`** - query extensions:
- `SELECT` queries include `image_description`
- `INSERT` queries accept `image_description`
- `UPDATE` queries for description updates
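The single-image update could look like this, again assuming a promisified `run()` helper (a sketch, not the actual implementation):
```javascript
// GroupRepository.js (sketch)
async updateImageDescription(imageId, description) {
  if (description && description.length > 200) {
    throw new Error('image_description must not exceed 200 characters');
  }
  // Empty descriptions are stored as NULL for backward compatibility
  return this.db.run(
    'UPDATE images SET image_description = ? WHERE id = ?',
    [description || null, imageId]
  );
}
```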
---
## 🎨 Frontend Changes
### 1. ImageGalleryCard.js
**Change:** "Sort" button → "Edit" button
```javascript
// OLD (lines 174-179):
<button
className="btn btn-secondary btn-sm"
disabled
>
Sort
</button>
// NEW:
<button
className="btn btn-primary btn-sm"
onClick={() => onEditMode?.(true)}
>
✏️ Edit
</button>
```
**New props:**
- `onEditMode`: callback to enable edit mode
- `isEditMode`: boolean for the current edit state
- `imageDescription`: string with the image description
- `onDescriptionChange`: callback for description changes
**Edit-mode UI:**
```jsx
{isEditMode && mode === 'preview' && (
<div className="image-description-edit">
<textarea
value={imageDescription || ''}
onChange={(e) => onDescriptionChange(itemId, e.target.value)}
placeholder={`Beschreibung für ${originalName}...`}
maxLength={200}
rows={2}
/>
<span className="char-counter">
{(imageDescription || '').length}/200
</span>
</div>
)}
```
### 2. ImageGallery.js
**New props to pass through:**
- `isEditMode`
- `onEditMode`
- `onDescriptionChange`
**Pass-through to ImageGalleryCard:**
```javascript
<ImageGalleryCard
// ... existing props
isEditMode={isEditMode}
onEditMode={onEditMode}
imageDescription={item.imageDescription}
onDescriptionChange={onDescriptionChange}
/>
```
### 3. MultiUploadPage.js
**State extension:**
```javascript
const [isEditMode, setIsEditMode] = useState(false);
const [imageDescriptions, setImageDescriptions] = useState({});
```
**New handlers:**
```javascript
const handleDescriptionChange = (imageId, description) => {
setImageDescriptions(prev => ({
...prev,
[imageId]: description.slice(0, 200) // enforce max length
}));
};
const handleEditMode = (enabled) => {
setIsEditMode(enabled);
};
```
**Extend the upload logic:**
```javascript
// In handleUpload()
const descriptionsArray = selectedImages.map(img => ({
fileName: img.name,
description: imageDescriptions[img.id] || ''
}));
const result = await uploadImageBatch(
filesToUpload,
metadata,
descriptionsArray // ← NEW
);
```
**Edit-mode toggle:**
```jsx
{isEditMode && (
<Box sx={{ textAlign: 'center', my: 2 }}>
<Button
variant="contained"
color="success"
onClick={() => setIsEditMode(false)}
>
✅ Beschreibungen fertig
</Button>
</Box>
)}
```
### 4. ModerationGroupImagesPage.js
**State extension:**
```javascript
const [isEditMode, setIsEditMode] = useState(false);
const [imageDescriptions, setImageDescriptions] = useState({});
```
**Extend the load function:**
```javascript
// In loadGroup()
if (data.images && data.images.length > 0) {
const mapped = data.images.map(img => ({ ... }));
setSelectedImages(mapped);
// Initialize descriptions from server
const descriptions = {};
data.images.forEach(img => {
if (img.imageDescription) {
descriptions[img.id] = img.imageDescription;
}
});
setImageDescriptions(descriptions);
}
```
**Extend the save function:**
```javascript
const handleSaveDescriptions = async () => {
try {
const descriptions = Object.entries(imageDescriptions).map(([id, desc]) => ({
imageId: parseInt(id),
description: desc
}));
const res = await fetch(`/groups/${groupId}/images/batch-description`, {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ descriptions })
});
if (!res.ok) throw new Error('Speichern fehlgeschlagen');
Swal.fire({
icon: 'success',
title: 'Beschreibungen gespeichert',
timer: 1500
});
setIsEditMode(false);
} catch (e) {
Swal.fire({ icon: 'error', title: 'Fehler beim Speichern' });
}
};
```
### 5. SlideshowPage.js
**Extend the image display:**
```jsx
{currentImage && (
<div className="slideshow-container">
<img
src={getImageSrc(currentImage)}
alt={currentImage.originalName}
className="slideshow-image"
/>
{/* NEW: show the description */}
{currentImage.imageDescription && (
<div className="slideshow-description">
<p>{currentImage.imageDescription}</p>
</div>
)}
{/* Existing metadata */}
<div className="slideshow-metadata">
<h2>{currentGroup.title}</h2>
{currentGroup.name && <p>{currentGroup.name}</p>}
</div>
</div>
)}
```
**CSS for the slideshow description:**
```css
.slideshow-description {
position: absolute;
bottom: 100px;
left: 50%;
transform: translateX(-50%);
background: rgba(0, 0, 0, 0.7);
padding: 15px 30px;
border-radius: 8px;
max-width: 80%;
text-align: center;
}
.slideshow-description p {
color: white;
font-size: 18px;
margin: 0;
line-height: 1.4;
}
```
### 6. GroupsOverviewPage.js
**No edit function, display only**
When displaying individual images of a group:
```jsx
{image.imageDescription && (
<p className="image-description">
{image.imageDescription}
</p>
)}
```
### 7. CSS Extensions
**`ImageGallery.css`**
```css
/* Edit-Mode Textarea Styles */
.image-description-edit {
padding: 10px;
border-top: 1px solid #e0e0e0;
}
.image-description-edit textarea {
width: 100%;
padding: 8px;
border: 1px solid #ccc;
border-radius: 4px;
font-family: 'Roboto', sans-serif;
font-size: 14px;
resize: vertical;
min-height: 50px;
}
.image-description-edit textarea:focus {
outline: none;
border-color: #4CAF50;
box-shadow: 0 0 5px rgba(76, 175, 80, 0.3);
}
.image-description-edit .char-counter {
display: block;
text-align: right;
font-size: 12px;
color: #666;
margin-top: 4px;
}
.image-description-edit .char-counter.limit-reached {
color: #f44336;
font-weight: bold;
}
/* Edit Button Styles */
.btn-edit-mode {
background: linear-gradient(45deg, #2196F3 30%, #1976D2 90%);
color: white;
border: none;
}
.btn-edit-mode:hover {
background: linear-gradient(45deg, #1976D2 30%, #2196F3 90%);
}
/* Display-only description */
.image-description-display {
padding: 10px;
border-top: 1px solid #e0e0e0;
font-size: 14px;
color: #555;
font-style: italic;
}
```
---
## 🔄 Utils & Services
### batchUpload.js
**Signature change:**
```javascript
// OLD:
export const uploadImageBatch = async (files, metadata) => { ... }
// NEW:
export const uploadImageBatch = async (files, metadata, descriptions = []) => {
const formData = new FormData();
// Files
files.forEach(file => {
formData.append('images', file);
});
// Metadata
formData.append('metadata', JSON.stringify(metadata));
// Descriptions (NEW)
formData.append('descriptions', JSON.stringify(descriptions));
// ... rest of upload logic
}
```
---
## 🧪 Testing Strategy
### Backend Tests
1. **Database migration**
- ✅ Migration runs without errors
- ✅ Column is added correctly
- ✅ Existing data stays intact
2. **API endpoints**
- ✅ Upload with descriptions works
- ✅ Upload without descriptions works (backward compatibility)
- ✅ Validation: max. 200 characters
- ✅ Batch update works
- ✅ Single update works
- ✅ GET requests return the descriptions
### Frontend Tests
1. **MultiUploadPage.js**
- ✅ Edit button enables edit mode
- ✅ Text fields appear below all images
- ✅ Character counter works
- ✅ Max length is enforced
- ✅ Upload sends the descriptions along
- ✅ Placeholder shows the file name
2. **ModerationGroupImagesPage.js**
- ✅ Descriptions are loaded from the server
- ✅ Edit mode can be enabled
- ✅ Saving works
- ✅ Optimistic updates work
3. **SlideshowPage.js**
- ✅ Descriptions are displayed
- ✅ Layout is responsive
- ✅ No description = no element
4. **GroupsOverviewPage.js**
- ✅ Descriptions are displayed (if present)
- ✅ No edit button visible
### Manual Tests
- [x] Upload several images with different descriptions
- [x] Upload without descriptions
- [x] Edit existing groups
- [x] Test the slideshow with descriptions
- [ ] Test the mobile view
- [x] Test performance with many images
---
## 📝 Implementation TODO
### Phase 1: Backend Foundation ✅
- [x] **Task 1.1:** Create the database migration
- [x] Create `004_add_image_description.sql`
- [x] Register the migration in `DatabaseManager.js`
- [x] Test against the local DB
- [x] **Task 1.2:** Extend the repository layer
- [x] `updateImageDescription()` in `GroupRepository.js`
- [x] `updateBatchImageDescriptions()` in `GroupRepository.js`
- [x] Extend `getImagesByGroupId()` for `image_description`
- [x] **Task 1.3:** Implement the API routes
- [x] `PATCH /groups/:groupId/images/:imageId` in `routes/groups.js`
- [x] `PATCH /groups/:groupId/images/batch-description` in `routes/groups.js`
- [x] Add validation (max. 200 characters)
- [x] Extend the GET routes (return image_description)
- [x] **Task 1.4:** Extend the upload route
- [x] The `batchUpload.js` route accepts a `descriptions` parameter
- [x] Store the descriptions at upload time
- [x] Test backward compatibility
### Phase 2: Frontend Core Components ✅
- [x] **Task 2.1:** Adapt ImageGalleryCard.js
- [x] Replace the "Sort" button with the "Edit" button
- [x] Implement the edit-mode UI (textarea)
- [x] Add props: `isEditMode`, `onEditMode`, `imageDescription`, `onDescriptionChange`
- [x] Implement the character counter
- [x] Validation (max. 200 characters)
- [x] **Task 2.2:** Extend ImageGallery.js
- [x] Pass the new props through
- [x] Edit-mode state management
- [x] **Task 2.3:** Add the CSS styles
- [x] Extend `ImageGallery.css`
- [x] Textarea styles
- [x] Character-counter styles
- [x] Edit-button styles
- [x] Mobile optimization
### Phase 3: Upload Flow Integration ✅
- [x] **Task 3.1:** Extend MultiUploadPage.js
- [x] Add state for the edit mode
- [x] Add state for the descriptions
- [x] Implement the edit-mode handler
- [x] Implement the description-change handler
- [x] Extend the upload logic (send the descriptions along)
- [x] Add the edit-mode toggle UI
- [x] **Task 3.2:** Extend batchUpload.js
- [x] Adapt the function signature (descriptions parameter)
- [x] Extend the FormData with the descriptions
- [x] Error handling
### Phase 4: Moderation Integration ✅
- [x] **Task 4.1:** Extend ModerationGroupImagesPage.js
- [x] Add state for the edit mode
- [x] Add state for the descriptions
- [x] Extend `loadGroup()` (load the descriptions)
- [x] Implement the description-change handler
- [x] Implement `handleSaveDescriptions()`
- [x] Add the edit-mode toggle UI
- [x] Optimistic updates
### Phase 5: Slideshow Integration ✅
- [x] **Task 5.1:** Extend SlideshowPage.js
- [x] Implement the description-display UI
- [x] CSS for the slideshow description
- [x] Responsive design
- [x] Conditional rendering (only when a description exists)
- [x] **Task 5.2:** Slideshow styles
- [x] `.slideshow-description` CSS
- [x] Overlay styling
- [x] Animation (optional)
- [x] Mobile view
### Phase 6: Groups Overview Integration ✅
- [ ] **Task 6.1:** Extend GroupsOverviewPage.js
- [ ] Description display in the image details
- [ ] CSS for the description display
- [ ] No edit button (display only)
### Phase 7: Testing & Refinement ✅
- [ ] **Task 7.1:** Backend tests
- [ ] Test the API endpoints
- [ ] Test the database migration
- [ ] Test the validation
- [ ] **Task 7.2:** Frontend tests
- [ ] Test the upload flow
- [ ] Test the edit flow
- [ ] Test the slideshow
- [ ] Mobile tests
- [ ] **Task 7.3:** Integration tests
- [ ] End-to-end upload-to-slideshow test
- [ ] Edit-in-moderation test
- [ ] Performance test with many images
- [ ] **Task 7.4:** Bug fixes & polish
- [ ] UI/UX improvements
- [ ] Refine the error handling
- [ ] Code cleanup
### Phase 8: Documentation & Deployment ✅
- [ ] **Task 8.1:** Update README.md
- [ ] Document the feature
- [ ] Document the API endpoints
- [ ] Add screenshots (optional)
- [ ] **Task 8.2:** Extend CHANGELOG.md
- [ ] Add the feature
- [ ] List breaking changes (if any)
- [ ] **Task 8.3:** Migration guide
- [ ] Deployment instructions
- [ ] Database migration steps
- [ ] **Task 8.4:** Final testing
- [ ] Test the production build
- [ ] Test the Docker containers
- [ ] Test backup & restore
---
## 🚀 Deployment Plan
### Steps for the Production Deployment
1. **Run the database migration**
```bash
# Create a backup
docker cp image-uploader-backend:/usr/src/app/src/data/db/image_uploader.db ./backup_before_migration.db
# Run the migration
docker exec -it image-uploader-backend npm run migrate
```
2. **Redeploy the backend**
```bash
docker compose -f docker/prod/docker-compose.yml up -d --build backend
```
3. **Redeploy the frontend**
```bash
docker compose -f docker/prod/docker-compose.yml up -d --build frontend
```
4. **Test**
- Upload with descriptions
- Slideshow with descriptions
- Moderation edit mode
### Rollback Plan
If problems occur:
```bash
# Stop the containers
docker compose -f docker/prod/docker-compose.yml down
# Restore the database backup
docker cp ./backup_before_migration.db image-uploader-backend:/usr/src/app/src/data/db/image_uploader.db
# Checkout the previous version
git checkout main
# Rebuild & restart
docker compose -f docker/prod/docker-compose.yml up -d --build
```
---
## 📊 Success Criteria
- ✅ Users can add image descriptions at upload time
- ✅ Users can edit image descriptions in moderation
- ✅ Descriptions are shown in the slideshow
- ✅ Descriptions are shown in the groups overview
- ✅ No performance regressions
- ✅ Mobile friendly
- ✅ Backward compatible with existing uploads
- ✅ No breaking changes for existing features
---
## 🔮 Future Extensions (Optional)
- 🔄 Rich-text editor for descriptions (Markdown?)
- 🔄 Multilingual descriptions
- 🔄 Auto-completion based on image metadata (EXIF)
- 🔄 AI-generated descriptions (image recognition)
- 🔄 Searching for images via their description
- 🔄 Bulk edit for descriptions (regex replace, etc.)
- 🔄 Export of descriptions as CSV/JSON
---
## 📝 Notes
- Use the original file names as placeholders for better UX
- Validate on both the client and the server side
- Edit mode should be clearly recognizable visually
- Saving should be optimistic (immediate feedback)
- Error handling with user-friendly messages
---
**Created by:** GitHub Copilot
**Last updated:** November 7, 2025

View File

@ -1,343 +0,0 @@
# Feature Plan: Slideshow Optimization - Preload & Sorting
**Status**: ✅ Done
**Branch**: `feature/PreloadImage`
**Created**: November 9, 2025
**Completed**: November 9, 2025
## Problem Analysis
### 1. Duplicate Image Display (Main Problem)
**Symptoms**:
- The slideshow frequently shows the same image multiple times within a group
- Sometimes it only jumps briefly to the actual next image
- Occurs in all groups with more than one image (typically 3-11 images)
- The problem is consistently reproducible
**Root cause**:
The current implementation loads images on demand without preloading. On an automatic transition:
1. `setFadeOut(true)` is set → the image fades out
2. After 500ms, `currentImageIndex` is updated
3. Only NOW does the browser request the new image
4. While it loads, the old image stays visible or there is flickering
5. The browser shows cached/partially loaded versions multiple times
**Code location**: `SlideshowPage.js`, lines 68-82 (the `nextImage` function)
### 2. Random Group Order
**Current behavior**:
- Groups are shuffled randomly on every load (`sort(() => Math.random() - 0.5)`)
- No chronological or logical order
**Desired behavior**:
- Sort by `year` (primary, ascending)
- For equal years: by `upload_date` (secondary, ascending)
- Images within a group: by `upload_order` (as before)
## Solution Approach
### A. Image Preloading (Priority: HIGH)
#### Strategy
Implement intelligent preloading for the next 2-3 images:
- **Current image**: displayed
- **Next image**: fully preloaded (highest priority)
- **Image after next**: loaded in the background (low priority)
#### Technical Implementation
1. **Create a preload-manager hook** (`useImagePreloader.js`)
```javascript
- Manages a preload queue
- Uses Image() objects for preloading
- Caches successfully loaded images
- Handles errors gracefully
```
2. **Predictive loading**
```javascript
- Compute the next 2-3 images in the sequence
- Take group transitions into account
- Load images asynchronously in the background
```
3. **State management**
```javascript
- New state: preloadedImages (Set or Map)
- Before fade-out, check whether the next image is loaded
- Delay the transition if needed (max. 1s fallback)
```
#### Benefits
- ✅ Eliminates loading latency
- ✅ Guarantees seamless transitions
- ✅ Improved user experience
- ✅ No more browser flickering
### B. Chronological Sorting (Priority: MEDIUM)
#### Backend Changes
**File**: `backend/src/routes/groups.js` (or the corresponding endpoint)
**Current query**:
```sql
SELECT * FROM groups WHERE ... ORDER BY created_at DESC
```
**New query**:
```sql
SELECT * FROM groups
WHERE approved = 1
ORDER BY year ASC, upload_date ASC
```
#### Frontend Changes
**File**: `frontend/src/Components/Pages/SlideshowPage.js`
**Current code (lines 43-44)**:
```javascript
// Shuffle the groups randomly
const shuffledGroups = [...groupsData.groups].sort(() => Math.random() - 0.5);
```
**New code**:
```javascript
// Sort chronologically: year (ascending) → upload date (ascending)
const sortedGroups = [...groupsData.groups].sort((a, b) => {
if (a.year !== b.year) {
return a.year - b.year; // older years first
}
// Same year: sort by upload date
return new Date(a.uploadDate) - new Date(b.uploadDate);
});
```
#### Benefits
- ✅ Chronological storytelling
- ✅ Uses existing database fields (no schema change needed)
- ✅ Simple implementation
- ✅ Consistent order across all sessions
## Implementation Plan
### Phase 1: Image Preloading (Main Focus)
**Estimated duration**: 3-4 hours
1. **Create the custom hook** (60 min)
- [ ] Create `frontend/src/hooks/useImagePreloader.js`
- [ ] Implement the preload logic
- [ ] Add error handling
2. **SlideshowPage integration** (90 min)
- [ ] Import the hook in SlideshowPage
- [ ] Update the preload queue before every transition
- [ ] State management for loaded images
- [ ] Fallback for slow connections
3. **Testing** (60 min)
- [ ] Manual tests with various group sizes
- [ ] Network-throttling tests (Chrome DevTools)
- [ ] Failure-case tests (404, CORS errors)
### Phase 2: Chronological Sorting (30 min)
**Estimated duration**: 30 minutes
1. **Frontend sorting** (15 min)
- [ ] Replace the shuffle code with the sort logic
- [ ] Testing with multiple years
2. **Backend optimization** (15 min, optional)
- [ ] Adjust the SQL query for a sorted response
- [ ] Check for an index on `(year, upload_date)`
### Phase 3: Testing & Documentation (30 min)
**Estimated duration**: 30 minutes
1. **Integration tests**
- [ ] End-to-end slideshow run
- [ ] Collect performance metrics
- [ ] Browser compatibility (Chrome, Firefox, Safari)
2. **Documentation**
- [ ] Update README.md (mention the preload feature)
- [ ] Code comments for the complex preload logic
- [ ] Create a CHANGELOG.md entry
## Technical Details
### Preload Algorithm (Pseudo-Code)
```javascript
function calculateNextImages(currentGroupIndex, currentImageIndex, allGroups, count = 2) {
const result = [];
let groupIdx = currentGroupIndex;
let imgIdx = currentImageIndex + 1;
while (result.length < count) {
const group = allGroups[groupIdx];
if (imgIdx < group.images.length) {
// Next image in the current group
result.push({ group: groupIdx, image: imgIdx, src: getImageSrc(group.images[imgIdx]) });
imgIdx++;
} else {
// Next group (sorted, not random)
groupIdx = (groupIdx + 1) % allGroups.length;
imgIdx = 0;
if (groupIdx === currentGroupIndex && imgIdx === currentImageIndex) {
break; // all images visited
}
}
}
return result;
}
```
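A condensed sketch of what `useImagePreloader.js` could look like; the cache size (10) and timeout (3s) follow the values in the implementation result below, while the exact API is illustrative:
```javascript
// useImagePreloader.js (sketch)
import { useRef, useCallback } from 'react';

const MAX_CACHE = 10;      // LRU limit from the implementation notes
const LOAD_TIMEOUT = 3000; // 3s timeout for slow connections

export function useImagePreloader() {
  const cache = useRef(new Map()); // URL -> HTMLImageElement

  const preload = useCallback((src) => {
    if (cache.current.has(src)) return Promise.resolve(cache.current.get(src));
    return new Promise((resolve) => {
      const img = new Image();
      // Fallback: resolve with null so the slideshow can continue anyway
      const timer = setTimeout(() => resolve(null), LOAD_TIMEOUT);
      img.onload = () => {
        clearTimeout(timer);
        // LRU eviction: drop the oldest entry when the cache is full
        if (cache.current.size >= MAX_CACHE) {
          const oldest = cache.current.keys().next().value;
          cache.current.delete(oldest);
        }
        cache.current.set(src, img);
        resolve(img);
      };
      img.onerror = () => { clearTimeout(timer); resolve(null); };
      img.src = src;
    });
  }, []);

  const isLoaded = useCallback((src) => cache.current.has(src), []);
  return { preload, isLoaded };
}
```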
### Data Structure (Preload State)
```javascript
{
preloadedImages: Map<string, HTMLImageElement>, // URL → image object
preloadQueue: Array<{groupIdx, imageIdx, src}>,
isPreloading: boolean
}
```
## Expected Improvements
### Performance
- **Before the change**:
- Load time per image: 200-1500ms (depending on size)
- Visible delay on every transition
- Flickering/duplicate display
- **After the change**:
- Load time: 0ms (already loaded)
- Seamless transitions
- No more duplicate display
### User Experience
- ✅ No visible loading times
- ✅ Smooth transitions
- ✅ Chronological story (oldest → newest images)
- ✅ Professional look & feel
## Risks & Mitigations
### Risk 1: Memory Usage
**Problem**: Many preloaded images occupy RAM
**Mitigation**:
- Load only 2-3 images at a time
- Remove old images from the cache (LRU strategy)
- Max. 20MB preload limit
### Risk 2: Slow Connections
**Problem**: Preloading takes longer than the display time
**Mitigation**:
- 1s timeout per preload
- Fallback to the old behavior (without preload)
- User feedback (small loading indicator)
### Risk 3: Browser Compatibility
**Problem**: Image preloading is supported differently across browsers
**Mitigation**:
- Standard HTML5 Image() API (universally supported)
- Add feature detection
- Graceful degradation
## Testing Checklist
- [ ] Slideshow with a 3-image group (typical use case)
- [ ] Slideshow with an 11-image group (worst case)
- [ ] Slideshow with only 1 group (edge case)
- [ ] Group transition (last image → first image of the next group)
- [ ] Chronological sorting (multiple years)
- [ ] Slow-3G network throttling
- [ ] Keyboard navigation (Space, arrow keys)
- [ ] Browser DevTools: no errors in the console
- [ ] Memory-leak test (10+ minutes of slideshow)
## Alternatives (Rejected)
### Alternative 1: Link Prefetch
```html
<link rel="prefetch" href="/image.jpg">
```
**Drawback**: No JavaScript control over the loading state
### Alternative 2: Service Worker Caching
**Drawback**: Too complex for the current requirement, too much overhead
### Alternative 3: CSS background-image with Preload
**Drawback**: Less control, no image events
## Success Criteria
**Must-have**:
1. No more duplicate image display
2. Seamless transitions between images
3. Chronological group sorting
**Nice-to-have**:
1. < 50ms switch time between images
2. < 10MB additional memory footprint
3. Browser console stays error-free
## Rollout Plan
1. **Development** (feature/PreloadImage branch)
- Implementation & testing
- Code review
2. **Staging/Testing**
- Deployment to the dev environment
- Manual QA tests
- Performance measurements
3. **Production**
- Merge into `main`
- Deployment via Docker Compose
- Monitoring for 24h
## Open Questions
- [ ] Should a user preference for sorting (chronological vs. random) be added later?
- [ ] Should the preload count (2-3 images) be configurable?
- [ ] Should a debug mode for the preload status be built in?
---
## ✅ Implementation Result
### Successfully Implemented
- ✅ Custom hook `useImagePreloader.js` with intelligent preload logic
- ✅ Integration into `SlideshowPage.js`
- ✅ Chronological sorting (year → upload date)
- ✅ Sequential group transition (no more randomness)
- ✅ Cache management (max. 10 images, LRU strategy)
- ✅ Timeout handling (3s for slow connections)
- ✅ Debug logging in development mode
### Testing Results
- ✅ No more duplicate images
- ✅ No visible loading times
- ✅ Seamless transitions between images
- ✅ Works on slow connections (tested on the production server)
- ✅ Chronological order works correctly
### Performance Improvement
- **Before the change**: 200-1500ms load time, flickering, duplicate display
- **After the change**: 0ms load time, no delays, professional transitions
### Changed Files
1. `/frontend/src/hooks/useImagePreloader.js` (NEW)
2. `/frontend/src/Components/Pages/SlideshowPage.js` (MODIFIED)
3. `/README.md` (UPDATED)
4. `/CHANGELOG.md` (UPDATED)
5. `/docs/FEATURE_PLAN-preload-image.md` (UPDATED)
---
**Created by**: GitHub Copilot
**Review by**: @lotzm
**Status**: Feature successfully implemented and tested ✅

View File

@ -1,109 +0,0 @@
# Feature Plan: Server-Side Sessions for the Admin API
## Context
- Goal: move the admin API to server-side sessions with CSRF protection; keep secrets exclusively on the backend.
- The initial admin is created via a setup wizard in the admin UI; additional admins are managed in a new `admin_users` table.
- Session cookies (HttpOnly, Secure, SameSite=Strict) and an SQLite-based session store.
## Assumptions & Constraints
1. The backend keeps using SQLite; session data lives in a separate file (`sessions.sqlite`).
2. The session secret (`ADMIN_SESSION_SECRET`) stays an ENV variable in the backend.
3. The frontend authenticates exclusively via session cookie + `X-CSRF-Token`; no bearer tokens in the browser.
4. The initial admin is created via the UI wizard; if the wizard is unavailable, a fallback CLI/script exists.
5. `AUTHENTICATION.md` and `frontend/MIGRATION-GUIDE.md` are the authoritative documents for the auth flow.
## Task Backlog
- [x] **Session store & configuration**
- Install and configure `express-session` + `connect-sqlite3`.
- Store the session file e.g. under `backend/src/data/sessions.sqlite`.
- Set the cookie flags according to prod/dev.
- [x] **Admin user database**
- Migration/schema for `admin_users` incl. password hash (bcrypt) and meta fields.
- Seed/wizard mechanism for the first admin.
- [x] **Login/logout endpoints**
- `POST /auth/login` checks credentials against the DB.
- `POST /auth/logout` destroys the session + cookie.
- On login, set `req.session.user` + `req.session.csrfToken`.
- [x] **CSRF token & middleware**
- `GET /auth/csrf-token` (authenticated sessions only).
- Middleware `requireCsrf` for mutating admin/system routes.
- [x] **Initial admin setup flow (backend)**
- `GET /auth/setup/status` returns `{ needsSetup: boolean }` based on the admin count.
- `POST /auth/setup/initial-admin` allows creating the first admin (only while `needsSetup` is true).
- The UI wizard fetches the status, shows the form, and optionally logs the admin in directly.
## Endpoint Concept
- `POST /auth/setup/initial-admin`
- Body: `{ username, password }` (optionally validate `passwordConfirm` on the UI side).
- Backend: verifies that no active admins exist, creates the user (bcrypt hash), and marks the session as logged in.
- Response: `{ success: true, csrfToken }` and sets the session cookie.
- `GET /auth/setup/status`
- Response: `{ needsSetup: true|false }` plus optionally `hasSession: boolean`.
- `POST /auth/login`
- Body: `{ username, password }`.
- Checks: user is active, password is correct (bcrypt.compare), optional rate limit.
- Side effects: `req.session.user = { id, username, role }`, `req.session.csrfToken = randomHex(32)`.
- Response: `{ success: true, csrfToken }` (the cookie is set automatically).
- `POST /auth/logout`
- Destroys the session, clears the cookie, returns 204/200.
- `GET /auth/csrf-token`
- Requires a valid session, returns `{ csrfToken }` (regenerates when missing or `?refresh=true`).
- Middleware `requireAdminSession`
- Checks `req.session.user?.role === 'admin'` and optionally the `is_active` flag.
- Responds with `403` + `{ reason: 'SESSION_REQUIRED' }` when absent.
- Middleware `requireCsrf`
- Applies to `POST/PUT/PATCH/DELETE` on `/api/admin/*` & `/api/system/*`.
- Expects the `x-csrf-token` header; compares it with `req.session.csrfToken`.
- On failure: `403` + `{ reason: 'CSRF_INVALID' }`.
- Frontend flow (see the middleware sketch after this list)
- After login/setup: stores the returned token in state.
- For all admin requests: `fetch(url, { method, credentials: 'include', headers: { 'X-CSRF-Token': token } })`.
- On 401/403 due to the session: the UI shows the login.
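A minimal sketch of the two middlewares, following the field names and status codes above:
```javascript
// middleware/adminSession.js (sketch)
function requireAdminSession(req, res, next) {
  if (req.session?.user?.role !== 'admin') {
    return res.status(403).json({ reason: 'SESSION_REQUIRED' });
  }
  next();
}

function requireCsrf(req, res, next) {
  // Only mutating methods need the CSRF check
  const mutating = ['POST', 'PUT', 'PATCH', 'DELETE'].includes(req.method);
  if (mutating && req.headers['x-csrf-token'] !== req.session?.csrfToken) {
    return res.status(403).json({ reason: 'CSRF_INVALID' });
  }
  next();
}

module.exports = { requireAdminSession, requireCsrf };
```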
- [x] **Admin auth middleware**
- `/api/admin/*` + `/api/system/*` check the session (`req.session.user.role === 'admin'`).
- Remove the old token-based checks.
- Complemented by the new public route `GET /api/social-media/platforms` (upload/management), while admin-specific platform functions keep running via `/api/admin/social-media/*`.
- [x] **Frontend admin flow**
- Rework `adminApi.js` to use `credentials: 'include'` + `X-CSRF-Token`.
- Login UI + setup wizard for the initial admin.
- State handling for the CSRF token (hook/context) via `AdminSessionProvider` + `AdminSessionGate`.
- [x] **Secret handling & Docker**
- `docker/prod/docker-compose.yml` and the backend configs only pass `ADMIN_SESSION_SECRET`.
- The frontend build contains no sensitive `.env` files; the public env-config delivers only non-sensitive values.
- The deployment documentation (`env.sh`, README) describes the allowed variables.
- [x] **Tests & CI**
- Jest suites cover the login/CSRF/admin endpoints (`tests/api/*`, `tests/unit/auth.test.js`).
- A secret grep + Docker build step ensure the frontend bundle contains no admin secrets.
- [x] **Multiple admins & CLI tooling**
- `POST /api/admin/users` + `AdminAuthService.createAdminUser` for additional admins.
- `scripts/create_admin_user.sh` automates the initial setup & further accounts via the API.
- [x] **Enforce password rotation**
- `requires_password_change` flag, `POST /auth/change-password`, frontend form blocks the dashboard until the password is changed.
- [ ] **Key-leak response plan**
- Document or link a guide (scans, history cleanup, rotation).
- [x] **Documentation**
- `AUTHENTICATION.md`, `README(.dev)`, and `frontend/MIGRATION-GUIDE.md` describe the session/CSRF flow.
- Feature-request references point to the new session implementation.
- [ ] **Communication & review**
- Add references to the relevant patches/PRs.
- Document reviewer notes (test plan, rollout).

File diff suppressed because it is too large

View File

@ -1,385 +0,0 @@
# Feature Plan: Telegram Bot Integration
## Übersicht
Implementierung eines Telegram Bots zur automatischen Benachrichtigung der Werkstatt-Gruppe über wichtige Events im Image Uploader System.
**Basis:** [FEATURE_REQUEST-telegram.md](./FEATURE_REQUEST-telegram.md)
---
## Phasen-Aufteilung
### Phase 1: Bot Setup & Standalone-Test
**Ziel:** Telegram Bot erstellen und isoliert testen (ohne App-Integration)
**Status:** 🟢 Abgeschlossen
**Deliverables:**
- [x] Telegram Bot via BotFather erstellt
- [x] Bot zu Test-Telegram-Gruppe hinzugefügt
- [x] Chat-ID ermittelt
- [x] `scripts/telegram-test.js` - Standalone Test-Script
- [x] `scripts/README.telegram.md` - Setup-Anleitung
- [x] `.env.telegram` - Template für Bot-Credentials
- [x] Erfolgreiche Test-Nachricht versendet
**Akzeptanzkriterium:**
✅ Bot sendet erfolgreich Nachricht an Testgruppe
---
### Phase 2: Backend Service Integration
**Goal:** Integrate the TelegramNotificationService into the backend
**Status:** 🟢 Completed
**Dependencies:** Phase 1 completed
**Deliverables:**
- [x] `backend/src/services/TelegramNotificationService.js`
- [x] ENV variables in `docker/dev/backend/config/.env`
- [x] Unit tests for the service
- [x] Docker dev environment works
---
### Phase 3: Upload Notifications
**Goal:** Automatic notifications on new uploads
**Status:** 🟢 Completed
**Dependencies:** Phase 2 completed
**Deliverables:**
- [x] Integration in `routes/batchUpload.js`
- [x] `sendUploadNotification()` method
- [x] Formatting with icons/emojis
- [x] Integration tests
---
### Phase 4: User Change Notifications
**Goal:** Notifications on consent changes & deletions
**Status:** 🟢 Completed
**Dependencies:** Phase 3 completed
**Deliverables:**
- [x] Integration in `routes/management.js` (PUT/DELETE)
- [x] `sendConsentChangeNotification()` method
- [x] `sendGroupDeletedNotification()` method
- [x] Integration tests
---
### Phase 5: Daily Deletion Warnings
**Goal:** Cron job for upcoming deletions
**Status:** 🟢 Completed
**Dependencies:** Phase 4 completed
**Deliverables:**
- [x] Cron job setup (node-cron)
- [x] `sendDeletionWarning()` method
- [x] Admin route for manual trigger (`POST /api/admin/telegram/warning`)
- [x] SchedulerService integration (09:00 daily)
- [x] Docker ENV variables configured
- [x] README.md update
---
### Phase 6: Production Deployment
**Goal:** Rollout to the production environment + ENV simplification
**Status:** 🟢 Completed
**Dependencies:** Phases 1-5 completed + tested
**Deliverables:**
- [x] Simplify the ENV structure (too many .env files!)
- [x] Configure production ENV variables in docker/prod/.env
- [x] Extend docker/prod/docker-compose.yml with the Telegram ENV
- [x] Consent-change bug fix (platform_name instead of name)
- [x] README.md update with ENV structure documentation
- ⏭️ Add the bot to the real workshop group (optional, as needed)
- ⏭️ Production testing (optional, as needed)
**ENV simplification (completed):**
```
Before: 16 .env files with redundant configuration
After: 2 central .env files
✅ docker/dev/.env (all dev secrets)
✅ docker/prod/.env (all prod secrets)
✅ docker-compose.yml uses ${VAR} placeholders
✅ Mounted .env files removed (they were being overwritten)
✅ All ENV variables in docker-compose environment
```
---
## Phase 1 - Detailed Plan
### 1. Preparation (5 min)
**On the Windows 11 host system:**
```bash
# Check the Node.js version
node --version # should be >= 18.x
# Open the project directory
cd /home/lotzm/gitea.hobbyhimmel/Project-Image-Uploader/scripts
# Install dependencies (locally)
npm init -y # if there is no package.json yet
npm install node-telegram-bot-api dotenv
```
### 2. Create the Telegram bot (10 min)
**Guide:** See `scripts/README.telegram.md`
**Steps:**
1. Open Telegram (Windows 11 app)
2. Search for [@BotFather](https://t.me/botfather)
3. `/newbot` command
4. Bot name: "Werkstatt Image Uploader Bot"
5. Username: `werkstatt_uploader_bot` (or whatever is available)
6. **Copy the token** → `.env.telegram`
### 3. Create a test group & add the bot (5 min)
**Steps:**
1. Create a new Telegram group: "Werkstatt Upload Bot Test"
2. Add the bot to the group: @werkstatt_uploader_bot
3. **Determine the chat ID** (see README.telegram.md)
4. Save the chat ID → `.env.telegram`
### 4. Create the test script (10 min)
**File:** `scripts/telegram-test.js`
**Features (see the sketch below):**
- Loads `.env.telegram`
- Validates the bot token
- Sends a test message
- Error handling
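A minimal sketch of what `telegram-test.js` could look like (node-telegram-bot-api and dotenv as installed above; the exact message and log text are illustrative):

```js
// scripts/telegram-test.js - standalone connectivity test (sketch)
require('dotenv').config({ path: '.env.telegram' });
const TelegramBot = require('node-telegram-bot-api');

const { TELEGRAM_BOT_TOKEN, TELEGRAM_CHAT_ID } = process.env;

async function main() {
  if (!TELEGRAM_BOT_TOKEN || !TELEGRAM_CHAT_ID) {
    throw new Error('TELEGRAM_BOT_TOKEN / TELEGRAM_CHAT_ID missing in .env.telegram');
  }
  const bot = new TelegramBot(TELEGRAM_BOT_TOKEN, { polling: false });
  const me = await bot.getMe(); // validates the token against the Telegram API
  console.log(`✅ Connected as @${me.username}`);
  console.log(`📤 Sending test message to chat ${TELEGRAM_CHAT_ID}...`);
  await bot.sendMessage(TELEGRAM_CHAT_ID, '🤖 Telegram bot test: ✅ connected!');
  console.log('✅ Message sent successfully!');
}

main().catch((err) => {
  console.error('❌ Telegram test failed:', err.message);
  process.exit(1);
});
```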
### 5. Send the first message (2 min)
```bash
cd scripts
node telegram-test.js
```
**Expected output:**
```
✅ Telegram bot connected successfully!
Bot name: Werkstatt Image Uploader Bot
Bot username: @werkstatt_uploader_bot
📤 Sending test message to chat -1001234567890...
✅ Message sent successfully!
```
**In the Telegram group:**
```
🤖 Telegram Bot Test
This is a test message from the Werkstatt Image Uploader Bot.
Status: ✅ connected successfully!
Timestamp: 2025-11-29 14:23:45
```
---
## File Structure (Phase 1)
```
scripts/
├── README.telegram.md # Setup guide (NEW)
├── telegram-test.js # Test script (NEW)
├── .env.telegram.example # ENV template (NEW)
├── .env.telegram # Real credentials (gitignored, NEW)
├── package.json # Local dependencies (NEW)
└── node_modules/ # npm packages (gitignored)
```
---
## Environment Variables (Phase 1)
**File:** `scripts/.env.telegram`
```bash
# Telegram Bot Configuration
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=-1001234567890
```
---
## Dependencies (Phase 1)
**Package:** `scripts/package.json`
```json
{
"name": "telegram-test-scripts",
"version": "1.0.0",
"description": "Standalone Telegram Bot Testing",
"main": "telegram-test.js",
"scripts": {
"test": "node telegram-test.js"
},
"dependencies": {
"node-telegram-bot-api": "^0.66.0",
"dotenv": "^16.3.1"
}
}
```
---
## Security (Phase 1)
**Add to `.gitignore`:**
```
# Telegram Credentials
scripts/.env.telegram
scripts/node_modules/
scripts/package-lock.json
```
**Important:**
- ❌ Never commit `.env.telegram`!
- ✅ Only commit `.env.telegram.example` (without real tokens)
- ✅ Regenerate the bot token if it is ever accidentally exposed
---
## Testing Checklist (Phase 1)
- [x] Node.js version >= 18.x
- [x] Telegram app installed (Windows 11)
- [x] Bot created via BotFather
- [x] Bot token stored in `.env.telegram`
- [x] Test group created
- [x] Bot added to the group
- [x] Chat ID determined
- [x] Chat ID stored in `.env.telegram`
- [x] Privacy mode disabled
- [x] Test message sent successfully
- [ ] `npm install` succeeds
- [ ] `node telegram-test.js` runs without errors
- [ ] Test message received in the Telegram group
- [ ] Formatting (emojis, line breaks) correct
---
## Troubleshooting (Phase 1)
### Problem: "Unauthorized (401)"
**Solution:** Wrong bot token → check with BotFather, fix `.env.telegram`
### Problem: "Bad Request: chat not found"
**Solution:** Wrong chat ID → send a new message in the group, determine the chat ID again
### Problem: "ETELEGRAM: 403 Forbidden"
**Solution:** The bot was removed from the group → add the bot to the group again
### Problem: "Module not found: node-telegram-bot-api"
**Solution:**
```bash
cd scripts
npm install
```
---
## Next Steps (after Phase 1)
1. **Code review:** `scripts/telegram-test.js`
2. **Documentation review:** `scripts/README.telegram.md`
3. **Commit:**
```bash
git add scripts/
git commit -m "feat: Add Telegram Bot standalone test (Phase 1)"
```
4. **Start Phase 2:** plan the backend integration
---
## Time Estimate
| Phase | Effort | Description |
|-------|--------|-------------|
| **Phase 1** | **~45 min** | Bot setup + standalone test |
| Phase 2 | ~2h | Backend service |
| Phase 3 | ~2h | Upload notifications |
| Phase 4 | ~2h | Change notifications |
| Phase 5 | ~2h | Cron job |
| Phase 6 | ~1h | Production deployment |
| **Total** | **~9-10h** | Full integration |
---
## Conventional Commits (from Phase 1 on)
**Phase 1:**
```bash
git commit -m "feat: Add Telegram Bot test script"
git commit -m "docs: Add Telegram Bot setup guide"
git commit -m "chore: Add node-telegram-bot-api dependency to scripts"
```
**Phase 2:**
```bash
git commit -m "feat: Add TelegramNotificationService"
git commit -m "test: Add TelegramNotificationService unit tests"
```
**Phase 3-6:**
```bash
git commit -m "feat: Add upload notification to Telegram"
git commit -m "feat: Add consent change notifications"
git commit -m "feat: Add daily deletion warnings cron job"
git commit -m "docs: Update README with Telegram features"
```
---
## Release Planning
**Phase 1:** No release (internal tests)
**Phase 6 (final):**
- **Version:** `2.0.0` (major release)
- **Branch:** `feature/telegram-notifications`
- **Release command:** `npm run release:major`
---
## Status Tracking
**Last updated:** 2025-11-30
| Phase | Status | Date |
|-------|--------|------|
| Phase 1 | 🟢 Completed | 2025-11-29 |
| Phase 2 | 🟢 Completed | 2025-11-29 |
| Phase 3 | 🟢 Completed | 2025-11-29 |
| Phase 4 | 🟢 Completed | 2025-11-30 |
| Phase 5 | 🟢 Completed | 2025-11-30 |
| Phase 6 | 🟡 ENV simplified | 2025-11-30 |
**Legend:**
- 🟢 Completed
- 🟡 In progress
- 🔴 Blocked
- ⚪ Pending

View File

@ -1,65 +0,0 @@
# Complete Consent Change History
**Current state**: basic tracking already exists
- ✅ `group_social_media_consents`: current status + timestamps (`consent_timestamp`, `revoked_timestamp`)
- ✅ `management_audit_log`: general actions without detailed old/new values
- ✅ Sufficient for basic GDPR compliance
**What's missing**: a dedicated change history with old→new values
**Planned implementation** (in `\Project-Image-Uploader\backend\src\database\migrations`):
```sql
-- Migration 008: Consent Change History
CREATE TABLE consent_change_history (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id TEXT NOT NULL,
consent_type TEXT NOT NULL, -- 'workshop' | 'social_media'
platform_id INTEGER, -- NULL for workshop
-- Old/new values as JSON
old_value TEXT, -- {"consented": true, "revoked": false}
new_value TEXT NOT NULL, -- {"consented": true, "revoked": true}
-- Metadata
changed_by TEXT NOT NULL, -- 'user_management' | 'admin_moderation'
change_reason TEXT,
ip_address TEXT,
management_token TEXT, -- masked
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
```
**Benefits**:
- ✅ Full legal compliance with a change history
- ✅ Debugging: "who changed what, and when?"
- ✅ User transparency in the management portal
- ✅ Admin audit trail for traceability
**Implementation effort**: ~1-2 days
1. Create migration 008
2. Implement `ConsentHistoryRepository` (see the sketch after this list)
3. Hooks in the consent-change routes (management.js, admin.js)
4. Frontend `ConsentHistoryViewer` component (timeline view)
   1. to be shown in the management portal under "Consent history"
5. Admin API: `GET /api/admin/consent-history?groupId=xxx`
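A sketch of what the repository from step 2 might look like (assuming a better-sqlite3-style database handle; method names are illustrative, the columns follow migration 008 above):

```js
// backend/src/repositories/ConsentHistoryRepository.js (sketch)
class ConsentHistoryRepository {
  constructor(db) {
    this.db = db; // assumed: a better-sqlite3 connection
  }

  // Records one change; oldValue may be null for the initial consent.
  record({ groupId, consentType, platformId = null, oldValue, newValue,
           changedBy, changeReason = null, ipAddress = null, maskedToken = null }) {
    this.db.prepare(`
      INSERT INTO consent_change_history
        (group_id, consent_type, platform_id, old_value, new_value,
         changed_by, change_reason, ip_address, management_token)
      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
    `).run(
      groupId, consentType, platformId,
      oldValue ? JSON.stringify(oldValue) : null,
      JSON.stringify(newValue),
      changedBy, changeReason, ipAddress, maskedToken
    );
  }

  // Timeline for the management portal and the admin API.
  findByGroup(groupId) {
    return this.db.prepare(
      'SELECT * FROM consent_change_history WHERE group_id = ? ORDER BY created_at DESC'
    ).all(groupId);
  }
}

module.exports = ConsentHistoryRepository;
```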
---
# 📚 References
- [GDPR Art. 7 - Conditions for Consent](https://dsgvo-gesetz.de/art-7-dsgvo/)
- [Material-UI Checkbox Documentation](https://mui.com/material-ui/react-checkbox/)
- [SQLite Foreign Key Support](https://www.sqlite.org/foreignkeys.html)
- [UUID v4 Best Practices](https://www.rfc-editor.org/rfc/rfc4122)
---
**Created**: November 9, 2025
**Last updated**: November 15, 2025, 18:20
**Status**: ✅ Phase 1: 100% complete | ✅ Phase 2 backend: 100% complete | ✅ Phase 2 frontend: 100% complete
**Production-ready**: yes (all features implemented and tested)

View File

@ -1,286 +0,0 @@
<!--
Feature Request: Public vs. Intranet UI/API by Subdomain
File created: 22.11.2025
-->
# Feature Request: Frontend/Public API per Subdomain
## Summary
Depending on which subdomain the app is accessed through, different functions should be available:
- `deinprojekt.meindomain.de` (externally reachable): only uploads and editing via the assigned link (management UUID) are possible. No moderation, groups, or slideshow functions are visible or usable.
- `deinprojekt.lan.meindomain.de` (intranet): full functionality (slideshow, groups view, moderation, admin endpoints) plus the complete navigation/buttons in the frontend.
The application runs in Docker on an Ubuntu server behind an `nginx-proxy-manager`. The domain `*.lan.meindomain.de` is only reachable internally and has a valid SSL certificate for the intranet (Let's Encrypt DNS challenge).
## Goals
- Security: never make the slideshow, group view, or admin/moderation functionality reachable via the public subdomain.
- UX: in the public (external) context, only the upload experience is visible and usable. (The upload page is already designed so that no menu items are visible.)
## Proposal - Technical Implementation (high level)
1) Host detection
   - Backend and frontend detect the subdomain via the `Host` or `X-Forwarded-Host` header. Alternatively via a runtime `env-config.js` (`/public/env-config.js`) that the backend fills dynamically per request.
2) Backend: gatekeeping middleware (see the sketch after this list)
   - A new middleware (e.g. `middlewares/hostGate.js`) checks `req.hostname` / `x-forwarded-host`.
   - If the request comes from the public subdomain: restrict the available API routes - only `/api/upload` and `/api/manage/:token` (or the minimal endpoints) are allowed.
   - If the request comes from the internal subdomain: full route registration (admin, system, migration, etc.).
   - Edge cases: an allowlist for individual external hosts (e.g. an externally hosted public frontend), so that an externally hosted UI may use the public API.
3) Frontend: menu and feature visibility
   - On load, the frontend checks `window.location.host` (or the runtime `env-config.js`).
   - On a public host: reduced navigation - only upload, possibly help/imprint. All buttons/links to moderation/slideshow/groups are hidden/disabled.
   - On an internal host: complete navigation and admin functions visible.
4) Reverse proxy / nginx
   - `nginx-proxy-manager` must forward the Host header (standard). Important: `proxy_set_header Host $host;` so the backend can detect the host.
   - SSL: already in place for both host namespaces (external + lan).
   - Alternative: host the public frontend externally -> configure the proxy/firewall so that only the allowed API routes are reachable (or make the API server reachable only via VPN for `*.lan.`).
5) CORS & security
   - Public API: a narrow CORS rule (only the allowed public frontend origin, if hosted externally).
   - Stricter rate limiting for public uploads.
   - Upload validation (file type, size); consider a scanner/virus check.
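A minimal sketch of the gatekeeping middleware from item 2 (the `PUBLIC_HOSTS`/`INTERNAL_HOSTS` env variables are the ones proposed for the MVP below; the allowed path prefixes are illustrative):

```js
// backend/src/middlewares/hostGate.js (sketch)
const PUBLIC_HOSTS = (process.env.PUBLIC_HOSTS || '').split(',').filter(Boolean);
const INTERNAL_HOSTS = (process.env.INTERNAL_HOSTS || '').split(',').filter(Boolean);

// Route prefixes that stay reachable on the public subdomain.
const PUBLIC_PREFIXES = ['/api/upload', '/api/upload-batch', '/api/manage/'];

function hostGate(req, res, next) {
  const host = (req.headers['x-forwarded-host'] || req.headers.host || '').split(':')[0];
  req.isPublic = PUBLIC_HOSTS.includes(host);
  req.isInternal = INTERNAL_HOSTS.includes(host);

  if (req.isInternal) return next(); // full route set
  if (req.isPublic) {
    const allowed = PUBLIC_PREFIXES.some((p) => req.path.startsWith(p));
    if (allowed) return next();
    return res.status(403).json({ error: 'Not available on this host' });
  }
  // Unknown host: deny by default (defense in depth).
  return res.status(403).json({ error: 'Unknown host' });
}

module.exports = hostGate;
```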
## Acceptance Criteria (measurable / testable)
- On `deinprojekt.meindomain.de` only upload and manage-by-UUID are reachable - calls to `/api/admin/*` receive 403/404.
- On `deinprojekt.lan.meindomain.de` the admin and moderation endpoints are reachable and the navigation shows all menu items.
- Unit/integration tests: the backend middleware has tests for the host variants (public/internal/external frontend)
- End-to-end: a test upload via the public host works; the moderation API does not work from there.
## Scope of Changes (concrete files/locations)
- Backend
  - `src/middlewares/hostGate.js` (new) - contains the host check and policy
  - `src/server.js` / `src/index.js` - register or mount routes only if the host policy allows it; or apply the middleware per route
  - `src/middlewares/auth.js` - adapt if needed to combine host checks with auth
- Frontend
  - `public/env-config.js` (runtime) or `env-config.js` (build time) - flag `PUBLIC_MODE=true/false` or `APP_ALLOWED_FEATURES`
  - Menu components (e.g. `Components/Pages/*`) - feature visibility based on `window.location.host` or the runtime config
- Infrastructure
  - Check the `docker/dev/*` nginx-proxy-manager configuration: Host header, certificates
## Security Considerations
- Admin endpoints must be blocked server-side - never hide them only in the frontend UI.
- Public uploads: individual rate limits, captcha options, virus/malware scanning.
- Logging & audit: uploads from outside should get special logging flags (IP, host, origin headers).
## Questions / Points to Clarify - with Answers from the Project Documentation
After reviewing `README.md`, `README.dev.md`, `CHANGELOG.md`, and `AUTHENTICATION.md`, many open points were answered directly and the remaining decisions reduced to the essentials. Below: each question, what the documentation already settles, and the remaining confirmations you should briefly give.
1. Domains - exact hosts
   - Documentation: placeholder hosts were used as examples (e.g. `deinprojekt.meindomain.de` and `deinprojekt.lan.meindomain.de`).
   - Recommendation / please confirm: name the real subdomains your deployment will use. An example answer is enough: `deinprojekt.hobbyhimmel.de` and `deinprojekt.lan.hobbyhimmel.de`.
2. Host check vs. additional checks
   - Docs: the admin API is already protected server-side by the admin login. The management API uses UUID tokens with rate limits (10 req/h) and brute-force protection.
   - Recommendation: primarily check the host header (`Host` / `X-Forwarded-Host`) (simple, reliable). For admin APIs I additionally recommend combining bearer token + host check (defense in depth). Please confirm whether an IP whitelist is wanted.
3. Hosting the public frontend externally -> no longer needed
4. Management UUID (editing from outside)
   - Docs: management tokens stay valid until the group is deleted; tokens are URL-based and rate-limited (10 req/h). The README shows the management portal is meant for self-service, and no additional network restriction is planned.
   - Conclusion: editing via UUID is technically allowed and intended by the project. If you want to keep that, no further technical change is needed. If you want a TTL for tokens, please say so.
5. Admin APIs: host-only or additionally bearer token?
   - ~~Docs: admin APIs are already protected by a bearer token (`ADMIN_API_KEY`).~~
   - ~~Recommendation: keep the bearer token as the main protection and add a host restriction (admin reachable internally only) for extra security. Please confirm.~~
6. Rate limits / quotas for public uploads
   - Docs: management has 10 req/h per IP; upload rate limits for public uploads are not specified concretely.
   - Proposal: default `20 uploads / IP / hour` for the public subdomain + stricter throttling for unauthenticated bursts. Confirm or name a different limit.
7. Logging / monitoring
   - Docs: comprehensive audit logs exist (`management_audit_log`, `deletion_log`).
   - Recommendation: add a field/label `source_host` or `source_type` for public vs. internal uploads for better filtering. Confirm? Agreed!
8. Assets / CDN
   - Docs: images and previews are stored locally; no CDN flow exists. You clarified: images are internal and only accessible via UUID links.
   - Decision: the default stays internal delivery. External CDN delivery is possible but would have to be implemented separately for privacy/access-control reasons (signed URLs, TTL, ACLs). No action needed if you keep internal delivery.
---
Please confirm the few remaining open points (hosts, public group view yes/no (see below), management UUID externally yes/no (confirmed as yes), desired rate limit, additional admin restrictions, logging label). The documentation has been adapted as far as possible (see the changes further below). Once you confirm these points, I will create the concrete patches (middleware, small frontend visibility change, tests, README extension).
## Proposal: Minimal First Iteration (MVP)
1. Implement `middlewares/hostGate.js` with a simple host allowlist (`PUBLIC_HOSTS`, `INTERNAL_HOSTS` in env).
2. In the backend: check on every request whether the route is allowed - for public hosts only upload & manage-by-UUID.
3. In the frontend: check `window.location.host` on load and reduce the navigation accordingly (see the sketch after this list).
4. Documentation: extend `README.dev.md` (API section) and `frontend/MIGRATION-GUIDE.md` with notes.
5. Tests: unit test for the middleware + integration test (supertest) of the host policies.
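A sketch of the frontend check from step 3 (the `window._env_` lookup matches the runtime env-config.js pattern used in this project; the nav-item shape is illustrative):

```js
// frontend/src/utils/featureVisibility.js (sketch)
const PUBLIC_HOSTS = (window._env_?.PUBLIC_HOSTS || '').split(',').filter(Boolean);

export function isPublicHost() {
  const host = window.location.host.split(':')[0];
  // Unknown hosts are treated as public, i.e. the UI fails closed.
  return PUBLIC_HOSTS.length === 0 || PUBLIC_HOSTS.includes(host);
}

// Navigation entries; `publicVisible` marks what the public subdomain may show.
const NAV_ITEMS = [
  { label: 'Upload', path: '/', publicVisible: true },
  { label: 'Slideshow', path: '/slideshow', publicVisible: false },
  { label: 'Groups', path: '/groups', publicVisible: false },
  { label: 'Moderation', path: '/moderation', publicVisible: false },
];

export function visibleNavItems() {
  return isPublicHost() ? NAV_ITEMS.filter((i) => i.publicVisible) : NAV_ITEMS;
}
```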
## Next Steps / Implementation Checklist
- [ ] Confirm the domain/host list (answer the questions)
- [ ] Define env variables and add them to `docker/*/config/.env`
- [ ] Backend: implement the middleware + tests
- [ ] Frontend: runtime `env-config.js` + implement menu visibility
- [ ] Proxy/nginx: check the configuration & test SSL
- [ ] Review & security check (rate limits, CORS, logging)
- [ ] Deployment plan / rollout notes
---
Please answer the questions in the section "Questions / Points to Clarify" above. Once I have your answers, I can create a concrete implementation patch (backend middleware + frontend visibility) and add tests.
## FAQ: Separating Frontend and Backend - User Question & Answer
**Question:**
How can I physically separate frontend and backend without security-relevant assets (images) being theoretically reachable from outside? Right now frontend and backend run in one Docker Compose. Is that sensible? What is the best way to separate them?
**Answer (summarized & practice-oriented):**
- Development: for local development a shared `docker-compose` environment (frontend + backend) makes sense - HMR, easy debugging, and tests.
- Production: separation recommended. Goal: run the frontend publicly (static/CDN), restrict the backend tightly, and never make assets (originals/previews) public without authorization.
Architecture options (short):
- Single server with an `nginx` reverse proxy (recommended, simple): `nginx` routes `/` to the static frontend and `/api/*` to the backend; the backend is not directly public.
- Frontend external (CDN/Netlify) + backend internal behind a reverse proxy: the frontend scales, the backend is reachable only via the proxy; for images use presigned URLs or a backend proxy.
- Full separation (backend only in a private network / VPN): very secure but more complex (VPN/VPC). Admin/moderation reachable only via LAN/VPN.
How to keep images safe (patterns):
- Pattern A - backend-proxied images: store images only on the backend; access only through backend endpoints (check management UUID / host), no direct public URL.
- Pattern B - private object storage + presigned URLs: use a private S3/bucket; generate short-lived presigned URLs after auth/access checks; combine with a CDN (origin access).
- Pattern C - CDN + signed URLs for previews: only previews via CDN with signed URLs; originals stay internal or are likewise presigned.
Concrete measures (actionable now):
1. Introduce a reverse proxy (`nginx`): two vhosts (public / internal). On the public vhost block `/api/admin` and `/groups`; allow only `/api/upload` and `/api/manage/:token`.
2. Docker networks: backend in an `internal_net` without published ports; the `reverse-proxy` has public ports and connects to `backend` internally.
3. HostGate middleware (Express): set `req.isPublic` via `Host`/`X-Forwarded-Host`, block routes (admin/groups) server-side for public hosts - defense in depth.
4. CORS & rate limit: restrict CORS to allowed origins, stricter rate limits for public uploads (e.g. 20 uploads/IP/hour), and consider a captcha (see the sketch after this list).
5. Logging: extend the audit logs (e.g. `source_host`) to distinguish public from internal uploads.
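A sketch of the public upload limiter from measure 4 (assuming the express-rate-limit package; the window and message are the proposed defaults, not settled values):

```js
// backend/src/middlewares/publicUploadLimiter.js (sketch)
const rateLimit = require('express-rate-limit');

// 20 uploads per IP per hour, applied only when hostGate marked the request public.
const publicUploadLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 20,
  standardHeaders: true, // send RateLimit-* response headers
  legacyHeaders: false,
  skip: (req) => !req.isPublic, // internal hosts keep the normal limits
  message: { error: 'Upload limit reached, please try again later' },
});

module.exports = publicUploadLimiter;

// usage: app.post('/api/upload-batch', publicUploadLimiter, /* ...handlers */)
```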
Example nginx snippet (conceptual):
```
server {
server_name public.example.com;
location / { root /usr/share/nginx/html; try_files $uri /index.html; }
location ~ ^/api/(upload|manage) { proxy_pass http://backend:5001; proxy_set_header Host $host; }
location ~ ^/api/admin { return 403; }
location ~ ^/groups { return 403; }
}
server {
server_name internal.lan.example.com;
location / { proxy_pass http://frontend:3000; }
location /api/ { proxy_pass http://backend:5001; }
}
```
Docker Compose note (prod): do not publish the backend via `ports:`; the `reverse-proxy` exposes ports 80/443 and connects internally:
```
services:
reverse-proxy:
ports: ["80:80","443:443"]
networks: [public_net, internal_net]
backend:
networks: [internal_net]
# no ports
networks:
internal_net:
internal: true
```
Checklist (quick to implement)
- [ ] Add the `nginx` reverse proxy
- [ ] Remove the backend ports (internal access only)
- [ ] vhost rules: public vs internal (block admin on public)
- [ ] Implement the `hostGate` middleware (Express)
- [ ] Configure CORS, rate limit, captcha
- [ ] Add `source_host` to the audit log
If you like, as a next step I will implement the `hostGate` middleware, example nginx vhosts, and the `docker-compose` changes as a patch here in the repository. Just tell me which hostnames to use (placeholders are OK) and whether you want to keep the frontend on the same host or host it externally.
## Technical Details & Prerequisites
Below is a deeper technical summary of the architecture options, prerequisites, and security measures - as a basis for deciding how to implement the subdomain-dependent behavior.
1) Goal and security principle
- Purpose: subdomain-dependent UX and API accessibility (public: upload + manage-UUID; intranet: full feature set).
- Security principle: never rely on frontend controls alone - server-side blocking is mandatory.
2) Infrastructure variants
- Variant A - single host + `nginx` reverse proxy: simple, controllable; the proxy terminates TLS and routes to the backend; the backend is not directly reachable.
- Variant B - frontend external (CDN/Netlify) + backend internal: scalable; deliver images via presigned URLs or a backend proxy.
- Variant C - backend only in a private network/VPN: highest security, more operational complexity.
3) Host detection and defense in depth
- The proxy must forward `Host` or `X-Forwarded-Host` (`proxy_set_header Host $host`).
- Implement a server-side `hostGate` middleware that sets `req.isPublic` / `req.isInternal`, and additionally protect critical routes (admin, groups listing, cleanup).
- Combine proxy rules + middleware + bearer token (for admin) + firewall for maximum security.
4) Storing and delivering images
- Default: images stored locally in `backend/src/data/images` and `.../previews`.
- Pattern A (recommended for small operations): backend-proxied images - no direct public paths; the backend controls access (UUID, host).
- Pattern B (scaling): private object store (S3-compatible) + presigned URLs (short TTL) + CDN (origin access) for performance.
- Previews can be handled less restrictively (short TTLs / signed URLs); originals should be more restricted.
5) Management UUID (risks & options)
- Current: UUIDs stay valid until deletion (convenience). Risk: a leak means access.
- Options: keep as-is + rate limit/audit (recommended), or TTL/rotation/opt-in password protection (more secure, worse UX).
6) CORS, CSRF, TLS
- CORS: list only allowed origins (public frontend origin(s) and/or intranet origin).
- CSRF: a REST API with a token/UUID in the path is less CSRF-prone; still proceed security-consciously.
- TLS/HSTS: mandatory for public hosts.
7) Rate limiting and abuse protection
- Strictly limit public uploads (e.g. 20 uploads/IP/hour) + file size limits + MIME/EXIF/type validation.
- Optional captcha for uploads under high traffic / suspected abuse.
8) Logging and monitoring
- Add `source_host`/`source_type` and `request_origin` to the audit logs.
- Metrics for rate-limit hits, 403s, upload errors, health checks; optionally Sentry/Prometheus.
9) Docker/deployment recommendations
- Dev: `docker/dev/docker-compose.yml` with exposed ports is OK.
- Prod: do not expose the backend on any host port (remove `ports:`). The reverse proxy exposes 80/443; the backend stays in the internal Docker network only.
- Use an `internal` Docker network or separate networks for public/private.
10) nginx-proxy-manager notes
- Configure proxy hosts for public vs. internal with the appropriate headers (`Host`, `X-Forwarded-*`).
- Use proxy rules to block `/api/admin` & `/groups` on the public host; test with `curl`.
11) Deployment prerequisites (concrete)
- DNS for both subdomains (public + intranet) in place.
- TLS for public (Let's Encrypt) and an internal certificate for the LAN.
- `ADMIN_API_KEY` set securely, `PUBLIC_HOSTS` / `INTERNAL_HOSTS` configured.
- Backup/restore policy for the DB & images.
12) Decision questions / trade-offs
- UUID permanent vs. TTL: UX vs. security.
- Previews via CDN vs. backend proxy: performance vs. control.
- Frontend local behind nginx vs. externally hosted: simplicity vs. scalability.
13) Verifiable acceptance criteria (examples)
- `curl -I https://public.example.com/api/admin/deletion-log` → 403
- Upload via the public host works (POST to `/api/upload`); the moderation API returns 403.
- The backend is not reachable externally via `docker ps`/published ports.
14) Proposed next non-implementation steps
- Finalize: public/internal hostnames; management UUID policy (TTL yes/no); rate-limit value; CDN for previews yes/no.
- I can then produce a security design document (nginx rules, env vars, checklist) or deliver implementation patches directly.
Please briefly confirm the four decisive points so I can finalize the design:
- Hosts: which subdomains should be used? (e.g. `deinprojekt.meindomain.de`, `deinprojekt.lan.meindomain.de`)
- Management UUID allowed externally? (yes/no)
- Rate limit for public uploads? (e.g. `20 uploads/IP/hour`)
- Previews via CDN allowed? (yes/no)
---
Please tell me whether I should adopt this detailed section as-is (it has already been inserted into this feature request). If yes, I can additionally provide a short security design PDF or concrete nginx snippet files on request.
<!-- End of feature request -->

View File

@ -1,55 +0,0 @@
````markdown
# Feature Request: Auto-Generated OpenAPI / Swagger Spec
**Summary**: automatic generation of an OpenAPI (Swagger) spec from the Express backend (dev-only), so that new routes appear in the API documentation immediately and without manual upkeep.
**Motivation / benefit**:
- Single source of truth: the routes in the code are the only source; no manual openapi.json maintenance.
- Developer-friendly: new route → documentation visible on the next server start.
- Quick overview for QA and API reviewers via the Swagger UI.
- Reduces drift between implementation and documentation.
---
## Current State
- The backend is Express-based; routes are defined statically in `backend/src/routes`.
- `express-fileupload` is used as middleware.
- No automatic OpenAPI spec exists at the moment.
---
## Feature Requirements
1. On local dev start, an OpenAPI spec should be generated (e.g. with `swagger-autogen` or `express-oas-generator`).
2. A Swagger UI (dev only) should be reachable at `/api/docs/` and display the generated spec (see the sketch after this list).
3. Automatically detected endpoints must be visible; for complex cases (multipart uploads) simple hints / overrides should be possible.
4. No breaking changes to the production start behavior: autogen only when `NODE_ENV !== 'production'`, or via an opt-in env var.
5. The generated spec should optionally be writable into the repo (e.g. `docs/openapi.json`) for CI/review.
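A dev-only sketch of how this could be wired up (assuming the `swagger-autogen` and `swagger-ui-express` packages; file paths and the title are illustrative):

```js
// Dev-only OpenAPI generation + UI (sketch); runs once at server start.
if (process.env.NODE_ENV !== 'production') {
  const path = require('path');
  const swaggerAutogen = require('swagger-autogen')();

  const outputFile = './docs/openapi.json';
  const endpointFiles = ['./src/server.js']; // entrypoint that mounts all routes

  swaggerAutogen(outputFile, endpointFiles, {
    info: { title: 'Project-Image-Uploader API', version: 'dev' },
  }).then(() => {
    const swaggerUi = require('swagger-ui-express');
    const spec = require(path.resolve(outputFile));
    app.use('/api/docs', swaggerUi.serve, swaggerUi.setup(spec));
  });
}
```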
---
## Minimal Scope (MVP)
- Dev-only integration: generator installed and run once at startup.
- Swagger UI at `/api/docs/` with the generated spec.
- A short note in `README.dev.md` on how to open the docs locally.
---
## Acceptance Criteria
- [ ] The Swagger UI shows all endpoints detected by default.
- [ ] Upload endpoints appear (path detected). If the requestBody is missing, a clear note is documented.
- [ ] The feature can be disabled in `production`.
- [ ] Optional export: `docs/openapi.json` can be generated via an npm script.
---
## Estimated Effort (MVP)
- Setup & smoke test: 1-2h
- Adjustments for upload hints + small follow-ups: 1-2h
- Optional export/CI: +1h
---
**Created**: November 16, 2025
````

View File

@ -1,250 +0,0 @@
<!--
Feature Request: Server-side session authentication for the admin API
Audience: developers / AI implementers
-->
1. Create a branch named `feature/security` from the current `main` branch.
2. Create a file `FeatureRequests/FEATURE_PLAN-security.md` in which you document the implementation tasks (see below) and create and maintain the TODO list.
3. Ask me questions about the implementation.
4. Understand how the frontend UI is currently built (modular, no inline CSS, global app.css).
5. Implement the tasks below step by step.
# FEATURE_REQUEST: Security - Server-Side Sessions for the Admin API
Implementation tasks (concrete & unambiguous for AI / developers)
The following tasks are to be executed step by step. Each task contains the desired result and minimal example code or commands. The AI/developers should create the changes as code patches, add tests, and update the documentation.
1) Session store & session configuration
- Goal: make server-side sessions available for the admin login.
- Steps:
  - Install packages: `npm install express-session connect-sqlite3 --save` (backend).
  - In `backend/src/server.js` (or the entrypoint), configure `express-session` with `connect-sqlite3`:
```js
const session = require('express-session');
const SQLiteStore = require('connect-sqlite3')(session);
app.use(session({
store: new SQLiteStore({ db: 'sessions.sqlite' }),
secret: process.env.ADMIN_SESSION_SECRET,
resave: false,
saveUninitialized: false,
cookie: { httpOnly: true, secure: process.env.NODE_ENV === 'production', sameSite: 'Strict', maxAge: 8*60*60*1000 }
}));
```
- Acceptance: the session cookie (`sid`) is set after login, with correct cookie flags.
2) Login endpoint (admin)
- Goal: an admin can log in with username/password; the backend creates a session.
- Steps (see the sketch below):
  - Add `POST /auth/login`; it checks the credentials (e.g. against an environment-stored admin user/pass or htpasswd), sets `req.session.user = { role: 'admin' }` and `req.session.csrfToken = randomHex()`.
  - Response: 200 OK. The cookie is set automatically (`credentials: 'include'` from the frontend).
- Acceptance: after `POST /auth/login`, `req.session.user` and `req.session.csrfToken` exist.
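A minimal sketch of this endpoint (the `ADMIN_USER`/`ADMIN_PASSWORD` variable names are assumptions for illustration; a real implementation would verify a hashed password):

```js
// Sketch: admin login creating the session + session-bound CSRF token.
const crypto = require('crypto');

app.post('/auth/login', express.json(), (req, res) => {
  const { username, password } = req.body || {};
  // Illustrative check against environment-stored credentials; replace with a
  // proper hashed-password lookup (htpasswd or an admin user table).
  const ok =
    username === process.env.ADMIN_USER &&
    password === process.env.ADMIN_PASSWORD;
  if (!ok) return res.status(401).json({ error: 'Invalid credentials' });

  req.session.user = { username, role: 'admin' };
  req.session.csrfToken = crypto.randomBytes(32).toString('hex'); // the randomHex() from the task
  res.json({ ok: true, csrfToken: req.session.csrfToken });
});
```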
3) CSRF endpoint + middleware
- Goal: issue a session-bound CSRF token and protect requests.
- Steps:
  - The endpoint `GET /auth/csrf-token` returns `{ csrfToken: req.session.csrfToken }` (only when logged in).
  - A `requireCsrf` middleware checks `req.headers['x-csrf-token'] === req.session.csrfToken` for state-changing methods.
- Acceptance: state-changing admin requests without, or with a wrong, `X-CSRF-Token` get a `403`.
4) Backend auth middleware for the admin API
- Goal: all `/api/admin/*` endpoints check the session instead of a client token.
- Steps:
  - Replace or extend the existing admin auth middleware (`middlewares/auth.js`) so that it checks `req.session.user && req.session.user.role === 'admin'`; if not set → `403`.
- Acceptance: `GET /api/admin/*` without a session → `403`; with a valid session → allowed through.
5) Frontend changes (adminApi)
- Goal: the frontend no longer sends admin bearer tokens; it uses the cookie session + CSRF header.
- Steps (see the sketch below):
  - Remove the dependency on `process.env.REACT_APP_ADMIN_API_KEY` in `frontend/src/services/adminApi.js`.
  - Adapt `adminFetch`/`adminRequest`: set `credentials: 'include'` on requests and add the `X-CSRF-Token` header (the frontend obtains the token via `GET /auth/csrf-token` after login).
  - Document in `frontend/README` or a code comment that the admin UI calls `fetch('/auth/csrf-token', { credentials: 'include' })` after login.
- Acceptance: `adminApi.js` sends no bearer headers; admin requests include `credentials: 'include'` and `X-CSRF-Token`.
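A sketch of the reworked fetch helper (names are illustrative; the real module is `frontend/src/services/adminApi.js`):

```js
// Sketch: cookie-session based admin requests with CSRF header.
let csrfToken = null;

export async function fetchCsrfToken() {
  const res = await fetch('/auth/csrf-token', { credentials: 'include' });
  if (!res.ok) throw new Error('Not logged in');
  csrfToken = (await res.json()).csrfToken;
}

export async function adminRequest(url, { method = 'GET', body } = {}) {
  const res = await fetch(url, {
    method,
    credentials: 'include', // sends the HttpOnly session cookie
    headers: {
      'Content-Type': 'application/json',
      ...(method !== 'GET' ? { 'X-CSRF-Token': csrfToken } : {}),
    },
    body: body ? JSON.stringify(body) : undefined,
  });
  // On a session-related 401/403 the caller should surface the login UI.
  return res;
}
```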
6) Remove the admin key from the frontend build/compose/Dockerfile
- Goal: no passing of `ADMIN_API_KEY` to `frontend` and no copying of sensitive `.env` files into the frontend image.
- Steps:
  - Remove the line `- REACT_APP_ADMIN_API_KEY=${ADMIN_API_KEY}` from `docker/prod/docker-compose.yml`.
  - Remove `COPY docker/prod/frontend/config/.env ./.env` from `docker/prod/frontend/Dockerfile`, or ensure this file contains only non-sensitive keys.
  - Document in `FeatureRequests/FEATURE_REQUEST-security.md` which keys are allowed in the runtime `env.sh` (e.g. `API_URL`, `APP_VERSION`).
- Acceptance: `docker-compose` passes nothing to `frontend`; the build and image contain no production secrets.
7) Secrets handling / deployment
- Goal: provide secrets only to the backend environment.
- Steps:
  - Set `ADMIN_API_KEY` and `ADMIN_SESSION_SECRET` in CI/CD secrets or Docker secrets and reference them only in the `backend` service.
  - Example documentation for CI: how to set a secret in GitLab/GitHub Actions and pass it to the container.
- Acceptance: secrets are not in the repo/images; `docker inspect` of the frontend container shows no admin key.
8) Tests & CI checks
- Goal: automated verification of the security rules.
- Steps (see the test sketch below):
  - Integration test 1: `GET /api/admin/some` without a session → expect 403.
  - Integration test 2: `POST /auth/login` with admin credentials → expect Set-Cookie; then `GET /auth/csrf-token` → receive the token; then `POST /api/admin/action` with `X-CSRF-Token` → expect 200.
  - Build scan check: a CI step greps the build output (e.g. `! rg REACT_APP_ADMIN_API_KEY build/`) and fails if the key string is found.
- Acceptance: tests green; CI refuses the merge if the build contains the admin key.
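A sketch of the two integration tests (Jest + supertest; the credentials, the `/api/admin/stats` and `/api/admin/action` paths, and the app import path are illustrative):

```js
// Sketch: verifies the session + CSRF rules from task 8.
const request = require('supertest');
const app = require('../../src/server'); // assumed export of the Express app

describe('admin session auth', () => {
  it('rejects admin routes without a session', async () => {
    await request(app).get('/api/admin/stats').expect(403);
  });

  it('allows mutating requests with session cookie + CSRF token', async () => {
    const agent = request.agent(app); // agent keeps the session cookie
    await agent
      .post('/auth/login')
      .send({ username: 'admin', password: 'test-password' }) // test fixture credentials
      .expect(200);
    const { body } = await agent.get('/auth/csrf-token').expect(200);
    await agent
      .post('/api/admin/action') // stand-in for any state-changing admin endpoint
      .set('X-CSRF-Token', body.csrfToken)
      .expect(200);
  });
});
```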
9) Key-leak response (concrete instructions)
- Goal: if an admin key has leaked, a safe, coordinated rotation.
- Steps:
  - Scan: `trufflehog --regex --entropy=True .` or `git-secrets scan`.
  - Remove: `git-filter-repo --replace-text passwords.txt` or `bfg --replace-text passwords.txt` (see docs).
  - Rotation: generate a new key (`openssl rand -hex 32`), update the CI secret, redeploy the backend.
  - Note: a history rewrite is invasive; communicate with the team and inform contributors.
10) Documentation
- Goal: final documentation updated.
- Steps:
  - Extend `AUTHENTICATION.md` with the login/session/CSRF flow and secret handling.
  - Extend `FeatureRequests/FEATURE_REQUEST-security.md` with implementation links (patches/PRs).
11) MIGRATION-GUIDE update (mandatory)
- Goal: `frontend/MIGRATION-GUIDE.md` no longer reflects the secure production workflow. It must be updated so that developers/AI do not execute insecure instructions (admin key in the frontend).
- Current state (to verify): the MIGRATION-GUIDE contains instructions to set `REACT_APP_ADMIN_API_KEY` in `frontend/.env` and to pass the same variable to `frontend` in `docker-compose.yml`. This directly contradicts the server-side session solution required here.
- Required changes in `frontend/MIGRATION-GUIDE.md` (concrete):
  - Remove or replace all instructions that set `REACT_APP_ADMIN_API_KEY` in the frontend `.env` or build environments for production.
  - Replace fetch/axios examples that set `Authorization: Bearer ${process.env.REACT_APP_ADMIN_API_KEY}` with the new guide: login → `GET /auth/csrf-token` → `fetch(..., { credentials: 'include', headers: { 'X-CSRF-Token': csrfToken } })`.
  - Adjust the Docker example: `ADMIN_API_KEY` may only be passed to the `backend` service; remove the handover to `frontend` (line `- REACT_APP_ADMIN_API_KEY=${ADMIN_API_KEY}`).
  - Replace local test instructions that start the frontend with `REACT_APP_ADMIN_API_KEY` with login/session test steps (see tasks 2/3/8).
  - Add a note about the CI/build scan: CI must verify that the built `build/` contains no admin-key strings.
- Acceptance: `frontend/MIGRATION-GUIDE.md` contains no production instructions that put admin secrets into the frontend; instead, the session flow is documented and linked.
Implementation note
- Add a link/reference to the revised MIGRATION-GUIDE version in `FeatureRequests/FEATURE_REQUEST-security.md` in the PR/release notes, so reviewers can follow the change.
Role of the implementing AI/dev
- Produce concrete code patches, run local tests, open a PR with the changes and tests.
- Ensure that all acceptance criteria (above) can be verified automatically or manually.
With these tasks, the previously open questions are translated into unambiguous, executable steps. Please confirm which tasks I should implement automatically (e.g. `1` = compose/Docker changes; `2` = frontend `adminApi.js` patch; `3` = minimal backend session+CSRF implementation; or `all`).
Background (current state)
- The following security-critical conditions currently exist in the repository:
  - `frontend/.env` contains `REACT_APP_ADMIN_API_KEY` in the working copy (locally). The file is in `.gitignore` and is not tracked in the Git repository, but it is sensitive and must not end up in builds/images.
  - `docker/prod/docker-compose.yml` injects `REACT_APP_ADMIN_API_KEY=${ADMIN_API_KEY}` into the `frontend` service - this way the key can end up in the built frontend bundles.
  - `frontend/src/services/adminApi.js` reads `process.env.REACT_APP_ADMIN_API_KEY` and sends the bearer token client-side with admin requests.
  - The production frontend Dockerfile copies `docker/prod/frontend/config/.env` into the runtime image and runs an `env.sh` at startup that generates `env-config.js` (`window._env_`), which can expose sensitive values in the browser if they land in `.env`.
  - The moderation web UI is additionally protected by `htpasswd`/nginx HTTP Basic Auth - that protects the UI, but not the API endpoints sufficiently.
Problem statement (why this is a problem)
- An admin key visible in the frontend is public and enables abuse (API calls with admin rights from any browser).
- The server-side secret `ADMIN_API_KEY` is currently injected into artifacts/images and can leak.
- HTTP Basic Auth in front of the UI is useful but no substitute for server-side API authentication; API endpoints must check on their own.
Goal (target state, from the customer's perspective)
- Admin functions are reachable only after a secure login.
- The secret admin key stays exclusively on the server/backend and is never written into frontend code, images, or publicly accessible files.
- After login, the frontend talks to the backend without ever storing the admin key in the browser.
Requirements (from the client's perspective, implementable by an AI)
- Authentication:
  - Introduce a server-side login flow for admins (session cookies: HttpOnly, Secure, SameSite).
  - After a successful login, the admin browser receives an HttpOnly cookie; this cookie grants access to the protected `/api/admin/*` endpoints.
  - The backend validates all `/api/admin/*` requests against the session; only then does it work with the internal `ADMIN_API_KEY`.
- Secrets & build:
  - No secrets (e.g. `ADMIN_API_KEY`) in the frontend source code, in `frontend/.env`, in `env-config.js`, or in built bundles.
  - `docker/prod/docker-compose.yml` may provide `ADMIN_API_KEY` only to the `backend` service; no handover to `frontend`.
  - The frontend `Dockerfile` must not copy a production `.env` that contains secrets.
- Operations & infrastructure:
  - The existing `htpasswd` protection of the admin UI can be kept as an additional hurdle, but it is not the only protection measure.
  - Recommendation: provide `ADMIN_API_KEY` via secure secret mechanisms (CI/CD secret store, Docker secrets, Swarm/K8s secrets) - this is a hint, not a mandatory instruction.
Acceptance criteria (clearly measurable, for tests by an AI/dev)
- Functional:
  - Unauthenticated requests to `/api/admin/*` receive `403 Forbidden`.
  - An admin login endpoint exists and sets an HttpOnly cookie; logged-in admins reach `/api/admin/*` successfully.
- Artifacts / repo:
  - The `frontend` bundle (the built `build/` folder) does not contain the value of `ADMIN_API_KEY` (automatic scan: no occurrence of the key string).
  - `frontend/.env` contains no `REACT_APP_ADMIN_API_KEY` line in production; `docker/prod/docker-compose.yml` does not pass the key to `frontend`.
- Security & ops:
  - Documentation: `AUTHENTICATION.md` and this feature request file note the new login flow and the secret-handling advice.
  - If a key ever existed in the Git history, key rotation is documented as the recommended action.
  - If `frontend/.env` or an admin key ever made it into the repository: scan the Git history and remove the secret from it, then rotate the key. Recommended tools/steps (short):
    - Find: `git log --all -S 'part-of-key'` or `git grep -n "REACT_APP_ADMIN_API_KEY" $(git rev-list --all)`, or use `truffleHog`/`git-secrets`.
    - Remove from history: `git-filter-repo` or `bfg-repo-cleaner` (e.g. `bfg --replace-text passwords.txt --no-blob-protection`) - then force-push to a new remote (careful: impact on contributors).
    - Key rotation: generate a new `ADMIN_API_KEY`, set it in the secure backend environment (CI/CD secrets / Docker secret), redeploy the backend.
    - Note: these steps are invasive for the Git history; coordinate with the team before executing them.
Non-functional requirements
- Use session cookies: cookies must be set `HttpOnly`, `Secure`, and `SameSite=Strict` (or Lax if necessary).
- CSRF protection: with cookie-based sessions a CSRF protection mechanism must exist (e.g. double-submit token or CSRF header). Note: the CSRF mechanism is to be implemented, but detailed steps are not part of this request.
- Compatibility: the changes must not needlessly block developer workflows; dev-mode patterns (runtime `env-config.js` in `docker/dev`) may remain, but clearly separated from prod.
Notes for the implementing AI / dev team (contextual)
- Relevant current code paths:
  - `frontend/src/services/adminApi.js` - currently reads `process.env.REACT_APP_ADMIN_API_KEY` and sets the bearer token client-side.
  - `frontend/.env` - currently contains `REACT_APP_ADMIN_API_KEY`.
  - `docker/prod/docker-compose.yml` - injects `REACT_APP_ADMIN_API_KEY=${ADMIN_API_KEY}` into `frontend`.
  - `docker/prod/frontend/Dockerfile` - copies `docker/prod/frontend/config/.env` into the image and runs `env.sh`, which generates `env-config.js` (`window._env_`).
  - `docker/prod/frontend/config/env.sh` - generates `env-config.js` from `.env` at runtime.
  - `docker/prod/frontend/config/htpasswd` - existing protection of the admin UI via nginx.
- Expectations for an AI implementation:
  - Understand the codebase (especially `frontend/src/*` and `backend/src/*`) and identify all places that use or pass on `REACT_APP_ADMIN_API_KEY` or `ADMIN_API_KEY`.
  - Remove client-side use of the admin key; change admin API calls so that they are authorized server-side (session check).
  - Verify via automated tests (integration test or API call) that `/api/admin/*` is rejected without a session and works with one.
What the client (me) expects - short and clear
- Admin functions are available only after login.
- No admin secrets end up in frontend bundles, images, or publicly accessible files.
- The existing `htpasswd` protection may remain, but it is not the sole security measure.
Acceptance criteria (for the client's review)
- Manual check: attempting to call admin endpoints without login → `403`.
- Build review: the built frontend files contain no admin key.
- Documentation updated (`AUTHENTICATION.md` points to the new session flow).
Open questions / options (for developers/AI) - recommendations and implementation details
- Session store (recommended: SQLite for a single host, Redis for scaling)
  - Recommendation for this app: **SQLite / file-based session store** (easy to operate, no extra infrastructure).
  - Implementation (Express): use `express-session` + `connect-sqlite3` (or a `better-sqlite3` backend). Configuration:
    - Session cookie: `HttpOnly`, `Secure` (prod), `SameSite=Strict` (or `Lax` if external callbacks are needed), a reasonable `maxAge` (e.g. 8h).
    - Session secret from a secure source (backend `ADMIN_SESSION_SECRET`), not in the repo.
  - Scaling: if a cluster/multiple hosts are planned, switch to **Redis** (e.g. `connect-redis`) and provide Redis via a Docker/K8s secret.
- CSRF mechanism (recommended: session-bound CSRF token + header)
  - Recommendation: implement a session-bound CSRF token. Flow:
    1. On login: generate `req.session.csrfToken = randomHex()` on the server.
    2. Expose the endpoint `GET /auth/csrf-token` (only for logged-in sessions), which returns the token as JSON.
    3. The frontend calls `/auth/csrf-token` after login (`credentials: 'include'`) and keeps the token in JS scope.
    4. For state-changing requests the frontend sends the `X-CSRF-Token: <token>` header.
    5. A server middleware compares the header with `req.session.csrfToken` and rejects on mismatch (403).
  - Advantage: the HttpOnly session cookie stays protected; the token is bound to the session.
  - Alternative (quick): double-submit cookie (less robust, token in a non-HttpOnly cookie + header); only as a short-term transitional solution.
- Removing the admin key from the frontend/build (concrete changes)
  - `frontend/src/services/adminApi.js`: remove the use of `process.env.REACT_APP_ADMIN_API_KEY`. Adapt `adminRequest` to use `credentials: 'include'` and to set no bearer token.
  - `docker/prod/docker-compose.yml`: delete the line `- REACT_APP_ADMIN_API_KEY=${ADMIN_API_KEY}` under `frontend`.
  - `docker/prod/frontend/Dockerfile`: remove `COPY docker/prod/frontend/config/.env ./.env` (or ensure the file contains no secrets). Avoid copying a prod `.env` into the image.
  - `docker/prod/frontend/config/env.sh`: may only write non-sensitive values (e.g. `API_URL`, `APP_VERSION`); document which keys are allowed.
- Secrets delivery / deployment
  - Provide the backend secrets `ADMIN_API_KEY` and `ADMIN_SESSION_SECRET` via a CI/CD secret store or Docker secrets.
  - Example (Docker secret): create the secret in Swarm/K8s and reference it only in the `backend` service.
- Tests / acceptance (automatable)
  - Integration test 1: `GET /api/admin/*` without a session → `403`.
  - Integration test 2: login flow → session cookie set → `POST /api/admin/*` with a valid `X-CSRF-Token` → `200`.
  - Build scan: after the build, `grep -R "<part-of-admin-key>" build/` or `rg REACT_APP_ADMIN_API_KEY build/` → no hits.
- Key-leak response
  - If a key leaks: rotate it immediately, document how (generate new key, update backend secret, redeploy), and run the Git history check as in the section above.
- Operational notes
  - Keep the `htpasswd` protection for the admin UI as an additional access control, not as the only protection measure.
  - Document all changes in `AUTHENTICATION.md` and in the release notes.
These recommendations were chosen for a secure, pragmatic implementation: they minimize infrastructure overhead (SQLite) and provide clear, testable CSRF/session mechanics. An implementing AI should turn the points above into concrete code patches and add automated tests.
Contact / context
- Repo: Project-Image-Uploader - note the existing structure `docker/prod`, `frontend/`, `backend/`.
- Note: this feature request file names the shortcomings and requirements; it expects a developer or an AI to carry out the implementation at code level.
---

View File

@ -1,450 +0,0 @@
# Feature Request: Telegram Bot for Notifications
## Overview
Integrate a Telegram bot to automatically notify the workshop group about important events in the image uploader system.
## Goal
Workshop staff should be informed promptly about new uploads, changes, and upcoming deletions without constantly having to check the admin panel.
## Use Case
The open workshop has a Telegram group in which the team communicates. The bot is added to this group and sends automated notifications for relevant events.
## Functional Requirements
### 1. Notification: New Upload
**Trigger:** successful batch upload via `/api/upload-batch`
**The message contains:**
- 📸 upload icon
- Name of the uploader
- Number of uploaded images
- Year of the group
- Title of the group
- Workshop consent status (✅ yes / ❌ no)
- Social media consents (Facebook, Instagram, TikTok icons)
- Link to the admin panel (moderation)
**Example:**
```
📸 New upload!
Uploader: Max Mustermann
Images: 12
Group: 2024 - Welding Course November
Workshop: ✅ yes
Social media: 📘 Instagram, 🎵 TikTok
🔗 To approve: https://internal.hobbyhimmel.de/moderation
```
### 2. Notification: User Changes
**Trigger:**
- `PUT /api/manage/:token` (consent change)
- `DELETE /api/manage/:token/groups/:groupId` (group deletion by the user)
**The message contains:**
- ⚙️ change icon
- Type of change (consent update / group deleted)
- Affected group (year + title)
- Uploader name
- New consent values (on update)
**Example (consent change):**
```
⚙️ User change
Action: consent updated
Group: 2024 - Welding Course November
Uploader: Max Mustermann
New:
Workshop: ❌ no (before: ✅)
Social media: 📘 Instagram (TikTok removed)
🔗 Details: https://internal.hobbyhimmel.de/moderation
```
**Example (group deleted):**
```
⚙️ User change
Action: group deleted
Group: 2024 - Welding Course November
Uploader: Max Mustermann
Images: 12
User deleted the group themselves via the management link
```
### 3. Notification: Approval Expiry / Deletion in 1 Day
**Trigger:** daily cron job (e.g. 09:00)
**Check:**
- All unapproved groups with `created_at < NOW() - 6 days`
- They will be deleted by the cleanup service in 24 hours
**The message contains:**
- ⏰ warning icon
- List of all affected groups
- Countdown until deletion
- Hint about the approval option
**Example:**
```
⏰ Deletion in 24 hours!
The following groups will be deleted automatically tomorrow:
1. 2024 - Welding Course November
Uploader: Max Mustermann
Images: 12
Uploaded: 20.11.2024
2. 2024 - Woodworking Workshop
Uploader: Anna Schmidt
Images: 8
Uploaded: 21.11.2024
💡 Approve now or the groups will be deleted!
🔗 To moderation: https://internal.hobbyhimmel.de/moderation
```
## Technical Requirements
### Backend Integration
**New environment variables:**
```bash
TELEGRAM_BOT_TOKEN=<bot-token>
TELEGRAM_CHAT_ID=<workshop-group-id>
TELEGRAM_ENABLED=true
```
**New service file:** `backend/src/services/TelegramNotificationService.js` (see the sketch below)
**Methods:**
- `sendUploadNotification(groupData)`
- `sendConsentChangeNotification(oldConsents, newConsents, groupData)`
- `sendGroupDeletedNotification(groupData)`
- `sendDeletionWarning(groupsList)`
**Integration points:**
- `routes/batchUpload.js` → after a successful upload
- `routes/management.js` → PUT/DELETE endpoints
- `services/GroupCleanupService.js` → new method for the daily check
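A minimal sketch of the service (node-telegram-bot-api as proposed above; the `groupData` field names and message wording are illustrative):

```js
// backend/src/services/TelegramNotificationService.js (sketch)
const TelegramBot = require('node-telegram-bot-api');

class TelegramNotificationService {
  constructor() {
    this.enabled = process.env.TELEGRAM_ENABLED === 'true';
    this.chatId = process.env.TELEGRAM_CHAT_ID;
    if (this.enabled) {
      this.bot = new TelegramBot(process.env.TELEGRAM_BOT_TOKEN, { polling: false });
    }
  }

  // Fire-and-forget: a Telegram outage must never break an upload.
  async sendUploadNotification(groupData) {
    if (!this.enabled) return;
    const text = [
      '📸 New upload!',
      `Uploader: ${groupData.uploaderName}`, // field names assumed
      `Images: ${groupData.imageCount}`,
      `Group: ${groupData.year} - ${groupData.title}`,
    ].join('\n');
    try {
      await this.bot.sendMessage(this.chatId, text);
    } catch (err) {
      console.error('Telegram notification failed:', err.message);
    }
  }
}

module.exports = TelegramNotificationService;
```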
### Telegram Bot Setup
**Create the bot:**
1. Talk to [@BotFather](https://t.me/botfather)
2. `/newbot` → bot name: "Werkstatt Image Uploader Bot"
3. Save the token → `.env`
**Add the bot to the group:**
1. Invite the bot to the workshop group
2. Determine the chat ID: `https://api.telegram.org/bot<TOKEN>/getUpdates`
3. Save the chat ID → `.env`
**Permissions:**
- ✅ Can send messages
- ✅ Can send photos (optional, for preview images)
- ❌ No admin rights needed
### Cron Job for the Daily Check
**Options:**
**A) node-cron (recommended for development):**
```javascript
// backend/src/services/TelegramScheduler.js
const cron = require('node-cron');
// Every day at 09:00
cron.schedule('0 9 * * *', async () => {
  await checkPendingDeletions();
});
```
**B) System cron (production):**
```bash
# crontab -e
0 9 * * * curl -X POST http://localhost:5000/api/admin/telegram/check-deletions
```
**New route:** `POST /api/admin/telegram/check-deletions` (admin auth); a sketch of the check itself follows below.
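A sketch of the daily check that both trigger variants call (the repository helper is assumed, not an existing API; the 6-day threshold mirrors the check described above):

```js
// Sketch: collect unapproved groups older than 6 days and warn via Telegram.
async function checkPendingDeletions() {
  const cutoff = new Date(Date.now() - 6 * 24 * 60 * 60 * 1000).toISOString();
  // findUnapprovedOlderThan is an assumed repository helper.
  const groups = await groupRepository.findUnapprovedOlderThan(cutoff);
  if (groups.length > 0) {
    await telegramService.sendDeletionWarning(groups); // method from the list above
  }
}
```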
## Dependencies
**New NPM packages:**
```json
{
"node-telegram-bot-api": "^0.66.0",
"node-cron": "^3.0.3"
}
```
## Configuration
### Development (.env)
```bash
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=-1001234567890
TELEGRAM_ENABLED=true
TELEGRAM_DAILY_CHECK_TIME=09:00
```
### Production
- Same variables in `docker/prod/backend/config/.env`
- Cron job via node-cron or system cron
## Security
- ✅ Never commit the bot token (`.env` only)
- ✅ Validate the chat ID (only known groups)
- ✅ No sensitive data in messages (no email, no full tokens)
- ✅ Rate limiting for the Telegram API (max 30 msg/sec)
- ✅ Error handling: if Telegram is down → the upload still works
## Testing
**Manual:**
```bash
# Trigger the upload notification
curl -X POST http://localhost:5001/api/upload-batch \
-F "images=@test.jpg" \
-F "year=2024" \
-F "title=Test Upload" \
-F "name=Test User" \
-F 'consents={"workshopConsent":true,"socialMediaConsents":[]}'
# Trigger a consent change
curl -X PUT http://localhost:5001/api/manage/<TOKEN> \
-H "Content-Type: application/json" \
-d '{"workshopConsent":false,"socialMediaConsents":[]}'
# Trigger the daily check (admin)
curl -X POST http://localhost:5001/api/admin/telegram/check-deletions \
-b cookies.txt -H "X-CSRF-Token: $CSRF"
```
**Automated:**
- Unit tests for `TelegramNotificationService.js`
- Mock the Telegram API with `nock`
- Check the message format + escaping
## Optional: Future Extensions
- 📊 Weekly statistics report (uploads, approvals, deletions)
- 🖼️ Preview image in Telegram (first image of the group)
- 💬 Interactive buttons (e.g. "Approve", "Reject") → webhook
- 🔔 Admin commands (`/stats`, `/pending`, `/cleanup`)
## Akzeptanzkriterien
- [ ] Bot sendet Nachricht bei neuem Upload
- [ ] Bot sendet Nachricht bei Consent-Änderung
- [ ] Bot sendet Nachricht bei User-Löschung
- [ ] Bot sendet tägliche Warnung für bevorstehende Löschungen (09:00 Uhr)
- [ ] Alle Nachrichten enthalten relevante Informationen + Link
- [ ] Telegram-Fehler brechen Upload/Änderungen nicht ab
- [ ] ENV-Variable `TELEGRAM_ENABLED=false` deaktiviert alle Benachrichtigungen
- [ ] README.dev.md enthält Setup-Anleitung
## Aufwandsschätzung
- Backend-Integration: ~4-6 Stunden
- Cron-Job Setup: ~2 Stunden
- Testing: ~2 Stunden
- Dokumentation: ~1 Stunde
**Gesamt: ~9-11 Stunden**
## Priorität
**Medium** - Verbessert Workflow, aber nicht kritisch für Kernfunktion
## Release-Planung
**Target Version:** `2.0.0` (Major Version)
**Begründung für Major Release:**
- Neue Infrastruktur-Abhängigkeit (Telegram Bot)
- Neue Umgebungsvariablen erforderlich
- Optionale, aber empfohlene neue Konfiguration (kein harter Breaking Change)
## Development Workflow
### 1. Feature Branch erstellen
```bash
git checkout -b feature/telegram-notifications
```
### 2. Conventional Commits verwenden
**Wichtig:** Alle Commits nach [Conventional Commits](https://www.conventionalcommits.org/) formatieren!
**Beispiele:**
```bash
git commit -m "feat: Add TelegramNotificationService"
git commit -m "feat: Add upload notification endpoint"
git commit -m "feat: Add daily deletion warning cron job"
git commit -m "chore: Add node-telegram-bot-api dependency"
git commit -m "docs: Update README with Telegram setup"
git commit -m "test: Add TelegramNotificationService unit tests"
git commit -m "fix: Handle Telegram API rate limiting"
```
**Commit-Typen:**
- `feat:` - Neue Features
- `fix:` - Bugfixes
- `docs:` - Dokumentation
- `test:` - Tests
- `chore:` - Dependencies, Config
- `refactor:` - Code-Umstrukturierung
→ **Wird automatisch im CHANGELOG.md gruppiert!**
### 3. Development Setup
**Docker Dev Environment nutzen:**
```bash
# Container starten
./dev.sh
# .env konfigurieren (Backend)
# docker/dev/backend/config/.env
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrsTUVwxyz
TELEGRAM_CHAT_ID=-1001234567890
TELEGRAM_ENABLED=true
TELEGRAM_DAILY_CHECK_TIME=09:00
# Backend neu starten (lädt neue ENV-Variablen)
docker compose -f docker/dev/docker-compose.yml restart backend-dev
# Logs verfolgen
docker compose -f docker/dev/docker-compose.yml logs -f backend-dev
```
**Tests ausführen:**
```bash
cd backend
npm test -- tests/unit/TelegramNotificationService.test.js
npm test -- tests/api/telegram.test.js
```
### 4. Dokumentation aktualisieren
**README.md** - User-Dokumentation ergänzen:
- [ ] Telegram-Bot Setup-Anleitung
- [ ] Benachrichtigungs-Features beschreiben
- [ ] ENV-Variablen dokumentieren
**README.dev.md** - Development-Doku ergänzen:
- [ ] Telegram-Bot Testing-Anleitung
- [ ] Cron-Job Debugging
- [ ] TelegramNotificationService API-Referenz
- [ ] Beispiel-Curl-Commands für manuelle Trigger
**Sektion in README.dev.md einfügen (z.B. nach "Cleanup-System testen"):**
````markdown
### Telegram-Benachrichtigungen testen
```bash
# Bot-Token validieren:
curl https://api.telegram.org/bot<TOKEN>/getMe
# Chat-ID ermitteln:
curl https://api.telegram.org/bot<TOKEN>/getUpdates
# Upload-Benachrichtigung testen:
# → Einfach Upload durchführen, Telegram-Gruppe prüfen
# Consent-Änderung testen:
curl -X PUT http://localhost:5001/api/manage/<TOKEN> \
-H "Content-Type: application/json" \
-d '{"workshopConsent":false,"socialMediaConsents":[]}'
# Tägliche Löschwarnung manuell triggern:
curl -b cookies.txt -H "X-CSRF-Token: $CSRF" \
-X POST http://localhost:5001/api/admin/telegram/check-deletions
```
````
### 5. Testing Checklist
- [ ] Unit-Tests für `TelegramNotificationService.js` (min. 80% Coverage)
- [ ] Integration-Tests für alle 3 Benachrichtigungstypen
- [ ] Manueller Test: Upload → Telegram-Nachricht kommt an
- [ ] Manueller Test: Consent-Änderung → Telegram-Nachricht kommt an
- [ ] Manueller Test: User-Löschung → Telegram-Nachricht kommt an
- [ ] Manueller Test: Cron-Job (tägliche Warnung) funktioniert
- [ ] Error-Handling: Telegram down → Upload funktioniert trotzdem
- [ ] ENV `TELEGRAM_ENABLED=false` → Keine Nachrichten
### 6. Release erstellen
**Nach erfolgreicher Implementierung:**
```bash
# Alle Änderungen committen (Conventional Commits!)
git add .
git commit -m "feat: Complete Telegram notification system"
# Feature Branch pushen
git push origin feature/telegram-notifications
# Merge in main (nach Review)
git checkout main
git merge feature/telegram-notifications
# Major Release erstellen (2.0.0)
npm run release:major
# CHANGELOG prüfen (wurde automatisch generiert!)
cat CHANGELOG.md
# Push mit Tags
git push --follow-tags
# Docker Images bauen und pushen
./prod.sh # Option 3
```
**Release Notes (automatisch in CHANGELOG.md):**
- ✨ Features: Telegram-Bot Integration (Upload, Änderungen, Lösch-Warnungen)
- 📚 Documentation: README.md + README.dev.md Updates
- 🧪 Tests: TelegramNotificationService Tests
### 7. Deployment
**Production .env updaten:**
```bash
# docker/prod/backend/config/.env
TELEGRAM_BOT_TOKEN=<production-token>
TELEGRAM_CHAT_ID=<production-chat-id>
TELEGRAM_ENABLED=true
```
**Container neu starten:**
```bash
./prod.sh # Option 4: Container neu bauen und starten
```
## Wichtige Hinweise
⚠️ **Vor dem Release prüfen:**
- README.md enthält User-Setup-Anleitung
- README.dev.md enthält Developer-Anleitung
- Alle Tests bestehen (`npm test`)
- Docker Dev Setup funktioniert
- Conventional Commits verwendet
- CHANGELOG.md ist korrekt generiert

View File

@ -1,76 +0,0 @@
# Feature Testplan: Admin-Session-Sicherheit
## Ziel
Sicherstellen, dass die neue serverseitige Admin-Authentifizierung (Session + CSRF) korrekt funktioniert, keine Secrets mehr im Frontend landen und bestehende Upload-/Management-Flows weiterhin laufen.
## Voraussetzungen
- `ADMIN_SESSION_SECRET` ist gesetzt: im Dev-Setup in `docker/dev/backend/config/.env`, im Prod-Setup in `docker/prod/backend/.env`. Wert per `openssl rand -hex 32` generieren.
- Docker-Stack läuft (`./dev.sh` bzw. `docker compose -f docker/dev/docker-compose.yml up -d` für Dev oder `docker compose -f docker/prod/docker-compose.yml up -d` für Prod).
- Browser-Cookies gelöscht bzw. neue Session (Inkognito) verwenden.
- `curl` und `jq` lokal verfügbar (CLI-Aufrufe), Build/Tests laufen innerhalb der Docker-Container.
## Testumgebungen
| Umgebung | Zweck |
|----------|-------|
| `docker/dev` (localhost) | Haupt-Testumgebung, schnelle Iteration |
| Backend-Jest Tests | Regression für API-/Auth-Layer |
| Frontend Build (`docker compose exec frontend-dev npm run build`) | Sicherstellen, dass keine Secrets im Bundle landen |
## Testfälle
### 1. Initiales Admin-Setup
1. `curl -c cookies.txt http://localhost:5001/auth/setup/status``needsSetup` prüfen.
2. Falls `true`: `curl -X POST -H "Content-Type: application/json" -c cookies.txt -b cookies.txt \
-d '{"username":"admin","password":"SuperSicher123"}' \
http://localhost:5001/auth/setup/initial-admin` → `success: true`, Cookie gesetzt.
3. `curl -b cookies.txt http://localhost:5001/auth/setup/status``needsSetup:false`, `hasSession:true`.
4. `curl -b cookies.txt http://localhost:5001/auth/logout` → 204, Cookie weg.
### 2. Login & CSRF (Backend-Sicht)
1. `curl -X POST -H "Content-Type: application/json" -c cookies.txt -b cookies.txt \
-d '{"username":"admin","password":"SuperSicher123"}' http://localhost:5001/auth/login`.
2. `CSRF=$(curl -sb cookies.txt http://localhost:5001/auth/csrf-token | jq -r '.csrfToken')`.
3. `curl -b cookies.txt -H "X-CSRF-Token: $CSRF" http://localhost:5001/api/admin/groups` → 200.
4. Fehlerfälle prüfen:
- Ohne Cookie → 403 `{ reason: 'SESSION_REQUIRED' }`.
- Mit Cookie aber ohne Token → 403 `{ reason: 'CSRF_INVALID' }`.
- Mit falschem Token → 403 `{ reason: 'CSRF_INVALID' }`.
### 3. Moderations-UI (Frontend)
1. Browser auf `http://localhost:3000/moderation` → Login oder Setup-Wizard erscheint.
2. Wizard ausfüllen (nur beim ersten Start).
3. Normales Login durchführen (korrekte & falsche Credentials testen).
4. Nach Login folgende Aktionen validieren (Network-Tab kontrollieren: Requests senden Cookies + `X-CSRF-Token`):
- Gruppenliste lädt.
- Gruppe approve/reject.
- Cleanup-Preview/-Trigger (falls Daten vorhanden).
- Social-Media-Einstellungen laden/speichern.
5. Logout in der UI → Redirect zum Login, erneutes Laden zeigt Login.
6. Browser-Refresh nach Logout → kein Zugriff auf Admin-Daten (sollte Login anzeigen).
### 4. Regression Upload & Management
1. Normales Upload-Formular durchspielen (`/`): Gruppe hochladen.
2. Management-Link (`/manage/:token`) öffnen, Consents ändern, Bilder verwalten.
3. Sicherstellen, dass neue Session-Mechanik nichts davon beeinflusst.
### 5. Öffentliche APIs
1. `curl http://localhost:5001/api/social-media/platforms` → weiterhin öffentlich verfügbar.
2. Slideshow & Gruppenübersicht im Frontend testen (`/slideshow`, `/groups`).
### 6. Bundle-/Secret-Prüfung
1. Dev-Stack: `docker compose -f docker/dev/docker-compose.yml exec frontend-dev npm run build` (Prod analog mit `docker/prod`).
2. `docker compose -f docker/dev/docker-compose.yml exec frontend-dev sh -c "grep -R 'ADMIN' build/"` → keine geheimen Variablen (nur Dokumentationsstrings erlaubt).
3. Falls doch Treffer: Build abbrechen und Ursache analysieren.
### 7. Automatisierte Tests
1. Backend: `docker compose -f docker/dev/docker-compose.yml exec backend-dev npm test` (neue Auth-Tests müssen grün sein).
2. Optional: `docker compose -f docker/dev/docker-compose.yml exec frontend-dev npm test` oder vorhandene E2E-Suite per Container laufen lassen.
### 8. CI/Monitoring Checks
- Pipeline-Schritt hinzufügen, der einen `curl`-Smoke-Test (Login + `GET /api/admin/groups`) fährt.
- Optionaler Script-Check, der das Frontend-Bundle auf Secrets scannt.
## Testabschluss
- Alle oben genannten Schritte erfolgreich? → Feature gilt als verifiziert.
- Offene Findings dokumentieren in `FeatureRequests/FEATURE_PLAN-security.md` (Status + Follow-up).
- Nach Freigabe: Reviewer informieren, Deploy-Plan (z. B. neue Session-Secret-Verteilung) abstimmen.

View File

@ -1,710 +1,52 @@
## Dev: Schnellstart

Kurz und knapp — so startest und nutzt du die lokale Dev-Umgebung mit HMR (nginx als Proxy vor dem CRA dev server):

Voraussetzungen
- Docker & Docker Compose (Docker Compose Plugin)

---

# Development Setup

## ⚠️ Wichtige Hinweise für Frontend-Entwickler

### 🔴 BREAKING CHANGES - API-Umstrukturierung (November 2025)
Im Rahmen der OpenAPI-Auto-Generation wurden **massive Änderungen** an der API-Struktur vorgenommen:
- **Authentication**: Admin-Endpoints laufen jetzt über serverseitige Sessions + CSRF Tokens
- **Route-Struktur**: Einige Pfade haben sich geändert (Single Source of Truth: `routeMappings.js`)
- **Error Handling**: Neue HTTP-Status-Codes (403 für Auth-Fehler)
**📖 Siehe:**
- **`frontend/MIGRATION-GUIDE.md`** - Detaillierte Migrations-Anleitung für Frontend
- **`backend/src/routes/README.md`** - Vollständige API-Route-Dokumentation
- **`AUTHENTICATION.md`** - Auth-System-Setup und Verwendung
---
## Schnellstart
### Starten (Development Environment)
```bash
# Mit Script (empfohlen):
./dev.sh

# Oder manuell:
docker compose -f docker/dev/docker-compose.yml up -d
```

Starten (Dev)
1. Build & Start (daemon):
```bash
docker compose up --build -d image-uploader-frontend
```
2. Logs verfolgen:
### Zugriff
- **Frontend**: http://localhost:3000 (Hot Module Reloading aktiv)
- **Backend**: http://localhost:5001 (API)
- **API Documentation**: http://localhost:5001/api/docs/ (Swagger UI, nur in Development verfügbar)
- **Slideshow**: http://localhost:3000/slideshow
- **Moderation**: http://localhost:3000/moderation (Login über Admin Session)
### Logs verfolgen
```bash
# Alle Services:
docker compose -f docker/dev/docker-compose.yml logs -f

# Nur Frontend:
docker compose -f docker/dev/docker-compose.yml logs -f frontend-dev

# Nur Backend:
docker compose -f docker/dev/docker-compose.yml logs -f backend-dev
```

```bash
docker compose logs -f image-uploader-frontend
```
3. Browser öffnen: http://localhost:3000 (HMR aktiv)

Ändern & Testen
- Dateien editieren im `frontend/src/...` → HMR übernimmt Änderungen sofort.
- Wenn du die nginx-Konfiguration anpassen willst, editiere `frontend/conf/conf.d/default.conf` (Dev-Variante wird beim Containerstart benutzt). Nach Änderung: nginx reload ohne Neustart:

## API-Entwicklung

### ⚠️ BREAKING CHANGES - Frontend Migration erforderlich
**Massive API-Änderungen im November 2025:**
- Session + CSRF Authentication für alle Admin-Endpoints
- Route-Pfade umstrukturiert (siehe `routeMappings.js`)
- Neue Error-Response-Formate
**📖 Frontend Migration Guide**: `frontend/MIGRATION-GUIDE.md`
### Route-Struktur
Die API verwendet eine **Single Source of Truth** für Route-Mappings:
📄 **`backend/src/routes/routeMappings.js`** - Zentrale Route-Konfiguration
Siehe auch: **`backend/src/routes/README.md`** für vollständige API-Übersicht
**Wichtige Route-Gruppen:**
- `/api/upload`, `/api/download` - Öffentliche Upload/Download-Endpoints
- `/api/manage/:token` - Self-Service Management Portal (UUID-Token)
- `/api/admin/*` - Admin-Endpoints (Session + CSRF Authentication)
- `/api/system/migration/*` - Datenbank-Migrationen
**⚠️ Express Route-Reihenfolge beachten:**
Router mit spezifischen Routes **vor** generischen Routes mounten!
```javascript
// ✅ RICHTIG: Spezifisch vor generisch
{ router: 'consent', prefix: '/api/admin' }, // /groups/by-consent
{ router: 'admin', prefix: '/api/admin' }, // /groups/:groupId
// ❌ FALSCH: Generisch fängt alles ab
{ router: 'admin', prefix: '/api/admin' }, // /groups/:groupId matched auf 'by-consent'!
{ router: 'consent', prefix: '/api/admin' }, // Wird nie erreicht
```
### Authentication
**Zwei Auth-Systeme parallel:**
1. **Admin API (Session + CSRF)**:
```bash
# .env konfigurieren:
ADMIN_SESSION_SECRET=$(openssl rand -hex 32)
# Initialen Admin anlegen (falls benötigt)
curl -c cookies.txt http://localhost:5001/auth/setup/status
curl -X POST -H "Content-Type: application/json" \
-c cookies.txt -b cookies.txt \
-d '{"username":"admin","password":"SuperSicher123"}' \
http://localhost:5001/auth/setup/initial-admin
# Login + CSRF Token holen
curl -X POST -H "Content-Type: application/json" \
-c cookies.txt -b cookies.txt \
-d '{"username":"admin","password":"SuperSicher123"}' \
http://localhost:5001/auth/login
CSRF=$(curl -sb cookies.txt http://localhost:5001/auth/csrf-token | jq -r '.csrfToken')
# Authentifizierter Admin-Request
curl -b cookies.txt -H "X-CSRF-Token: $CSRF" \
http://localhost:5001/api/admin/groups
```
2. **Management Portal (UUID Token)**:
User, die Bilder hochladen, erhalten automatisch einen UUID-Token für das Self-Service Management Portal.
Über diesen Token / Link können sie ihre hochgeladenen Gruppen verwalten:
```bash
# Automatisch beim Upload generiert
GET /api/manage/550e8400-e29b-41d4-a716-446655440000
```
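Für Frontend-Code lässt sich der Admin-Flow aus Punkt 1 kapseln (Skizze; `adminFetch` ist ein frei gewählter Name, Pfade wie oben dokumentiert):
```javascript
// Frontend-Skizze: Admin-Call mit Session-Cookie + CSRF-Header
async function adminFetch(path, options = {}) {
  const { csrfToken } = await fetch('/auth/csrf-token', { credentials: 'include' })
    .then((r) => r.json());
  return fetch(path, {
    ...options,
    credentials: 'include', // sendet das HttpOnly-Session-Cookie mit
    headers: { ...options.headers, 'X-CSRF-Token': csrfToken },
  });
}

// Beispiel: Gruppe freigeben
// adminFetch('/api/admin/groups/abc123/approve', { method: 'PATCH', ... })
```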
📖 **Vollständige Doku**: `AUTHENTICATION.md`
#### Admin-Hinweise: Logout & neue Nutzer
- **Logout:** Der Moderationsbereich enthält jetzt einen Logout-Button (Icon in der Kopfzeile). Klick → `POST /auth/logout` → Session beendet, Login erscheint erneut. Für Skripte kannst du weiterhin `curl -b cookies.txt -X POST http://localhost:5001/auth/logout` verwenden.
- **Weiterer Admin:** Verwende das neue API-basierte Skript `./scripts/create_admin_user.sh --server http://localhost:5001 --username zweiteradmin --password 'SuperPasswort123!' [--admin-user bestehend --admin-password ... --role ... --require-password-change]`. Das Skript erledigt Login, CSRF, Duplikats-Check und legt zusätzliche Admins über `/api/admin/users` an (Fallback: `backend/src/scripts/createAdminUser.js`).
### OpenAPI-Spezifikation
Die OpenAPI-Spezifikation wird **automatisch beim Backend-Start** generiert:
```bash
# Generiert: backend/docs/openapi.json
# Swagger UI: http://localhost:5001/api/docs/

# Manuelle Generierung:
cd backend
node src/generate-openapi.js
```

```bash
docker compose exec image-uploader-frontend nginx -s reload
```
Probleme mit `node_modules`
- Wenn du ein hostseitiges `frontend/node_modules` hast, lösche es (konsistenter ist das containerverwaltete Volume):

**Swagger-Annotationen in Routes:**
```javascript
router.get('/example', async (req, res) => {
/*
#swagger.tags = ['Example']
#swagger.summary = 'Get example data'
#swagger.responses[200] = { description: 'Success' }
*/
});
```
## Entwicklung
### Frontend-Entwicklung
- Code in `frontend/src/` editieren → Hot Module Reload übernimmt Änderungen
- Volumes: Source-Code wird live in Container gemountet
- Container-Namen: `image-uploader-frontend-dev`
**Wichtige Komponenten:**
- `Components/Pages/MultiUploadPage.js` - Upload-Interface mit Consent-Management
- `Components/ComponentUtils/MultiUpload/ConsentCheckboxes.js` - GDPR-konforme Consent-UI
- `Components/Pages/ModerationGroupsPage.js` - Moderation mit Consent-Filtern
- `services/reorderService.js` - Drag-and-Drop Logik
- `hooks/useImagePreloader.js` - Slideshow-Preloading
### Backend-Entwicklung
- Code in `backend/src/` editieren → Nodemon übernimmt Änderungen automatisch
- Container-Namen: `image-uploader-backend-dev`
- Environment: `NODE_ENV=development`
**Wichtige Module:**
- `routes/routeMappings.js` - Single Source of Truth für Route-Konfiguration
- `repositories/GroupRepository.js` - Consent-Management & CRUD
- `repositories/SocialMediaRepository.js` - Plattform- & Consent-Verwaltung
- `routes/batchUpload.js` - Upload mit Consent-Validierung
- `middlewares/session.js` - Express-Session + SQLite Store
- `middlewares/auth.js` - Admin Session-Guard & CSRF-Pflicht
- `database/DatabaseManager.js` - Automatische Migrationen
- `services/GroupCleanupService.js` - 7-Tage-Cleanup-Logik
### Datenbank-Entwicklung
#### Migrationen erstellen
```bash
# Neue Migration anlegen:
touch backend/src/database/migrations/XXX_description.sql

# Migrationen werden automatisch beim Backend-Start ausgeführt
# Manuell:
docker compose -f docker/dev/docker-compose.yml restart backend-dev
```

```bash
rm -rf frontend/node_modules
```
Danach `docker compose up --build -d image-uploader-frontend` erneut ausführen.

Stoppen
```bash
docker compose down
```

#### Datenbank-Zugriff
```bash
# SQLite Shell:
docker compose -f docker/dev/docker-compose.yml exec backend-dev sqlite3 /usr/src/app/src/data/db/image_uploader.db
# Schnellabfragen:
docker compose -f docker/dev/docker-compose.yml exec backend-dev sqlite3 /usr/src/app/src/data/db/image_uploader.db "SELECT * FROM groups LIMIT 5;"
# Schema anzeigen:
docker compose -f docker/dev/docker-compose.yml exec backend-dev sqlite3 /usr/src/app/src/data/db/image_uploader.db ".schema"
```

Hinweis
- Diese Dev-Konfiguration läuft lokal mit erweiterten Rechten (nur für Entwicklung). Produktions-Images/Configs bleiben unverändert.

#### Migrationen debuggen
```bash
# Migration-Status prüfen:
docker compose -f docker/dev/docker-compose.yml exec backend-dev sqlite3 /usr/src/app/src/data/db/image_uploader.db "SELECT * FROM schema_migrations;"
# Backend-Logs mit Migration-Output:
docker compose -f docker/dev/docker-compose.yml logs backend-dev | grep -i migration
```
### Konfiguration anpassen
- **Frontend**: `docker/dev/frontend/config/.env`
- **Backend**: `docker/dev/backend/config/.env`
- **Nginx**: `docker/dev/frontend/nginx.conf`

Build and start:
docker compose up --build -d image-uploader-frontend
Tail logs:
docker compose logs -f image-uploader-frontend
Reload nginx (after editing conf in container):
docker compose exec image-uploader-frontend nginx -s reload
docker compose down

## Testing

### Automatisierte Tests

Das Backend verfügt über eine umfassende Test-Suite mit 45 Tests:
```bash
# Alle Tests ausführen:
cd backend
npm test
# Einzelne Test-Suite:
npm test -- tests/api/admin.test.js
# Mit Coverage-Report:
npm test -- --coverage
# Watch-Mode (während Entwicklung):
npm test -- --watch
```
**Test-Struktur:**
- `tests/unit/` - Unit-Tests (z.B. Auth-Middleware)
- `tests/api/` - Integration-Tests (API-Endpoints)
- `tests/setup.js` - Globale Test-Konfiguration
- `tests/testServer.js` - Test-Server-Helper
**Test-Features:**
- Jest + Supertest Framework
- In-Memory SQLite Database (isoliert)
- Singleton Server Pattern (schnell)
- 100% Test-Success-Rate (45/45 passing)
- ~10 Sekunden Ausführungszeit
- Coverage: 26% Statements, 15% Branches
**Test-Umgebung:**
- Verwendet `/tmp/test-image-uploader/` für Upload-Tests
- Eigene Datenbank `:memory:` (kein Konflikt mit Dev-DB)
- Environment: `NODE_ENV=test`
- Automatisches Cleanup nach Test-Run
**Neue Tests hinzufügen:**
```javascript
// tests/api/example.test.js
const { getRequest } = require('../testServer');
describe('Example API', () => {
it('should return 200', async () => {
const response = await getRequest()
.get('/api/example')
.expect(200);
expect(response.body).toHaveProperty('data');
});
});
```
### Manuelles Testing
### Consent-System testen
```bash
# 1. Upload mit und ohne Workshop-Consent
# 2. Social Media Checkboxen testen (Facebook, Instagram, TikTok)
# 3. Moderation-Filter prüfen:
# - Alle Gruppen
# - Nur Werkstatt
# - Facebook / Instagram / TikTok
# 4. Export-Funktion (CSV/JSON) testen
```
### Cleanup-System testen
```bash
# Test-Script verwenden:
./tests/test-cleanup.sh
# Oder manuell:
# 1. Upload ohne Approval
# 2. Gruppe zurückdatieren (Script verwendet)
# 3. Preview: GET http://localhost:5001/api/admin/cleanup/preview
# 4. Trigger: POST http://localhost:5001/api/admin/cleanup/trigger
# 5. Log prüfen: GET http://localhost:5001/api/admin/deletion-log
```
### Telegram-Benachrichtigungen testen
**Voraussetzung:** Bot-Setup abgeschlossen (siehe `scripts/README.telegram.md`)
```bash
# 1. ENV-Variablen in docker/dev/backend/config/.env konfigurieren:
TELEGRAM_ENABLED=true
TELEGRAM_BOT_TOKEN=<dein-bot-token>
TELEGRAM_CHAT_ID=<deine-chat-id>
# 2. Backend neu starten (lädt neue ENV-Variablen):
docker compose -f docker/dev/docker-compose.yml restart backend-dev
# 3. Test-Nachricht wird automatisch beim Server-Start gesendet
docker compose -f docker/dev/docker-compose.yml logs -f backend-dev
# 4. Upload-Benachrichtigung testen (Phase 3+):
curl -X POST http://localhost:5001/api/upload-batch \
-F "images=@test.jpg" \
-F "year=2024" \
-F "title=Test Upload" \
-F "name=Test User" \
-F 'consents={"workshopConsent":true,"socialMediaConsents":[]}'
# → Prüfe Telegram-Gruppe auf Benachrichtigung
# 5. Service manuell deaktivieren:
TELEGRAM_ENABLED=false
```
### API-Tests
```bash
# Consent-Endpoints:
curl http://localhost:5001/api/social-media/platforms
curl http://localhost:5001/api/groups/by-consent?workshopConsent=true
curl http://localhost:5001/api/admin/consents/export
# Upload testen (mit Consents):
curl -X POST http://localhost:5001/api/upload-batch \
-F "images=@test.jpg" \
-F "year=2025" \
-F "title=Test" \
-F "name=Developer" \
-F 'consents={"workshopConsent":true,"socialMediaConsents":[{"platformId":1,"consented":true}]}'
```
## Container-Management
```bash
# Status anzeigen:
docker compose -f docker/dev/docker-compose.yml ps
# Container neustarten:
docker compose -f docker/dev/docker-compose.yml restart
# Container neu bauen (nach Package-Updates):
docker compose -f docker/dev/docker-compose.yml build --no-cache
# Stoppen:
docker compose -f docker/dev/docker-compose.yml down
# Mit Volumes löschen (ACHTUNG: Löscht Datenbank!):
docker compose -f docker/dev/docker-compose.yml down -v
```
### Shell-Zugriff
```bash
# Frontend Container:
docker compose -f docker/dev/docker-compose.yml exec frontend-dev bash
# Backend Container:
docker compose -f docker/dev/docker-compose.yml exec backend-dev bash
# Datenbank-Shell:
docker compose -f docker/dev/docker-compose.yml exec backend-dev sqlite3 /usr/src/app/src/data/db/image_uploader.db
```
## Debugging
### Backend Debugging
```bash
# Live-Logs:
docker compose -f docker/dev/docker-compose.yml logs -f backend-dev
# Nodemon Restart:
# → Änderungen in backend/src/** werden automatisch erkannt
# Fehlerhafte Migration fixen:
# 1. Migration-Eintrag löschen:
docker compose -f docker/dev/docker-compose.yml exec backend-dev sqlite3 /usr/src/app/src/data/db/image_uploader.db "DELETE FROM schema_migrations WHERE migration_name='XXX.sql';"
# 2. Backend neustarten:
docker compose -f docker/dev/docker-compose.yml restart backend-dev
```
### Frontend Debugging
```bash
# React DevTools im Browser verwenden
# Network Tab für API-Calls prüfen
# Console für Fehler checken
# Nginx-Reload (bei Konfig-Änderungen):
docker compose -f docker/dev/docker-compose.yml exec frontend-dev nginx -s reload
```
### Datenbank-Backup & Restore
```bash
# Backup:
docker cp image-uploader-backend-dev:/usr/src/app/src/data/db/image_uploader.db ./backup.db
# Restore:
docker cp ./backup.db image-uploader-backend-dev:/usr/src/app/src/data/db/image_uploader.db
docker compose -f docker/dev/docker-compose.yml restart backend-dev
```
## Häufige Probleme
### "Migration failed" Fehler
**Problem**: Inline-Kommentare in SQL-Statements
**Lösung**: DatabaseManager entfernt diese automatisch (seit Commit 8e62475)
### "No such column: display_in_workshop"
**Problem**: Migration 005 nicht ausgeführt
**Lösung**: Backend neu starten oder manuell Migration ausführen
### Port 3000 bereits belegt
**Problem**: Anderer Prozess nutzt Port 3000
**Lösung**:
```bash
lsof -ti:3000 | xargs kill -9
# Oder Port in docker/dev/docker-compose.yml ändern
```
### Consent-Filter zeigt nichts
**Problem**: `display_in_workshop` fehlt in groupFormatter
**Lösung**: Bereits gefixt (Commit f049c47)
## Git Workflow
```bash
# Feature Branch erstellen:
git checkout -b feature/my-feature
# Änderungen committen:
git add .
git commit -m "feat: Add new feature"
# Vor Merge: Code testen!
# - Upload-Flow mit Consents
# - Moderation mit Filtern
# - Slideshow-Funktionalität
# - Cleanup-System
# Push:
git push origin feature/my-feature
```
### Git Hook (optional Absicherung)
Standard-Deployments sollten `ADMIN_SESSION_COOKIE_SECURE=true` behalten, damit das Session-Cookie nur über HTTPS übertragen wird.
Das bereitgestellte Pre-Commit-Hook stellt sicher, dass der Wert in `docker/prod/docker-compose.yml` automatisch auf `true` zurückgesetzt wird, falls er versehentlich verändert wurde (z.B. nach einem Test auf HTTP-only Hardware):
```bash
ln -s ../../scripts/git-hooks/pre-commit .git/hooks/pre-commit
```
Nach der Installation aktualisiert der Hook die Datei bei Bedarf und staged sie direkt.
Für lokale HTTP-Lab-Deployments nutze eine separate (gitignorierte) `docker-compose.override.yml`, um `ADMIN_SESSION_COOKIE_SECURE=false` nur zur Laufzeit zu setzen. Entfernen kannst du den Hook jederzeit über `rm .git/hooks/pre-commit`.
## Host-Separation Testing (Public/Internal Hosts)
Die Applikation unterstützt eine Public/Internal Host-Separation für die Produktion. Lokal kann dies mit /etc/hosts-Einträgen getestet werden.
### Schnellstart: Lokales Testing mit /etc/hosts
**1. Hosts-Datei bearbeiten:**
**Linux / Mac:**
```bash
sudo nano /etc/hosts
```
**Windows (als Administrator):**
1. Notepad öffnen (als Administrator)
2. Datei öffnen: `C:\Windows\System32\drivers\etc\hosts`
3. Dateifilter auf "Alle Dateien" ändern
Füge hinzu:
```
127.0.0.1 public.test.local
127.0.0.1 internal.test.local
```
**2. Docker .env anpassen:**
Bearbeite `docker/dev/frontend/config/.env`:
```bash
API_URL=http://localhost:5001
CLIENT_URL=http://localhost:3000
APP_VERSION=1.1.0
PUBLIC_HOST=public.test.local
INTERNAL_HOST=internal.test.local
```
Bearbeite `docker/dev/docker-compose.yml`:
```yaml
backend-dev:
environment:
- PUBLIC_HOST=public.test.local
- INTERNAL_HOST=internal.test.local
- ENABLE_HOST_RESTRICTION=true
- TRUST_PROXY_HOPS=0
frontend-dev:
environment:
- HOST=0.0.0.0
- DANGEROUSLY_DISABLE_HOST_CHECK=true
```
**3. Container starten:**
```bash
./dev.sh
```
**4. Im Browser testen:**
**Public Host** (`http://public.test.local:3000`):
- ✅ Upload-Seite funktioniert
- ✅ UUID Management funktioniert (`/manage/:token`)
- ✅ Social Media Badges angezeigt
- ❌ Kein Admin/Groups/Slideshow-Menü
- ❌ `/moderation` → 404
**Internal Host** (`http://internal.test.local:3000`):
- ✅ Alle Features verfügbar
- ✅ Admin-Bereich, Groups, Slideshow erreichbar
- ✅ Vollständiger API-Zugriff
### API-Tests mit curl
**Public Host - Blockierte Routen (403):**
```bash
curl -H "Host: public.test.local" http://localhost:5001/api/admin/deletion-log
curl -H "Host: public.test.local" http://localhost:5001/api/groups
curl -H "Host: public.test.local" http://localhost:5001/api/auth/login
```
**Public Host - Erlaubte Routen:**
```bash
curl -H "Host: public.test.local" http://localhost:5001/api/upload
curl -H "Host: public.test.local" http://localhost:5001/api/manage/YOUR-UUID
curl -H "Host: public.test.local" http://localhost:5001/api/social-media/platforms
```
**Internal Host - Alle Routen:**
```bash
curl -H "Host: internal.test.local" http://localhost:5001/api/groups
curl -H "Host: internal.test.local" http://localhost:5001/api/admin/deletion-log
```
### Frontend Code-Splitting testen
**Public Host:**
1. Browser DevTools → Network → JS Filter
2. Öffne `http://public.test.local:3000`
3. **Erwartung:** Slideshow/Admin/Groups-Bundles werden **nicht** geladen
4. Navigiere zu `/admin` → Redirect zu 404
**Internal Host:**
1. Öffne `http://internal.test.local:3000`
2. Navigiere zu `/slideshow`
3. **Erwartung:** Lazy-Bundle wird erst jetzt geladen (Code Splitting)
### Rate Limiting testen
Public Host: 20 Uploads/Stunde
```bash
for i in {1..25}; do
echo "Upload $i"
curl -X POST -H "Host: public.test.local" \
http://localhost:5001/api/upload \
-F "file=@test.jpg" -F "group=Test"
done
# Ab Upload 21: HTTP 429 (Too Many Requests)
```
### Troubleshooting
**"Invalid Host header"**
- Lösung: `DANGEROUSLY_DISABLE_HOST_CHECK=true` in `.env.development` (Frontend)
**"Alle Routen geben 403"**
- Prüfe `ENABLE_HOST_RESTRICTION=true`
- Prüfe `PUBLIC_HOST` / `INTERNAL_HOST` ENV-Variablen
- Container neu starten
**"public.test.local nicht erreichbar"**
- Prüfe `/etc/hosts`: `cat /etc/hosts | grep test.local`
- DNS-Cache leeren:
- **Linux:** `sudo systemd-resolve --flush-caches`
- **Mac:** `sudo dscacheutil -flushcache`
- **Windows:** `ipconfig /flushdns`
**Feature deaktivieren (Standard Dev):**
```yaml
backend-dev:
environment:
- ENABLE_HOST_RESTRICTION=false
```
### Production Setup
Für Production mit echten Subdomains siehe:
- `FeatureRequests/FEATURE_PLAN-FrontendPublic.md` (Sektion 12: Testing Strategy)
- nginx-proxy-manager Konfiguration erforderlich
- Hosts: `deinprojekt.hobbyhimmel.de` (public), `deinprojekt.lan.hobbyhimmel.de` (internal)
---
## 🚀 Release Management
### Automated Release (EMPFOHLEN)
**Ein Befehl macht alles:**
```bash
npm run release # Patch: 1.2.0 → 1.2.1
npm run release:minor # Minor: 1.2.0 → 1.3.0
npm run release:major # Major: 1.2.0 → 2.0.0
```
**Was passiert automatisch:**
1. ✅ Version in allen package.json erhöht
2. ✅ Footer.js, OpenAPI-Spec, Docker-Images aktualisiert
3. ✅ **CHANGELOG.md automatisch generiert** aus Git-Commits
4. ✅ Git Commit erstellt
5. ✅ Git Tag erstellt
6. ✅ Preview anzeigen + Bestätigung
Dann nur noch:
```bash
git push && git push --tags
```
### Beispiel-Workflow:
```bash
# Features entwickeln mit Conventional Commits:
git commit -m "feat: Add user login"
git commit -m "fix: Fix button alignment"
git commit -m "refactor: Extract ConsentFilter component"
# Release erstellen:
npm run release:minor
# Preview wird angezeigt, dann [Y] drücken
# Push:
git push && git push --tags
```
### CHANGELOG wird automatisch generiert!
Das Release-Script (`scripts/release.sh`) gruppiert deine Commits nach Typ:
- `feat:` → ✨ Features
- `fix:` → 🐛 Fixes
- `refactor:` → ♻️ Refactoring
- `chore:` → 🔧 Chores
- `docs:` → 📚 Documentation
**Wichtig:** Verwende [Conventional Commits](https://www.conventionalcommits.org/)!
### Manuelle Scripts (falls nötig)
```bash
# Version nur synchronisieren (ohne Bump):
./scripts/sync-version.sh
# Version manuell bumpen:
./scripts/bump-version.sh patch # oder minor/major
```
**Version-Synchronisation:**
- Single Source of Truth: `frontend/package.json`
- Wird synchronisiert zu: `backend/package.json`, `Footer.js`, `generate-openapi.js`, Docker Images
---
## Nützliche Befehle
```bash
# Alle Container-IDs:
docker ps -a
# Speicherplatz prüfen:
docker system df
# Ungenutztes aufräumen:
docker system prune -a
# Logs durchsuchen:
docker compose -f docker/dev/docker-compose.yml logs | grep ERROR
# Performance-Monitoring:
docker stats
```

481
README.md
View File

@ -5,87 +5,103 @@ A self-hosted image uploader with multi-image upload capabilities and automatic
## Features

**Multi-Image Upload**: Upload multiple images at once with batch processing
**Telegram Notifications**: 🆕 Real-time notifications for uploads, consent changes, deletions, and daily warnings
**Social Media Consent Management**: 🆕 GDPR-compliant consent system for workshop display and social media publishing
**Automatic Cleanup**: 🆕 Unapproved groups are automatically deleted after 7 days
**Deletion Log**: 🆕 Complete audit trail of automatically deleted content
**Drag-and-Drop Reordering**: 🆕 Users (during upload) and admins can reorder images via an intuitive drag-and-drop interface
**Slideshow Mode**: Automatic fullscreen slideshow with smooth transitions (respects custom ordering)
**Preview Image Optimization**: Automatic thumbnail generation for faster gallery loading (96-98% size reduction)
**Touch-Friendly Interface**: 🆕 Mobile-optimized drag handles and responsive design
**Moderation Panel**: Dedicated moderation interface with consent filtering and export
**Persistent Storage**: Docker volumes ensure data persistence across restarts
**Clean UI**: Minimalist design focused on user experience
**Self-Hosted**: Complete control over your data and infrastructure

(1.2.1 instead listed: "**Drag-and-Drop Reordering**: 🆕 Admins can reorder images via intuitive drag-and-drop interface", "**Admin Panel**: Dedicated moderation interface for content management and organization", "**Lightweight**: Built with modern web technologies for optimal performance".)

## What's New

This project extends the original [Image-Uploader by vallezw](https://github.com/vallezw/Image-Uploader) with enhanced multi-upload and slideshow capabilities.

See the [CHANGELOG](CHANGELOG.md) for a detailed list of improvements and new features. In 1.2.1, this section instead continued with the feature lists below:

### 🆕 Latest Features (January 2025)
- **Drag-and-Drop Image Reordering**: Admins can now reorder images using intuitive drag-and-drop
- **Touch-Friendly Interface**: Mobile-optimized controls with always-visible drag handles
- **Slideshow Integration**: Custom image order automatically applies to slideshow mode
- **Optimistic UI Updates**: Immediate visual feedback with error recovery
- **Comprehensive Admin Panel**: Dedicated moderation interface for content curation
### Core Features
- Multi-image batch upload with progress tracking
- Automatic slideshow presentation mode
- Image grouping with descriptions and metadata
- Random slideshow rotation with custom ordering support
- Keyboard navigation support (Slideshow: Space/Arrow keys, Escape to exit)
- Mobile-responsive design with touch-first interactions
## Quick Start

### Docker Deployment (Recommended)

#### Production Environment
```bash
# Start production environment
./prod.sh

# Or manually:
docker compose -f docker/prod/docker-compose.yml up -d
```

#### Development Environment
```bash
# Start development environment
./dev.sh

# Or manually:
docker compose -f docker/dev/docker-compose.yml up -d
```

### Access URLs

#### Production (Port 80):
- Upload Interface: `http://localhost`
- Backend: `http://localhost:5000`
- Slideshow Mode: `http://localhost/slideshow`
- Groups Overview: `http://localhost/groups`
- Moderation Panel: `http://localhost/moderation` (requires authentication)

#### Development (Port 3000):
- Upload Interface: `http://localhost:3000`
- Backend API: `http://localhost:5001`
- Slideshow Mode: `http://localhost:3000/slideshow`

The 1.2.1 Quick Start instead read:

1. **Create docker-compose.yml**:
```yaml
services:
  image-uploader-frontend:
    image: gitea.lan.hobbyhimmel.de/hobbyhimmel/image-uploader-frontend:latest
    ports:
      - "80:80"
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    depends_on:
      - "image-uploader-backend"
    environment:
      - "API_URL=http://image-uploader-backend:5000"
      - "CLIENT_URL=http://localhost"
    container_name: "image-uploader-frontend"
    networks:
      - npm-nw
      - image-uploader-internal

  image-uploader-backend:
    image: gitea.lan.hobbyhimmel.de/hobbyhimmel/image-uploader-backend:latest
    ports:
      - "5000:5000"
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    container_name: "image-uploader-backend"
    networks:
      - image-uploader-internal
    volumes:
      - app-data:/usr/src/app/src/data

volumes:
  app-data:
    driver: local

networks:
  npm-nw:
    external: true
  image-uploader-internal:
    driver: bridge
```

2. **Start the application**:
```bash
docker compose up -d
```

3. **Access the application**:
- Upload Interface: `http://localhost`
- Slideshow Mode: `http://localhost/slideshow`

### Multi-Image Upload
2. Drag & drop multiple images or click to select 2. Drag & drop multiple images or click to select
3. Add an optional description for your image collection 3. Add an optional description for your image collection
4. **Grant Consent** (mandatory): 4. Click "Upload Images" to process the batch
- ✅ **Workshop Display**: Required consent to display images on local monitor 5. Images are automatically grouped for slideshow viewing
- ☐ **Social Media** (optional): Per-platform consent for Facebook, Instagram, TikTok
5. Click "Upload Images" to process the batch
6. Receive your **Group ID** and **Management Link** as reference
7. Images are grouped and await moderation approval
### Self-Service Management Portal
After upload, users receive a unique management link (`/manage/:token`) to:
- **View Upload**: See all images and metadata
- **Manage Consents**: Revoke or restore workshop/social media consents
- **Edit Metadata**: Update title, description, year (triggers re-moderation)
- **Manage Images**: Add new images or delete existing ones
- **Delete Group**: Complete removal with double-confirmation
- **Email Contact**: Request deletion of already published social media posts
**Security Features**:
- No authentication required (token-based access)
- Rate limiting: 10 requests per hour per IP
- Brute-force protection: 20 failed attempts → 24h ban
- Complete audit trail of all management actions
### Slideshow Mode
@ -94,9 +110,7 @@ After upload, users receive a unique management link (`/manage/:token`) to:
- Fullscreen presentation
- 4-second display per image
- Automatic progression through all slideshow collections
- **🆕 Chronological order**: Groups play from oldest to newest (year → upload date)
- **🆕 Intelligent preloading**: Next images load in background for seamless transitions
- **🆕 Zero loading delays**: Pre-cached images for instant display
- Smooth fade transitions (0.5s)
- **Keyboard Controls**:

(1.2.1: "Random selection of next slideshow after completing current one" instead of chronological order.)
@ -131,37 +145,15 @@ The application automatically generates optimized preview thumbnails for all upl
### Moderation Interface (Protected)

- **Access**: `http://localhost/moderation` (requires admin session)
- **Authentication Flow**:
  - Built-in login form establishes a server session stored in HttpOnly cookies
  - First-time setup wizard creates the initial admin user once `ADMIN_SESSION_SECRET` is configured
  - CSRF token must be included (header `X-CSRF-Token`) for any mutating admin API call
  - `AUTHENTICATION.md` documents CLI/cURL examples for managing sessions and CSRF tokens
- **Protected Endpoints**: All `/api/admin/*` routes require authentication
- **Features**:
  - Review pending image groups before public display
  - Visual countdown showing days until automatic deletion (7 days for unapproved groups)
  - **Consent Management**:
    - Visual consent badges showing social media platforms
    - Filter by consent status (All / Workshop-only / Facebook / Instagram / TikTok)
    - Export consent data as CSV/JSON for legal compliance
    - Consent timestamp tracking
  - Approve or reject submitted collections with instant feedback
  - Delete individual images from approved groups
  - View group details (title, creator, description, image count)
  - **Deletion Log** (bottom of moderation page):
    - Statistics: Total groups/images deleted, storage freed
    - Detailed history table with timestamps and reasons
    - Toggle between last 10 entries and complete history
  - Bulk moderation actions
  - **Automatic Cleanup**:
    - Unapproved groups are automatically deleted after 7 days
    - Daily cleanup runs at 10:00 AM (Europe/Berlin timezone)
    - Complete removal: Database entries + physical files (originals + previews)
    - Full audit trail logged for compliance
    - **Note**: Approved groups are NEVER automatically deleted
- **Security Features**:
  - Password protected access via nginx HTTP Basic Auth
  - Hidden from search engines (`robots.txt` + `noindex` meta tags)

(1.2.1: "**Authentication**: HTTP Basic Auth (username: admin, password: set during setup)"; features limited to review, approve/reject, image deletion, group details, and bulk actions.)
@ -174,217 +166,47 @@ The application automatically generates optimized preview thumbnails for all upl
- View group statistics and metadata
## Docker Structure
The application uses separate Docker configurations for development and production with **simplified environment variable management**:
```
docker/
├── .env.backend.example # Backend environment variables documentation
├── .env.frontend.example # Frontend environment variables documentation
├── dev/ # Development environment
│ ├── .env # 🆕 Central dev secrets (gitignored)
│ ├── .env.example # Dev environment template
│ ├── docker-compose.yml # All ENV vars defined here
│ ├── backend/
│ │ └── Dockerfile # Development backend container
│ └── frontend/
│ ├── config/env.sh # Generates window._env_ from ENV
│ ├── Dockerfile # Development frontend container
│ ├── nginx.conf # Development nginx configuration
│ └── start.sh # Development startup script
└── prod/ # Production environment
├── .env # 🆕 Central prod secrets (gitignored)
├── .env.example # Production environment template
├── docker-compose.yml # All ENV vars defined here
├── backend/
│ └── Dockerfile # Production backend container
└── frontend/
├── config/env.sh # Generates window._env_ from ENV
├── config/htpasswd # HTTP Basic Auth credentials
├── Dockerfile # Production frontend container
└── nginx.conf # Production nginx configuration
```
### Environment Configuration
**🆕 Simplified ENV Structure (Nov 2025):**
- **2 central `.env` files** (down from 16 files!)
- `docker/dev/.env` - All development secrets
- `docker/prod/.env` - All production secrets
- **docker-compose.yml** - All environment variables defined in `environment:` sections
- **No .env files in Docker images** - All configuration via docker-compose
- **Frontend env.sh** - Generates `window._env_` JavaScript object from ENV variables at runtime
**How it works:**
1. Docker Compose automatically reads `.env` from the same directory
2. Variables are injected into containers via `environment:` sections using `${VAR}` placeholders
3. Frontend `env.sh` script reads ENV variables and generates JavaScript config at container startup
4. Secrets stay in gitignored `.env` files, never in code or images
- **Development**: Uses `docker/dev/` configuration with live reloading
- **Production**: Uses `docker/prod/` configuration with optimized builds
- **Scripts**: Use `./dev.sh` or `./prod.sh` for easy deployment
## Data Structure

Data are stored in a SQLite database. The structure is as follows:

### Core Tables
``` sql
-- Groups table (extended with consent fields)
CREATE TABLE groups (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    group_id TEXT UNIQUE NOT NULL,
    year INTEGER NOT NULL,
    title TEXT NOT NULL,
    description TEXT,
    name TEXT,
    upload_date DATETIME NOT NULL,
    approved BOOLEAN DEFAULT FALSE,
    display_in_workshop BOOLEAN NOT NULL DEFAULT 0,  -- Consent for workshop display
    consent_timestamp DATETIME,                      -- When consent was granted
    management_token TEXT,                           -- For Phase 2: Self-service portal
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

-- Images table
CREATE TABLE images (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    group_id TEXT NOT NULL,
    file_name TEXT NOT NULL,
    original_name TEXT NOT NULL,
    file_path TEXT NOT NULL,
    preview_path TEXT,          -- Optimized thumbnail path
    image_description TEXT,     -- Individual image description
    upload_order INTEGER NOT NULL,
    file_size INTEGER,
    mime_type TEXT,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE CASCADE
);

-- Deletion log for audit trail
CREATE TABLE deletion_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    group_id TEXT NOT NULL,
    title TEXT,
    name TEXT,
    upload_date DATETIME,
    image_count INTEGER,
    total_size INTEGER,
    deletion_reason TEXT,
    deleted_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
```

(1.2.1 schema: the same core tables without the consent/management columns and comments, plus `CREATE TABLE sqlite_sequence(name,seq);`; the consent, deletion-log, and audit tables did not exist yet.)

### Social Media Consent Tables
``` sql
-- Configurable social media platforms
CREATE TABLE social_media_platforms (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform_name TEXT UNIQUE NOT NULL,  -- e.g., 'facebook', 'instagram', 'tiktok'
    display_name TEXT NOT NULL,          -- e.g., 'Facebook', 'Instagram', 'TikTok'
    icon_name TEXT,                      -- Material-UI Icon name
    is_active BOOLEAN DEFAULT 1,
    sort_order INTEGER DEFAULT 0,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

-- Per-group, per-platform consent tracking
CREATE TABLE group_social_media_consents (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    group_id TEXT NOT NULL,
    platform_id INTEGER NOT NULL,
    consented BOOLEAN NOT NULL DEFAULT 0,
    consent_timestamp DATETIME NOT NULL,
    revoked BOOLEAN DEFAULT 0,    -- For Phase 2: Consent revocation
    revoked_timestamp DATETIME,   -- When consent was revoked
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE CASCADE,
    FOREIGN KEY (platform_id) REFERENCES social_media_platforms(id) ON DELETE CASCADE,
    UNIQUE(group_id, platform_id)
);

-- Management audit log (Phase 2)
CREATE TABLE management_audit_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    group_id TEXT,
    management_token TEXT,   -- First 8 characters only (masked)
    action TEXT NOT NULL,    -- validate_token, revoke_consent, edit_metadata, add_images, delete_image, delete_group
    success BOOLEAN NOT NULL,
    error_message TEXT,
    ip_address TEXT,
    user_agent TEXT,
    request_data TEXT,       -- JSON of request body
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE SET NULL
);

-- Indexes for performance
CREATE INDEX IF NOT EXISTS idx_audit_group_id ON management_audit_log(group_id);
CREATE INDEX IF NOT EXISTS idx_audit_action ON management_audit_log(action);
CREATE INDEX IF NOT EXISTS idx_audit_success ON management_audit_log(success);
CREATE INDEX IF NOT EXISTS idx_audit_created_at ON management_audit_log(created_at);
CREATE INDEX IF NOT EXISTS idx_audit_ip_address ON management_audit_log(ip_address);

-- Migration tracking
CREATE TABLE schema_migrations (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    migration_name TEXT UNIQUE NOT NULL,
    applied_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
```

### Indexes
``` sql
-- Groups indexes
CREATE INDEX idx_groups_group_id ON groups(group_id);
CREATE INDEX idx_groups_year ON groups(year);
CREATE INDEX idx_groups_upload_date ON groups(upload_date);
CREATE INDEX idx_groups_display_consent ON groups(display_in_workshop);
CREATE UNIQUE INDEX idx_groups_management_token ON groups(management_token) WHERE management_token IS NOT NULL;

-- Images indexes
CREATE INDEX idx_images_group_id ON images(group_id);
CREATE INDEX idx_images_upload_order ON images(upload_order);

-- Consent indexes
CREATE INDEX idx_consents_group_id ON group_social_media_consents(group_id);
CREATE INDEX idx_consents_platform_id ON group_social_media_consents(platform_id);
CREATE INDEX idx_consents_consented ON group_social_media_consents(consented);
```

### Triggers
``` sql
-- Update timestamp on groups modification
CREATE TRIGGER update_groups_timestamp
AFTER UPDATE ON groups
FOR EACH ROW
BEGIN
    UPDATE groups SET updated_at = CURRENT_TIMESTAMP WHERE id = NEW.id;
END;

-- Update timestamp on consent modification
CREATE TRIGGER update_consents_timestamp
AFTER UPDATE ON group_social_media_consents
FOR EACH ROW
BEGIN
    UPDATE group_social_media_consents SET updated_at = CURRENT_TIMESTAMP WHERE id = NEW.id;
END;
```
## Architecture
@ -438,122 +260,29 @@ src
## API Endpoints

### Upload Operations
- `POST /api/upload/batch` - Upload multiple images with description and consent data (1.2.1: "with description")
- `GET /api/groups` - Retrieve all slideshow groups
- `GET /api/groups/:id` - Get specific slideshow group
### Consent Management
- `GET /api/social-media/platforms` - Get list of active social media platforms
- `POST /api/groups/:groupId/consents` - Save consent data for a group
- `GET /api/groups/:groupId/consents` - Get consent data for a group
- `GET /api/admin/groups/by-consent` - Filter groups by consent status (query params: `?workshopConsent=true&platform=facebook`)
- `GET /api/admin/consents/export` - Export all consent data as CSV/JSON
### User Self-Service Management Portal (Phase 2 - Backend Complete)
**Management Portal APIs** (Token-based authentication):
- `GET /api/manage/:token` - Validate management token and retrieve group data
- `PUT /api/manage/:token/consents` - Revoke or restore consents (workshop & social media)
- `PUT /api/manage/:token/metadata` - Edit group title and description (resets approval status)
- `POST /api/manage/:token/images` - Add new images to existing group (max 50 total, resets approval)
- `DELETE /api/manage/:token/images/:imageId` - Delete individual image (prevents deleting last image)
- `DELETE /api/manage/:token` - Delete entire group with all images and data
**Management Audit Log APIs** (Admin access only):
- `GET /api/admin/management-audit?limit=N` - Retrieve recent management actions (default: 10)
- `GET /api/admin/management-audit/stats` - Get statistics (total actions, success rate, unique IPs)
- `GET /api/admin/management-audit/group/:groupId` - Get audit log for specific group
**Security Features**:
- IP-based rate limiting: 10 requests per hour per IP
- Brute-force protection: 20 failed token validations → 24-hour IP ban
- Complete audit trail: All management actions logged with IP, User-Agent, timestamp
- Token masking: Only first 8 characters stored in audit log for privacy
- Automatic file cleanup: Physical deletion of images when removed via API
### Moderation Operations (Protected)
- `GET /moderation/groups` - Get all groups pending moderation (includes consent info)
- `PATCH /groups/:id/approve` - Approve/unapprove a group for public display (1.2.1: `POST`, approve only)
- `DELETE /groups/:id` - Delete an entire group
- `DELETE /groups/:id/images/:imageId` - Delete individual image from group
### Admin Operations (Protected by /moderation access)
- `GET /api/admin/deletion-log?limit=N` - Get recent deletion log entries (default: 10)
- `GET /api/admin/deletion-log/all` - Get complete deletion history
- `GET /api/admin/deletion-log/stats` - Get deletion statistics (total groups/images deleted, storage freed)
- `POST /api/admin/cleanup/trigger` - Manually trigger cleanup (for testing)
- `GET /api/admin/cleanup/preview` - Preview which groups would be deleted (dry-run)
### File Access
- `GET /api/upload/:filename` - Access uploaded image files (legacy, use `/api/download` instead)
- `GET /api/download/:filename` - Download original full-resolution images
- `GET /api/previews/:filename` - Access optimized preview thumbnails (~100KB, 800px width)
## Testing
### Automatic Cleanup Testing
The application includes comprehensive testing tools for the automatic cleanup feature:
```bash
# Run interactive test helper (recommended)
./tests/test-cleanup.sh
# Available test operations:
# 1. View unapproved groups with age
# 2. Backdate groups for testing (simulate 7+ day old groups)
# 3. Preview cleanup (dry-run)
# 4. Execute cleanup manually
# 5. View deletion log history
```
**Testing Workflow:**
1. Upload a test group (don't approve it)
2. Use test script to backdate it by 8 days
3. Preview what would be deleted
4. Execute cleanup and verify deletion log
For detailed testing instructions, see: [`tests/TESTING-CLEANUP.md`](tests/TESTING-CLEANUP.md)
## Configuration

### Environment Variables
**Simplified ENV Management (Nov 2025):**
All environment variables are now managed through **2 central `.env` files** and `docker-compose.yml`:
**Core Variables:**
| Variable | Default | Description |
|----------|---------|-------------|
| `API_URL` | `http://localhost:5001` | Backend API endpoint (frontend → backend) |
| `PUBLIC_HOST` | `public.test.local` | Public upload subdomain (no admin access) |
| `INTERNAL_HOST` | `internal.test.local` | Internal admin subdomain (full access) |
| `ADMIN_SESSION_SECRET` | - | Secret for admin session cookies (required) |

(1.2.1 table: `API_URL` defaulted to `http://localhost:5000` with description "Backend API endpoint", plus `CLIENT_URL` = `http://localhost`, "Frontend application URL".)
**Telegram Notifications (Optional):**
| Variable | Default | Description |
|----------|---------|-------------|
| `TELEGRAM_ENABLED` | `false` | Enable/disable Telegram notifications |
| `TELEGRAM_BOT_TOKEN` | - | Telegram Bot API token (from @BotFather) |
| `TELEGRAM_CHAT_ID` | - | Telegram chat/group ID for notifications |
| `TELEGRAM_SEND_TEST_ON_START` | `false` | Send test message on service startup (dev only) |
**Configuration Files:**
- `docker/dev/.env` - Development secrets (gitignored)
- `docker/prod/.env` - Production secrets (gitignored)
- `docker/dev/.env.example` - Development template (committed)
- `docker/prod/.env.example` - Production template (committed)
**How to configure:**
1. Copy `.env.example` to `.env` in the respective environment folder
2. Edit `.env` and set your secrets (ADMIN_SESSION_SECRET, Telegram tokens, etc.)
3. Docker Compose automatically reads `.env` and injects variables into containers
4. Never commit `.env` files (already in `.gitignore`)
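An illustrative `docker/dev/.env` with placeholder values (variable names follow the tables above; never commit real secrets):

```env
# docker/dev/.env - example values only
ADMIN_SESSION_SECRET=replace-with-openssl-rand-hex-32-output
PUBLIC_HOST=public.test.local
INTERNAL_HOST=internal.test.local
API_URL=http://localhost:5001
TELEGRAM_ENABLED=false
TELEGRAM_BOT_TOKEN=
TELEGRAM_CHAT_ID=
```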
**Telegram Setup:** See `scripts/README.telegram.md` for complete configuration guide.
### Volume Configuration
- **Upload Limits**: 100MB maximum file size for batch uploads

TODO.md
@ -32,7 +32,7 @@ New structure: database in src/data/db and images in src/data/images
- [x] The new `ImageGallery` component should look like the GroupCard in the grid -> see Pages/ModerationGroupPage.js and Pages/GroupOverviewPage.js, and offer the same functionality.
- [x] Clarify SimpleMultiDropzone vs. MultiImageUploadDropzone (MultiImageUploadDropzone was deleted, SimpleMultiDropzone renamed to MultiImageDropzone)
- [x] Remove the old group-card CSS styles from ImageGallery.css
- [x] **Persistent reordering: drag-and-drop in `ImageGallery` + backend endpoint** 🚧
- **Status**: In planning
- **Feature plan**: `docs/FEATURE_PLAN-reordering.md`
- **Tasks**: 9 tasks (backend API + frontend DnD + integration)
@ -44,20 +44,7 @@ New structure: database in src/data/db and images in src/data/images
## Backend
[x] Extend the API with the ability to edit/update existing data
[x] Preview generation for uploaded images
[x] **Automatic deletion of unapproved groups** ✅ COMPLETED
- **Status**: Finished and tested
- **Feature plan**: `docs/FEATURE_PLAN-delete-unproved-groups.md`
- **Branch**: `feature/DeleteUnprovedGroups`
- **Details**:
- Automatic deletion after 7 days
- Countdown display in the moderation view
- Complete deletion log with statistics
- Daily cron job (10:00)
- Test tools: `tests/test-cleanup.sh` and `tests/TESTING-CLEANUP.md`
- **Tasks**: 11 tasks (DB migration + backend cleanup service + cron job + frontend UI)
- **Estimated effort**: 2-3 days
- **Deletion window**: 7 days after upload (unapproved groups only)
- **Cron job**: daily at 10:00
[ ] Integrate a notification system (e-mail, push notifications) for when a new slideshow has been uploaded
[ ] Implement a logging system to track changes and activity
@ -65,8 +52,7 @@ New structure: database in src/data/db and images in src/data/images
[x] Extend the UI with an edit function for existing entries in ModerationPage.js
[x] The displayed groups should read "Edit group" instead of "View images"
[x] The existing "View images" view should be implemented as its own page
[x] Add the option to attach a description to the images
## 🚀 Deployment Considerations
@ -98,16 +84,16 @@ New structure: database in src/data/db and images in src/data/images
- ✅ Mobile compatibility
### Nice-to-Have
[x] 🎨 Change order via drag & drop
[x] 📊 Upload progress with details
[x] 🖼️ Thumbnail navigation in the slideshow
- 🔄 Batch operations (remove all, etc.)
### Future Features
- 👤 User management
- 🏷️ Tagging system
- 📤 Export functions
- 🎵 Audio integration
---
@ -1,8 +1,3 @@
node_modules
npm-debug.log
upload/
src/data/db/*.db
src/data/db/*.db-*
src/data/images/
src/data/previews/
src/data/groups/

backend/.env.example
@ -0,0 +1 @@
REMOVE_IMAGES=<boolean | undefined>

backend/Dockerfile
@ -0,0 +1,21 @@
FROM node:24
WORKDIR /usr/src/app
# Note: Node 24 LTS (v24.11.0) uses Debian Bookworm
# Install sqlite3 CLI
RUN apt-get update && apt-get install -y sqlite3 && rm -rf /var/lib/apt/lists/*
COPY package*.json ./
# Development
RUN npm install
# Production
# RUN npm ci --only=production
COPY . .
EXPOSE 5000
CMD [ "node", "src/index.js" ]

File diff suppressed because it is too large Load Diff

View File

@ -1,34 +0,0 @@
module.exports = {
testEnvironment: 'node',
coverageDirectory: 'coverage',
setupFiles: ['<rootDir>/tests/env.js'],
collectCoverageFrom: [
'src/**/*.js',
'!src/index.js', // Server startup
'!src/generate-openapi.js', // Build tool
'!src/scripts/**', // Utility scripts
],
testMatch: [
'**/tests/**/*.test.js',
'**/tests/**/*.spec.js'
],
coverageThreshold: {
global: {
branches: 20,
functions: 20,
lines: 20,
statements: 20
}
},
// Setup for each test file - initializes server once
setupFilesAfterEnv: ['<rootDir>/tests/setup.js'],
testTimeout: 10000,
// Run tests serially to avoid DB conflicts
maxWorkers: 1,
// Force exit after tests complete
forceExit: true,
// Transform ESM modules in node_modules
transformIgnorePatterns: [
'node_modules/(?!(uuid)/)'
]
};
@ -1,50 +1,31 @@
{ {
"name": "backend", "name": "backend",
"version": "2.0.1", "version": "1.0.0",
"description": "", "description": "",
"main": "src/index.js", "main": "src/index.js",
"scripts": { "scripts": {
"start": "node src/index.js", "start": "node src/index.js",
"server": "nodemon --ignore docs/openapi.json src/index.js", "server": "nodemon src/index.js",
"client": "npm run dev --prefix ../frontend", "client": "npm run dev --prefix ../frontend",
"client-build": "cd ../frontend && npm run build && serve -s build -l 80", "client-build": "cd ../frontend && npm run build && serve -s build -l 80",
"dev": "concurrently \"npm run server\" \"npm run client\"", "dev": "concurrently \"npm run server\" \"npm run client\"",
"build": "concurrently \"npm run server\" \"npm run client-build\"", "build": "concurrently \"npm run server\" \"npm run client-build\""
"generate-openapi": "node src/generate-openapi.js",
"test-openapi": "node test-openapi-paths.js",
"validate-openapi": "redocly lint docs/openapi.json",
"test": "jest --coverage",
"test:watch": "jest --watch",
"test:api": "jest tests/api",
"create-admin": "node src/scripts/createAdminUser.js"
}, },
"keywords": [], "keywords": [],
"author": "", "author": "",
"license": "ISC", "license": "ISC",
"dependencies": { "dependencies": {
"bcryptjs": "^3.0.3",
"connect-sqlite3": "^0.9.16",
"dotenv": "^8.2.0", "dotenv": "^8.2.0",
"express": "^4.17.1", "express": "^4.17.1",
"express-fileupload": "^1.2.1", "express-fileupload": "^1.2.1",
"express-session": "^1.18.2",
"find-remove": "^2.0.3", "find-remove": "^2.0.3",
"fs": "^0.0.1-security", "fs": "^0.0.1-security",
"node-cron": "^4.2.1",
"node-telegram-bot-api": "^0.66.0",
"sharp": "^0.34.4", "sharp": "^0.34.4",
"shortid": "^2.2.16", "shortid": "^2.2.16",
"sqlite3": "^5.1.7", "sqlite3": "^5.1.7"
"uuid": "^13.0.0"
}, },
"devDependencies": { "devDependencies": {
"@redocly/cli": "^2.11.1",
"@stoplight/prism-cli": "^5.14.2",
"concurrently": "^6.0.0", "concurrently": "^6.0.0",
"jest": "^30.2.0", "nodemon": "^2.0.7"
"nodemon": "^2.0.7",
"supertest": "^7.1.4",
"swagger-autogen": "^2.23.7",
"swagger-ui-express": "^5.0.1"
} }
} }

View File

@ -1,15 +1,21 @@
const endpoints = {
UPLOAD_STATIC_DIRECTORY: '/upload',
UPLOAD_FILE: '/upload',
UPLOAD_BATCH: '/upload/batch',
PREVIEW_STATIC_DIRECTORY: '/previews',
DOWNLOAD_FILE: '/download/:id',
GET_GROUP: '/groups/:groupId',
GET_ALL_GROUPS: '/groups',
DELETE_GROUP: '/groups/:groupId'
};
// Filesystem directory (relative to backend/src) where uploaded images will be stored
// Use path.join(__dirname, '..', UPLOAD_FS_DIR, fileName) in code
// In test mode, use a temporary directory in /tmp to avoid permission issues
const UPLOAD_FS_DIR = process.env.NODE_ENV === 'test'
? '/tmp/test-image-uploader/images'
: 'data/images';
// Filesystem directory (relative to backend/src) where preview images will be stored
// Use path.join(__dirname, '..', PREVIEW_FS_DIR, fileName) in code
const PREVIEW_FS_DIR = process.env.NODE_ENV === 'test'
? '/tmp/test-image-uploader/previews'
: 'data/previews';
// Preview generation configuration
const PREVIEW_CONFIG = {
@ -23,4 +29,4 @@ const time = {
WEEK_1: 604800000
};
module.exports = { time, UPLOAD_FS_DIR, PREVIEW_FS_DIR, PREVIEW_CONFIG };
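A small sketch of the usage pattern the comments above describe (the relative require path is an assumption about where this constants module lives):

```js
const path = require('path');
// Assumed location of the constants module relative to the caller:
const { UPLOAD_FS_DIR, PREVIEW_FS_DIR } = require('./constants');

// Resolve an uploaded file and its preview on disk, as the comments suggest.
function resolvePaths(fileName) {
  return {
    original: path.join(__dirname, '..', UPLOAD_FS_DIR, fileName),
    preview: path.join(__dirname, '..', PREVIEW_FS_DIR, fileName)
  };
}
```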
@ -5,41 +5,27 @@ const fs = require('fs');
class DatabaseManager {
constructor() {
this.db = null;
this.dbPath = null;
this.schemaPath = path.join(__dirname, 'schema.sql');
}
getDatabasePath() {
if (process.env.NODE_ENV === 'test') {
return ':memory:';
}
return path.join(__dirname, '../data/db/image_uploader.db');
}
async initialize() {
try {
if (!this.dbPath) {
this.dbPath = this.getDatabasePath();
}
// Make sure the data directory exists (skip for in-memory)
if (this.dbPath !== ':memory:') {
const dataDir = path.dirname(this.dbPath);
if (!fs.existsSync(dataDir)) {
fs.mkdirSync(dataDir, { recursive: true });
}
}
// Open the database connection (promisified for async/await)
await new Promise((resolve, reject) => {
this.db = new sqlite3.Database(this.dbPath, (err) => {
if (err) {
console.error('Error opening database:', err.message);
reject(err);
} else {
console.log('✓ SQLite database connected:', this.dbPath);
resolve();
}
});
});
// Enable foreign keys
@ -48,15 +34,8 @@ class DatabaseManager {
// Create the schema
await this.createSchema();
// Run database migrations (automatic on startup)
await this.runMigrations();
const skipPreviewGeneration = ['1', 'true', 'yes'].includes(String(process.env.SKIP_PREVIEW_GENERATION || '').toLowerCase());
// Generate missing previews for existing images (skip in test mode or when explicitly disabled)
if (process.env.NODE_ENV !== 'test' && !skipPreviewGeneration) {
await this.generateMissingPreviews();
}
console.log('✓ Database initialized successfully');
} catch (error) {
@ -125,42 +104,12 @@ }
}
}
// Migration: add the image_description column to the images table (if not already present)
try {
await this.run('ALTER TABLE images ADD COLUMN image_description TEXT');
console.log('✓ image_description column added to the images table');
} catch (error) {
// Column already exists - that's fine
if (!error.message.includes('duplicate column')) {
console.warn('Migration warning:', error.message);
}
}
// Create the deletion log table
await this.run(`
CREATE TABLE IF NOT EXISTS deletion_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id TEXT NOT NULL,
year INTEGER NOT NULL,
image_count INTEGER NOT NULL,
upload_date DATETIME NOT NULL,
deleted_at DATETIME DEFAULT CURRENT_TIMESTAMP,
deletion_reason TEXT DEFAULT 'auto_cleanup_7days',
total_file_size INTEGER
)
`);
console.log('✓ Deletion log table created');
// Create indexes
await this.run('CREATE INDEX IF NOT EXISTS idx_groups_group_id ON groups(group_id)');
await this.run('CREATE INDEX IF NOT EXISTS idx_groups_year ON groups(year)');
await this.run('CREATE INDEX IF NOT EXISTS idx_groups_upload_date ON groups(upload_date)');
await this.run('CREATE INDEX IF NOT EXISTS idx_groups_approved ON groups(approved)');
await this.run('CREATE INDEX IF NOT EXISTS idx_groups_cleanup ON groups(approved, upload_date)');
await this.run('CREATE INDEX IF NOT EXISTS idx_images_group_id ON images(group_id)');
await this.run('CREATE INDEX IF NOT EXISTS idx_images_upload_order ON images(upload_order)');
await this.run('CREATE INDEX IF NOT EXISTS idx_deletion_log_deleted_at ON deletion_log(deleted_at DESC)');
await this.run('CREATE INDEX IF NOT EXISTS idx_deletion_log_year ON deletion_log(year)');
console.log('✓ Indexes created');
// Create triggers
@ -174,31 +123,6 @@ `);
`);
console.log('✓ Triggers created');
// Admin users table (for session authentication)
await this.run(`
CREATE TABLE IF NOT EXISTS admin_users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
role TEXT NOT NULL DEFAULT 'admin',
is_active BOOLEAN NOT NULL DEFAULT 1,
requires_password_change BOOLEAN NOT NULL DEFAULT 0,
last_login_at DATETIME,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
)
`);
await this.run('CREATE UNIQUE INDEX IF NOT EXISTS idx_admin_users_username ON admin_users(username)');
await this.run(`
CREATE TRIGGER IF NOT EXISTS update_admin_users_timestamp
AFTER UPDATE ON admin_users
FOR EACH ROW
BEGIN
UPDATE admin_users SET updated_at = CURRENT_TIMESTAMP WHERE id = NEW.id;
END
`);
console.log('✓ Admin users table created');
console.log('✅ Database schema fully created');
} catch (error) {
console.error('💥 Error while creating the schema:', error);
@ -219,19 +143,6 @@ });
});
}
// Execute multi-statement SQL scripts (e.g. migrations with triggers)
exec(sql) {
return new Promise((resolve, reject) => {
this.db.exec(sql, (err) => {
if (err) {
reject(err);
} else {
resolve();
}
});
});
}
// Promise wrapper for sqlite3.get
get(sql, params = []) {
return new Promise((resolve, reject) => {
@ -360,112 +271,6 @@ class DatabaseManager {
// Don't throw - this shouldn't prevent DB initialization
}
}
/**
* Run pending database migrations automatically
* Migrations are SQL files in the migrations/ directory
*/
async runMigrations() {
try {
console.log('🔄 Checking for database migrations...');
const migrationsDir = path.join(__dirname, 'migrations');
// Check if migrations directory exists
if (!fs.existsSync(migrationsDir)) {
console.log(' No migrations directory found, skipping migrations');
return;
}
// Create migrations tracking table if it doesn't exist
await this.run(`
CREATE TABLE IF NOT EXISTS schema_migrations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
migration_name TEXT UNIQUE NOT NULL,
applied_at DATETIME DEFAULT CURRENT_TIMESTAMP
)
`);
// Get list of applied migrations
const appliedMigrations = await this.all('SELECT migration_name FROM schema_migrations');
const appliedSet = new Set(appliedMigrations.map(m => m.migration_name));
// Get all migration files
const migrationFiles = fs.readdirSync(migrationsDir)
.filter(f => f.endsWith('.sql'))
.sort();
if (migrationFiles.length === 0) {
console.log(' No migration files found');
return;
}
let appliedCount = 0;
// Run pending migrations
for (const file of migrationFiles) {
if (appliedSet.has(file)) {
continue; // Already applied
}
console.log(` 🔧 Applying migration: ${file}`);
const migrationPath = path.join(migrationsDir, file);
const sql = fs.readFileSync(migrationPath, 'utf8');
try {
// Execute migration in a transaction
await this.run('BEGIN TRANSACTION');
// Remove comments (both line and inline) to avoid sqlite parser issues
const cleanedSql = sql
.split('\n')
.map(line => {
const commentIndex = line.indexOf('--');
if (commentIndex !== -1) {
return line.substring(0, commentIndex);
}
return line;
})
.join('\n')
.trim();
if (!cleanedSql) {
console.warn(` ⚠️ Migration ${file} contains no executable SQL, skipped`);
await this.run('COMMIT');
continue;
}
await this.exec(cleanedSql);
// Record migration
await this.run(
'INSERT INTO schema_migrations (migration_name) VALUES (?)',
[file]
);
await this.run('COMMIT');
appliedCount++;
console.log(` ✅ Successfully applied: ${file}`);
} catch (error) {
await this.run('ROLLBACK');
console.error(` ❌ Error applying ${file}:`, error.message);
throw new Error(`Migration failed: ${file} - ${error.message}`);
}
}
if (appliedCount > 0) {
console.log(`✓ Applied ${appliedCount} database migration(s)`);
} else {
console.log('✓ Database is up to date');
}
} catch (error) {
console.error('❌ Migration error:', error.message);
throw error;
}
}
}
// Singleton instance
@ -1,18 +0,0 @@
-- Migration 005: Add consent management fields to groups table
-- Date: 2025-11-09
-- Description: Adds fields for workshop display consent, consent timestamp, and management token
-- Add consent-related columns to groups table
ALTER TABLE groups ADD COLUMN display_in_workshop BOOLEAN NOT NULL DEFAULT 0;
ALTER TABLE groups ADD COLUMN consent_timestamp DATETIME;
ALTER TABLE groups ADD COLUMN management_token TEXT; -- For Phase 2: Self-service portal
-- Create indexes for better query performance
CREATE INDEX IF NOT EXISTS idx_groups_display_consent ON groups(display_in_workshop);
CREATE UNIQUE INDEX IF NOT EXISTS idx_groups_management_token ON groups(management_token) WHERE management_token IS NOT NULL;
-- IMPORTANT: Do NOT update existing groups!
-- Old groups (before this migration) never gave explicit consent.
-- They must remain with display_in_workshop = 0 for GDPR compliance.
-- Only NEW uploads (after this migration) will have explicit consent via the upload form.
-- Existing groups can be manually reviewed and consent can be granted by admins if needed.
@ -1,54 +0,0 @@
-- Migration 006: Create social media platform configuration and consent tables
-- Date: 2025-11-09
-- Description: Creates extensible social media platform management and per-group consent tracking
-- ============================================================================
-- Table: social_media_platforms
-- Purpose: Configurable list of social media platforms for consent management
-- ============================================================================
CREATE TABLE IF NOT EXISTS social_media_platforms (
id INTEGER PRIMARY KEY AUTOINCREMENT,
platform_name TEXT UNIQUE NOT NULL, -- Internal identifier (e.g., 'facebook', 'instagram', 'tiktok')
display_name TEXT NOT NULL, -- User-facing name (e.g., 'Facebook', 'Instagram', 'TikTok')
icon_name TEXT, -- Material-UI Icon name for frontend display
is_active BOOLEAN DEFAULT 1, -- Enable/disable platform without deletion
sort_order INTEGER DEFAULT 0, -- Display order in UI
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- ============================================================================
-- Table: group_social_media_consents
-- Purpose: Track user consent for each group and social media platform
-- ============================================================================
CREATE TABLE IF NOT EXISTS group_social_media_consents (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id TEXT NOT NULL,
platform_id INTEGER NOT NULL,
consented BOOLEAN NOT NULL DEFAULT 0,
consent_timestamp DATETIME NOT NULL,
revoked BOOLEAN DEFAULT 0, -- For Phase 2: Consent revocation tracking
revoked_timestamp DATETIME, -- When consent was revoked (Phase 2)
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
-- Foreign key constraints
FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE CASCADE,
FOREIGN KEY (platform_id) REFERENCES social_media_platforms(id) ON DELETE CASCADE,
-- Ensure each platform can only have one consent entry per group
UNIQUE(group_id, platform_id)
);
-- ============================================================================
-- Indexes for query performance
-- ============================================================================
CREATE INDEX IF NOT EXISTS idx_consents_group_id ON group_social_media_consents(group_id);
CREATE INDEX IF NOT EXISTS idx_consents_platform_id ON group_social_media_consents(platform_id);
CREATE INDEX IF NOT EXISTS idx_consents_consented ON group_social_media_consents(consented);
-- ============================================================================
-- Seed data: Insert default social media platforms
-- ============================================================================
INSERT INTO social_media_platforms (platform_name, display_name, icon_name, sort_order) VALUES ('facebook', 'Facebook', 'Facebook', 1);
INSERT INTO social_media_platforms (platform_name, display_name, icon_name, sort_order) VALUES ('instagram', 'Instagram', 'Instagram', 2);
INSERT INTO social_media_platforms (platform_name, display_name, icon_name, sort_order) VALUES ('tiktok', 'TikTok', 'MusicNote', 3);
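To illustrate how the two tables work together, a hedged sketch of recording a consent at upload time (the `dbManager` promise wrappers mirror the DatabaseManager shown elsewhere in this diff; treat the exact API as an assumption):

```js
// Record that a group consented to Instagram at upload time.
async function recordConsent(dbManager, groupId) {
  const platform = await dbManager.get(
    'SELECT id FROM social_media_platforms WHERE platform_name = ? AND is_active = 1',
    ['instagram']
  );
  if (!platform) throw new Error('Platform not configured');
  // UNIQUE(group_id, platform_id) makes re-inserts fail rather than duplicate.
  await dbManager.run(
    `INSERT INTO group_social_media_consents (group_id, platform_id, consented, consent_timestamp)
     VALUES (?, ?, 1, ?)`,
    [groupId, platform.id, new Date().toISOString()]
  );
}
```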
@ -1,32 +0,0 @@
-- Migration 007: Create management audit log table
-- Date: 2025-11-11
-- Description: Track all management portal actions for security and compliance
-- ============================================================================
-- Table: management_audit_log
-- Purpose: Audit trail for all user actions via management portal
-- ============================================================================
CREATE TABLE IF NOT EXISTS management_audit_log (
id INTEGER PRIMARY KEY AUTOINCREMENT,
group_id TEXT, -- Group ID (NULL if token validation failed)
management_token TEXT, -- Management token used (partially masked in queries)
action TEXT NOT NULL, -- Action type: 'validate_token', 'revoke_consent', 'update_metadata', 'add_image', 'delete_image', 'delete_group'
success BOOLEAN NOT NULL DEFAULT 1, -- Whether action succeeded
error_message TEXT, -- Error message if action failed
ip_address TEXT, -- Client IP address
user_agent TEXT, -- Client user agent
request_data TEXT, -- JSON of request data (sanitized)
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
-- Foreign key (optional, NULL if group was deleted)
FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE SET NULL
);
-- ============================================================================
-- Indexes for query performance
-- ============================================================================
CREATE INDEX IF NOT EXISTS idx_audit_group_id ON management_audit_log(group_id);
CREATE INDEX IF NOT EXISTS idx_audit_action ON management_audit_log(action);
CREATE INDEX IF NOT EXISTS idx_audit_success ON management_audit_log(success);
CREATE INDEX IF NOT EXISTS idx_audit_created_at ON management_audit_log(created_at);
CREATE INDEX IF NOT EXISTS idx_audit_ip ON management_audit_log(ip_address);
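As an illustration, a hedged monitoring helper over this table — failed actions per IP in the last 24 hours (uses the promise wrappers from the DatabaseManager shown in this diff; SQLite's `datetime()` handles the window):

```js
// Count failed management actions per IP over the last 24 hours.
async function suspiciousIPs(dbManager) {
  return dbManager.all(
    `SELECT ip_address, COUNT(*) AS failures
     FROM management_audit_log
     WHERE success = 0 AND created_at >= datetime('now', '-1 day')
     GROUP BY ip_address
     ORDER BY failures DESC`
  );
}
```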
@ -1,21 +0,0 @@
-- Migration: Create admin_users table for server-side admin authentication
CREATE TABLE IF NOT EXISTS admin_users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
role TEXT NOT NULL DEFAULT 'admin',
is_active BOOLEAN NOT NULL DEFAULT 1,
requires_password_change BOOLEAN NOT NULL DEFAULT 0,
last_login_at DATETIME,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE UNIQUE INDEX IF NOT EXISTS idx_admin_users_username ON admin_users(username);
CREATE TRIGGER IF NOT EXISTS update_admin_users_timestamp
AFTER UPDATE ON admin_users
FOR EACH ROW
BEGIN
UPDATE admin_users SET updated_at = CURRENT_TIMESTAMP WHERE id = NEW.id;
END;
@ -1,11 +0,0 @@
-- Migration 009: Add source tracking to audit log
-- Adds source_host and source_type columns to management_audit_log
-- Add source_host column (stores the hostname from which request originated)
ALTER TABLE management_audit_log ADD COLUMN source_host TEXT;
-- Add source_type column (stores 'public' or 'internal')
ALTER TABLE management_audit_log ADD COLUMN source_type TEXT;
-- Create index for filtering by source_type
CREATE INDEX IF NOT EXISTS idx_audit_log_source_type ON management_audit_log(source_type);
@ -1,139 +0,0 @@
/**
* Database Migration Runner
* Executes SQL migrations in order
*/
const sqlite3 = require('sqlite3').verbose();
const path = require('path');
const fs = require('fs');
const dbPath = path.join(__dirname, '../data/db/image_uploader.db');
const migrationsDir = path.join(__dirname, 'migrations');
// Helper to promisify database operations
function runQuery(db, sql, params = []) {
return new Promise((resolve, reject) => {
db.run(sql, params, function(err) {
if (err) reject(err);
else resolve(this);
});
});
}
function getQuery(db, sql, params = []) {
return new Promise((resolve, reject) => {
db.get(sql, params, (err, row) => {
if (err) reject(err);
else resolve(row);
});
});
}
async function runMigrations() {
console.log('🚀 Starting database migrations...\n');
// Check if database exists
if (!fs.existsSync(dbPath)) {
console.error('❌ Database file not found:', dbPath);
console.error('Please run the application first to initialize the database.');
process.exit(1);
}
const db = new sqlite3.Database(dbPath, (err) => {
if (err) {
console.error('❌ Error opening database:', err.message);
process.exit(1);
}
});
try {
// Enable foreign keys
await runQuery(db, 'PRAGMA foreign_keys = ON');
// Create migrations table if it doesn't exist
await runQuery(db, `
CREATE TABLE IF NOT EXISTS schema_migrations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
migration_name TEXT UNIQUE NOT NULL,
applied_at DATETIME DEFAULT CURRENT_TIMESTAMP
)
`);
// Get list of applied migrations
const appliedMigrations = await new Promise((resolve, reject) => {
db.all('SELECT migration_name FROM schema_migrations', [], (err, rows) => {
if (err) reject(err);
else resolve(rows.map(r => r.migration_name));
});
});
console.log('📋 Applied migrations:', appliedMigrations.length > 0 ? appliedMigrations.join(', ') : 'none');
// Get all migration files
const migrationFiles = fs.readdirSync(migrationsDir)
.filter(f => f.endsWith('.sql'))
.sort();
console.log('📁 Found migration files:', migrationFiles.length, '\n');
// Run pending migrations
for (const file of migrationFiles) {
if (appliedMigrations.includes(file)) {
console.log(`⏭️ Skipping ${file} (already applied)`);
continue;
}
console.log(`🔧 Applying ${file}...`);
const migrationPath = path.join(migrationsDir, file);
const sql = fs.readFileSync(migrationPath, 'utf8');
try {
// Execute migration in a transaction
await runQuery(db, 'BEGIN TRANSACTION');
// Split by semicolon and execute each statement
const statements = sql
.split(';')
.map(s => s.trim())
.filter(s => s.length > 0 && !s.startsWith('--'));
for (const statement of statements) {
await runQuery(db, statement);
}
// Record migration
await runQuery(db,
'INSERT INTO schema_migrations (migration_name) VALUES (?)',
[file]
);
await runQuery(db, 'COMMIT');
console.log(`✅ Successfully applied ${file}\n`);
} catch (error) {
await runQuery(db, 'ROLLBACK');
console.error(`❌ Error applying ${file}:`, error.message);
throw error;
}
}
console.log('\n✨ All migrations completed successfully!');
} catch (error) {
console.error('\n💥 Migration failed:', error);
process.exit(1);
} finally {
db.close();
}
}
// Run if executed directly
if (require.main === module) {
runMigrations().catch(error => {
console.error('Fatal error:', error);
process.exit(1);
});
}
module.exports = { runMigrations };
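Since the runner is exported, it can also be invoked programmatically from a maintenance script — a sketch (the relative require path depends on where the caller lives):

```js
// run-migrations.js - assumed to sit next to the backend's src/ directory
const { runMigrations } = require('./src/database/migrate');

runMigrations()
  .then(() => console.log('Done'))
  .catch((err) => {
    console.error('Migration run failed:', err);
    process.exit(1);
  });
```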
@ -25,7 +25,6 @@ CREATE TABLE IF NOT EXISTS images (
file_size INTEGER,
mime_type TEXT,
preview_path TEXT, -- Path to preview/thumbnail image (added in migration 003)
image_description TEXT, -- Optional description for each image (added in migration 004)
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE CASCADE
);
@ -48,25 +47,3 @@ FOR EACH ROW
BEGIN
UPDATE groups SET updated_at = CURRENT_TIMESTAMP WHERE id = NEW.id;
END;
-- Admin users table for managing backend admins
CREATE TABLE IF NOT EXISTS admin_users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username TEXT UNIQUE NOT NULL,
password_hash TEXT NOT NULL,
role TEXT NOT NULL DEFAULT 'admin',
is_active BOOLEAN NOT NULL DEFAULT 1,
requires_password_change BOOLEAN NOT NULL DEFAULT 0,
last_login_at DATETIME,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
CREATE UNIQUE INDEX IF NOT EXISTS idx_admin_users_username ON admin_users(username);
CREATE TRIGGER IF NOT EXISTS update_admin_users_timestamp
AFTER UPDATE ON admin_users
FOR EACH ROW
BEGIN
UPDATE admin_users SET updated_at = CURRENT_TIMESTAMP WHERE id = NEW.id;
END;
@ -1,96 +0,0 @@
const swaggerAutogen = require('swagger-autogen')();
const path = require('path');
const fs = require('fs');
const outputFile = path.join(__dirname, '..', 'docs', 'openapi.json');
// Import route mappings (Single Source of Truth - keine Router-Imports!)
const routeMappings = require('./routes/routeMappings');
// Use mappings directly (already has file + prefix)
const routerMappings = routeMappings;
const routesDir = path.join(__dirname, 'routes');
const endpointsFiles = routerMappings.map(r => path.join(routesDir, r.file));
const doc = {
info: {
title: 'Project Image Uploader API',
version: '2.0.1',
description: 'Auto-generated OpenAPI spec with correct mount prefixes'
},
host: 'localhost:5001',
schemes: ['http'],
// Add base path hints per router (swagger-autogen doesn't natively support per-file prefixes,
// so we'll post-process or use @swagger annotations in route files)
};
console.log('Generating OpenAPI spec...');
// Generate specs for each router separately with correct basePath
async function generateWithPrefixes() {
const allPaths = {};
const allTags = new Set();
for (const mapping of routerMappings) {
console.log(`Processing ${mapping.file} with prefix: "${mapping.prefix || '/'}"...`);
const uniqueName = mapping.name || mapping.file.replace('.js', '');
const tempOutput = path.join(__dirname, '..', 'docs', `.temp-${uniqueName}.json`);
const routeFile = path.join(routesDir, mapping.file);
const tempDoc = {
...doc,
basePath: mapping.prefix || '/'
};
await swaggerAutogen(tempOutput, [routeFile], tempDoc);
// Read the generated spec
const tempSpec = JSON.parse(fs.readFileSync(tempOutput, 'utf8'));
// Merge paths - prepend prefix to each path
for (const [routePath, pathObj] of Object.entries(tempSpec.paths || {})) {
const fullPath = mapping.prefix + routePath;
allPaths[fullPath] = pathObj;
// Collect tags
Object.values(pathObj).forEach(methodObj => {
if (methodObj.tags) {
methodObj.tags.forEach(tag => allTags.add(tag));
}
});
}
// Clean up temp file
fs.unlinkSync(tempOutput);
}
// Write final merged spec
const finalSpec = {
openapi: '3.0.0',
info: doc.info,
servers: [
{ url: 'http://localhost:5001', description: 'Development server (dev compose backend)' }
],
tags: Array.from(allTags).map(name => ({ name })),
paths: allPaths
};
fs.writeFileSync(outputFile, JSON.stringify(finalSpec, null, 2));
console.log('\n✅ OpenAPI spec generated successfully!');
console.log(`📊 Total paths: ${Object.keys(allPaths).length}`);
console.log(`📋 Tags: ${Array.from(allTags).join(', ')}`);
}
// Export for programmatic usage (e.g., from server.js)
module.exports = generateWithPrefixes;
// Run directly when called from CLI
if (require.main === module) {
generateWithPrefixes().catch(err => {
console.error('❌ Failed to generate OpenAPI spec:', err);
process.exit(1);
});
}
@ -1,51 +0,0 @@
/**
 * Audit log middleware for management routes
 * Logs all management portal actions for security & compliance
 */
const auditLogRepository = require('../repositories/ManagementAuditLogRepository');
/**
 * Middleware for logging management actions
 * Adds a res.auditLog() function
 */
const auditLogMiddleware = (req, res, next) => {
// Extract client information
const ipAddress = req.ip || req.connection.remoteAddress || 'unknown';
const userAgent = req.get('user-agent') || 'unknown';
const managementToken = req.params.token || null;
const sourceHost = req.get('x-forwarded-host') || req.get('host') || 'unknown';
const sourceType = req.requestSource || 'unknown';
/**
 * Logging function for controllers
 * @param {string} action - Action (e.g. 'validate_token', 'revoke_consent')
 * @param {boolean} success - Success flag
 * @param {string} groupId - Group ID (optional)
 * @param {string} errorMessage - Error message (optional)
 * @param {Object} requestData - Request data (optional)
 */
res.auditLog = async (action, success, groupId = null, errorMessage = null, requestData = null) => {
try {
await auditLogRepository.logAction({
groupId,
managementToken,
action,
success,
errorMessage,
ipAddress,
userAgent,
requestData,
sourceHost,
sourceType
});
} catch (error) {
console.error('Failed to write audit log:', error);
// Audit log failures must not block the main operation
}
};
next();
};
module.exports = auditLogMiddleware;
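A sketch of how a management controller would use the injected logger (the route shape, middleware path, and token lookup are assumptions):

```js
const express = require('express');
const auditLogMiddleware = require('./middleware/auditLog'); // assumed path

const router = express.Router();
router.use(auditLogMiddleware);

router.delete('/manage/:token/images/:imageId', async (req, res) => {
  let groupId = null;
  try {
    // ... resolve groupId via the token and delete the image here ...
    await res.auditLog('delete_image', true, groupId, null, {
      imageId: req.params.imageId
    });
    res.json({ success: true });
  } catch (error) {
    await res.auditLog('delete_image', false, groupId, error.message);
    res.status(500).json({ success: false });
  }
});
```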
@ -1,20 +0,0 @@
/**
* Admin Authentication Middleware
* Validates server-side session for admin users
*/
const requireAdminAuth = (req, res, next) => {
const sessionUser = req.session && req.session.user;
if (!sessionUser || sessionUser.role !== 'admin') {
return res.status(403).json({
error: 'Zugriff verweigert',
reason: 'SESSION_REQUIRED'
});
}
res.locals.adminUser = sessionUser;
next();
};
module.exports = { requireAdminAuth };
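Mounting is then a one-liner per protected router — a sketch (actual mount points and paths are assumptions):

```js
const express = require('express');
const { requireAdminAuth } = require('./middleware/requireAdminAuth'); // assumed path

const app = express();
const adminRouter = express.Router();
adminRouter.use(requireAdminAuth); // every /api/admin/* route now requires an admin session
app.use('/api/admin', adminRouter);
```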
@ -1,40 +0,0 @@
const SAFE_METHODS = new Set(['GET', 'HEAD', 'OPTIONS']);
const requireCsrf = (req, res, next) => {
if (SAFE_METHODS.has(req.method.toUpperCase())) {
return next();
}
if (!req.session || !req.session.user) {
return res.status(403).json({
error: 'Zugriff verweigert',
reason: 'SESSION_REQUIRED'
});
}
if (!req.session.csrfToken) {
return res.status(403).json({
error: 'CSRF erforderlich',
reason: 'CSRF_SESSION_MISSING'
});
}
const headerToken = req.headers['x-csrf-token'];
if (!headerToken) {
return res.status(403).json({
error: 'CSRF erforderlich',
reason: 'CSRF_HEADER_MISSING'
});
}
if (headerToken !== req.session.csrfToken) {
return res.status(403).json({
error: 'CSRF ungültig',
reason: 'CSRF_TOKEN_INVALID'
});
}
next();
};
module.exports = { requireCsrf };
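Taken together with the auth middleware above, a protected mutating route would be wired roughly like this (paths are assumptions; the handler is a placeholder):

```js
const express = require('express');
const sessionMiddleware = require('./middleware/session');          // assumed paths
const { requireAdminAuth } = require('./middleware/requireAdminAuth');
const { requireCsrf } = require('./middleware/requireCsrf');

const app = express();
app.use(sessionMiddleware); // must run first so req.session exists

// Order matters: session -> auth -> CSRF, then the handler.
app.patch('/api/admin/groups/:id/approve', requireAdminAuth, requireCsrf, (req, res) => {
  res.json({ approved: true }); // placeholder handler
});
```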
@ -1,114 +0,0 @@
/**
 * Host Gate Middleware
 * Blocks protected API routes for the public host
 * Allows only upload + management on the public host
 *
 * Detects the host via X-Forwarded-Host (nginx-proxy-manager) or the Host header
 */
const PUBLIC_HOST = process.env.PUBLIC_HOST || 'deinprojekt.hobbyhimmel.de';
const INTERNAL_HOST = process.env.INTERNAL_HOST || 'deinprojekt.lan.hobbyhimmel.de';
const ENABLE_HOST_RESTRICTION = process.env.ENABLE_HOST_RESTRICTION !== 'false';
// Debug: Log configuration on module load (development only)
if (process.env.NODE_ENV !== 'production' && process.env.NODE_ENV !== 'test') {
console.log('🔧 hostGate config:', { PUBLIC_HOST, INTERNAL_HOST, ENABLE_HOST_RESTRICTION });
}
// Routes that are allowed ONLY on the internal host
const INTERNAL_ONLY_ROUTES = [
'/api/admin',
'/api/groups',
'/api/slideshow',
'/api/migration',
'/api/moderation',
'/api/reorder',
'/api/batch-upload',
'/api/social-media',
'/api/auth/login', // Admin login internal only
'/api/auth/logout',
'/api/auth/session'
];
// Routes that are allowed on the public host
const PUBLIC_ALLOWED_ROUTES = [
'/api/upload',
'/api/manage',
'/api/previews',
'/api/consent',
'/api/social-media/platforms' // Read-only platform list (for consent badges in the UUID management UI)
];
/**
 * Middleware: host-based access control
 * @param {Object} req - Express request
 * @param {Object} res - Express response
 * @param {Function} next - Next middleware
 */
const hostGate = (req, res, next) => {
// Feature disabled only when explicitly set to false OR in test environment without explicit enable
const isTestEnv = process.env.NODE_ENV === 'test';
const explicitlyEnabled = process.env.ENABLE_HOST_RESTRICTION === 'true';
const explicitlyDisabled = process.env.ENABLE_HOST_RESTRICTION === 'false';
// Skip restriction if:
// - Explicitly disabled, OR
// - Test environment AND not explicitly enabled
if (explicitlyDisabled || (isTestEnv && !explicitlyEnabled)) {
req.isPublicHost = false;
req.isInternalHost = true;
req.requestSource = 'internal';
return next();
}
// Get host from X-Forwarded-Host (nginx-proxy-manager) or Host header
const forwardedHost = req.get('x-forwarded-host');
const hostHeader = req.get('host');
const host = forwardedHost || hostHeader || '';
const hostname = host.split(':')[0]; // Remove port if present
// Determine if request is from public or internal host
req.isPublicHost = hostname === PUBLIC_HOST;
req.isInternalHost = hostname === INTERNAL_HOST || hostname === 'localhost' || hostname === '127.0.0.1';
// Log host detection for debugging
if (process.env.NODE_ENV !== 'production') {
console.log(`🔍 Host Detection: ${hostname}${req.isPublicHost ? 'PUBLIC' : 'INTERNAL'}`);
}
// If public host, check if route is allowed
if (req.isPublicHost) {
const path = req.path;
// Check if explicitly allowed (e.g. /api/social-media/platforms)
const isExplicitlyAllowed = PUBLIC_ALLOWED_ROUTES.some(route =>
path === route || path.startsWith(route + '/')
);
if (isExplicitlyAllowed) {
// Allowed - no block
req.requestSource = 'public';
return next();
}
// Check if route is internal-only
const isInternalOnly = INTERNAL_ONLY_ROUTES.some(route =>
path.startsWith(route)
);
if (isInternalOnly) {
console.warn(`🚫 Public host blocked access to: ${path} (Host: ${hostname})`);
return res.status(403).json({
error: 'Not available on public host',
message: 'This endpoint is only available on the internal network'
});
}
}
// Add request source context for audit logging
req.requestSource = req.isPublicHost ? 'public' : 'internal';
next();
};
module.exports = hostGate;
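A quick way to exercise the gate without a proxy is to fake the forwarded host header — a sketch against the defaults above (expects the backend on port 5000):

```js
(async () => {
  const res = await fetch('http://localhost:5000/api/admin/deletion-log', {
    headers: { 'X-Forwarded-Host': 'deinprojekt.hobbyhimmel.de' } // the public host
  });
  console.log(res.status); // 403 expected: /api/admin is internal-only
})();
```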
@ -1,17 +1,12 @@
const express = require("express"); const express = require("express");
const fileUpload = require("express-fileupload"); const fileUpload = require("express-fileupload");
const cors = require("./cors"); const cors = require("./cors");
const session = require("./session");
const hostGate = require("./hostGate");
const applyMiddlewares = (app) => { const applyMiddlewares = (app) => {
app.use(fileUpload()); app.use(fileUpload());
app.use(cors); app.use(cors);
app.use(session);
// JSON Parser für PATCH/POST Requests // JSON Parser für PATCH/POST Requests
app.use(express.json()); app.use(express.json());
// Host Gate: Blockiert geschützte Routen für public Host
app.use(hostGate);
}; };
module.exports = { applyMiddlewares }; module.exports = { applyMiddlewares };

View File

@ -1,240 +0,0 @@
/**
 * Rate limiting middleware for the Management Portal API
 *
 * Features:
 * - IP-based rate limiting: 10 requests per hour
 * - Brute-force protection: 24h block after 20 failed token validations
 * - In-memory storage (Redis recommended for production)
 */
// In-memory storage for rate limiting
const requestCounts = new Map(); // IP -> { count, resetTime }
const blockedIPs = new Map(); // IP -> { reason, blockedUntil, failedAttempts }
// Configuration
const RATE_LIMIT = {
MAX_REQUESTS_PER_HOUR: process.env.NODE_ENV === 'production' ? 10 : 100, // 100 for dev, 10 for production
WINDOW_MS: 60 * 60 * 1000, // 1 hour
BRUTE_FORCE_THRESHOLD: 20,
BLOCK_DURATION_MS: 24 * 60 * 60 * 1000 // 24 hours
};
// Public upload rate limiting (stricter limits for public uploads)
const PUBLIC_UPLOAD_LIMIT = {
MAX_UPLOADS_PER_HOUR: parseInt(process.env.PUBLIC_UPLOAD_RATE_LIMIT || '20', 10),
WINDOW_MS: parseInt(process.env.PUBLIC_UPLOAD_RATE_WINDOW || '3600000', 10) // 1 hour
};
// In-memory storage for public upload rate limiting
const publicUploadCounts = new Map(); // IP -> { count, resetTime }
/**
 * Extract the client IP from the request
 */
function getClientIP(req) {
return req.headers['x-forwarded-for']?.split(',')[0].trim() ||
req.headers['x-real-ip'] ||
req.connection.remoteAddress ||
req.socket.remoteAddress ||
'unknown';
}
/**
 * Rate limiting middleware
 * Limits requests per IP to 10 per hour
 */
function rateLimitMiddleware(req, res, next) {
const clientIP = getClientIP(req);
const now = Date.now();
// Check whether the IP is blocked
if (blockedIPs.has(clientIP)) {
const blockInfo = blockedIPs.get(clientIP);
if (now < blockInfo.blockedUntil) {
const remainingTime = Math.ceil((blockInfo.blockedUntil - now) / 1000 / 60 / 60);
return res.status(429).json({
success: false,
error: 'IP temporarily blocked',
message: `Your IP has been blocked due to ${blockInfo.reason}. Try again in ${remainingTime} hours.`,
blockedUntil: new Date(blockInfo.blockedUntil).toISOString()
});
} else {
// Block expired - remove it
blockedIPs.delete(clientIP);
}
}
// Get or create the request counter for this IP
let requestInfo = requestCounts.get(clientIP);
if (!requestInfo || now > requestInfo.resetTime) {
// New time window
requestInfo = {
count: 0,
resetTime: now + RATE_LIMIT.WINDOW_MS,
failedAttempts: requestInfo?.failedAttempts || 0
};
requestCounts.set(clientIP, requestInfo);
}
// Check the rate limit
if (requestInfo.count >= RATE_LIMIT.MAX_REQUESTS_PER_HOUR) {
const resetIn = Math.ceil((requestInfo.resetTime - now) / 1000 / 60);
return res.status(429).json({
success: false,
error: 'Rate limit exceeded',
message: `Too many requests. You can make ${RATE_LIMIT.MAX_REQUESTS_PER_HOUR} requests per hour. Try again in ${resetIn} minutes.`,
limit: RATE_LIMIT.MAX_REQUESTS_PER_HOUR,
resetIn: resetIn
});
}
// Increment the counter
requestInfo.count++;
requestCounts.set(clientIP, requestInfo);
// Let the request through
next();
}
/**
 * Record a failed token validation
 * Called by the management routes on 404 token errors
 */
function recordFailedTokenValidation(req) {
const clientIP = getClientIP(req);
const now = Date.now();
let requestInfo = requestCounts.get(clientIP);
if (!requestInfo) {
requestInfo = {
count: 0,
resetTime: now + RATE_LIMIT.WINDOW_MS,
failedAttempts: 0
};
}
requestInfo.failedAttempts++;
requestCounts.set(clientIP, requestInfo);
// Check the brute-force threshold
if (requestInfo.failedAttempts >= RATE_LIMIT.BRUTE_FORCE_THRESHOLD) {
blockedIPs.set(clientIP, {
reason: 'brute force attack (multiple failed token validations)',
blockedUntil: now + RATE_LIMIT.BLOCK_DURATION_MS,
failedAttempts: requestInfo.failedAttempts
});
console.warn(`⚠️ IP ${clientIP} blocked for 24h due to ${requestInfo.failedAttempts} failed token validations`);
// Reset failed attempts
requestInfo.failedAttempts = 0;
requestCounts.set(clientIP, requestInfo);
}
}
/**
 * Cleanup function: removes expired entries
 * Should be called periodically (e.g. every hour)
 */
function cleanupExpiredEntries() {
const now = Date.now();
let cleaned = 0;
// Cleanup requestCounts
for (const [ip, info] of requestCounts.entries()) {
if (now > info.resetTime && info.failedAttempts === 0) {
requestCounts.delete(ip);
cleaned++;
}
}
// Cleanup blockedIPs
for (const [ip, blockInfo] of blockedIPs.entries()) {
if (now > blockInfo.blockedUntil) {
blockedIPs.delete(ip);
cleaned++;
}
}
if (cleaned > 0) {
console.log(`🧹 Rate-Limiter: Cleaned up ${cleaned} expired entries`);
}
}
// Auto-cleanup every 60 minutes
setInterval(cleanupExpiredEntries, 60 * 60 * 1000);
/**
 * Statistics for monitoring
 */
function getStatistics() {
return {
activeIPs: requestCounts.size,
blockedIPs: blockedIPs.size,
blockedIPsList: Array.from(blockedIPs.entries()).map(([ip, info]) => ({
ip,
reason: info.reason,
blockedUntil: new Date(info.blockedUntil).toISOString(),
failedAttempts: info.failedAttempts
})),
publicUploadActiveIPs: publicUploadCounts.size
};
}
/**
 * Public upload rate limiter middleware
 * Stricter limits for public uploads (20 per hour per IP)
 * Applied only on the public host
 */
function publicUploadLimiter(req, res, next) {
// Skip if not the public host, or if the feature is disabled
if (!req.isPublicHost || process.env.NODE_ENV === 'test') {
return next();
}
const clientIP = getClientIP(req);
const now = Date.now();
// Get or create the upload counter for this IP
let uploadInfo = publicUploadCounts.get(clientIP);
if (!uploadInfo || now > uploadInfo.resetTime) {
// New time window
uploadInfo = {
count: 0,
resetTime: now + PUBLIC_UPLOAD_LIMIT.WINDOW_MS
};
publicUploadCounts.set(clientIP, uploadInfo);
}
// Check the upload limit
if (uploadInfo.count >= PUBLIC_UPLOAD_LIMIT.MAX_UPLOADS_PER_HOUR) {
const resetIn = Math.ceil((uploadInfo.resetTime - now) / 1000 / 60);
console.warn(`🚫 Public upload limit exceeded for IP ${clientIP} (${uploadInfo.count}/${PUBLIC_UPLOAD_LIMIT.MAX_UPLOADS_PER_HOUR})`);
return res.status(429).json({
success: false,
error: 'Upload limit exceeded',
message: `You have reached the maximum of ${PUBLIC_UPLOAD_LIMIT.MAX_UPLOADS_PER_HOUR} uploads per hour. Please try again in ${resetIn} minutes.`,
limit: PUBLIC_UPLOAD_LIMIT.MAX_UPLOADS_PER_HOUR,
resetIn: resetIn
});
}
// Increment the upload counter
uploadInfo.count++;
publicUploadCounts.set(clientIP, uploadInfo);
// Let the request through
next();
}
module.exports = {
rateLimitMiddleware,
recordFailedTokenValidation,
cleanupExpiredEntries,
getStatistics,
publicUploadLimiter
};
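The intended wiring, sketched below (route paths, the middleware path, and the 404 branch are assumptions based on the comments above; the token lookup is stubbed to keep the sketch self-contained):

```js
const express = require('express');
const {
  rateLimitMiddleware,
  recordFailedTokenValidation
} = require('./middleware/rateLimiter'); // assumed path

const manage = express.Router();
manage.use(rateLimitMiddleware); // 10 requests/hour/IP in production

// Hypothetical token lookup, stubbed for this sketch:
const findGroupByToken = async () => null;

manage.get('/:token', async (req, res) => {
  const group = await findGroupByToken(req.params.token);
  if (!group) {
    recordFailedTokenValidation(req); // feeds the 20-failures/24h IP ban
    return res.status(404).json({ success: false, error: 'Invalid token' });
  }
  res.json({ success: true, group });
});
```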
@ -1,71 +0,0 @@
const fs = require('fs');
const path = require('path');
const session = require('express-session');
const SQLiteStore = require('connect-sqlite3')(session);
const SESSION_FILENAME = process.env.ADMIN_SESSION_DB || 'sessions.sqlite';
const SESSION_DIR = process.env.ADMIN_SESSION_DIR
? path.resolve(process.env.ADMIN_SESSION_DIR)
: path.join(__dirname, '..', 'data');
const SESSION_SECRET = process.env.ADMIN_SESSION_SECRET;
const IS_PRODUCTION = process.env.NODE_ENV === 'production';
const ADMIN_SESSION_COOKIE_SECURE = process.env.ADMIN_SESSION_COOKIE_SECURE;
const parseBooleanEnv = (value) => {
if (typeof value !== 'string') {
return undefined;
}
switch (value.toLowerCase().trim()) {
case 'true':
case '1':
case 'yes':
case 'on':
return true;
case 'false':
case '0':
case 'no':
case 'off':
return false;
default:
return undefined;
}
};
const secureOverride = parseBooleanEnv(ADMIN_SESSION_COOKIE_SECURE);
const cookieSecure = secureOverride ?? IS_PRODUCTION;
if (IS_PRODUCTION && secureOverride === false) {
console.warn('[Session] ADMIN_SESSION_COOKIE_SECURE=false detected: secure cookies are disabled in production. Only do this on trusted HTTP deployments.');
}
if (!SESSION_SECRET) {
throw new Error('ADMIN_SESSION_SECRET is required for session management');
}
// Ensure session directory exists so SQLite can create the DB file
if (!fs.existsSync(SESSION_DIR)) {
fs.mkdirSync(SESSION_DIR, { recursive: true });
}
const store = new SQLiteStore({
db: SESSION_FILENAME,
dir: SESSION_DIR,
ttl: 8 * 60 * 60 // seconds
});
const sessionMiddleware = session({
name: 'sid',
store,
secret: SESSION_SECRET,
resave: false,
saveUninitialized: false,
cookie: {
httpOnly: true,
secure: cookieSecure,
sameSite: 'strict',
maxAge: 8 * 60 * 60 * 1000 // 8 hours
}
});
module.exports = sessionMiddleware;
@ -1,67 +0,0 @@
const dbManager = require('../database/DatabaseManager');
class AdminUserRepository {
async countActiveAdmins() {
const row = await dbManager.get(
'SELECT COUNT(*) as count FROM admin_users WHERE is_active = 1'
);
return row ? row.count : 0;
}
async getByUsername(username) {
return dbManager.get(
'SELECT * FROM admin_users WHERE username = ?',
[username]
);
}
async getById(id) {
return dbManager.get(
'SELECT * FROM admin_users WHERE id = ?',
[id]
);
}
async listActiveAdmins() {
return dbManager.all(
`SELECT id, username, role, is_active, requires_password_change, last_login_at, created_at, updated_at
FROM admin_users
WHERE is_active = 1
ORDER BY username ASC`
);
}
async createAdminUser({ username, passwordHash, role = 'admin', requiresPasswordChange = false }) {
const result = await dbManager.run(
`INSERT INTO admin_users (username, password_hash, role, requires_password_change)
VALUES (?, ?, ?, ?)`,
[username, passwordHash, role, requiresPasswordChange ? 1 : 0]
);
return result.id;
}
async updatePassword(id, newPasswordHash, requiresPasswordChange = false) {
await dbManager.run(
`UPDATE admin_users
SET password_hash = ?, requires_password_change = ?
WHERE id = ?`,
[newPasswordHash, requiresPasswordChange ? 1 : 0, id]
);
}
async markInactive(id) {
await dbManager.run(
'UPDATE admin_users SET is_active = 0 WHERE id = ?',
[id]
);
}
async recordSuccessfulLogin(id) {
await dbManager.run(
'UPDATE admin_users SET last_login_at = CURRENT_TIMESTAMP WHERE id = ?',
[id]
);
}
}
module.exports = new AdminUserRepository();
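Together with bcryptjs from package.json, bootstrapping an admin looks roughly like this — a sketch mirroring what `npm run create-admin` presumably does (require path and cost factor are assumptions):

```js
const bcrypt = require('bcryptjs');
const adminUserRepository = require('./repositories/AdminUserRepository'); // assumed path

async function createAdmin(username, plainPassword) {
  const passwordHash = await bcrypt.hash(plainPassword, 12); // bcrypt cost factor 12
  const id = await adminUserRepository.createAdminUser({
    username,
    passwordHash,
    requiresPasswordChange: true // force a password change on first login
  });
  console.log(`Created admin #${id}: ${username}`);
}
```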
@ -1,63 +0,0 @@
const dbManager = require('../database/DatabaseManager');
class DeletionLogRepository {
// Create a deletion log entry
async createDeletionEntry(logData) {
const result = await dbManager.run(`
INSERT INTO deletion_log (group_id, year, image_count, upload_date, deletion_reason, total_file_size)
VALUES (?, ?, ?, ?, ?, ?)
`, [
logData.groupId,
logData.year,
logData.imageCount,
logData.uploadDate,
logData.deletionReason || 'auto_cleanup_7days',
logData.totalFileSize || null
]);
return result.id;
}
// Fetch the most recent N entries
async getRecentDeletions(limit = 10) {
const deletions = await dbManager.all(`
SELECT * FROM deletion_log
ORDER BY deleted_at DESC
LIMIT ?
`, [limit]);
return deletions;
}
// Fetch all entries (for the admin overview)
async getAllDeletions() {
const deletions = await dbManager.all(`
SELECT * FROM deletion_log
ORDER BY deleted_at DESC
`);
return deletions;
}
// Statistics (number of deleted groups, images, storage freed)
async getDeletionStatistics() {
const stats = await dbManager.get(`
SELECT
COUNT(*) as totalDeleted,
SUM(image_count) as totalImages,
SUM(total_file_size) as totalSize,
MAX(deleted_at) as lastCleanup
FROM deletion_log
`);
return {
totalDeleted: stats.totalDeleted || 0,
totalImages: stats.totalImages || 0,
totalSize: stats.totalSize || 0,
lastCleanup: stats.lastCleanup || null
};
}
}
module.exports = new DeletionLogRepository();
@ -7,24 +7,23 @@ class GroupRepository {
return await dbManager.transaction(async (db) => {
// Add the group
const groupResult = await db.run(`
INSERT INTO groups (group_id, year, title, description, name, upload_date, approved)
VALUES (?, ?, ?, ?, ?, ?, ?)
`, [
groupData.groupId,
groupData.year,
groupData.title,
groupData.description || null,
groupData.name || null,
groupData.uploadDate,
groupData.approved || false
]);
// Add the images
if (groupData.images && groupData.images.length > 0) {
for (const image of groupData.images) {
await db.run(`
INSERT INTO images (group_id, file_name, original_name, file_path, upload_order, file_size, mime_type, preview_path, image_description)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
`, [
groupData.groupId,
image.fileName,
@ -33,8 +32,7 @@ class GroupRepository {
image.uploadOrder,
image.fileSize || null,
image.mimeType || null,
image.previewPath || null,
image.imageDescription || null
]);
}
}
@ -67,15 +65,13 @@ class GroupRepository {
name: group.name,
uploadDate: group.upload_date,
images: images.map(img => ({
id: img.id,
fileName: img.file_name,
originalName: img.original_name,
filePath: img.file_path,
previewPath: img.preview_path,
uploadOrder: img.upload_order,
fileSize: img.file_size,
mimeType: img.mime_type,
imageDescription: img.image_description
})),
imageCount: images.length
};
@ -378,478 +374,6 @@ class GroupRepository {
};
});
}
// Update the description of a single image
async updateImageDescription(imageId, groupId, description) {
// Validation: max 200 characters
if (description && description.length > 200) {
throw new Error('Image description must not exceed 200 characters');
}
const result = await dbManager.run(`
UPDATE images
SET image_description = ?
WHERE id = ? AND group_id = ?
`, [description || null, imageId, groupId]);
return result.changes > 0;
}
// Batch update for multiple image descriptions
async updateBatchImageDescriptions(groupId, descriptions) {
if (!Array.isArray(descriptions) || descriptions.length === 0) {
throw new Error('Descriptions array is required and cannot be empty');
}
return await dbManager.transaction(async (db) => {
let updateCount = 0;
for (const desc of descriptions) {
const { imageId, description } = desc;
// Validation: max 200 characters
if (description && description.length > 200) {
throw new Error(`Image description for image ${imageId} must not exceed 200 characters`);
}
// Check that the image belongs to the group
const image = await db.get(`
SELECT id FROM images WHERE id = ? AND group_id = ?
`, [imageId, groupId]);
if (!image) {
throw new Error(`Image with ID ${imageId} not found in group ${groupId}`);
}
// Update the description
const result = await db.run(`
UPDATE images
SET image_description = ?
WHERE id = ? AND group_id = ?
`, [description || null, imageId, groupId]);
updateCount += result.changes;
}
return {
groupId: groupId,
updatedImages: updateCount
};
});
}
// Find groups pending deletion (approved=false & older than N days)
async findUnapprovedGroupsOlderThan(days) {
const cutoffDate = new Date();
cutoffDate.setDate(cutoffDate.getDate() - days);
const cutoffDateStr = cutoffDate.toISOString();
const groups = await dbManager.all(`
SELECT * FROM groups
WHERE approved = FALSE
AND upload_date < ?
ORDER BY upload_date ASC
`, [cutoffDateStr]);
return groups;
}
// Fetch statistics for a group (for the deletion log)
async getGroupStatistics(groupId) {
const group = await dbManager.get(`
SELECT * FROM groups WHERE group_id = ?
`, [groupId]);
if (!group) {
return null;
}
const images = await dbManager.all(`
SELECT file_size, file_path, preview_path FROM images
WHERE group_id = ?
`, [groupId]);
const totalFileSize = images.reduce((sum, img) => sum + (img.file_size || 0), 0);
return {
groupId: group.group_id,
year: group.year,
imageCount: images.length,
uploadDate: group.upload_date,
totalFileSize: totalFileSize,
images: images
};
}
// Delete a group completely (incl. DB entries and files)
async deleteGroupCompletely(groupId) {
return await dbManager.transaction(async (db) => {
// Fetch all images of the group (for file deletion)
const images = await db.all(`
SELECT file_path, preview_path FROM images
WHERE group_id = ?
`, [groupId]);
// Delete group (CASCADE automatically removes its images from the DB)
const result = await db.run(`
DELETE FROM groups WHERE group_id = ?
`, [groupId]);
if (result.changes === 0) {
throw new Error(`Group with ID ${groupId} not found`);
}
return {
deletedImages: images.length,
imagePaths: images
};
});
}
// ============================================================================
// Consent Management Methods
// ============================================================================
/**
 * Create a new group with consent data
 * @param {Object} groupData - Standard group data
 * @param {boolean} workshopConsent - Consent to display in the workshop
 * @param {Array} socialMediaConsents - Array of {platformId, consented}
 * @returns {Promise<Object>} { groupId, managementToken } of the created group
 */
async createGroupWithConsent(groupData, workshopConsent, socialMediaConsents = []) {
const SocialMediaRepository = require('./SocialMediaRepository');
const socialMediaRepo = new SocialMediaRepository(dbManager);
const { v4: uuidv4 } = require('uuid');
return await dbManager.transaction(async (db) => {
const consentTimestamp = new Date().toISOString();
const managementToken = uuidv4(); // Generate UUID v4 token
// Add group with consent fields and management token
await db.run(`
INSERT INTO groups (
group_id, year, title, description, name, upload_date, approved,
display_in_workshop, consent_timestamp, management_token
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`, [
groupData.groupId,
groupData.year,
groupData.title,
groupData.description || null,
groupData.name || null,
groupData.uploadDate,
groupData.approved || false,
workshopConsent ? 1 : 0,
consentTimestamp,
managementToken
]);
// Add images
if (groupData.images && groupData.images.length > 0) {
for (const image of groupData.images) {
await db.run(`
INSERT INTO images (
group_id, file_name, original_name, file_path, upload_order,
file_size, mime_type, preview_path, image_description
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
`, [
groupData.groupId,
image.fileName,
image.originalName,
image.filePath,
image.uploadOrder,
image.fileSize || null,
image.mimeType || null,
image.previewPath || null,
image.imageDescription || null
]);
}
}
// Save social media consents
if (socialMediaConsents && socialMediaConsents.length > 0) {
await socialMediaRepo.saveConsents(
groupData.groupId,
socialMediaConsents,
consentTimestamp
);
}
return {
groupId: groupData.groupId,
managementToken: managementToken
};
});
}
/**
 * Fetch a group with all consent information
 * @param {string} groupId - ID of the group
 * @returns {Promise<Object>} Group with images and consents
 */
async getGroupWithConsents(groupId) {
const SocialMediaRepository = require('./SocialMediaRepository');
const socialMediaRepo = new SocialMediaRepository(dbManager);
// Fetch standard group data
const group = await this.getGroupById(groupId);
if (!group) {
return null;
}
// Attach consent data
group.consents = await socialMediaRepo.getConsentsForGroup(groupId);
return group;
}
/**
 * Update consents for an existing group
 * @param {string} groupId - ID of the group
 * @param {boolean} workshopConsent - New workshop consent
 * @param {Array} socialMediaConsents - New social media consents
 * @returns {Promise<void>}
 */
async updateConsents(groupId, workshopConsent, socialMediaConsents = []) {
const SocialMediaRepository = require('./SocialMediaRepository');
const socialMediaRepo = new SocialMediaRepository(dbManager);
return await dbManager.transaction(async (db) => {
const consentTimestamp = new Date().toISOString();
// Update workshop consent
await db.run(`
UPDATE groups
SET display_in_workshop = ?,
consent_timestamp = ?
WHERE group_id = ?
`, [workshopConsent ? 1 : 0, consentTimestamp, groupId]);
// Delete old social media consents
await socialMediaRepo.deleteConsentsForGroup(groupId);
// Save new consents
if (socialMediaConsents && socialMediaConsents.length > 0) {
await socialMediaRepo.saveConsents(
groupId,
socialMediaConsents,
consentTimestamp
);
}
});
}
/**
 * Filter groups by consent status
 * @param {Object} filters - Filter options
 * @param {boolean} filters.displayInWorkshop - Filter by workshop consent
 * @param {number} filters.platformId - Filter by platform ID
 * @param {boolean} filters.platformConsent - Filter by platform consent status
 * @returns {Promise<Array>} Filtered groups
 */
async getGroupsByConsentStatus(filters = {}) {
let query = `
SELECT DISTINCT g.*
FROM groups g
`;
const params = [];
const conditions = [];
// Filter by workshop consent
if (filters.displayInWorkshop !== undefined) {
conditions.push('g.display_in_workshop = ?');
params.push(filters.displayInWorkshop ? 1 : 0);
}
// Filter by social media platform
if (filters.platformId !== undefined) {
query += `
LEFT JOIN group_social_media_consents c
ON g.group_id = c.group_id AND c.platform_id = ?
`;
params.push(filters.platformId);
if (filters.platformConsent !== undefined) {
conditions.push('c.consented = ?');
params.push(filters.platformConsent ? 1 : 0);
conditions.push('(c.revoked IS NULL OR c.revoked = 0)');
}
}
if (conditions.length > 0) {
query += ' WHERE ' + conditions.join(' AND ');
}
query += ' ORDER BY g.upload_date DESC';
return await dbManager.all(query, params);
}
/**
 * Export consent data for legal documentation
 * @param {Object} filters - Optional: filter criteria
 * @returns {Promise<Array>} Export data with all consent information
 */
async exportConsentData(filters = {}) {
let query = `
SELECT
g.group_id,
g.year,
g.title,
g.name,
g.upload_date,
g.display_in_workshop,
g.consent_timestamp,
g.approved
FROM groups g
WHERE 1=1
`;
const params = [];
if (filters.year) {
query += ' AND g.year = ?';
params.push(filters.year);
}
if (filters.approved !== undefined) {
query += ' AND g.approved = ?';
params.push(filters.approved ? 1 : 0);
}
query += ' ORDER BY g.upload_date DESC';
const groups = await dbManager.all(query, params);
// Load social media consents for each group
const SocialMediaRepository = require('./SocialMediaRepository');
const socialMediaRepo = new SocialMediaRepository(dbManager);
for (const group of groups) {
group.socialMediaConsents = await socialMediaRepo.getConsentsForGroup(group.group_id);
}
return groups;
}
/**
 * Generate a management token for a group (Phase 2)
 * @param {string} groupId - ID of the group
 * @returns {Promise<string>} Generated UUID token
 */
async generateManagementToken(groupId) {
const crypto = require('crypto');
const token = crypto.randomUUID();
await dbManager.run(`
UPDATE groups
SET management_token = ?
WHERE group_id = ?
`, [token, groupId]);
return token;
}
/**
 * Fetch a group via its management token (Phase 2)
 * @param {string} token - Management token
 * @returns {Promise<Object|null>} Group with all data, or null
 */
async getGroupByManagementToken(token) {
const group = await dbManager.get(`
SELECT * FROM groups WHERE management_token = ?
`, [token]);
if (!group) {
return null;
}
// Load images and consents
return await this.getGroupWithConsents(group.group_id);
}
/**
 * Fetch active social media platforms
 * Convenience method for the frontend
 * @returns {Promise<Array>} Active platforms
 */
async getActiveSocialMediaPlatforms() {
const SocialMediaRepository = require('./SocialMediaRepository');
const socialMediaRepo = new SocialMediaRepository(dbManager);
return await socialMediaRepo.getActivePlatforms();
}
/**
 * Fetch social media consents for a group
 * Convenience method
 * @param {string} groupId - ID of the group
 * @returns {Promise<Array>} Consents
 */
async getSocialMediaConsentsForGroup(groupId) {
const SocialMediaRepository = require('./SocialMediaRepository');
const socialMediaRepo = new SocialMediaRepository(dbManager);
return await socialMediaRepo.getConsentsForGroup(groupId);
}
/**
 * Fetch a group with all data (images + consents) via its management token
 * For the self-service management portal
 * (Note: this re-declares getGroupByManagementToken and overrides the shorter definition above)
 * @param {string} managementToken - UUID v4 management token
 * @returns {Promise<Object|null>} Group with images, workshop consent, and social media consents
 */
async getGroupByManagementToken(managementToken) {
// Fetch group
const group = await dbManager.get(`
SELECT * FROM groups WHERE management_token = ?
`, [managementToken]);
if (!group) {
return null;
}
// Fetch images
const images = await dbManager.all(`
SELECT * FROM images
WHERE group_id = ?
ORDER BY upload_order ASC
`, [group.group_id]);
// Fetch social media consents
const SocialMediaRepository = require('./SocialMediaRepository');
const socialMediaRepo = new SocialMediaRepository(dbManager);
const socialMediaConsents = await socialMediaRepo.getConsentsForGroup(group.group_id);
return {
groupId: group.group_id,
year: group.year,
title: group.title,
description: group.description,
name: group.name,
uploadDate: group.upload_date,
approved: group.approved,
// Workshop consent
displayInWorkshop: group.display_in_workshop,
consentTimestamp: group.consent_timestamp,
// Images
images: images.map(img => ({
id: img.id,
fileName: img.file_name,
originalName: img.original_name,
filePath: img.file_path,
previewPath: img.preview_path,
uploadOrder: img.upload_order,
fileSize: img.file_size,
mimeType: img.mime_type,
imageDescription: img.image_description
})),
imageCount: images.length,
// Social Media Consents
socialMediaConsents: socialMediaConsents || []
};
}
}
module.exports = new GroupRepository();
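A sketch of a cleanup job built on the methods above (the require path, scheduling, and on-disk file removal are assumptions, not part of this diff):

```javascript
// Hypothetical nightly cleanup using GroupRepository (illustration only)
const groupRepository = require('./repositories/GroupRepository');

async function cleanupUnapprovedGroups(days = 30) {
  const stale = await groupRepository.findUnapprovedGroupsOlderThan(days);
  for (const group of stale) {
    const stats = await groupRepository.getGroupStatistics(group.group_id);
    const { deletedImages, imagePaths } = await groupRepository.deleteGroupCompletely(group.group_id);
    // imagePaths still have to be removed from disk by the caller
    console.log(`Removed ${group.group_id}: ${deletedImages} images, ${stats.totalFileSize} bytes`);
  }
}
```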

View File

@@ -1,212 +0,0 @@
/**
* ManagementAuditLogRepository
*
 * Repository for management audit logging
 * Manages the management_audit_log table
*/
const dbManager = require('../database/DatabaseManager');
class ManagementAuditLogRepository {
/**
 * Log a management action
 * @param {Object} logData - Audit log data
 * @param {string} logData.groupId - Group ID (optional)
 * @param {string} logData.managementToken - Management token (will be masked)
 * @param {string} logData.action - Action (validate_token, revoke_consent, etc.)
 * @param {boolean} logData.success - Success flag
 * @param {string} logData.errorMessage - Error message (optional)
 * @param {string} logData.ipAddress - IP address
 * @param {string} logData.userAgent - User agent
 * @param {Object} logData.requestData - Request data (stored as JSON)
 * @param {string} logData.sourceHost - Source host (public/internal)
 * @param {string} logData.sourceType - Source type (public/internal)
 * @returns {Promise<number>} ID of the log entry
 */
async logAction(logData) {
// Mask token (show only the first 8 characters)
const maskedToken = logData.managementToken
? logData.managementToken.substring(0, 8) + '...'
: null;
// Sanitize request data (strip sensitive fields)
const sanitizedData = logData.requestData ? {
...logData.requestData,
managementToken: undefined // never log the token
} : null;
// Check whether the source_host and source_type columns exist
const tableInfo = await dbManager.all(`PRAGMA table_info(management_audit_log)`);
const hasSourceColumns = tableInfo.some(col => col.name === 'source_host');
let query, params;
if (hasSourceColumns) {
query = `
INSERT INTO management_audit_log
(group_id, management_token, action, success, error_message, ip_address, user_agent, request_data, source_host, source_type)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`;
params = [
logData.groupId || null,
maskedToken,
logData.action,
logData.success ? 1 : 0,
logData.errorMessage || null,
logData.ipAddress || null,
logData.userAgent || null,
sanitizedData ? JSON.stringify(sanitizedData) : null,
logData.sourceHost || null,
logData.sourceType || null
];
} else {
// Fallback for old DB schemas without source_host/source_type
query = `
INSERT INTO management_audit_log
(group_id, management_token, action, success, error_message, ip_address, user_agent, request_data)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
`;
params = [
logData.groupId || null,
maskedToken,
logData.action,
logData.success ? 1 : 0,
logData.errorMessage || null,
logData.ipAddress || null,
logData.userAgent || null,
sanitizedData ? JSON.stringify(sanitizedData) : null
];
}
const result = await dbManager.run(query, params);
return result.lastID;
}
/**
 * Fetch the last N audit entries
 * @param {number} limit - Number of entries (default: 100)
 * @returns {Promise<Array>} Array of audit entries
 */
async getRecentLogs(limit = 100) {
const query = `
SELECT
id,
group_id,
management_token,
action,
success,
error_message,
ip_address,
user_agent,
request_data,
created_at
FROM management_audit_log
ORDER BY created_at DESC
LIMIT ?
`;
const logs = await dbManager.all(query, [limit]);
// Parse request_data JSON
return logs.map(log => ({
...log,
requestData: log.request_data ? JSON.parse(log.request_data) : null,
request_data: undefined
}));
}
/**
 * Fetch audit logs for a group
 * @param {string} groupId - Group ID
 * @returns {Promise<Array>} Array of audit entries
 */
async getLogsByGroupId(groupId) {
const query = `
SELECT
id,
group_id,
management_token,
action,
success,
error_message,
ip_address,
user_agent,
request_data,
created_at
FROM management_audit_log
WHERE group_id = ?
ORDER BY created_at DESC
`;
const logs = await dbManager.all(query, [groupId]);
return logs.map(log => ({
...log,
requestData: log.request_data ? JSON.parse(log.request_data) : null,
request_data: undefined
}));
}
/**
 * Fetch failed actions by IP
 * @param {string} ipAddress - IP address
 * @param {number} hours - Time window in hours (default: 24)
 * @returns {Promise<Array>} Array of failed actions
 */
async getFailedActionsByIP(ipAddress, hours = 24) {
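// NOTE: `hours` is interpolated directly into the SQL string below; it must be a trusted number, never user input.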
const query = `
SELECT
id,
group_id,
management_token,
action,
error_message,
created_at
FROM management_audit_log
WHERE ip_address = ?
AND success = 0
AND created_at >= datetime('now', '-${hours} hours')
ORDER BY created_at DESC
`;
return await dbManager.all(query, [ipAddress]);
}
/**
 * Statistics for the audit log
 * @returns {Promise<Object>} Statistics
 */
async getStatistics() {
const query = `
SELECT
COUNT(*) as totalActions,
SUM(CASE WHEN success = 1 THEN 1 ELSE 0 END) as successfulActions,
SUM(CASE WHEN success = 0 THEN 1 ELSE 0 END) as failedActions,
COUNT(DISTINCT group_id) as uniqueGroups,
COUNT(DISTINCT ip_address) as uniqueIPs,
MAX(created_at) as lastAction
FROM management_audit_log
`;
return await dbManager.get(query);
}
/**
 * Delete old audit logs (cleanup)
 * @param {number} days - Delete logs older than X days (default: 90)
 * @returns {Promise<number>} Number of deleted entries
 */
async cleanupOldLogs(days = 90) {
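// NOTE: as with `hours` above, `days` is interpolated into the SQL string and must be a trusted number.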
const query = `
DELETE FROM management_audit_log
WHERE created_at < datetime('now', '-${days} days')
`;
const result = await dbManager.run(query);
return result.changes;
}
}
module.exports = new ManagementAuditLogRepository();
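A minimal sketch of how a route handler might call this logger (the Express wiring and field sources are assumptions):

```javascript
// Hypothetical Express helper using ManagementAuditLogRepository (illustration only)
const auditLog = require('./repositories/ManagementAuditLogRepository');

async function logTokenValidation(req, success, groupId) {
  await auditLog.logAction({
    groupId,
    managementToken: req.params.token, // masked to 8 chars by logAction
    action: 'validate_token',
    success,
    ipAddress: req.ip,
    userAgent: req.get('user-agent'),
    requestData: { path: req.path }
  });
}
```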

View File

@@ -1,339 +0,0 @@
/**
* SocialMediaRepository
*
 * Repository for social media platform and consent management
 * Manages the social_media_platforms and group_social_media_consents tables
*/
class SocialMediaRepository {
constructor(dbManager) {
this.db = dbManager;
}
// ============================================================================
// Platform Management
// ============================================================================
/**
 * Load all social media platforms (active and inactive)
 * @returns {Promise<Array>} Array of platform objects
 */
async getAllPlatforms() {
const query = `
SELECT
id,
platform_name,
display_name,
icon_name,
is_active,
sort_order,
created_at
FROM social_media_platforms
ORDER BY sort_order ASC, display_name ASC
`;
return await this.db.all(query);
}
/**
 * Load only the active social media platforms
 * @returns {Promise<Array>} Array of active platform objects
 */
async getActivePlatforms() {
const query = `
SELECT
id,
platform_name,
display_name,
icon_name,
sort_order
FROM social_media_platforms
WHERE is_active = 1
ORDER BY sort_order ASC, display_name ASC
`;
return await this.db.all(query);
}
/**
 * Create a new social media platform
 * @param {Object} platformData - Platform data
 * @param {string} platformData.platform_name - Internal name (e.g. 'facebook')
 * @param {string} platformData.display_name - Display name (e.g. 'Facebook')
 * @param {string} platformData.icon_name - Material-UI icon name
 * @param {number} platformData.sort_order - Sort order
 * @returns {Promise<number>} ID of the newly created platform
 */
async createPlatform(platformData) {
const query = `
INSERT INTO social_media_platforms
(platform_name, display_name, icon_name, sort_order, is_active)
VALUES (?, ?, ?, ?, 1)
`;
const result = await this.db.run(
query,
[
platformData.platform_name,
platformData.display_name,
platformData.icon_name || null,
platformData.sort_order || 0
]
);
return result.lastID;
}
/**
 * Update an existing platform
 * @param {number} platformId - ID of the platform
 * @param {Object} platformData - Fields to update
 * @returns {Promise<void>}
 */
async updatePlatform(platformId, platformData) {
const updates = [];
const values = [];
if (platformData.display_name !== undefined) {
updates.push('display_name = ?');
values.push(platformData.display_name);
}
if (platformData.icon_name !== undefined) {
updates.push('icon_name = ?');
values.push(platformData.icon_name);
}
if (platformData.sort_order !== undefined) {
updates.push('sort_order = ?');
values.push(platformData.sort_order);
}
if (updates.length === 0) {
return; // nothing to update
}
values.push(platformId);
const query = `
UPDATE social_media_platforms
SET ${updates.join(', ')}
WHERE id = ?
`;
await this.db.run(query, values);
}
/**
 * Activate or deactivate a platform
 * @param {number} platformId - ID of the platform
 * @param {boolean} isActive - Active status
 * @returns {Promise<void>}
 */
async togglePlatformStatus(platformId, isActive) {
const query = `
UPDATE social_media_platforms
SET is_active = ?
WHERE id = ?
`;
await this.db.run(query, [isActive ? 1 : 0, platformId]);
}
// ============================================================================
// Consent Management
// ============================================================================
/**
 * Save consents for a group
 * @param {string} groupId - ID of the group
 * @param {Array} consentsArray - Array of {platformId, consented} objects
 * @param {string} consentTimestamp - ISO timestamp of the consent
 * @returns {Promise<void>}
 */
async saveConsents(groupId, consentsArray, consentTimestamp) {
if (!Array.isArray(consentsArray) || consentsArray.length === 0) {
return; // no consents to save
}
const query = `
INSERT INTO group_social_media_consents
(group_id, platform_id, consented, consent_timestamp)
VALUES (?, ?, ?, ?)
`;
// Save each consent individually
for (const consent of consentsArray) {
await this.db.run(
query,
[
groupId,
consent.platformId,
consent.consented ? 1 : 0,
consentTimestamp
]
);
}
}
/**
 * Load all consents for a group
 * @param {string} groupId - ID of the group
 * @returns {Promise<Array>} Array of consent objects with platform info
 */
async getConsentsForGroup(groupId) {
const query = `
SELECT
c.id,
c.group_id,
c.platform_id,
c.consented,
c.consent_timestamp,
c.revoked,
c.revoked_timestamp,
p.platform_name,
p.display_name,
p.icon_name
FROM group_social_media_consents c
JOIN social_media_platforms p ON c.platform_id = p.id
WHERE c.group_id = ?
ORDER BY p.sort_order ASC
`;
return await this.db.all(query, [groupId]);
}
/**
 * Load group IDs filtered by consent status
 * @param {Object} filters - Filter options
 * @param {number} filters.platformId - Optional: filter by platform ID
 * @param {boolean} filters.consented - Optional: filter by consent status
 * @returns {Promise<Array>} Array of group IDs
 */
async getGroupIdsByConsentStatus(filters = {}) {
let query = `
SELECT DISTINCT c.group_id
FROM group_social_media_consents c
WHERE 1=1
`;
const params = [];
if (filters.platformId !== undefined) {
query += ' AND c.platform_id = ?';
params.push(filters.platformId);
}
if (filters.consented !== undefined) {
query += ' AND c.consented = ?';
params.push(filters.consented ? 1 : 0);
}
if (filters.revoked !== undefined) {
query += ' AND c.revoked = ?';
params.push(filters.revoked ? 1 : 0);
}
const results = await this.db.all(query, params);
return results.map(row => row.group_id);
}
/**
 * Revoke a consent (Phase 2)
 * @param {string} groupId - ID of the group
 * @param {number} platformId - ID of the platform
 * @returns {Promise<void>}
 */
async revokeConsent(groupId, platformId) {
const query = `
UPDATE group_social_media_consents
SET
revoked = 1,
revoked_timestamp = CURRENT_TIMESTAMP
WHERE group_id = ? AND platform_id = ?
`;
await this.db.run(query, [groupId, platformId]);
}
/**
 * Restore a revoked consent (Phase 2)
 * @param {string} groupId - ID of the group
 * @param {number} platformId - ID of the platform
 * @returns {Promise<void>}
 */
async restoreConsent(groupId, platformId) {
const query = `
UPDATE group_social_media_consents
SET
revoked = 0,
revoked_timestamp = NULL
WHERE group_id = ? AND platform_id = ?
`;
await this.db.run(query, [groupId, platformId]);
}
/**
 * Load the consent history for a group (Phase 2)
 * @param {string} groupId - ID of the group
 * @returns {Promise<Array>} Array of consent changes
 */
async getConsentHistory(groupId) {
const query = `
SELECT
c.id,
c.group_id,
c.platform_id,
c.consented,
c.consent_timestamp,
c.revoked,
c.revoked_timestamp,
c.created_at,
c.updated_at,
p.platform_name,
p.display_name
FROM group_social_media_consents c
JOIN social_media_platforms p ON c.platform_id = p.id
WHERE c.group_id = ?
ORDER BY c.updated_at DESC
`;
return await this.db.all(query, [groupId]);
}
/**
 * Check whether a group has consent for a specific platform
 * @param {string} groupId - ID of the group
 * @param {number} platformId - ID of the platform
 * @returns {Promise<boolean>} true if consent was given and not revoked
 */
async hasActiveConsent(groupId, platformId) {
const query = `
SELECT consented, revoked
FROM group_social_media_consents
WHERE group_id = ? AND platform_id = ?
`;
const result = await this.db.get(query, [groupId, platformId]);
if (!result) {
return false;
}
return result.consented === 1 && result.revoked === 0;
}
/**
 * Delete all consents for a group (CASCADE via the DB)
 * @param {string} groupId - ID of the group
 * @returns {Promise<void>}
 */
async deleteConsentsForGroup(groupId) {
const query = `
DELETE FROM group_social_media_consents
WHERE group_id = ?
`;
await this.db.run(query, [groupId]);
}
}
module.exports = SocialMediaRepository;
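Unlike the singleton repositories above, this class takes the dbManager as a constructor argument; a minimal sketch (require paths assumed):

```javascript
// Hypothetical usage of SocialMediaRepository (illustration only)
const dbManager = require('./database/DatabaseManager');
const SocialMediaRepository = require('./repositories/SocialMediaRepository');

async function canPostTo(groupId, platformId) {
  const repo = new SocialMediaRepository(dbManager);
  // true only if consented=1 and not revoked
  return repo.hasActiveConsent(groupId, platformId);
}
```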

View File

@@ -1,357 +0,0 @@
# API Routes - Developer Guide
## 📁 Single Source of Truth
**`routeMappings.js`** is the central configuration file for all API routes.
```javascript
// ✅ Change HERE (single source of truth)
module.exports = [
{ router: 'upload', prefix: '/api', file: 'upload.js' },
// ...
];
```
**Used by:**
- `routes/index.js` → server routing
- `generate-openapi.js` → OpenAPI documentation
**❌ Do NOT edit `routes/index.js` or `generate-openapi.js` directly!**
---
## 🆕 Adding a new route
### 1. Create the router file
```bash
touch backend/src/routes/myNewRoute.js
```
```javascript
// backend/src/routes/myNewRoute.js
const express = require('express');
const router = express.Router();
/**
* #swagger.tags = ['My Feature']
 * #swagger.description = 'Description of the route'
*/
router.get('/my-endpoint', async (req, res) => {
res.json({ success: true });
});
module.exports = router;
```
### 2. Register it in `routeMappings.js`
```javascript
// backend/src/routes/routeMappings.js
module.exports = [
// ... existing routes
{ router: 'myNewRoute', prefix: '/api/my-feature', file: 'myNewRoute.js' }
];
```
### 3. Import it in `routes/index.js`
```javascript
// backend/src/routes/index.js
const myNewRouteRouter = require('./myNewRoute');
const routerMap = {
// ... existing routers
myNewRoute: myNewRouteRouter
};
```
### 4. Regenerate OpenAPI
OpenAPI is generated **automatically** on every server start (dev mode).
**Generate manually:**
```bash
npm run generate-openapi
```
**Test the OpenAPI paths:**
```bash
npm run test-openapi # checks all routes against localhost:5000
```
**Done!** The route is now available at `/api/my-feature/my-endpoint`.
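As a quick smoke test (port and response follow the defaults used elsewhere in this guide):

```bash
curl http://localhost:5000/api/my-feature/my-endpoint
# → {"success":true}
```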
---
## 🔄 Generating the OpenAPI documentation
### Automatically on server start (dev mode) ⭐
In development mode the OpenAPI specification is **generated automatically** when the server starts:
```bash
cd backend
npm run dev # oder npm run server
```
**Output:**
```
🔄 Generating OpenAPI specification...
✓ OpenAPI spec generated
📊 Total paths: 35
📋 Tags: Upload, Management Portal, Admin - ...
```
The file `backend/docs/openapi.json` is refreshed on every start.
### Manually (for production builds)
```bash
cd backend
npm run generate-openapi
```
**Generates:** `backend/docs/openapi.json`
**Access:** http://localhost:5001/api/docs/ (dev mode only)
### What gets generated?
- All routes from `routeMappings.js`
- Mount prefixes are applied automatically
- Swagger annotations in the route files are picked up
- **Automatically in dev mode:** on every server start (only when `NODE_ENV !== 'production'`)
- **Manually:** with `npm run generate-openapi`
### Using swagger annotations
**Important:** swagger-autogen uses `#swagger` comments (not `@swagger`)!
```javascript
router.get('/groups', async (req, res) => {
/*
#swagger.tags = ['Groups']
#swagger.summary = 'Fetch all groups'
#swagger.description = 'Returns all approved groups with images'
#swagger.responses[200] = {
description: 'Liste der Gruppen',
schema: {
groups: [{
groupId: 'cTV24Yn-a',
year: 2024,
title: 'Familie Mueller'
}],
totalCount: 73
}
}
#swagger.responses[500] = {
description: 'Server error'
}
*/
// Route implementation...
});
```
**With parameters:**
```javascript
router.get('/groups/:groupId', async (req, res) => {
/*
#swagger.tags = ['Groups']
#swagger.summary = 'Fetch a single group'
#swagger.parameters['groupId'] = {
in: 'path',
required: true,
type: 'string',
description: 'Unique group ID',
example: 'cTV24Yn-a'
}
#swagger.responses[200] = {
description: 'Group details',
schema: { groupId: 'cTV24Yn-a', title: 'Familie Mueller' }
}
#swagger.responses[404] = {
description: 'Group not found'
}
*/
// Route implementation...
});
```
**With a request body:**
```javascript
router.post('/groups', async (req, res) => {
/*
#swagger.tags = ['Groups']
#swagger.summary = 'Create a new group'
#swagger.parameters['body'] = {
in: 'body',
required: true,
schema: {
title: 'Familie Mueller',
year: 2024,
description: 'Weihnachtsfeier'
}
}
#swagger.responses[201] = {
description: 'Group created',
schema: { groupId: 'abc123', message: 'Created successfully' }
}
*/
// Route implementation...
});
```
---
## 🗂️ API structure
### Public API (`/api`)
- **Access:** Public, no authentication
- **Routes:** Upload, download, groups (read-only)
- **Files:** `upload.js`, `download.js`, `batchUpload.js`, `groups.js`
### Management API (`/api/manage`)
- **Access:** Token-based (UUID v4)
- **Routes:** Self-service management of one's own groups
- **Files:** `management.js`
- **Example:** `PUT /api/manage/:token/reorder`
### Admin API (`/api/admin`)
- **Access:** Protected (middleware required)
- **Routes:** Moderation, deletion logs, cleanup
- **Files:** `admin.js`, `consent.js`, `reorder.js`
- **Example:** `GET /api/admin/groups`, `DELETE /api/admin/groups/:id`
### System API (`/api/system`)
- **Access:** Internal (maintenance functions)
- **Routes:** Database migrations
- **Files:** `migration.js`
---
## 🔒 Multiple mounts (e.g. reorder)
Some routes are reachable in more than one place:
```javascript
// routeMappings.js
module.exports = [
// Admin access (protected)
{ router: 'reorder', prefix: '/api/admin', file: 'reorder.js' },
// Management access (integrated into management.js)
// { router: 'management', prefix: '/api/manage', file: 'management.js' }
// → contains PUT /:token/reorder
];
```
**Note:** Reorder is implemented directly in `management.js`, not as a separate mount.
---
## ⚠️ Important rules
### 1. Use relative paths in router files
```javascript
// ✅ CORRECT (no prefix)
router.get('/groups', ...)
router.get('/groups/:id', ...)
// ❌ WRONG (the prefix belongs in routeMappings.js)
router.get('/api/groups', ...)
```
### 2. Use string literals
```javascript
// ✅ CORRECT
router.get('/upload', ...)
// ❌ WRONG (swagger-autogen cannot resolve variables)
const ROUTES = { UPLOAD: '/upload' };
router.get(ROUTES.UPLOAD, ...)
```
### 3. Mount prefixes only in routeMappings.js
```javascript
// routeMappings.js
{ router: 'groups', prefix: '/api', file: 'groups.js' }
// ✅ Result: /api/groups
```
---
## 🧪 Testing
### Backend tests with curl
```bash
# Public API
curl http://localhost:5000/api/groups
# Management API (token required)
curl http://localhost:5000/api/manage/YOUR-TOKEN-HERE
# Admin API
curl http://localhost:5000/api/admin/groups
```
### Validating the OpenAPI spec
```bash
cd backend
npm run test-openapi
```
**Output:**
```
🔍 Testing 35 paths from openapi.json against http://localhost:5000
✅ GET /api/groups → 200
✅ GET /api/upload → 405 (expected, needs POST)
...
```
### Opening Swagger UI
```
http://localhost:5001/api/docs/
```
**Note:** Only available in development mode!
---
## 🐛 Troubleshooting
### OpenAPI generation hangs
**Problem:** `generate-openapi.js` loads router modules, which in turn load other modules → circular dependencies
**Solution:** `routeMappings.js` contains only configuration, no router imports
### Route missing from OpenAPI
1. Check `routeMappings.js` → is the route registered?
2. Check the router file → are string literals used?
3. Regenerate: `npm run generate-openapi` (or restart the server in dev mode)
### Route not working
1. Check `routes/index.js` → is the router listed in `routerMap`?
2. Check the console → errors on server start?
3. Test with curl → verify the exact URL
---
## 📚 Further documentation
- **Feature plan:** `docs/FEATURE_PLAN-autogen-openapi.md`
- **OpenAPI spec:** `backend/docs/openapi.json`
- **API tests:** `backend/test-openapi-paths.js`

File diff suppressed because it is too large

View File

@@ -1,195 +0,0 @@
const express = require('express');
const router = express.Router();
const AdminAuthService = require('../services/AdminAuthService');
const { requireAdminAuth } = require('../middlewares/auth');
const { requireCsrf } = require('../middlewares/csrf');
router.get('/setup/status', async (req, res) => {
/*
#swagger.tags = ['Admin Authentication']
#swagger.summary = 'Check onboarding status'
#swagger.description = 'Returns whether the initial admin setup is still pending and if a session already exists.'
*/
try {
const needsSetup = await AdminAuthService.needsInitialSetup();
const sessionUser = req.session && req.session.user
? {
id: req.session.user.id,
username: req.session.user.username,
role: req.session.user.role,
requiresPasswordChange: Boolean(req.session.user.requiresPasswordChange)
}
: null;
res.json({
needsSetup,
hasSession: Boolean(sessionUser),
user: sessionUser
});
} catch (error) {
console.error('[Auth] setup/status error:', error);
res.status(500).json({ error: 'SETUP_STATUS_FAILED' });
}
});
router.post('/setup/initial-admin', async (req, res) => {
/*
#swagger.tags = ['Admin Authentication']
#swagger.summary = 'Complete initial admin setup'
#swagger.description = 'Creates the very first admin account and immediately starts a session.'
*/
try {
const { username, password } = req.body || {};
if (!username || !password) {
return res.status(400).json({ error: 'USERNAME_AND_PASSWORD_REQUIRED' });
}
const user = await AdminAuthService.createInitialAdmin({ username, password });
const csrfToken = AdminAuthService.startSession(req, {
...user,
requiresPasswordChange: false
});
res.status(201).json({
success: true,
user: {
id: user.id,
username: user.username,
role: user.role
},
csrfToken
});
} catch (error) {
console.error('[Auth] initial setup error:', error.message);
switch (error.message) {
case 'SETUP_ALREADY_COMPLETED':
return res.status(409).json({ error: 'SETUP_ALREADY_COMPLETED' });
case 'USERNAME_REQUIRED':
return res.status(400).json({ error: 'USERNAME_REQUIRED' });
case 'PASSWORD_TOO_WEAK':
return res.status(400).json({ error: 'PASSWORD_TOO_WEAK' });
default:
if (error.message && error.message.includes('UNIQUE')) {
return res.status(409).json({ error: 'USERNAME_IN_USE' });
}
return res.status(500).json({ error: 'INITIAL_SETUP_FAILED' });
}
}
});
router.post('/login', async (req, res) => {
/*
#swagger.tags = ['Admin Authentication']
#swagger.summary = 'Admin login'
#swagger.description = 'Starts a server-side admin session and returns a CSRF token.'
*/
try {
const { username, password } = req.body || {};
if (!username || !password) {
return res.status(400).json({ error: 'USERNAME_AND_PASSWORD_REQUIRED' });
}
if (await AdminAuthService.needsInitialSetup()) {
return res.status(409).json({ error: 'SETUP_REQUIRED' });
}
const user = await AdminAuthService.verifyCredentials(username, password);
if (!user) {
return res.status(401).json({ error: 'INVALID_CREDENTIALS' });
}
const csrfToken = AdminAuthService.startSession(req, user);
res.json({
success: true,
user: {
id: user.id,
username: user.username,
role: user.role,
requiresPasswordChange: user.requiresPasswordChange
},
csrfToken
});
} catch (error) {
console.error('[Auth] login error:', error);
res.status(500).json({ error: 'LOGIN_FAILED' });
}
});
router.post('/logout', async (req, res) => {
/*
#swagger.tags = ['Admin Authentication']
#swagger.summary = 'Terminate admin session'
#swagger.description = 'Destroys the current session and clears the sid cookie.'
*/
try {
await AdminAuthService.destroySession(req);
res.clearCookie('sid');
res.status(204).send();
} catch (error) {
console.error('[Auth] logout error:', error);
res.status(500).json({ error: 'LOGOUT_FAILED' });
}
});
router.get('/csrf-token', requireAdminAuth, (req, res) => {
/*
#swagger.tags = ['Admin Authentication']
#swagger.summary = 'Fetch CSRF token'
#swagger.description = 'Returns a CSRF token for the active admin session (session required).'
*/
if (!req.session.csrfToken) {
req.session.csrfToken = AdminAuthService.generateCsrfToken();
}
res.json({ csrfToken: req.session.csrfToken });
});
router.post('/change-password', requireAdminAuth, requireCsrf, async (req, res) => {
/*
#swagger.tags = ['Admin Authentication']
#swagger.summary = 'Change admin password'
#swagger.description = 'Allows a logged-in admin to rotate their password (CSRF protected).'
*/
try {
const { currentPassword, newPassword } = req.body || {};
if (!currentPassword || !newPassword) {
return res.status(400).json({ error: 'CURRENT_AND_NEW_PASSWORD_REQUIRED' });
}
const user = await AdminAuthService.changePassword({
userId: req.session.user.id,
currentPassword,
newPassword
});
req.session.user = {
...req.session.user,
requiresPasswordChange: false
};
res.json({
success: true,
user: {
id: user.id,
username: user.username,
role: user.role,
requiresPasswordChange: false
}
});
} catch (error) {
console.error('[Auth] change password error:', error.message || error);
switch (error.message) {
case 'CURRENT_PASSWORD_REQUIRED':
return res.status(400).json({ error: 'CURRENT_PASSWORD_REQUIRED' });
case 'PASSWORD_TOO_WEAK':
return res.status(400).json({ error: 'PASSWORD_TOO_WEAK' });
case 'INVALID_CURRENT_PASSWORD':
return res.status(400).json({ error: 'INVALID_CURRENT_PASSWORD' });
case 'USER_NOT_FOUND':
return res.status(404).json({ error: 'USER_NOT_FOUND' });
default:
return res.status(500).json({ error: 'PASSWORD_CHANGE_FAILED' });
}
}
});
module.exports = router;
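To exercise the password-rotation endpoint above, a session cookie and CSRF token are required; a hedged example (passwords are placeholders, `$CSRF` is assumed to come from an earlier `GET /auth/csrf-token`):

```bash
# assumes an existing admin session in cookies.txt and $CSRF from GET /auth/csrf-token
curl -X POST -H "Content-Type: application/json" \
  -H "X-CSRF-Token: $CSRF" \
  -b cookies.txt \
  -d '{"currentPassword":"old-password","newPassword":"new-longer-password"}' \
  http://localhost:5000/auth/change-password
```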

View File

@@ -1,102 +1,16 @@
 const generateId = require("shortid");
 const express = require('express');
 const { Router } = require('express');
-const path = require('path');
+const { endpoints } = require('../constants');
 const UploadGroup = require('../models/uploadGroup');
-const groupRepository = require('../repositories/GroupRepository');
+const GroupRepository = require('../repositories/GroupRepository');
 const dbManager = require('../database/DatabaseManager');
 const ImagePreviewService = require('../services/ImagePreviewService');
-const TelegramNotificationService = require('../services/TelegramNotificationService');
-// Singleton instance of the Telegram service
-const telegramService = new TelegramNotificationService();
 const router = Router();
-/**
- * @swagger
- * /upload/batch:
- *   post:
- *     tags: [Upload]
- *     summary: Batch upload multiple images and create a group
- *     description: Uploads multiple images at once, creates previews, and stores them as a group with metadata and consent information
- *     requestBody:
- *       required: true
- *       content:
- *         multipart/form-data:
- *           schema:
- *             type: object
- *             required:
- *               - images
- *               - consents
- *             properties:
- *               images:
- *                 type: array
- *                 items:
- *                   type: string
- *                   format: binary
- *                 description: Multiple image files to upload
- *               metadata:
- *                 type: string
- *                 description: JSON string with group metadata (year, title, description, name)
- *                 example: '{"year":2024,"title":"Familie Mueller","description":"Weihnachtsfeier","name":"Mueller"}'
- *               descriptions:
- *                 type: string
- *                 description: JSON array with image descriptions
- *                 example: '[{"index":0,"description":"Gruppenfoto"},{"index":1,"description":"Werkstatt"}]'
- *               consents:
- *                 type: string
- *                 description: JSON object with consent flags (workshopConsent is required)
- *                 example: '{"workshopConsent":true,"socialMedia":{"facebook":false,"instagram":true}}'
- *     responses:
- *       200:
- *         description: Batch upload successful
- *         content:
- *           application/json:
- *             schema:
- *               type: object
- *               properties:
- *                 success:
- *                   type: boolean
- *                   example: true
- *                 groupId:
- *                   type: string
- *                   example: "cTV24Yn-a"
- *                 managementToken:
- *                   type: string
- *                   format: uuid
- *                   example: "550e8400-e29b-41d4-a716-446655440000"
- *                 filesProcessed:
- *                   type: integer
- *                   example: 5
- *                 message:
- *                   type: string
- *                   example: "5 Bilder erfolgreich hochgeladen"
- *       400:
- *         description: Bad request - missing files or workshop consent
- *         content:
- *           application/json:
- *             schema:
- *               type: object
- *               properties:
- *                 error:
- *                   type: string
- *                 message:
- *                   type: string
- *       500:
- *         description: Server error during batch upload
- */
 // Batch upload for multiple images
-router.post('/upload/batch', async (req, res) => {
-  /*
-    #swagger.tags = ['Upload']
-    #swagger.summary = 'Batch upload multiple images'
-    #swagger.description = 'Accepts multiple images + metadata/consents and creates a managed group with management token.'
-    #swagger.consumes = ['multipart/form-data']
-    #swagger.responses[200] = { description: 'Batch upload successful (returns management token)' }
-    #swagger.responses[400] = { description: 'Missing files or workshop consent' }
-    #swagger.responses[500] = { description: 'Unexpected server error' }
-  */
+router.post(endpoints.UPLOAD_BATCH, async (req, res) => {
   try {
     // Check whether files were uploaded
     if (!req.files || !req.files.images) {
@@ -108,31 +22,11 @@ router.post('/upload/batch', async (req, res) => {
     // Metadata from the request body
     let metadata = {};
-    let descriptions = [];
-    let consents = {};
     try {
       metadata = req.body.metadata ? JSON.parse(req.body.metadata) : {};
-      descriptions = req.body.descriptions ? JSON.parse(req.body.descriptions) : [];
-      consents = req.body.consents ? JSON.parse(req.body.consents) : {};
     } catch (e) {
-      console.error('Error parsing metadata/descriptions/consents:', e);
+      console.error('Error parsing metadata:', e);
       metadata = { description: req.body.description || "" };
-      descriptions = [];
-      consents = {};
-    }
-    // Merge separate form fields into metadata (backwards compatibility)
-    if (req.body.year) metadata.year = parseInt(req.body.year);
-    if (req.body.title) metadata.title = req.body.title;
-    if (req.body.name) metadata.name = req.body.name;
-    if (req.body.description) metadata.description = req.body.description;
-    // Validate workshop consent (required field)
-    if (!consents.workshopConsent) {
-      return res.status(400).json({
-        error: 'Workshop consent required',
-        message: 'Die Zustimmung zur Anzeige in der Werkstatt ist erforderlich'
-      });
     }
     // Create new upload group with extended metadata
@@ -202,63 +96,29 @@ router.post('/upload/batch', async (req, res) => {
       console.error('Preview generation failed:', err);
     });
-    // Save group with consents in SQLite
-    const createResult = await groupRepository.createGroupWithConsent({
+    // Save group in SQLite
+    await GroupRepository.createGroup({
       groupId: group.groupId,
       year: group.year,
       title: group.title,
       description: group.description,
       name: group.name,
       uploadDate: group.uploadDate,
-      images: processedFiles.map((file, index) => {
-        // Find the matching description for this image (match by fileName or originalName)
-        const descObj = descriptions.find(d =>
-          d.fileName === file.originalName || d.fileName === file.fileName
-        );
-        const imageDescription = descObj ? descObj.description : null;
-        // Validation: max 200 characters
-        if (imageDescription && imageDescription.length > 200) {
-          console.warn(`Image description for ${file.originalName} exceeds 200 characters, truncating`);
-        }
-        return {
-          fileName: file.fileName,
-          originalName: file.originalName,
-          filePath: `/upload/${file.fileName}`,
-          uploadOrder: index + 1,
-          fileSize: file.size,
-          mimeType: files[index].mimetype,
-          imageDescription: imageDescription ? imageDescription.slice(0, 200) : null
-        };
-      })
-    },
-    consents.workshopConsent,
-    consents.socialMediaConsents || []
-    );
+      images: processedFiles.map((file, index) => ({
+        fileName: file.fileName,
+        originalName: file.originalName,
+        filePath: `/upload/${file.fileName}`,
+        uploadOrder: index + 1,
+        fileSize: file.size,
+        mimeType: files[index].mimetype
+      }))
+    });
     console.log(`Successfully saved group ${group.groupId} with ${files.length} images to database`);
-    // Send Telegram notification (async, non-blocking)
-    if (telegramService.isAvailable()) {
-      telegramService.sendUploadNotification({
-        name: group.name,
-        year: group.year,
-        title: group.title,
-        imageCount: files.length,
-        workshopConsent: consents.workshopConsent,
-        socialMediaConsents: consents.socialMediaConsents || [],
-        token: createResult.managementToken
-      }).catch(err => {
-        // Log errors, but do not fail the upload
-        console.error('[Telegram] Upload notification failed:', err.message);
-      });
-    }
-    // Successful response with management token
+    // Successful response
     res.json({
       groupId: group.groupId,
-      managementToken: createResult.managementToken,
       message: 'Batch upload successful',
       imageCount: files.length,
       year: group.year,

View File

@@ -1,460 +0,0 @@
/**
* Consent Management API Routes
*
* Handles social media platform listings and consent management
*/
const express = require('express');
const router = express.Router();
const GroupRepository = require('../repositories/GroupRepository');
const SocialMediaRepository = require('../repositories/SocialMediaRepository');
const dbManager = require('../database/DatabaseManager');
const { requireAdminAuth } = require('../middlewares/auth');
const { requireCsrf } = require('../middlewares/csrf');
// Protect all consent routes with admin auth
router.use(requireAdminAuth);
router.use(requireCsrf);
// ============================================================================
// Social Media Platforms
// ============================================================================
/**
* GET /social-media/platforms
 * List all active social media platforms
*/
router.get('/social-media/platforms', async (req, res) => {
/*
#swagger.tags = ['Consent Management']
#swagger.summary = 'Get active social media platforms'
#swagger.description = 'Returns list of all active social media platforms available for consent'
#swagger.responses[200] = {
description: 'List of platforms',
schema: [{
platform_id: 1,
platform_name: 'instagram',
display_name: 'Instagram',
icon_name: 'instagram',
is_active: true
}]
}
*/
try {
const socialMediaRepo = new SocialMediaRepository(dbManager);
const platforms = await socialMediaRepo.getActivePlatforms();
res.json(platforms);
} catch (error) {
console.error('Error fetching platforms:', error);
res.status(500).json({
error: 'Failed to fetch social media platforms',
message: error.message
});
}
});
// ============================================================================
// Group Consents
// ============================================================================
router.post('/groups/:groupId/consents', async (req, res) => {
/*
#swagger.tags = ['Consent Management']
#swagger.summary = 'Save or update consents for a group'
#swagger.description = 'Store workshop consent and social media consents for a specific group'
#swagger.parameters['groupId'] = {
in: 'path',
required: true,
type: 'string',
description: 'Group ID',
example: 'abc123def456'
}
#swagger.parameters['body'] = {
in: 'body',
required: true,
schema: {
workshopConsent: true,
socialMediaConsents: [
{ platformId: 1, consented: true },
{ platformId: 2, consented: false }
]
}
}
#swagger.responses[200] = {
description: 'Consents saved successfully',
schema: { success: true, message: 'Consents saved successfully' }
}
#swagger.responses[400] = {
description: 'Invalid request data'
}
*/
try {
const { groupId } = req.params;
const { workshopConsent, socialMediaConsents } = req.body;
// Validation
if (typeof workshopConsent !== 'boolean') {
return res.status(400).json({
error: 'Invalid request',
message: 'workshopConsent must be a boolean'
});
}
if (!Array.isArray(socialMediaConsents)) {
return res.status(400).json({
error: 'Invalid request',
message: 'socialMediaConsents must be an array'
});
}
// Check that the group exists
const group = await GroupRepository.getGroupById(groupId);
if (!group) {
return res.status(404).json({
error: 'Group not found',
message: `No group found with ID: ${groupId}`
});
}
// Update consents
await GroupRepository.updateConsents(
groupId,
workshopConsent,
socialMediaConsents
);
res.json({
success: true,
message: 'Consents updated successfully',
groupId
});
} catch (error) {
console.error('Error updating consents:', error);
res.status(500).json({
error: 'Failed to update consents',
message: error.message
});
}
});
/**
* GET /groups/:groupId/consents
 * Load all consents for a group
*/
router.get('/groups/:groupId/consents', async (req, res) => {
/*
#swagger.tags = ['Consent Management']
#swagger.summary = 'Get consents for a group'
#swagger.description = 'Returns all consent data (workshop + social media) for a specific group'
#swagger.parameters['groupId'] = {
in: 'path',
required: true,
type: 'string',
description: 'Group ID',
example: 'abc123def456'
}
#swagger.responses[200] = {
description: 'Group consents',
schema: {
groupId: 'abc123',
workshopConsent: true,
consentTimestamp: '2025-11-01T10:00:00Z',
socialMediaConsents: [{
platformId: 1,
platformName: 'instagram',
displayName: 'Instagram',
consented: true,
revoked: false
}]
}
}
#swagger.responses[404] = {
description: 'Group not found'
}
*/
try {
const { groupId } = req.params;
// Fetch the group with its consents
const group = await GroupRepository.getGroupWithConsents(groupId);
if (!group) {
return res.status(404).json({
error: 'Group not found',
message: `No group found with ID: ${groupId}`
});
}
// Format the response
const response = {
groupId: group.group_id,
workshopConsent: group.display_in_workshop === 1,
consentTimestamp: group.consent_timestamp,
socialMediaConsents: group.consents.map(c => ({
platformId: c.platform_id,
platformName: c.platform_name,
displayName: c.display_name,
iconName: c.icon_name,
consented: c.consented === 1,
consentTimestamp: c.consent_timestamp,
revoked: c.revoked === 1,
revokedTimestamp: c.revoked_timestamp
}))
};
res.json(response);
} catch (error) {
console.error('Error fetching consents:', error);
res.status(500).json({
error: 'Failed to fetch consents',
message: error.message
});
}
});
// ============================================================================
// Admin - Filtering & Export
// ============================================================================
/**
* GET /groups/by-consent
 * Filter groups by consent status
*
* Query params:
* - displayInWorkshop: boolean
* - platformId: number
* - platformConsent: boolean
*/
router.get('/groups/by-consent', async (req, res) => {
/*
#swagger.tags = ['Consent Management']
#swagger.summary = 'Filter groups by consent status'
#swagger.description = 'Returns groups filtered by workshop consent or social media platform consents'
#swagger.parameters['displayInWorkshop'] = {
in: 'query',
type: 'boolean',
description: 'Filter by workshop consent',
example: true
}
#swagger.parameters['platformId'] = {
in: 'query',
type: 'integer',
description: 'Filter by platform ID',
example: 1
}
#swagger.parameters['platformConsent'] = {
in: 'query',
type: 'boolean',
description: 'Filter by platform consent status',
example: true
}
#swagger.responses[200] = {
description: 'Filtered groups',
schema: {
count: 5,
filters: {
displayInWorkshop: true
},
groups: []
}
}
#swagger.responses[400] = {
description: 'Invalid platformId'
}
*/
try {
const filters = {};
// Parse query parameters
if (req.query.displayInWorkshop !== undefined) {
filters.displayInWorkshop = req.query.displayInWorkshop === 'true';
}
if (req.query.platformId !== undefined) {
filters.platformId = parseInt(req.query.platformId, 10);
if (isNaN(filters.platformId)) {
return res.status(400).json({
error: 'Invalid platformId',
message: 'platformId must be a number'
});
}
}
if (req.query.platformConsent !== undefined) {
filters.platformConsent = req.query.platformConsent === 'true';
}
// Fetch the filtered groups
const groups = await GroupRepository.getGroupsByConsentStatus(filters);
res.json({
count: groups.length,
filters,
groups
});
} catch (error) {
console.error('Error filtering groups by consent:', error);
res.status(500).json({
error: 'Failed to filter groups',
message: error.message
});
}
});
/**
* GET /consents/export
 * Export consent data for legal documentation
*
* Query params:
* - format: 'json' | 'csv' (default: json)
* - year: number (optional filter)
* - approved: boolean (optional filter)
*/
router.get('/consents/export', async (req, res) => {
/*
#swagger.tags = ['Consent Management']
#swagger.summary = 'Export consent data'
#swagger.description = 'Exports consent data for legal documentation in JSON or CSV format'
#swagger.parameters['format'] = {
in: 'query',
type: 'string',
enum: ['json', 'csv'],
description: 'Export format',
example: 'json'
}
#swagger.parameters['year'] = {
in: 'query',
type: 'integer',
description: 'Filter by year',
example: 2025
}
#swagger.parameters['approved'] = {
in: 'query',
type: 'boolean',
description: 'Filter by approval status',
example: true
}
#swagger.responses[200] = {
description: 'Export data (JSON format)',
schema: {
exportDate: '2025-11-15T16:30:00Z',
filters: { year: 2025 },
count: 12,
data: []
}
}
#swagger.responses[200] = {
description: 'Export data (CSV format)',
content: {
'text/csv': {
schema: {
type: 'string',
format: 'binary'
}
}
}
}
#swagger.responses[400] = {
description: 'Invalid format'
}
*/
try {
const format = req.query.format || 'json';
const filters = {};
// Parse filters
if (req.query.year) {
filters.year = parseInt(req.query.year, 10);
}
if (req.query.approved !== undefined) {
filters.approved = req.query.approved === 'true';
}
// Fetch the export data
const exportData = await GroupRepository.exportConsentData(filters);
// Format: JSON
if (format === 'json') {
res.json({
exportDate: new Date().toISOString(),
filters,
count: exportData.length,
data: exportData
});
return;
}
// Format: CSV
if (format === 'csv') {
// CSV Header
let csv = 'group_id,year,title,name,upload_date,workshop_consent,consent_timestamp,approved';
// Collect all platforms that occur
const allPlatforms = new Set();
exportData.forEach(group => {
group.socialMediaConsents.forEach(consent => {
allPlatforms.add(consent.platform_name);
});
});
// Add platform columns
const platformNames = Array.from(allPlatforms).sort();
platformNames.forEach(platform => {
csv += `,${platform}`;
});
csv += '\n';
// CSV data
exportData.forEach(group => {
const row = [
group.group_id,
group.year,
`"${(group.title || '').replace(/"/g, '""')}"`,
`"${(group.name || '').replace(/"/g, '""')}"`,
group.upload_date,
group.display_in_workshop === 1 ? 'true' : 'false',
group.consent_timestamp || '',
group.approved === 1 ? 'true' : 'false'
];
// Platform consents
const consentMap = {};
group.socialMediaConsents.forEach(consent => {
// A consent is only active if consented=1 AND not revoked
consentMap[consent.platform_name] = consent.consented === 1 && consent.revoked !== 1;
});
platformNames.forEach(platform => {
row.push(consentMap[platform] ? 'true' : 'false');
});
csv += row.join(',') + '\n';
});
res.setHeader('Content-Type', 'text/csv');
res.setHeader('Content-Disposition', `attachment; filename=consent-export-${Date.now()}.csv`);
res.send(csv);
return;
}
res.status(400).json({
error: 'Invalid format',
message: 'Format must be "json" or "csv"'
});
} catch (error) {
console.error('Error exporting consent data:', error);
res.status(500).json({
error: 'Failed to export consent data',
message: error.message
});
}
});
module.exports = router;
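For reference, a hedged example of pulling the CSV export (this assumes the `/api/admin` mount from `routeMappings.js`, an admin session in `cookies.txt`, a `$CSRF` token from an earlier request, and that `requireCsrf` also applies to GET requests here):

```bash
curl -b cookies.txt -H "X-CSRF-Token: $CSRF" \
  "http://localhost:5000/api/admin/consents/export?format=csv&year=2025" \
  -o consent-export.csv
```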

View File

@@ -1,48 +1,10 @@
 const { Router } = require('express');
-const { UPLOAD_FS_DIR } = require('../constants');
+const { endpoints, UPLOAD_FS_DIR } = require('../constants');
 const path = require('path');
 const router = Router();
-/**
- * @swagger
- * /download/{id}:
- *   get:
- *     tags: [Download]
- *     summary: Download an uploaded image file
- *     description: Downloads the original image file by filename
- *     parameters:
- *       - in: path
- *         name: id
- *         required: true
- *         schema:
- *           type: string
- *         example: "abc123.jpg"
- *         description: Filename of the image to download
- *     responses:
- *       200:
- *         description: File download initiated
- *         content:
- *           image/*:
- *             schema:
- *               type: string
- *               format: binary
- *       404:
- *         description: File not found
- */
-router.get('/download/:id', (req, res) => {
-  /*
-    #swagger.tags = ['Download']
-    #swagger.summary = 'Download original image'
-    #swagger.parameters['id'] = {
-      in: 'path',
-      required: true,
-      type: 'string',
-      description: 'Filename of the uploaded image'
-    }
-    #swagger.responses[200] = { description: 'Binary image response' }
-    #swagger.responses[404] = { description: 'File not found' }
-  */
+router.get(endpoints.DOWNLOAD_FILE, (req, res) => {
   const filePath = path.join(__dirname, '..', UPLOAD_FS_DIR, req.params.id);
   res.download(filePath);
 });

View File

@@ -1,24 +1,12 @@
 const { Router } = require('express');
+const { endpoints } = require('../constants');
 const GroupRepository = require('../repositories/GroupRepository');
 const MigrationService = require('../services/MigrationService');
 const router = Router();
 // Fetch all groups (for the slideshow, with full image data)
-router.get('/groups', async (req, res) => {
-  /*
-    #swagger.tags = ['Public Groups']
-    #swagger.summary = 'Get approved groups with images'
-    #swagger.description = 'Returns all approved groups (slideshow feed). Automatically triggers JSON→SQLite migration if required.'
-    #swagger.responses[200] = {
-      description: 'List of approved groups',
-      schema: {
-        groups: [{ groupId: 'cTV24Yn-a', title: 'Familie Mueller' }],
-        totalCount: 73
-      }
-    }
-    #swagger.responses[500] = { description: 'Server error' }
-  */
+router.get(endpoints.GET_ALL_GROUPS, async (req, res) => {
   try {
     // Auto-migration on first access
     const migrationStatus = await MigrationService.getMigrationStatus();
@@ -42,21 +30,52 @@ router.get('/groups', async (req, res) => {
   }
 });
-// Fetch a single group (approved only)
-router.get('/groups/:groupId', async (req, res) => {
-  /*
-    #swagger.tags = ['Public Groups']
-    #swagger.summary = 'Get approved group by ID'
-    #swagger.parameters['groupId'] = {
-      in: 'path',
-      required: true,
-      type: 'string',
-      description: 'Public groupId (e.g. cTV24Yn-a)'
-    }
-    #swagger.responses[200] = { description: 'Group payload (images + metadata)' }
-    #swagger.responses[404] = { description: 'Group not found or not approved' }
-    #swagger.responses[500] = { description: 'Server error' }
-  */
+// Fetch all groups for moderation (with approval status) - MUST be registered before the :groupId routes!
+router.get('/moderation/groups', async (req, res) => {
+  try {
+    const groups = await GroupRepository.getAllGroupsWithModerationInfo();
+    res.json({
+      groups,
+      totalCount: groups.length,
+      pendingCount: groups.filter(g => !g.approved).length,
+      approvedCount: groups.filter(g => g.approved).length
+    });
+  } catch (error) {
+    console.error('Error fetching moderation groups:', error);
+    res.status(500).json({
+      error: 'Internal server error',
+      message: 'Fehler beim Laden der Moderations-Gruppen',
+      details: error.message
+    });
+  }
+});
+// Fetch a single group for moderation (including unapproved ones)
+router.get('/moderation/groups/:groupId', async (req, res) => {
+  try {
+    const { groupId } = req.params;
+    const group = await GroupRepository.getGroupForModeration(groupId);
+    if (!group) {
+      return res.status(404).json({
+        error: 'Group not found',
+        message: `Gruppe mit ID ${groupId} wurde nicht gefunden`
+      });
+    }
+    res.json(group);
+  } catch (error) {
+    console.error('Error fetching group for moderation:', error);
+    res.status(500).json({
+      error: 'Internal server error',
+      message: 'Fehler beim Laden der Gruppe für Moderation',
+      details: error.message
+    });
+  }
+});
+// Fetch a single group
+router.get(endpoints.GET_GROUP, async (req, res) => {
   try {
     const { groupId } = req.params;
     const group = await GroupRepository.getGroupById(groupId);
@@ -79,4 +98,149 @@ router.get('/groups/:groupId', async (req, res) => {
   }
 });
+// Approve/unapprove a group
+router.patch('/groups/:groupId/approve', async (req, res) => {
+  try {
+    const { groupId } = req.params;
+    const { approved } = req.body;
+    // Validation
+    if (typeof approved !== 'boolean') {
+      return res.status(400).json({
+        error: 'Invalid request',
+        message: 'approved muss ein boolean Wert sein'
+      });
+    }
+    const updated = await GroupRepository.updateGroupApproval(groupId, approved);
+    if (!updated) {
+      return res.status(404).json({
+        error: 'Group not found',
+        message: `Gruppe mit ID ${groupId} wurde nicht gefunden`
+      });
+    }
+    res.json({
+      success: true,
+      message: approved ? 'Gruppe freigegeben' : 'Gruppe gesperrt',
+      groupId: groupId,
+      approved: approved
+    });
+  } catch (error) {
+    console.error('Error updating group approval:', error);
+    res.status(500).json({
+      error: 'Internal server error',
+      message: 'Fehler beim Aktualisieren der Freigabe'
+    });
+  }
+});
+// Edit a group (update metadata)
+router.patch('/groups/:groupId', async (req, res) => {
+  try {
+    const { groupId } = req.params;
+    // Fields that may be updated
+    const allowed = ['year', 'title', 'description', 'name'];
+    const updates = {};
+    for (const field of allowed) {
+      if (req.body[field] !== undefined) {
+        updates[field] = req.body[field];
+      }
+    }
+    if (Object.keys(updates).length === 0) {
+      return res.status(400).json({
+        error: 'Invalid request',
+        message: 'Keine gültigen Felder zum Aktualisieren angegeben'
+      });
+    }
+    const updated = await GroupRepository.updateGroup(groupId, updates);
+    if (!updated) {
+      return res.status(404).json({
+        error: 'Group not found',
+        message: `Gruppe mit ID ${groupId} wurde nicht gefunden`
+      });
+    }
+    res.json({
+      success: true,
+      message: 'Gruppe erfolgreich aktualisiert',
+      groupId: groupId,
+      updates: updates
+    });
+  } catch (error) {
+    console.error('Error updating group:', error);
+    res.status(500).json({
+      error: 'Internal server error',
+      message: 'Fehler beim Aktualisieren der Gruppe',
+      details: error.message
+    });
+  }
+});
+// Delete a single image
+router.delete('/groups/:groupId/images/:imageId', async (req, res) => {
+  try {
+    const { groupId, imageId } = req.params;
+    const deleted = await GroupRepository.deleteImage(groupId, parseInt(imageId));
+    if (!deleted) {
+      return res.status(404).json({
+        error: 'Image not found',
+        message: `Bild mit ID ${imageId} in Gruppe ${groupId} wurde nicht gefunden`
+      });
+    }
+    res.json({
+      success: true,
+      message: 'Bild erfolgreich gelöscht',
+      groupId: groupId,
+      imageId: parseInt(imageId)
+    });
+  } catch (error) {
+    console.error('Error deleting image:', error);
+    res.status(500).json({
+      error: 'Internal server error',
+      message: 'Fehler beim Löschen des Bildes'
+    });
+  }
+});
+// Delete a group
+router.delete(endpoints.DELETE_GROUP, async (req, res) => {
+  try {
+    const { groupId } = req.params;
+    const deleted = await GroupRepository.deleteGroup(groupId);
+    if (!deleted) {
+      return res.status(404).json({
+        error: 'Group not found',
+        message: `Gruppe mit ID ${groupId} wurde nicht gefunden`
+      });
+    }
+    res.json({
+      success: true,
+      message: 'Gruppe erfolgreich gelöscht',
+      groupId: groupId
+    });
+  } catch (error) {
+    console.error('Error deleting group:', error);
+    res.status(500).json({
+      error: 'Internal server error',
+      message: 'Fehler beim Löschen der Gruppe'
+    });
+  }
+});
 module.exports = router;
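Since the 1.2.1 branch exposes these moderation routes without any auth middleware, the queue can be inspected with a plain request (a sketch; port 5001 matches the dev API base used by the test script later in this diff):

```js
// Sketch: summarize the 1.2.1 moderation queue.
(async () => {
  const res = await fetch('http://localhost:5001/moderation/groups');
  const { totalCount, pendingCount, approvedCount } = await res.json();
  console.log({ totalCount, pendingCount, approvedCount });
})();
```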

View File

@@ -1,37 +1,13 @@
-const authRouter = require('./auth');
 const uploadRouter = require('./upload');
 const downloadRouter = require('./download');
 const batchUploadRouter = require('./batchUpload');
 const groupsRouter = require('./groups');
-const socialMediaRouter = require('./socialMedia');
 const migrationRouter = require('./migration');
 const reorderRouter = require('./reorder');
-const adminRouter = require('./admin');
-const consentRouter = require('./consent');
-const managementRouter = require('./management');
-// Import route mappings (Single Source of Truth!)
-const routeMappingsConfig = require('./routeMappings');
-// Map router names to actual router instances
-const routerMap = {
-  auth: authRouter,
-  upload: uploadRouter,
-  download: downloadRouter,
-  batchUpload: batchUploadRouter,
-  groups: groupsRouter,
-  socialMedia: socialMediaRouter,
-  migration: migrationRouter,
-  reorder: reorderRouter,
-  admin: adminRouter,
-  consent: consentRouter,
-  management: managementRouter
-};
 const renderRoutes = (app) => {
-  routeMappingsConfig.forEach(({ router, prefix }) => {
-    app.use(prefix, routerMap[router]);
-  });
+  [uploadRouter, downloadRouter, batchUploadRouter, groupsRouter, migrationRouter].forEach(router => app.use('/', router));
+  app.use('/groups', reorderRouter);
 };
 module.exports = { renderRoutes };

File diff suppressed because it is too large

View File

@@ -2,26 +2,11 @@ const express = require('express');
 const { Router } = require('express');
 const MigrationService = require('../services/MigrationService');
 const dbManager = require('../database/DatabaseManager');
-const { requireAdminAuth } = require('../middlewares/auth');
-const { requireCsrf } = require('../middlewares/csrf');
 const router = Router();
-router.get('/status', async (req, res) => {
-  /*
-    #swagger.tags = ['System Migration']
-    #swagger.summary = 'Get migration status'
-    #swagger.description = 'Returns current database migration status and history'
-    #swagger.responses[200] = {
-      description: 'Migration status',
-      schema: {
-        migrationComplete: true,
-        jsonBackupExists: true,
-        sqliteActive: true,
-        lastMigration: '2025-11-01T10:00:00Z'
-      }
-    }
-  */
+// Fetch migration status
+router.get('/migration/status', async (req, res) => {
   try {
     const status = await MigrationService.getMigrationStatus();
     res.json(status);
@@ -35,25 +20,8 @@ router.get('/status', async (req, res) => {
   }
 });
-// Protect dangerous migration operations with admin auth
-router.post('/migrate', requireAdminAuth, requireCsrf, async (req, res) => {
-  /*
-    #swagger.tags = ['System Migration']
-    #swagger.summary = 'Manually trigger migration'
-    #swagger.description = 'Triggers manual migration from JSON to SQLite database'
-    #swagger.responses[200] = {
-      description: 'Migration successful',
-      schema: {
-        success: true,
-        message: 'Migration completed successfully',
-        groupsMigrated: 24,
-        imagesMigrated: 348
-      }
-    }
-    #swagger.responses[500] = {
-      description: 'Migration failed'
-    }
-  */
+// Start a manual migration
+router.post('/migration/migrate', async (req, res) => {
   try {
     const result = await MigrationService.migrateJsonToSqlite();
     res.json(result);
@@ -67,23 +35,8 @@ router.post('/migrate', requireAdminAuth, requireCsrf, async (req, res) => {
   }
 });
-router.post('/rollback', requireAdminAuth, requireCsrf, async (req, res) => {
-  /*
-    #swagger.tags = ['System Migration']
-    #swagger.summary = 'Rollback to JSON'
-    #swagger.description = 'Emergency rollback from SQLite to JSON file storage'
-    #swagger.responses[200] = {
-      description: 'Rollback successful',
-      schema: {
-        success: true,
-        message: 'Rolled back to JSON successfully',
-        groupsRestored: 24
-      }
-    }
-    #swagger.responses[500] = {
-      description: 'Rollback failed'
-    }
-  */
+// Rollback to JSON (emergency)
+router.post('/migration/rollback', async (req, res) => {
   try {
     const result = await MigrationService.rollbackToJson();
     res.json(result);
@@ -97,31 +50,8 @@ router.post('/rollback', requireAdminAuth, requireCsrf, async (req, res) => {
   }
 });
-router.get('/health', async (req, res) => {
-  /*
-    #swagger.tags = ['System Migration']
-    #swagger.summary = 'Database health check'
-    #swagger.description = 'Checks database connectivity and health status'
-    #swagger.responses[200] = {
-      description: 'Database healthy',
-      schema: {
-        database: {
-          healthy: true,
-          status: 'OK'
-        }
-      }
-    }
-    #swagger.responses[500] = {
-      description: 'Database unhealthy',
-      schema: {
-        database: {
-          healthy: false,
-          status: 'ERROR',
-          error: 'Connection failed'
-        }
-      }
-    }
-  */
+// Database health check
+router.get('/migration/health', async (req, res) => {
   try {
     const isHealthy = await dbManager.healthCheck();
     res.json({

View File

@@ -1,89 +1,28 @@
 const express = require('express');
 const router = express.Router();
 const GroupRepository = require('../repositories/GroupRepository');
-const { requireAdminAuth } = require('../middlewares/auth');
-const { requireCsrf } = require('../middlewares/csrf');
-router.use(requireAdminAuth);
-router.use(requireCsrf);
 /**
- * @swagger
- * /{groupId}/reorder:
- *   put:
- *     tags: [Admin]
- *     summary: Reorder images within a group
- *     description: Updates the display order of images in a group. All image IDs of the group must be provided in the desired order.
- *     parameters:
- *       - in: path
- *         name: groupId
- *         required: true
- *         schema:
- *           type: string
- *           example: "cTV24Yn-a"
- *         description: Unique identifier of the group
- *     requestBody:
- *       required: true
- *       content:
- *         application/json:
- *           schema:
- *             type: object
- *             required:
- *               - imageIds
- *             properties:
- *               imageIds:
- *                 type: array
- *                 items:
- *                   type: integer
- *                 example: [123, 456, 789]
- *                 description: Array of image IDs in the new desired order
- *     responses:
- *       200:
- *         description: Image order updated successfully
- *         content:
- *           application/json:
- *             schema:
- *               type: object
- *               properties:
- *                 success:
- *                   type: boolean
- *                   example: true
- *                 message:
- *                   type: string
- *                   example: "Image order updated successfully"
- *                 data:
- *                   type: object
- *                   properties:
- *                     groupId:
- *                       type: string
- *                     updatedImages:
- *                       type: integer
- *                     newOrder:
- *                       type: array
- *                       items:
- *                         type: integer
- *       400:
- *         description: Invalid request - missing or invalid imageIds
- *       404:
- *         description: Group not found
- *       500:
- *         description: Server error during reordering
+ * PUT /api/groups/:groupId/reorder
+ * Reorder images within a group
+ *
+ * Request Body:
+ * {
+ *   "imageIds": [123, 456, 789]  // Array of image IDs in new order
+ * }
+ *
+ * Response:
+ * {
+ *   "success": true,
+ *   "message": "Image order updated successfully",
+ *   "data": {
+ *     "groupId": "abc123",
+ *     "updatedImages": 3,
+ *     "newOrder": [123, 456, 789]
+ *   }
+ * }
  */
 router.put('/:groupId/reorder', async (req, res) => {
-  /*
-    #swagger.tags = ['Admin - Groups Moderation']
-    #swagger.summary = 'Reorder images within a group'
-    #swagger.parameters['groupId'] = {
-      in: 'path',
-      required: true,
-      type: 'string',
-      description: 'Admin groupId'
-    }
-    #swagger.responses[200] = { description: 'Order updated successfully' }
-    #swagger.responses[400] = { description: 'Validation error' }
-    #swagger.responses[404] = { description: 'Group not found' }
-    #swagger.responses[500] = { description: 'Internal server error' }
-  */
   try {
     const { groupId } = req.params;
     const { imageIds } = req.body;

View File

@@ -1,32 +0,0 @@
/**
 * Single source of truth for route mappings
 * Used by:
 *  - routes/index.js (server routing)
 *  - generate-openapi.js (OpenAPI generation)
 */
module.exports = [
  // Auth API - session & CSRF management
  { router: 'auth', prefix: '/auth', file: 'auth.js' },
  // Public API - publicly accessible
  { router: 'upload', prefix: '/api', file: 'upload.js' },
  { router: 'download', prefix: '/api', file: 'download.js' },
  { router: 'batchUpload', prefix: '/api', file: 'batchUpload.js' },
  { router: 'groups', prefix: '/api', file: 'groups.js' },
  { router: 'socialMedia', prefix: '/api', file: 'socialMedia.js' },
  // Management API - token-based access
  { router: 'management', prefix: '/api/manage', file: 'management.js' },
  // Admin API - protected (moderation, logs, cleanup, consents)
  // IMPORTANT: consent must be mounted BEFORE admin!
  // Reason: admin.js defines /groups/:groupId, which would match /groups/by-consent
  // Express matches routes in order → more specific routes first!
  { router: 'consent', prefix: '/api/admin', file: 'consent.js' },
  { router: 'admin', prefix: '/api/admin', file: 'admin.js' },
  { router: 'reorder', prefix: '/api/admin', file: 'reorder.js' },
  // System API - internal maintenance functions
  { router: 'migration', prefix: '/api/system/migration', file: 'migration.js' }
];
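The mount-order warning above is easy to reproduce in isolation: Express resolves routes in registration order, so a parameterized route registered first shadows a more specific literal path. A toy sketch (independent of the real routers):

```js
// Sketch: why consent routes must be mounted before admin's /groups/:groupId.
const express = require('express');
const app = express();

app.get('/groups/by-consent', (req, res) => res.send('consent route'));
app.get('/groups/:groupId', (req, res) => res.send(`group ${req.params.groupId}`));

// GET /groups/by-consent -> 'consent route' (literal route registered first).
// With the registration order reversed, the same request would instead hit
// /groups/:groupId with groupId === 'by-consent'.
app.listen(3000);
```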

View File

@@ -1,29 +0,0 @@
const express = require('express');
const SocialMediaRepository = require('../repositories/SocialMediaRepository');
const dbManager = require('../database/DatabaseManager');
const router = express.Router();
/**
* Public endpoint: list active social media platforms for consent selection
*/
router.get('/social-media/platforms', async (req, res) => {
/*
#swagger.tags = ['Consent Management']
#swagger.summary = 'List active social media platforms'
#swagger.description = 'Public endpoint that exposes the available platforms for consent selection on the upload form.'
*/
try {
const socialMediaRepo = new SocialMediaRepository(dbManager);
const platforms = await socialMediaRepo.getActivePlatforms();
res.json(platforms);
} catch (error) {
console.error('[SOCIAL_MEDIA] Failed to fetch platforms:', error);
res.status(500).json({
error: 'Failed to fetch social media platforms',
message: error.message
});
}
});
module.exports = router;

View File

@@ -1,72 +1,31 @@
 const generateId = require("shortid");
 const express = require('express');
 const { Router } = require('express');
-const { UPLOAD_FS_DIR, PREVIEW_FS_DIR } = require('../constants');
+const { endpoints, UPLOAD_FS_DIR, PREVIEW_FS_DIR } = require('../constants');
 const path = require('path');
 const ImagePreviewService = require('../services/ImagePreviewService');
-const groupRepository = require('../repositories/GroupRepository');
-const fs = require('fs');
-const { publicUploadLimiter } = require('../middlewares/rateLimiter');
 const router = Router();
 // Serve uploaded images via URL /upload but store files under data/images
-router.use('/upload', express.static( path.join(__dirname, '..', UPLOAD_FS_DIR) ));
+router.use(endpoints.UPLOAD_STATIC_DIRECTORY, express.static( path.join(__dirname, '..', UPLOAD_FS_DIR) ));
 // Serve preview images via URL /previews but store files under data/previews
-router.use('/previews', express.static( path.join(__dirname, '..', PREVIEW_FS_DIR) ));
-router.post('/upload', publicUploadLimiter, async (req, res) => {
-  /*
-    #swagger.tags = ['Upload']
-    #swagger.summary = 'Upload a single image and create a new group'
-    #swagger.description = 'Uploads an image file, generates a preview, and creates a new group in the database'
-    #swagger.consumes = ['multipart/form-data']
-    #swagger.parameters['file'] = {
-      in: 'formData',
-      type: 'file',
-      required: true,
-      description: 'Image file to upload'
-    }
-    #swagger.parameters['groupName'] = {
-      in: 'formData',
-      type: 'string',
-      description: 'Name for the new group',
-      example: 'Familie Mueller'
-    }
-    #swagger.responses[200] = {
-      description: 'File uploaded successfully',
-      schema: {
-        filePath: '/upload/abc123.jpg',
-        fileName: 'abc123.jpg',
-        groupId: 'cTV24Yn-a',
-        groupName: 'Familie Mueller'
-      }
-    }
-    #swagger.responses[400] = {
-      description: 'No file uploaded',
-      schema: { msg: 'No file uploaded' }
-    }
-    #swagger.responses[500] = {
-      description: 'Server error during upload'
-    }
-  */
-  if(!req.files || req.files === null || !req.files.file){
+router.use(endpoints.PREVIEW_STATIC_DIRECTORY, express.static( path.join(__dirname, '..', PREVIEW_FS_DIR) ));
+router.post(endpoints.UPLOAD_FILE, async (req, res) => {
+  if(req.files === null){
     console.log('No file uploaded');
-    return res.status(400).json({ error: 'Keine Datei hochgeladen' });
+    return res.status(400).json({ msg: 'No file uploaded' });
   }
   const file = req.files.file;
-  const groupName = req.body.groupName || 'Unnamed Group';
   fileEnding = file.name.split(".")
   fileEnding = fileEnding[fileEnding.length - 1]
   fileName = generateId() + '.' + fileEnding
-  // Handle absolute vs relative paths (test mode uses /tmp)
-  const savePath = path.isAbsolute(UPLOAD_FS_DIR)
-    ? path.join(UPLOAD_FS_DIR, fileName)
-    : path.join(__dirname, '..', UPLOAD_FS_DIR, fileName);
+  const savePath = path.join(__dirname, '..', UPLOAD_FS_DIR, fileName);
   try {
     // Save the uploaded file
@@ -77,10 +36,6 @@ router.post('/upload', publicUploadLimiter, async (req, res) => {
       });
     });
-    // Get file stats
-    const fileStats = fs.statSync(savePath);
-    const fileSize = fileStats.size;
     // Generate preview asynchronously (don't wait for it)
     const previewFileName = ImagePreviewService._getPreviewFileName(fileName);
    const previewPath = ImagePreviewService.getPreviewPath(previewFileName);
@@ -95,45 +50,15 @@ router.post('/upload', publicUploadLimiter, async (req, res) => {
       console.error(`Unexpected error during preview generation for ${fileName}:`, err);
     });
-    // Create or update group in database
-    const groupId = generateId();
-    const currentYear = new Date().getFullYear();
-    const uploadDate = new Date().toISOString();
-    const groupData = {
-      groupId: groupId,
-      year: currentYear,
-      title: groupName,
-      description: `Einzelnes Bild Upload: ${file.name}`,
-      name: groupName,
-      uploadDate: uploadDate,
-      images: [{
-        fileName: fileName,
-        originalName: file.name,
-        filePath: `/upload/${fileName}`,
-        uploadOrder: 1,
-        fileSize: fileSize,
-        mimeType: file.mimetype,
-        previewPath: `/previews/${previewFileName}`
-      }]
-    };
-    // Save to database
-    await groupRepository.createGroup(groupData);
-    console.log(`✅ Group created: ${groupName} with image ${fileName}`);
     // Return immediately with file path
     res.json({
-      filePath: `/upload/${fileName}`,
-      fileName: fileName,
-      groupId: groupId,
-      groupName: groupName
+      filePath: `${endpoints.UPLOAD_STATIC_DIRECTORY}/${fileName}`,
+      fileName: fileName
     });
   } catch(err) {
-    console.error('Upload error:', err);
-    return res.status(500).json({ error: err.message });
+    console.error(err);
+    return res.status(500).send(err);
   }
 });
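The main-branch handler expects an express-fileupload-style `req.files.file` plus an optional `groupName` field. A client-side sketch (Node 18+; it assumes the `/api` prefix from the route mappings, port 5000, and a local test image named photo.jpg):

```js
// Sketch: send one image as multipart/form-data to the upload route.
const fs = require('fs');

(async () => {
  const form = new FormData();
  form.append('file', new Blob([fs.readFileSync('photo.jpg')]), 'photo.jpg'); // local test file
  form.append('groupName', 'Familie Mueller'); // example name taken from the swagger doc

  const res = await fetch('http://localhost:5000/api/upload', { method: 'POST', body: form });
  console.log(await res.json()); // { filePath, fileName, groupId, groupName }
})();
```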

View File

@@ -1,102 +0,0 @@
#!/usr/bin/env node
const bcrypt = require('bcryptjs');
const dbManager = require('../database/DatabaseManager');
const AdminUserRepository = require('../repositories/AdminUserRepository');
const DEFAULT_SALT_ROUNDS = parseInt(process.env.ADMIN_PASSWORD_SALT_ROUNDS || '12', 10);
const printUsage = () => {
console.log('Usage: node src/scripts/createAdminUser.js --username <name> --password <pass> [--role <role>] [--require-password-change]');
console.log('Example: npm run create-admin -- --username admin2 --password "SehrSicher123!"');
};
const parseArgs = () => {
const rawArgs = process.argv.slice(2);
const args = {};
for (let i = 0; i < rawArgs.length; i++) {
const arg = rawArgs[i];
if (!arg.startsWith('--')) {
continue;
}
const key = arg.slice(2);
const next = rawArgs[i + 1];
if (!next || next.startsWith('--')) {
args[key] = true;
} else {
args[key] = next;
i++;
}
}
return args;
};
const validateInput = ({ username, password }) => {
if (!username || !username.trim()) {
throw new Error('USERNAME_REQUIRED');
}
if (!password || password.length < 10) {
throw new Error('PASSWORD_TOO_WEAK');
}
};
(async () => {
const args = parseArgs();
if (args.help || args.h) {
printUsage();
process.exit(0);
}
try {
validateInput(args);
} catch (validationError) {
console.error('⚠️ Validation error:', validationError.message);
printUsage();
process.exit(1);
}
const normalizedUsername = args.username.trim().toLowerCase();
const role = args.role || 'admin';
const requirePasswordChange = Boolean(args['require-password-change']);
// Skip expensive preview generation for CLI usage
process.env.SKIP_PREVIEW_GENERATION = process.env.SKIP_PREVIEW_GENERATION || '1';
try {
await dbManager.initialize();
const existingUser = await AdminUserRepository.getByUsername(normalizedUsername);
if (existingUser) {
console.error(`❌ Benutzer '${normalizedUsername}' existiert bereits.`);
process.exit(1);
}
const passwordHash = await bcrypt.hash(args.password, DEFAULT_SALT_ROUNDS);
const id = await AdminUserRepository.createAdminUser({
username: normalizedUsername,
passwordHash,
role,
requiresPasswordChange: requirePasswordChange
});
console.log('✅ Admin-Benutzer angelegt:');
console.log(` ID: ${id}`);
console.log(` Username: ${normalizedUsername}`);
console.log(` Rolle: ${role}`);
console.log(` Passwort-Änderung erforderlich: ${requirePasswordChange}`);
} catch (error) {
console.error('❌ Fehler beim Anlegen des Admin-Benutzers:', error.message);
process.exit(1);
} finally {
try {
await dbManager.close();
} catch (closeError) {
console.warn('⚠️ Datenbank konnte nicht sauber geschlossen werden:', closeError.message);
}
}
})();
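The ADMIN_PASSWORD_SALT_ROUNDS knob trades hashing time for brute-force resistance. A quick way to gauge the cost on your own hardware (a sketch using bcryptjs, which the script already depends on):

```js
// Sketch: time a single hash at the default 12 rounds.
const bcrypt = require('bcryptjs');

console.time('bcrypt-12-rounds');
bcrypt.hashSync('SehrSicher123!', 12);
console.timeEnd('bcrypt-12-rounds'); // typically a few hundred ms at 12 rounds
```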

View File

@@ -1,255 +0,0 @@
#!/usr/bin/env node
/**
 * Test script for the automatic deletion feature
 *
 * This script helps with testing the cleanup feature:
 * 1. Lists all unapproved groups
 * 2. Allows backdating groups (for tests)
 * 3. Previews which groups would be deleted
 * 4. Triggers a manual cleanup
 */
const readline = require('readline');
const http = require('http');
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
const API_BASE = 'http://localhost:5001';
// Helper: HTTP Request
function makeRequest(path, method = 'GET', data = null) {
return new Promise((resolve, reject) => {
const url = new URL(path, API_BASE);
const options = {
hostname: url.hostname,
port: url.port,
path: url.pathname + url.search,
method: method,
headers: {
'Content-Type': 'application/json'
}
};
const req = http.request(options, (res) => {
let body = '';
res.on('data', chunk => body += chunk);
res.on('end', () => {
try {
resolve(JSON.parse(body));
} catch (e) {
resolve(body);
}
});
});
req.on('error', reject);
if (data) {
req.write(JSON.stringify(data));
}
req.end();
});
}
// Helper: run a SQL query against the dev database
async function execSQL(query) {
// Executed directly via docker exec
const { exec } = require('child_process');
return new Promise((resolve, reject) => {
exec(
`docker compose -f docker/dev/docker-compose.yml exec -T backend-dev sqlite3 /usr/src/app/src/data/db/image_uploader.db "${query}"`,
(error, stdout, stderr) => {
if (error) {
reject(error);
return;
}
resolve(stdout);
}
);
});
}
// Render the menu
function showMenu() {
console.log('\n========================================');
console.log(' CLEANUP TEST MENÜ');
console.log('========================================');
console.log('1. Zeige alle nicht-freigegebenen Gruppen');
console.log('2. Gruppe um X Tage zurückdatieren (für Tests)');
console.log('3. Preview: Welche Gruppen würden gelöscht?');
console.log('4. Cleanup JETZT ausführen');
console.log('5. Lösch-Historie anzeigen');
console.log('0. Beenden');
console.log('========================================\n');
}
// Option 1: list unapproved groups
async function showUnapprovedGroups() {
console.log('\n📋 Lade nicht-freigegebene Gruppen...\n');
const result = await execSQL(
'SELECT group_id, year, name, approved, datetime(upload_date) as upload_date, ' +
'CAST((julianday(\'now\') - julianday(upload_date)) AS INTEGER) as days_old ' +
'FROM groups WHERE approved = 0 ORDER BY upload_date DESC;'
);
console.log('Gruppe ID | Jahr | Name | Freigegeben | Upload-Datum | Tage alt');
console.log('------------- | ---- | --------- | ----------- | -------------------- | --------');
console.log(result || 'Keine nicht-freigegebenen Gruppen gefunden.');
}
// Option 2: backdate a group (for tests)
async function backdateGroup() {
await showUnapprovedGroups();
rl.question('\nGruppe ID zum Zurückdatieren: ', async (groupId) => {
if (!groupId) {
console.log('❌ Keine Gruppe ID angegeben');
return mainMenu();
}
rl.question('Um wie viele Tage zurückdatieren? (z.B. 8 für 8 Tage alt): ', async (days) => {
const daysNum = parseInt(days);
if (isNaN(daysNum) || daysNum < 1) {
console.log('❌ Ungültige Anzahl Tage');
return mainMenu();
}
try {
await execSQL(
`UPDATE groups SET upload_date = datetime('now', '-${daysNum} days') WHERE group_id = '${groupId}';`
);
console.log(`✅ Gruppe ${groupId} wurde um ${daysNum} Tage zurückdatiert`);
// Show the updated data
const result = await execSQL(
`SELECT group_id, datetime(upload_date) as upload_date, ` +
`CAST((julianday('now') - julianday(upload_date)) AS INTEGER) as days_old ` +
`FROM groups WHERE group_id = '${groupId}';`
);
console.log('\nAktualisierte Daten:');
console.log(result);
} catch (error) {
console.error('❌ Fehler:', error.message);
}
mainMenu();
});
});
}
// Option 3: Preview Cleanup
async function previewCleanup() {
console.log('\n🔍 Lade Cleanup Preview...\n');
try {
const result = await makeRequest('/api/admin/cleanup/preview');
if (result.groupsToDelete === 0) {
console.log('✅ Keine Gruppen würden gelöscht (alle sind < 7 Tage alt oder freigegeben)');
} else {
console.log(`⚠️ ${result.groupsToDelete} Gruppe(n) würden gelöscht:\n`);
result.groups.forEach(group => {
console.log(` - ${group.group_id} (${group.year}) - ${group.name}`);
console.log(` Upload: ${group.uploadDate}`);
console.log(` Tage seit Upload: ${Math.abs(group.daysUntilDeletion)}`);
console.log('');
});
}
} catch (error) {
console.error('❌ Fehler:', error.message);
}
mainMenu();
}
// Option 4: run the cleanup
async function executeCleanup() {
console.log('\n⚠ ACHTUNG: Dies wird Gruppen permanent löschen!\n');
rl.question('Cleanup wirklich ausführen? (ja/nein): ', async (answer) => {
if (answer.toLowerCase() !== 'ja') {
console.log('❌ Abgebrochen');
return mainMenu();
}
console.log('\n🔄 Führe Cleanup aus...\n');
try {
const result = await makeRequest('/api/admin/cleanup/trigger', 'POST');
console.log('✅ Cleanup abgeschlossen!');
console.log(` Gelöschte Gruppen: ${result.result.deletedGroups}`);
console.log(` Fehler: ${result.result.failedGroups || 0}`);
} catch (error) {
console.error('❌ Fehler:', error.message);
}
mainMenu();
});
}
// Option 5: deletion history
async function showDeletionLog() {
console.log('\n📜 Lösch-Historie (letzte 10 Einträge)...\n');
try {
const result = await makeRequest('/api/admin/deletion-log?limit=10');
if (result.deletions.length === 0) {
console.log('Keine Einträge im Lösch-Log');
} else {
console.log('Gruppe ID | Jahr | Bilder | Upload-Datum | Gelöscht am | Grund');
console.log('------------- | ---- | ------ | -------------------- | -------------------- | -----');
result.deletions.forEach(d => {
console.log(
`${d.group_id.padEnd(13)} | ${String(d.year).padEnd(4)} | ${String(d.image_count).padEnd(6)} | ` +
`${d.upload_date.substring(0, 19)} | ${d.deleted_at.substring(0, 19)} | ${d.deletion_reason}`
);
});
}
} catch (error) {
console.error('❌ Fehler:', error.message);
}
mainMenu();
}
// Main menu
function mainMenu() {
showMenu();
rl.question('Wähle eine Option: ', async (choice) => {
switch (choice) {
case '1':
await showUnapprovedGroups();
mainMenu();
break;
case '2':
await backdateGroup();
break;
case '3':
await previewCleanup();
break;
case '4':
await executeCleanup();
break;
case '5':
await showDeletionLog();
break;
case '0':
console.log('\n👋 Auf Wiedersehen!\n');
rl.close();
process.exit(0);
break;
default:
console.log('❌ Ungültige Option');
mainMenu();
}
});
}
// Start
console.log('\n🚀 Cleanup Test Script gestartet\n');
console.log('Hinweis: Stelle sicher, dass der Dev-Server läuft (./dev.sh)');
mainMenu();

View File

@@ -1,21 +1,6 @@
 const express = require('express');
-const fs = require('fs');
-const path = require('path');
 const initiateResources = require('./utils/initiate-resources');
 const dbManager = require('./database/DatabaseManager');
-const SchedulerService = require('./services/SchedulerService');
-const TelegramNotificationService = require('./services/TelegramNotificationService');
-// Singleton instance of the Telegram service
-const telegramService = new TelegramNotificationService();
-// Dev: Swagger UI (mount only in non-production) — require lazily
-let swaggerUi = null;
-try {
-  swaggerUi = require('swagger-ui-express');
-} catch (e) {
-  swaggerUi = null;
-}
 class Server {
   _port;
@@ -24,42 +9,10 @@ class Server {
   constructor(port) {
     this._port = port;
     this._app = express();
-    const trustProxyHops = Number.parseInt(process.env.TRUST_PROXY_HOPS ?? '1', 10);
-    if (!Number.isNaN(trustProxyHops) && trustProxyHops > 0) {
-      this._app.set('trust proxy', trustProxyHops);
-    }
   }
-  async generateOpenApiSpecIfNeeded() {
-    if (process.env.NODE_ENV === 'production' || process.env.NODE_ENV === 'test') {
-      return;
-    }
-    try {
-      const generateOpenApi = require('./generate-openapi');
-      console.log('🔄 Generating OpenAPI specification...');
-      await generateOpenApi();
-      console.log('✓ OpenAPI spec generated');
-    } catch (error) {
-      console.warn('⚠️ Failed to generate OpenAPI spec:', error.message);
-    }
-  }
-  loadSwaggerDocument() {
-    try {
-      const specPath = path.join(__dirname, '..', 'docs', 'openapi.json');
-      const raw = fs.readFileSync(specPath, 'utf8');
-      return JSON.parse(raw);
-    } catch (error) {
-      console.warn('⚠️ Unable to load Swagger document:', error.message);
-      return null;
-    }
-  }
   async start() {
     try {
-      await this.generateOpenApiSpecIfNeeded();
       // Initialize the database
       console.log('🔄 Initialisiere Datenbank...');
       await dbManager.initialize();
@@ -68,60 +21,15 @@ class Server {
       // Start the Express server
       initiateResources(this._app);
       this._app.use('/upload', express.static( __dirname + '/upload'));
-      this._app.use('/api/previews', express.static( __dirname + '/data/previews'));
-      // Mount Swagger UI in dev only when available
-      if (process.env.NODE_ENV !== 'production' && swaggerUi) {
-        const swaggerDocument = this.loadSwaggerDocument();
-        if (swaggerDocument) {
-          this._app.use('/api/docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));
-          console.log(' Swagger UI mounted at /api/docs (dev only)');
-        }
-      }
       this._app.listen(this._port, () => {
         console.log(`✅ Server läuft auf Port ${this._port}`);
         console.log(`📊 SQLite Datenbank aktiv`);
-        // Store the SchedulerService on the app for admin endpoints
-        this._app.set('schedulerService', SchedulerService);
-        // Start the scheduler for automatic cleanup
-        SchedulerService.start();
-        // Test the Telegram service (optional, only in development when enabled)
-        if (process.env.NODE_ENV === 'development'
-            && process.env.TELEGRAM_SEND_TEST_ON_START === 'true'
-            && telegramService.isAvailable()) {
-          telegramService.sendTestMessage()
-            .catch(err => console.error('[Telegram] Test message failed:', err.message));
-        }
       });
     } catch (error) {
       console.error('💥 Fehler beim Serverstart:', error);
       process.exit(1);
     }
   }
-  // Expose app for testing
-  getApp() {
-    return this._app;
-  }
-  // Initialize app without listening (for tests)
-  async initializeApp() {
-    await dbManager.initialize();
-    initiateResources(this._app);
-    this._app.use('/upload', express.static( __dirname + '/upload'));
-    this._app.use('/api/previews', express.static( __dirname + '/data/previews'));
-    if (process.env.NODE_ENV !== 'production' && swaggerUi) {
-      const swaggerDocument = this.loadSwaggerDocument();
-      if (swaggerDocument) {
-        this._app.use('/api/docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));
-      }
-    }
-    return this._app;
-  }
 }
 module.exports = Server;
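The `trust proxy` wiring removed here matters whenever the app sits behind a reverse proxy: without it, `req.ip` (and thus any IP-based rate limiting) sees the proxy's address instead of the client's. A self-contained sketch of the effect, assuming a single proxy hop as the default of '1' implies:

```js
// Sketch: with one trusted hop, Express reads the client IP from X-Forwarded-For.
const express = require('express');
const app = express();
app.set('trust proxy', 1);

app.get('/ip', (req, res) => res.send(req.ip));
// curl -H 'X-Forwarded-For: 203.0.113.7' http://localhost:3000/ip  -> 203.0.113.7
app.listen(3000);
```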

View File

@@ -1,164 +0,0 @@
const bcrypt = require('bcryptjs');
const crypto = require('crypto');
const AdminUserRepository = require('../repositories/AdminUserRepository');
const DEFAULT_SALT_ROUNDS = parseInt(process.env.ADMIN_PASSWORD_SALT_ROUNDS || '12', 10);
class AdminAuthService {
async needsInitialSetup() {
const count = await AdminUserRepository.countActiveAdmins();
return count === 0;
}
async createInitialAdmin({ username, password }) {
const trimmedUsername = (username || '').trim().toLowerCase();
if (!trimmedUsername) {
throw new Error('USERNAME_REQUIRED');
}
if (!password || password.length < 10) {
throw new Error('PASSWORD_TOO_WEAK');
}
const needsSetup = await this.needsInitialSetup();
if (!needsSetup) {
throw new Error('SETUP_ALREADY_COMPLETED');
}
const passwordHash = await this.hashPassword(password);
const id = await AdminUserRepository.createAdminUser({
username: trimmedUsername,
passwordHash,
role: 'admin',
requiresPasswordChange: false
});
return {
id,
username: trimmedUsername,
role: 'admin'
};
}
async createAdminUser({ username, password, role = 'admin', requiresPasswordChange = false }) {
const trimmedUsername = (username || '').trim().toLowerCase();
if (!trimmedUsername) {
throw new Error('USERNAME_REQUIRED');
}
if (!password || password.length < 10) {
throw new Error('PASSWORD_TOO_WEAK');
}
const normalizedRole = (role || 'admin').trim().toLowerCase();
const targetRole = normalizedRole || 'admin';
const existing = await AdminUserRepository.getByUsername(trimmedUsername);
if (existing) {
throw new Error('USERNAME_IN_USE');
}
const passwordHash = await this.hashPassword(password);
const id = await AdminUserRepository.createAdminUser({
username: trimmedUsername,
passwordHash,
role: targetRole,
requiresPasswordChange
});
return {
id,
username: trimmedUsername,
role: targetRole,
requiresPasswordChange: Boolean(requiresPasswordChange)
};
}
async changePassword({ userId, currentPassword, newPassword }) {
if (!userId) {
throw new Error('USER_NOT_FOUND');
}
if (!currentPassword) {
throw new Error('CURRENT_PASSWORD_REQUIRED');
}
if (!newPassword || newPassword.length < 10) {
throw new Error('PASSWORD_TOO_WEAK');
}
const userRecord = await AdminUserRepository.getById(userId);
if (!userRecord || !userRecord.is_active) {
throw new Error('USER_NOT_FOUND');
}
const matches = await bcrypt.compare(currentPassword || '', userRecord.password_hash);
if (!matches) {
throw new Error('INVALID_CURRENT_PASSWORD');
}
const passwordHash = await this.hashPassword(newPassword);
await AdminUserRepository.updatePassword(userRecord.id, passwordHash, false);
return {
id: userRecord.id,
username: userRecord.username,
role: userRecord.role,
requiresPasswordChange: false
};
}
async hashPassword(password) {
return bcrypt.hash(password, DEFAULT_SALT_ROUNDS);
}
async verifyCredentials(username, password) {
const normalizedUsername = (username || '').trim().toLowerCase();
const user = await AdminUserRepository.getByUsername(normalizedUsername);
if (!user || !user.is_active) {
return null;
}
const matches = await bcrypt.compare(password || '', user.password_hash);
if (!matches) {
return null;
}
await AdminUserRepository.recordSuccessfulLogin(user.id);
return {
id: user.id,
username: user.username,
role: user.role,
requiresPasswordChange: Boolean(user.requires_password_change)
};
}
generateCsrfToken() {
return crypto.randomBytes(32).toString('hex');
}
startSession(req, user) {
const csrfToken = this.generateCsrfToken();
req.session.user = {
id: user.id,
username: user.username,
role: user.role,
requiresPasswordChange: user.requiresPasswordChange || false
};
req.session.csrfToken = csrfToken;
return csrfToken;
}
async destroySession(req) {
return new Promise((resolve, reject) => {
if (!req.session) {
return resolve();
}
req.session.destroy((err) => {
if (err) {
reject(err);
} else {
resolve();
}
});
});
}
}
module.exports = new AdminAuthService();
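startSession stores exactly one CSRF token per session; the matching check lives in middlewares/csrf.js, which this compare does not show. A minimal sketch of what `requireCsrf` plausibly does (an assumption, not the repository's actual implementation):

```js
// Sketch: compare the X-CSRF-Token header against the token stored in the session.
const requireCsrf = (req, res, next) => {
  const headerToken = req.get('X-CSRF-Token');
  if (!req.session?.csrfToken || headerToken !== req.session.csrfToken) {
    return res.status(403).json({ error: 'Invalid CSRF token' });
  }
  next();
};

module.exports = { requireCsrf };
```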

View File

@@ -1,190 +0,0 @@
const GroupRepository = require('../repositories/GroupRepository');
const DeletionLogRepository = require('../repositories/DeletionLogRepository');
const fs = require('fs').promises;
const path = require('path');
class GroupCleanupService {
constructor() {
this.CLEANUP_DAYS = 7; // groups older than 7 days are deleted
}
// Finds all groups that need to be deleted
async findGroupsForDeletion() {
try {
const groups = await GroupRepository.findUnapprovedGroupsOlderThan(this.CLEANUP_DAYS);
console.log(`[Cleanup] Found ${groups.length} groups for deletion (older than ${this.CLEANUP_DAYS} days)`);
return groups;
} catch (error) {
console.error('[Cleanup] Error finding groups for deletion:', error);
throw error;
}
}
// Deletes a group completely (DB + files)
async deleteGroupCompletely(groupId) {
try {
console.log(`[Cleanup] Starting deletion of group: ${groupId}`);
// Fetch statistics before deleting
const stats = await GroupRepository.getGroupStatistics(groupId);
if (!stats) {
console.warn(`[Cleanup] Group ${groupId} not found, skipping`);
return null;
}
// Delete the group from the DB (CASCADE removes its images automatically)
const deleteResult = await GroupRepository.deleteGroupCompletely(groupId);
// Delete the physical files
const deletedFiles = await this.deletePhysicalFiles(deleteResult.imagePaths);
console.log(`[Cleanup] Deleted group ${groupId}: ${deletedFiles.success} files deleted, ${deletedFiles.failed} failed`);
// Create a deletion log entry
await this.logDeletion({
...stats,
deletedFiles: deletedFiles
});
return {
groupId: groupId,
imagesDeleted: deleteResult.deletedImages,
filesDeleted: deletedFiles.success
};
} catch (error) {
console.error(`[Cleanup] Error deleting group ${groupId}:`, error);
throw error;
}
}
// Deletes physical files (images + previews)
async deletePhysicalFiles(imagePaths) {
const dataDir = path.join(__dirname, '../data');
let successCount = 0;
let failedCount = 0;
for (const image of imagePaths) {
// Delete the original image
if (image.file_path) {
const fullPath = path.join(dataDir, image.file_path);
try {
await fs.unlink(fullPath);
successCount++;
} catch (error) {
if (error.code !== 'ENOENT') { // ignore "file not found"
console.warn(`[Cleanup] Failed to delete file: ${fullPath}`, error.message);
failedCount++;
}
}
}
// Delete the preview image
if (image.preview_path) {
const previewPath = path.join(dataDir, image.preview_path);
try {
await fs.unlink(previewPath);
successCount++;
} catch (error) {
if (error.code !== 'ENOENT') {
console.warn(`[Cleanup] Failed to delete preview: ${previewPath}`, error.message);
failedCount++;
}
}
}
}
return {
success: successCount,
failed: failedCount
};
}
// Creates an entry in the deletion log
async logDeletion(groupData) {
try {
await DeletionLogRepository.createDeletionEntry({
groupId: groupData.groupId,
year: groupData.year,
imageCount: groupData.imageCount,
uploadDate: groupData.uploadDate,
deletionReason: 'auto_cleanup_7days',
totalFileSize: groupData.totalFileSize
});
console.log(`[Cleanup] Logged deletion of group ${groupData.groupId}`);
} catch (error) {
console.error('[Cleanup] Error logging deletion:', error);
// Do not throw - the deletion log is not critical
}
}
// Main method: performs the complete cleanup
async performScheduledCleanup() {
const startTime = Date.now();
console.log('');
console.log('========================================');
console.log('[Cleanup] Starting scheduled cleanup...');
console.log(`[Cleanup] Date: ${new Date().toISOString()}`);
console.log('========================================');
try {
const groupsToDelete = await this.findGroupsForDeletion();
if (groupsToDelete.length === 0) {
console.log('[Cleanup] No groups to delete. Cleanup complete.');
console.log('========================================');
return {
success: true,
deletedGroups: 0,
message: 'No groups to delete'
};
}
let successCount = 0;
let failedCount = 0;
for (const group of groupsToDelete) {
try {
await this.deleteGroupCompletely(group.group_id);
successCount++;
} catch (error) {
console.error(`[Cleanup] Failed to delete group ${group.group_id}:`, error);
failedCount++;
}
}
const duration = ((Date.now() - startTime) / 1000).toFixed(2);
console.log('');
console.log(`[Cleanup] Cleanup complete!`);
console.log(`[Cleanup] Deleted: ${successCount} groups`);
console.log(`[Cleanup] Failed: ${failedCount} groups`);
console.log(`[Cleanup] Duration: ${duration}s`);
console.log('========================================');
return {
success: true,
deletedGroups: successCount,
failedGroups: failedCount,
duration: duration
};
} catch (error) {
console.error('[Cleanup] Scheduled cleanup failed:', error);
console.log('========================================');
throw error;
}
}
// Calculates the remaining days until deletion
getDaysUntilDeletion(uploadDate) {
const upload = new Date(uploadDate);
const deleteDate = new Date(upload);
deleteDate.setDate(deleteDate.getDate() + this.CLEANUP_DAYS);
const now = new Date();
const diffTime = deleteDate - now;
const diffDays = Math.ceil(diffTime / (1000 * 60 * 60 * 24));
return Math.max(0, diffDays);
}
}
module.exports = new GroupCleanupService();
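A worked example of the deadline arithmetic in getDaysUntilDeletion: an upload from five days ago with CLEANUP_DAYS = 7 leaves two days, and Math.ceil rounds partial days up:

```js
// Sketch: exercising getDaysUntilDeletion with a five-day-old upload date.
const GroupCleanupService = require('./GroupCleanupService');

const fiveDaysAgo = new Date(Date.now() - 5 * 24 * 60 * 60 * 1000).toISOString();
console.log(GroupCleanupService.getDaysUntilDeletion(fiveDaysAgo)); // -> 2
```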

View File

@@ -1,121 +0,0 @@
const cron = require('node-cron');
const GroupCleanupService = require('./GroupCleanupService');
const TelegramNotificationService = require('./TelegramNotificationService');
class SchedulerService {
constructor() {
this.tasks = [];
this.telegramService = new TelegramNotificationService();
}
start() {
// Don't start scheduler in test mode
if (process.env.NODE_ENV === 'test') {
console.log('[Scheduler] Skipped in test mode');
return;
}
console.log('[Scheduler] Starting scheduled tasks...');
// Cleanup job: every day at 10:00 AM
const cleanupTask = cron.schedule('0 10 * * *', async () => {
console.log('[Scheduler] Running daily cleanup at 10:00 AM...');
try {
await GroupCleanupService.performScheduledCleanup();
} catch (error) {
console.error('[Scheduler] Cleanup task failed:', error);
}
}, {
scheduled: true,
timezone: "Europe/Berlin" // Anpassen nach Bedarf
});
this.tasks.push(cleanupTask);
// Telegram warning job: every day at 09:00 AM (1 hour before the cleanup)
const telegramWarningTask = cron.schedule('0 9 * * *', async () => {
console.log('[Scheduler] Running daily Telegram deletion warning at 09:00 AM...');
try {
if (this.telegramService.isAvailable()) {
const groupsForDeletion = await GroupCleanupService.findGroupsForDeletion();
if (groupsForDeletion && groupsForDeletion.length > 0) {
await this.telegramService.sendDeletionWarning(groupsForDeletion);
console.log(`[Scheduler] Sent deletion warning for ${groupsForDeletion.length} groups`);
} else {
console.log('[Scheduler] No groups pending deletion');
}
} else {
console.log('[Scheduler] Telegram service not available, skipping warning');
}
} catch (error) {
console.error('[Scheduler] Telegram warning task failed:', error);
}
}, {
scheduled: true,
timezone: "Europe/Berlin"
});
this.tasks.push(telegramWarningTask);
console.log('✓ Scheduler started:');
console.log(' - Daily cleanup at 10:00 AM (Europe/Berlin)');
console.log(' - Daily Telegram warning at 09:00 AM (Europe/Berlin)');
// For development: manual trigger
if (process.env.NODE_ENV === 'development') {
console.log('📝 Development Mode: Use GroupCleanupService.performScheduledCleanup() to trigger manually');
}
}
stop() {
console.log('[Scheduler] Stopping all scheduled tasks...');
this.tasks.forEach(task => task.stop());
this.tasks = [];
console.log('✓ Scheduler stopped');
}
// For development: manual cleanup trigger
async triggerCleanupNow() {
console.log('[Scheduler] Manual cleanup triggered...');
return await GroupCleanupService.performScheduledCleanup();
}
// For development: manual Telegram warning trigger
async triggerTelegramWarningNow() {
console.log('[Scheduler] Manual Telegram warning triggered...');
try {
if (!this.telegramService.isAvailable()) {
console.log('[Scheduler] Telegram service not available');
return { success: false, message: 'Telegram service not available' };
}
const groupsForDeletion = await GroupCleanupService.findGroupsForDeletion();
if (!groupsForDeletion || groupsForDeletion.length === 0) {
console.log('[Scheduler] No groups pending deletion');
return { success: true, message: 'No groups pending deletion', groupCount: 0 };
}
await this.telegramService.sendDeletionWarning(groupsForDeletion);
console.log(`[Scheduler] Sent deletion warning for ${groupsForDeletion.length} groups`);
return {
success: true,
message: `Warning sent for ${groupsForDeletion.length} groups`,
groupCount: groupsForDeletion.length,
groups: groupsForDeletion.map(g => ({
groupId: g.groupId,
name: g.name,
year: g.year,
uploadDate: g.uploadDate
}))
};
} catch (error) {
console.error('[Scheduler] Manual Telegram warning failed:', error);
return { success: false, message: error.message };
}
}
}
module.exports = new SchedulerService();
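For readers unfamiliar with cron notation, the five fields are minute, hour, day-of-month, month, and day-of-week; node-cron also ships a validate helper for checking expressions (a small sketch):

```js
// Sketch: decode the two schedules used above.
const cron = require('node-cron');

console.log(cron.validate('0 10 * * *')); // true - minute 0, hour 10, every day (cleanup)
console.log(cron.validate('0 9 * * *'));  // true - minute 0, hour 9, every day (warning)
```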

View File

@@ -1,312 +0,0 @@
const TelegramBot = require('node-telegram-bot-api');
/**
 * TelegramNotificationService
 *
 * Sends automatic notifications via Telegram to the workshop group.
 *
 * Features:
 * - Upload notifications (phase 3)
 * - Consent change notifications (phase 4)
 * - Group deletion notifications (phase 4)
 * - Daily deletion warnings (phase 5)
 *
 * Phase 2: backend service integration (basic setup)
 */
class TelegramNotificationService {
constructor() {
this.enabled = process.env.TELEGRAM_ENABLED === 'true';
this.botToken = process.env.TELEGRAM_BOT_TOKEN;
this.chatId = process.env.TELEGRAM_CHAT_ID;
this.bot = null;
if (this.enabled) {
this.initialize();
} else {
console.log('[Telegram] Service disabled (TELEGRAM_ENABLED=false)');
}
}
/**
 * Initializes the Telegram bot
 */
initialize() {
try {
if (!this.botToken) {
throw new Error('TELEGRAM_BOT_TOKEN is not defined');
}
if (!this.chatId) {
throw new Error('TELEGRAM_CHAT_ID is not defined');
}
this.bot = new TelegramBot(this.botToken, { polling: false });
console.log('[Telegram] Service initialized successfully');
} catch (error) {
console.error('[Telegram] Initialization failed:', error.message);
this.enabled = false;
}
}
/**
 * Checks whether the service is available
 */
isAvailable() {
return this.enabled && this.bot !== null;
}
/**
 * Sends a test message
 *
 * @returns {Promise<Object>} Telegram API response
 */
async sendTestMessage() {
if (!this.isAvailable()) {
console.log('[Telegram] Service not available, skipping test message');
return null;
}
try {
const timestamp = new Date().toLocaleString('de-DE', {
year: 'numeric',
month: '2-digit',
day: '2-digit',
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
const message = `
🤖 Telegram Service Test
Service erfolgreich initialisiert!
Zeitstempel: ${timestamp}
Environment: ${process.env.NODE_ENV || 'development'}
---
Dieser Bot sendet automatische Benachrichtigungen für den Image Uploader.
`.trim();
const response = await this.bot.sendMessage(this.chatId, message);
console.log('[Telegram] Test message sent successfully');
return response;
} catch (error) {
console.error('[Telegram] Failed to send test message:', error.message);
throw error;
}
}
/**
 * Phase 3: sends a notification for a new upload
 *
 * @param {Object} groupData - group information
 * @param {string} groupData.name - name of the uploader
 * @param {number} groupData.year - year of the group
 * @param {string} groupData.title - title of the group
 * @param {number} groupData.imageCount - number of uploaded images
 * @param {boolean} groupData.workshopConsent - workshop consent status
 * @param {Array<string>} groupData.socialMediaConsents - social media platforms
 * @param {string} groupData.token - management token
 */
async sendUploadNotification(groupData) {
if (!this.isAvailable()) {
console.log('[Telegram] Service not available, skipping upload notification');
return null;
}
try {
const workshopIcon = groupData.workshopConsent ? '✅' : '❌';
const socialMediaIcons = this.formatSocialMediaIcons(groupData.socialMediaConsents);
const message = `
📸 Neuer Upload!
Uploader: ${groupData.name}
Bilder: ${groupData.imageCount}
Gruppe: ${groupData.year} - ${groupData.title}
Workshop: ${workshopIcon} ${groupData.workshopConsent ? 'Ja' : 'Nein'}
Social Media: ${socialMediaIcons || '❌ Keine'}
🔗 Zur Freigabe: ${this.getAdminUrl()}
`.trim();
const response = await this.bot.sendMessage(this.chatId, message);
console.log(`[Telegram] Upload notification sent for group: ${groupData.title}`);
return response;
} catch (error) {
console.error('[Telegram] Failed to send upload notification:', error.message);
// Log the error but do not throw - the upload must not fail because of Telegram
return null;
}
}
/**
 * Phase 4: sends a notification when a consent changes
 *
 * @param {Object} changeData - change information
 * @param {string} changeData.name - name of the uploader
 * @param {number} changeData.year - year
 * @param {string} changeData.title - title
 * @param {string} changeData.consentType - 'workshop' or 'social_media'
 * @param {string} changeData.action - 'revoke' or 'restore'
 * @param {string} [changeData.platform] - platform name (social_media only)
 * @param {boolean} [changeData.newValue] - new value (workshop only)
 */
async sendConsentChangeNotification(changeData) {
if (!this.isAvailable()) {
console.log('[Telegram] Service not available, skipping consent change notification');
return null;
}
try {
const { name, year, title, consentType, action, platform, newValue } = changeData;
let changeDescription;
if (consentType === 'workshop') {
const icon = newValue ? '✅' : '❌';
const status = newValue ? 'Ja' : 'Nein';
const actionText = action === 'revoke' ? 'widerrufen' : 'wiederhergestellt';
changeDescription = `Workshop-Consent ${actionText}\nNeuer Status: ${icon} ${status}`;
} else if (consentType === 'social_media') {
const actionText = action === 'revoke' ? 'widerrufen' : 'erteilt';
changeDescription = `Social Media Consent ${actionText}\nPlattform: ${platform}`;
}
const message = `
Consent-Änderung
Gruppe: ${year} - ${title}
Uploader: ${name}
${changeDescription}
🔗 Details: ${this.getAdminUrl()}
`.trim();
const response = await this.bot.sendMessage(this.chatId, message);
console.log(`[Telegram] Consent change notification sent for: ${title}`);
return response;
} catch (error) {
console.error('[Telegram] Failed to send consent change notification:', error.message);
throw error;
}
}
/**
 * Phase 4: sends a notification when a user deletes a group
 *
 * @param {Object} groupData - group information
 */
async sendGroupDeletedNotification(groupData) {
if (!this.isAvailable()) {
console.log('[Telegram] Service not available, skipping group deleted notification');
return null;
}
try {
const message = `
User-Änderung
Aktion: Gruppe gelöscht
Gruppe: ${groupData.year} - ${groupData.title}
Uploader: ${groupData.name}
Bilder: ${groupData.imageCount}
User hat Gruppe selbst über Management-Link gelöscht
`.trim();
const response = await this.bot.sendMessage(this.chatId, message);
console.log(`[Telegram] Group deleted notification sent for: ${groupData.title}`);
return response;
} catch (error) {
console.error('[Telegram] Failed to send group deleted notification:', error.message);
return null;
}
}
/**
 * Phase 5: sends the daily warning for upcoming deletions
 *
 * @param {Array<Object>} groupsList - list of groups due for deletion
 */
async sendDeletionWarning(groupsList) {
if (!this.isAvailable()) {
console.log('[Telegram] Service not available, skipping deletion warning');
return null;
}
if (!groupsList || groupsList.length === 0) {
console.log('[Telegram] No groups pending deletion, skipping warning');
return null;
}
try {
let groupsText = groupsList.map((group, index) => {
const uploadDate = new Date(group.created_at).toLocaleDateString('de-DE');
return `${index + 1}. ${group.year} - ${group.title}
Uploader: ${group.name}
Bilder: ${group.imageCount}
Hochgeladen: ${uploadDate}`;
}).join('\n\n');
const message = `
Löschung in 24 Stunden!
Folgende Gruppen werden morgen automatisch gelöscht:
${groupsText}
💡 Jetzt freigeben oder Freigabe bleibt aus!
🔗 Zur Moderation: ${this.getAdminUrl()}
`.trim();
const response = await this.bot.sendMessage(this.chatId, message);
console.log(`[Telegram] Deletion warning sent for ${groupsList.length} groups`);
return response;
} catch (error) {
console.error('[Telegram] Failed to send deletion warning:', error.message);
return null;
}
}
// =========================================================================
// Helper Methods
// =========================================================================
/**
* Formats social media consents as icons
*
* @param {Array<string>} platforms - List of platforms
* @returns {string} Formatted string with icons
*/
formatSocialMediaIcons(platforms) {
if (!platforms || platforms.length === 0) {
return '';
}
const iconMap = {
'facebook': '📘 Facebook',
'instagram': '📷 Instagram',
'tiktok': '🎵 TikTok'
};
return platforms.map(p => iconMap[p.toLowerCase()] || p).join(', ');
}
/**
* Returns the admin URL
*
* @returns {string} Admin panel URL
*/
getAdminUrl() {
const host = process.env.INTERNAL_HOST || 'internal.hobbyhimmel.de';
const isProduction = process.env.NODE_ENV === 'production';
const protocol = isProduction ? 'https' : 'http';
const port = isProduction ? '' : ':3000';
return `${protocol}://${host}${port}/moderation`;
}
}
module.exports = TelegramNotificationService;
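
The service is intentionally fire-and-forget for its callers. A minimal sketch of how an upload handler might invoke it, assuming the field names used in the integration tests further down in this diff (illustrative only, not the actual route code):

```js
// Hypothetical excerpt from an upload route; names are assumptions, not the real handler.
const TelegramNotificationService = require('../services/TelegramNotificationService');

const telegramService = new TelegramNotificationService();

function notifyAfterBatchUpload(group) {
  // Deliberately not awaited by the request handler and never rethrown,
  // so a Telegram outage cannot fail the upload itself.
  telegramService
    .sendUploadNotification({
      name: group.name,
      year: group.year,
      title: group.title,
      imageCount: group.images.length,
      workshopConsent: group.workshopConsent,
      socialMediaConsents: group.socialMediaConsents,
      token: group.managementToken
    })
    .catch(err => console.error('[Telegram] notification failed:', err.message));
}
```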

View File

@ -24,8 +24,6 @@ function formatGroupDetail(groupRow, images) {
name: groupRow.name,
uploadDate: groupRow.upload_date,
approved: Boolean(groupRow.approved),
- display_in_workshop: Boolean(groupRow.display_in_workshop),
- consent_timestamp: groupRow.consent_timestamp || null,
images: images.map(img => ({
id: img.id,
fileName: img.file_name,
@ -34,8 +32,7 @@ function formatGroupDetail(groupRow, images) {
previewPath: img.preview_path || null,
uploadOrder: img.upload_order,
fileSize: img.file_size || null,
- mimeType: img.mime_type || null,
- imageDescription: img.image_description || null
+ mimeType: img.mime_type || null
})),
imageCount: images.length
};

View File

@ -3,7 +3,7 @@ const { renderRoutes } = require('../routes/index');
const removeImages = require('./remove-images');
const fs = require('fs');
const path = require('path');
- const { endpoints, UPLOAD_FS_DIR, PREVIEW_FS_DIR } = require('../constants');
+ const { endpoints, UPLOAD_FS_DIR } = require('../constants');
const initiateResources = (app) => {
@ -11,23 +11,12 @@ const initiateResources = (app) => {
renderRoutes(app);
- // Ensure upload images directory exists
- // In test mode, UPLOAD_FS_DIR is absolute (/tmp/...), otherwise relative (data/images)
- const imagesDir = path.isAbsolute(UPLOAD_FS_DIR)
-   ? UPLOAD_FS_DIR
-   : path.join(__dirname, '..', UPLOAD_FS_DIR);
+ // Ensure upload images directory exists: backend/src/../data/images
+ const imagesDir = path.join(__dirname, '..', UPLOAD_FS_DIR);
if (!fs.existsSync(imagesDir)){
fs.mkdirSync(imagesDir, { recursive: true });
}
- // Ensure preview images directory exists
- const previewsDir = path.isAbsolute(PREVIEW_FS_DIR)
-   ? PREVIEW_FS_DIR
-   : path.join(__dirname, '..', PREVIEW_FS_DIR);
- if (!fs.existsSync(previewsDir)){
-   fs.mkdirSync(previewsDir, { recursive: true });
- }
// Ensure db directory exists: backend/src/../data/db
const dbDir = path.join(__dirname, '..', 'data', 'db');
if (!fs.existsSync(dbDir)){

View File

@ -1,48 +0,0 @@
const { getRequest } = require('../testServer');
const { getAdminSession } = require('../utils/adminSession');
describe('Admin Auth Middleware', () => {
describe('Without Session', () => {
it('should reject requests without session cookie', async () => {
const response = await getRequest()
.get('/api/admin/deletion-log')
.expect(403);
expect(response.body).toHaveProperty('error');
expect(response.body).toHaveProperty('reason', 'SESSION_REQUIRED');
});
});
describe('With Valid Session', () => {
let adminSession;
beforeAll(async () => {
adminSession = await getAdminSession();
});
it('should allow access with valid session', async () => {
const response = await adminSession.agent
.get('/api/admin/deletion-log')
.expect(200);
expect(response.body).toHaveProperty('success');
});
it('should allow access to multiple admin endpoints', async () => {
const endpoints = [
'/api/admin/deletion-log',
'/api/admin/rate-limiter/stats',
'/api/admin/management-audit',
'/api/admin/groups'
];
for (const endpoint of endpoints) {
const response = await adminSession.agent
.get(endpoint)
.expect(200);
expect(response.body).toBeDefined();
}
});
});
});

View File

@ -1,67 +0,0 @@
const { getRequest } = require('../testServer');
describe('Admin API - Security', () => {
describe('Authentication & Authorization', () => {
const adminEndpoints = [
{ method: 'get', path: '/api/admin/deletion-log' },
{ method: 'get', path: '/api/admin/deletion-log/csv' },
{ method: 'post', path: '/api/admin/cleanup/run' },
{ method: 'get', path: '/api/admin/cleanup/status' },
{ method: 'get', path: '/api/admin/rate-limiter/stats' },
{ method: 'get', path: '/api/admin/management-audit' },
{ method: 'get', path: '/api/admin/groups' },
{ method: 'put', path: '/api/admin/groups/test-id/approve' },
{ method: 'delete', path: '/api/admin/groups/test-id' }
];
adminEndpoints.forEach(({ method, path }) => {
it(`should protect ${method.toUpperCase()} ${path} without authorization`, async () => {
await getRequest()
[method](path)
.expect(403);
});
});
});
describe('GET /api/admin/deletion-log', () => {
it('should require authorization header', async () => {
const response = await getRequest()
.get('/api/admin/deletion-log')
.expect(403);
expect(response.body).toHaveProperty('reason', 'SESSION_REQUIRED');
});
});
describe('GET /api/admin/cleanup/status', () => {
it('should require authorization', async () => {
await getRequest()
.get('/api/admin/cleanup/status')
.expect(403);
});
});
describe('GET /api/admin/rate-limiter/stats', () => {
it('should require authorization', async () => {
await getRequest()
.get('/api/admin/rate-limiter/stats')
.expect(403);
});
});
describe('GET /api/admin/groups', () => {
it('should require authorization', async () => {
await getRequest()
.get('/api/admin/groups')
.expect(403);
});
it('should validate query parameters with authorization', async () => {
// This test would require a logged-in admin session
// For now, we just ensure the endpoint rejects unauthenticated access
await getRequest()
.get('/api/admin/groups?status=invalid_status')
.expect(403); // Still 403 without auth, but validates endpoint exists
});
});
});

View File

@ -1,121 +0,0 @@
const { getRequest } = require('../testServer');
const { getAdminSession } = require('../utils/adminSession');
describe('Consent Management API', () => {
let adminSession;
beforeAll(async () => {
adminSession = await getAdminSession();
});
describe('GET /api/admin/social-media/platforms', () => {
it('should return list of social media platforms', async () => {
const response = await adminSession.agent
.get('/api/admin/social-media/platforms')
.expect('Content-Type', /json/)
.expect(200);
expect(Array.isArray(response.body)).toBe(true);
});
it('should include platform metadata', async () => {
const response = await adminSession.agent
.get('/api/admin/social-media/platforms');
if (response.body.length > 0) {
const platform = response.body[0];
expect(platform).toHaveProperty('id');
expect(platform).toHaveProperty('platform_name');
expect(platform).toHaveProperty('display_name');
}
});
});
describe('GET /api/admin/groups/:groupId/consents', () => {
it('should return 404 for non-existent group', async () => {
await adminSession.agent
.get('/api/admin/groups/non-existent-group/consents')
.expect(404);
});
it('should reject path traversal attempts', async () => {
await adminSession.agent
.get('/api/admin/groups/../../../etc/passwd/consents')
.expect(404);
});
});
describe('POST /api/admin/groups/:groupId/consents', () => {
it('should require admin authorization', async () => {
await getRequest()
.post('/api/admin/groups/test-group-id/consents')
.send({ consents: {} })
.expect(403); // No auth header
});
it('should require valid consent data with auth', async () => {
const response = await adminSession.agent
.post('/api/admin/groups/test-group-id/consents')
.set('X-CSRF-Token', adminSession.csrfToken)
.send({})
.expect(400);
expect(response.body).toHaveProperty('error');
});
});
describe('GET /api/admin/groups/by-consent', () => {
it('should return filtered groups', async () => {
const response = await adminSession.agent
.get('/api/admin/groups/by-consent')
.expect('Content-Type', /json/)
.expect(200);
expect(response.body).toHaveProperty('groups');
expect(response.body).toHaveProperty('count');
expect(Array.isArray(response.body.groups)).toBe(true);
});
it('should accept platform filter', async () => {
const response = await adminSession.agent
.get('/api/admin/groups/by-consent?platformId=1')
.expect(200);
expect(response.body).toHaveProperty('groups');
expect(response.body).toHaveProperty('filters');
});
it('should accept consent filter', async () => {
const response = await adminSession.agent
.get('/api/admin/groups/by-consent?displayInWorkshop=true')
.expect(200);
expect(response.body).toHaveProperty('groups');
expect(response.body.filters).toHaveProperty('displayInWorkshop', true);
});
});
describe('GET /api/admin/consents/export', () => {
it('should require admin authorization', async () => {
await getRequest()
.get('/api/admin/consents/export')
.expect(403);
});
it('should return CSV format with auth and format parameter', async () => {
const response = await adminSession.agent
.get('/api/admin/consents/export?format=csv')
.expect(200);
expect(response.headers['content-type']).toMatch(/text\/csv/);
expect(response.headers['content-disposition']).toMatch(/attachment/);
});
it('should include CSV header', async () => {
const response = await adminSession.agent
.get('/api/admin/consents/export?format=csv');
expect(response.text).toContain('group_id');
});
});
});

View File

@ -1,68 +0,0 @@
const { getRequest } = require('../testServer');
describe('System Migration API', () => {
describe('GET /api/system/migration/health', () => {
it('should return 200 with healthy status', async () => {
const response = await getRequest()
.get('/api/system/migration/health')
.expect('Content-Type', /json/)
.expect(200);
expect(response.body).toHaveProperty('database');
expect(response.body.database).toHaveProperty('healthy');
expect(response.body.database).toHaveProperty('status');
expect(response.body.database.healthy).toBe(true);
});
it('should include database connection status', async () => {
const response = await getRequest()
.get('/api/system/migration/health');
expect(response.body.database).toHaveProperty('healthy');
expect(typeof response.body.database.healthy).toBe('boolean');
expect(response.body.database.status).toBe('OK');
});
});
describe('GET /api/system/migration/status', () => {
it('should return current migration status', async () => {
const response = await getRequest()
.get('/api/system/migration/status')
.expect('Content-Type', /json/)
.expect(200);
expect(response.body).toHaveProperty('database');
expect(response.body).toHaveProperty('json');
expect(response.body).toHaveProperty('migrated');
expect(response.body).toHaveProperty('needsMigration');
expect(typeof response.body.migrated).toBe('boolean');
});
it('should return migration metadata', async () => {
const response = await getRequest()
.get('/api/system/migration/status');
expect(response.body.database).toHaveProperty('groups');
expect(response.body.database).toHaveProperty('images');
expect(response.body.database).toHaveProperty('initialized');
expect(typeof response.body.database.groups).toBe('number');
expect(typeof response.body.database.images).toBe('number');
});
});
describe('POST /api/system/migration/migrate', () => {
it('should require admin authorization', async () => {
await getRequest()
.post('/api/system/migration/migrate')
.expect(403); // Should be protected by auth
});
});
describe('POST /api/system/migration/rollback', () => {
it('should require admin authorization', async () => {
await getRequest()
.post('/api/system/migration/rollback')
.expect(403);
});
});
});

View File

@ -1,183 +0,0 @@
/**
* Integration tests for Telegram upload notifications
*
* Phase 3: Upload notifications
*
* These tests verify the integration between the upload route and the Telegram service
*/
const path = require('path');
const fs = require('fs');
const { getRequest } = require('../testServer');
describe('Telegram Upload Notifications (Integration)', () => {
let TelegramNotificationService;
let sendUploadNotificationSpy;
beforeAll(() => {
// Load TelegramNotificationService so its prototype can be spied on
TelegramNotificationService = require('../../src/services/TelegramNotificationService');
});
beforeEach(() => {
// Create a spy on sendUploadNotification
sendUploadNotificationSpy = jest.spyOn(TelegramNotificationService.prototype, 'sendUploadNotification')
.mockResolvedValue({ message_id: 42 });
// Always report isAvailable() as true in tests
jest.spyOn(TelegramNotificationService.prototype, 'isAvailable')
.mockReturnValue(true);
});
afterEach(() => {
// Restore all spies
jest.restoreAllMocks();
});
describe('POST /api/upload/batch', () => {
const testImagePath = path.join(__dirname, '../utils/test-image.jpg');
// Create the test image if it does not exist
beforeAll(() => {
if (!fs.existsSync(testImagePath)) {
// Create a 1x1 px JPEG
const buffer = Buffer.from([
0xFF, 0xD8, 0xFF, 0xE0, 0x00, 0x10, 0x4A, 0x46,
0x49, 0x46, 0x00, 0x01, 0x01, 0x00, 0x00, 0x01,
0x00, 0x01, 0x00, 0x00, 0xFF, 0xDB, 0x00, 0x43,
0x00, 0x08, 0x06, 0x06, 0x07, 0x06, 0x05, 0x08,
0x07, 0x07, 0x07, 0x09, 0x09, 0x08, 0x0A, 0x0C,
0x14, 0x0D, 0x0C, 0x0B, 0x0B, 0x0C, 0x19, 0x12,
0x13, 0x0F, 0x14, 0x1D, 0x1A, 0x1F, 0x1E, 0x1D,
0x1A, 0x1C, 0x1C, 0x20, 0x24, 0x2E, 0x27, 0x20,
0x22, 0x2C, 0x23, 0x1C, 0x1C, 0x28, 0x37, 0x29,
0x2C, 0x30, 0x31, 0x34, 0x34, 0x34, 0x1F, 0x27,
0x39, 0x3D, 0x38, 0x32, 0x3C, 0x2E, 0x33, 0x34,
0x32, 0xFF, 0xC0, 0x00, 0x0B, 0x08, 0x00, 0x01,
0x00, 0x01, 0x01, 0x01, 0x11, 0x00, 0xFF, 0xC4,
0x00, 0x14, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x03, 0xFF, 0xC4, 0x00, 0x14,
0x10, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xFF, 0xDA, 0x00, 0x08, 0x01, 0x01,
0x00, 0x00, 0x3F, 0x00, 0x37, 0xFF, 0xD9
]);
fs.writeFileSync(testImagePath, buffer);
}
});
it('should send a Telegram notification on a successful upload', async () => {
const response = await getRequest()
.post('/api/upload/batch')
.field('year', '2024')
.field('title', 'Test Upload')
.field('name', 'Test User')
.field('consents', JSON.stringify({
workshopConsent: true,
socialMediaConsents: ['instagram', 'tiktok']
}))
.attach('images', testImagePath);
// Upload should succeed
expect(response.status).toBe(200);
expect(response.body.message).toBe('Batch upload successful');
// Wait briefly for the async Telegram call
await new Promise(resolve => setTimeout(resolve, 150));
// The Telegram service should have been called
expect(sendUploadNotificationSpy).toHaveBeenCalledWith(
expect.objectContaining({
name: 'Test User',
year: 2024,
title: 'Test Upload',
imageCount: 1,
workshopConsent: true,
socialMediaConsents: ['instagram', 'tiktok']
})
);
});
it('should not fail the upload when the Telegram service is unavailable', async () => {
// Restore mocks and set isAvailable to false
jest.restoreAllMocks();
jest.spyOn(TelegramNotificationService.prototype, 'isAvailable')
.mockReturnValue(false);
sendUploadNotificationSpy = jest.spyOn(TelegramNotificationService.prototype, 'sendUploadNotification');
const response = await getRequest()
.post('/api/upload/batch')
.field('year', '2024')
.field('title', 'Test Upload')
.field('name', 'Test User')
.field('consents', JSON.stringify({
workshopConsent: false,
socialMediaConsents: []
}))
.attach('images', testImagePath);
// Upload should still succeed
expect(response.status).toBe(200);
expect(response.body.message).toBe('Batch upload successful');
// Telegram should not have been called
expect(sendUploadNotificationSpy).not.toHaveBeenCalled();
});
it('should not fail the upload when the Telegram notification fails', async () => {
sendUploadNotificationSpy.mockRejectedValueOnce(
new Error('Telegram API Error')
);
const response = await getRequest()
.post('/api/upload/batch')
.field('year', '2024')
.field('title', 'Test Upload')
.field('name', 'Test User')
.field('consents', JSON.stringify({
workshopConsent: true,
socialMediaConsents: []
}))
.attach('images', testImagePath);
// Upload should still succeed
expect(response.status).toBe(200);
expect(response.body.message).toBe('Batch upload successful');
// Wait for the async error handling
await new Promise(resolve => setTimeout(resolve, 150));
// The Telegram call was attempted
expect(sendUploadNotificationSpy).toHaveBeenCalled();
});
it('should pass the correct data to the Telegram service', async () => {
const response = await getRequest()
.post('/api/upload/batch')
.field('year', '2025')
.field('title', 'Schweißkurs November')
.field('name', 'Max Mustermann')
.field('consents', JSON.stringify({
workshopConsent: true,
socialMediaConsents: ['facebook', 'instagram']
}))
.attach('images', testImagePath)
.attach('images', testImagePath);
expect(response.status).toBe(200);
await new Promise(resolve => setTimeout(resolve, 150));
expect(sendUploadNotificationSpy).toHaveBeenCalledWith({
name: 'Max Mustermann',
year: 2025,
title: 'Schweißkurs November',
imageCount: 2,
workshopConsent: true,
socialMediaConsents: ['facebook', 'instagram'],
token: expect.any(String)
});
});
});
});

View File

@ -1,58 +0,0 @@
const { getRequest } = require('../testServer');
const path = require('path');
describe('Upload API', () => {
describe('POST /api/upload', () => {
it('should reject upload without files', async () => {
const response = await getRequest()
.post('/api/upload')
.field('groupName', 'TestGroup')
.expect('Content-Type', /json/)
.expect(400);
expect(response.body).toHaveProperty('error');
expect(response.body.error).toMatch(/datei|file/i);
});
it('should accept upload with file and groupName', async () => {
// Create a simple test buffer (1x1 transparent PNG)
const testImageBuffer = Buffer.from(
'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg==',
'base64'
);
const response = await getRequest()
.post('/api/upload')
.attach('file', testImageBuffer, 'test.png')
.field('groupName', 'TestGroup');
// Log error for debugging
if (response.status !== 200) {
console.log('Upload failed:', response.body);
}
expect(response.status).toBe(200);
expect(response.body).toHaveProperty('filePath');
expect(response.body).toHaveProperty('fileName');
expect(response.body).toHaveProperty('groupId');
expect(response.body).toHaveProperty('groupName', 'TestGroup');
});
it('should use default group name if not provided', async () => {
const testImageBuffer = Buffer.from(
'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+M9QDwADhgGAWjR9awAAAABJRU5ErkJggg==',
'base64'
);
const response = await getRequest()
.post('/api/upload')
.attach('file', testImageBuffer, 'test.png')
.expect('Content-Type', /json/)
.expect(200);
expect(response.body).toHaveProperty('groupName');
// Should use default: 'Unnamed Group'
expect(response.body.groupName).toBeTruthy();
});
});
});

View File

@ -1,4 +0,0 @@
process.env.NODE_ENV = 'test';
process.env.PORT = process.env.PORT || '5001';
process.env.ADMIN_SESSION_SECRET = process.env.ADMIN_SESSION_SECRET || 'test-session-secret';
process.env.SKIP_PREVIEW_GENERATION = process.env.SKIP_PREVIEW_GENERATION || '1';

View File

@ -1,33 +0,0 @@
/**
* Global Setup - Runs ONCE before all test suites
* Initialize test server and database here
*/
const Server = require('../src/server');
module.exports = async () => {
console.log('\n🔧 Global Test Setup - Initializing test server...\n');
// Set test environment variables
process.env.NODE_ENV = 'test';
process.env.PORT = 5001;
process.env.ADMIN_SESSION_SECRET = process.env.ADMIN_SESSION_SECRET || 'test-session-secret';
try {
// Create and initialize server
console.log('Creating server instance...');
const serverInstance = new Server(5001);
console.log('Initializing app...');
const app = await serverInstance.initializeApp();
// Store in global scope for all tests
global.__TEST_SERVER__ = serverInstance;
global.__TEST_APP__ = app;
console.log('✅ Test server initialized successfully\n');
} catch (error) {
console.error('❌ Failed to initialize test server:', error);
throw error;
}
};

View File

@ -1,14 +0,0 @@
/**
* Global Teardown - Runs ONCE after all test suites
* Cleanup resources here
*/
module.exports = async () => {
console.log('\n🧹 Global Test Teardown - Cleaning up...\n');
// Cleanup global references
delete global.__TEST_SERVER__;
delete global.__TEST_APP__;
console.log('✅ Test cleanup complete\n');
};

View File

@ -1,48 +0,0 @@
/**
* Setup file - Runs before EACH test file
* Initialize server singleton here
*/
// Ensure test environment variables are set before any application modules load
process.env.NODE_ENV = process.env.NODE_ENV || 'test';
process.env.PORT = process.env.PORT || 5001;
process.env.ADMIN_SESSION_SECRET = process.env.ADMIN_SESSION_SECRET || 'test-session-secret';
const Server = require('../src/server');
// Singleton pattern - initialize only once
let serverInstance = null;
let app = null;
async function initializeTestServer() {
if (!app) {
console.log('🔧 Initializing test server (one-time)...');
serverInstance = new Server(5001);
app = await serverInstance.initializeApp();
global.__TEST_SERVER__ = serverInstance;
global.__TEST_APP__ = app;
console.log('✅ Test server ready');
}
return app;
}
// Initialize before all tests
beforeAll(async () => {
await initializeTestServer();
});
// Test timeout
jest.setTimeout(10000);
// Suppress logs during tests
global.console = {
...console,
log: jest.fn(),
info: jest.fn(),
debug: jest.fn(),
error: console.error,
warn: console.warn,
};
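
The env bootstrap, global setup/teardown, and per-file setup shown above only take effect if Jest is pointed at them. A jest.config.js along these lines would wire them together; the file paths are assumptions derived from the requires, not the repository's actual config:

```js
// jest.config.js - minimal sketch; paths are assumptions, not the actual configuration.
module.exports = {
  testEnvironment: 'node',
  // Runs before any application module loads: NODE_ENV, PORT, ADMIN_SESSION_SECRET
  setupFiles: ['<rootDir>/test/jest-setup.js'],
  // Run once before/after the whole test run: boot and tear down the shared server
  globalSetup: '<rootDir>/test/globalSetup.js',
  globalTeardown: '<rootDir>/test/globalTeardown.js',
  // Runs before each test file: singleton server init, timeout, log suppression
  setupFilesAfterEnv: ['<rootDir>/test/setup.js']
};
```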

View File

@ -1,50 +0,0 @@
const request = require('supertest');
/**
* Get supertest request instance
* Uses globally initialized server from globalSetup.js
*/
let cachedAgent = null;
function getApp() {
const app = global.__TEST_APP__;
if (!app) {
throw new Error(
'Test server not initialized. This should be handled by globalSetup.js automatically.'
);
}
return app;
}
function getRequest() {
return request(getApp());
}
function getAgent() {
if (!cachedAgent) {
cachedAgent = request.agent(getApp());
}
return cachedAgent;
}
/**
* Legacy compatibility - these are now no-ops
* Server is initialized globally
*/
async function setupTestServer() {
return {
app: global.__TEST_APP__,
serverInstance: global.__TEST_SERVER__
};
}
async function teardownTestServer() {
// No-op - cleanup happens in globalTeardown.js
}
module.exports = {
setupTestServer,
teardownTestServer,
getRequest,
getAgent
};

View File

@ -1,216 +0,0 @@
/**
* Unit tests for TelegramNotificationService
*
* Phase 2: Basic Service Tests
*/
const TelegramNotificationService = require('../../src/services/TelegramNotificationService');
// Mock node-telegram-bot-api completely
jest.mock('node-telegram-bot-api');
describe('TelegramNotificationService', () => {
let originalEnv;
let TelegramBot;
let mockBotInstance;
beforeAll(() => {
TelegramBot = require('node-telegram-bot-api');
});
beforeEach(() => {
// Save the original ENV variables
originalEnv = { ...process.env };
// Set the test ENV
process.env.TELEGRAM_ENABLED = 'true';
process.env.TELEGRAM_BOT_TOKEN = 'test-bot-token-123';
process.env.TELEGRAM_CHAT_ID = '-1001234567890';
// Create a mock bot instance
mockBotInstance = {
sendMessage: jest.fn().mockResolvedValue({
message_id: 42,
chat: { id: -1001234567890 },
text: 'Test'
}),
getMe: jest.fn().mockResolvedValue({
id: 123456,
first_name: 'Test Bot',
username: 'test_bot'
})
};
// Mock TelegramBot constructor
TelegramBot.mockImplementation(() => mockBotInstance);
});
afterEach(() => {
// Restore original ENV
process.env = originalEnv;
});
describe('Initialization', () => {
it('should initialize successfully when TELEGRAM_ENABLED=true', () => {
const service = new TelegramNotificationService();
expect(service.isAvailable()).toBe(true);
expect(TelegramBot).toHaveBeenCalledWith('test-bot-token-123', { polling: false });
});
it('should not initialize when TELEGRAM_ENABLED=false', () => {
process.env.TELEGRAM_ENABLED = 'false';
const service = new TelegramNotificationService();
expect(service.isAvailable()).toBe(false);
});
it('should fail when TELEGRAM_BOT_TOKEN is missing', () => {
delete process.env.TELEGRAM_BOT_TOKEN;
const service = new TelegramNotificationService();
expect(service.isAvailable()).toBe(false);
});
it('should fail when TELEGRAM_CHAT_ID is missing', () => {
delete process.env.TELEGRAM_CHAT_ID;
const service = new TelegramNotificationService();
expect(service.isAvailable()).toBe(false);
});
});
describe('sendTestMessage', () => {
it('should send the test message successfully', async () => {
const service = new TelegramNotificationService();
const result = await service.sendTestMessage();
expect(result).toBeDefined();
expect(result.message_id).toBe(42);
expect(mockBotInstance.sendMessage).toHaveBeenCalledWith(
'-1001234567890',
expect.stringContaining('Telegram Service Test')
);
});
it('should return null when the service is unavailable', async () => {
process.env.TELEGRAM_ENABLED = 'false';
const service = new TelegramNotificationService();
const result = await service.sendTestMessage();
expect(result).toBeNull();
});
it('should throw on Telegram API errors', async () => {
const service = new TelegramNotificationService();
mockBotInstance.sendMessage.mockRejectedValueOnce(new Error('API Error'));
await expect(service.sendTestMessage()).rejects.toThrow('API Error');
});
});
describe('formatSocialMediaIcons', () => {
it('should format social media platforms correctly', () => {
const service = new TelegramNotificationService();
const result = service.formatSocialMediaIcons(['facebook', 'instagram', 'tiktok']);
expect(result).toBe('📘 Facebook, 📷 Instagram, 🎵 TikTok');
});
it('should return an empty string for an empty list', () => {
const service = new TelegramNotificationService();
const result = service.formatSocialMediaIcons([]);
expect(result).toBe('');
});
it('should be case-insensitive', () => {
const service = new TelegramNotificationService();
const result = service.formatSocialMediaIcons(['FACEBOOK', 'Instagram', 'TikTok']);
expect(result).toBe('📘 Facebook, 📷 Instagram, 🎵 TikTok');
});
});
describe('getAdminUrl', () => {
it('should generate the admin URL from PUBLIC_URL', () => {
process.env.PUBLIC_URL = 'https://test.example.com';
const service = new TelegramNotificationService();
const url = service.getAdminUrl();
expect(url).toBe('https://test.example.com/moderation');
});
it('should use the default URL when PUBLIC_URL is not set', () => {
delete process.env.PUBLIC_URL;
const service = new TelegramNotificationService();
const url = service.getAdminUrl();
expect(url).toBe('https://internal.hobbyhimmel.de/moderation');
});
});
describe('sendUploadNotification (Phase 3)', () => {
it('should send the upload notification with correct data', async () => {
const service = new TelegramNotificationService();
const groupData = {
name: 'Max Mustermann',
year: 2024,
title: 'Schweißkurs November',
imageCount: 12,
workshopConsent: true,
socialMediaConsents: ['instagram', 'tiktok'],
token: 'test-token-123'
};
await service.sendUploadNotification(groupData);
expect(mockBotInstance.sendMessage).toHaveBeenCalledWith(
'-1001234567890',
expect.stringContaining('📸 Neuer Upload!')
);
expect(mockBotInstance.sendMessage).toHaveBeenCalledWith(
'-1001234567890',
expect.stringContaining('Max Mustermann')
);
expect(mockBotInstance.sendMessage).toHaveBeenCalledWith(
'-1001234567890',
expect.stringContaining('Bilder: 12')
);
});
it('should return null and not throw on error', async () => {
const service = new TelegramNotificationService();
mockBotInstance.sendMessage.mockRejectedValueOnce(new Error('Network error'));
const groupData = {
name: 'Test User',
year: 2024,
title: 'Test',
imageCount: 5,
workshopConsent: false,
socialMediaConsents: []
};
const result = await service.sendUploadNotification(groupData);
expect(result).toBeNull();
});
});
});

View File

@ -1,148 +0,0 @@
const { requireAdminAuth } = require('../../src/middlewares/auth');
const AdminAuthService = require('../../src/services/AdminAuthService');
const AdminUserRepository = require('../../src/repositories/AdminUserRepository');
const dbManager = require('../../src/database/DatabaseManager');
describe('Auth Middleware Unit Test (Session based)', () => {
let req, res, next;
beforeEach(() => {
req = { session: null };
res = {
status: jest.fn().mockReturnThis(),
json: jest.fn(),
locals: {}
};
next = jest.fn();
});
test('should reject when no session exists', () => {
requireAdminAuth(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({
error: 'Zugriff verweigert',
reason: 'SESSION_REQUIRED'
})
);
expect(next).not.toHaveBeenCalled();
});
test('should reject when session user is missing', () => {
req.session = {};
requireAdminAuth(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({ reason: 'SESSION_REQUIRED' })
);
expect(next).not.toHaveBeenCalled();
});
test('should reject non-admin roles', () => {
req.session = { user: { id: 1, role: 'viewer' } };
requireAdminAuth(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(res.json).toHaveBeenCalledWith(
expect.objectContaining({ reason: 'SESSION_REQUIRED' })
);
expect(next).not.toHaveBeenCalled();
});
test('should pass through for admin sessions and expose user on locals', () => {
const adminUser = { id: 1, role: 'admin', username: 'testadmin' };
req.session = { user: adminUser };
requireAdminAuth(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
expect(res.locals.adminUser).toEqual(adminUser);
});
});
describe('AdminAuthService', () => {
beforeEach(async () => {
await dbManager.run('DELETE FROM admin_users');
});
afterEach(async () => {
await dbManager.run('DELETE FROM admin_users');
});
test('needsInitialSetup reflects admin count', async () => {
await expect(AdminAuthService.needsInitialSetup()).resolves.toBe(true);
await AdminAuthService.createInitialAdmin({
username: 'existing',
password: 'SuperSecure123!'
});
await expect(AdminAuthService.needsInitialSetup()).resolves.toBe(false);
});
test('createInitialAdmin validates input and detects completed setup', async () => {
await expect(
AdminAuthService.createInitialAdmin({ username: '', password: 'SuperSecure123!' })
).rejects.toThrow('USERNAME_REQUIRED');
await expect(
AdminAuthService.createInitialAdmin({ username: 'admin', password: 'short' })
).rejects.toThrow('PASSWORD_TOO_WEAK');
await AdminAuthService.createInitialAdmin({ username: 'seed', password: 'SuperSecure123!' });
await expect(
AdminAuthService.createInitialAdmin({ username: 'admin', password: 'SuperSecure123!' })
).rejects.toThrow('SETUP_ALREADY_COMPLETED');
});
test('createInitialAdmin persists normalized admin when setup allowed', async () => {
const result = await AdminAuthService.createInitialAdmin({
username: 'TestAdmin',
password: 'SuperSecure123!'
});
expect(result.username).toBe('testadmin');
expect(result.role).toBe('admin');
const stored = await AdminUserRepository.getByUsername('testadmin');
expect(stored).toMatchObject({ username: 'testadmin', role: 'admin', is_active: 1 });
});
test('verifyCredentials handles missing users and password mismatches', async () => {
await expect(AdminAuthService.verifyCredentials('admin', 'pw')).resolves.toBeNull();
const hash = await AdminAuthService.hashPassword('SuperSecure123!');
await AdminUserRepository.createAdminUser({
username: 'admin',
passwordHash: hash,
role: 'admin',
requiresPasswordChange: false
});
await expect(AdminAuthService.verifyCredentials('admin', 'wrong')).resolves.toBeNull();
});
test('verifyCredentials returns sanitized user for valid credentials', async () => {
const hash = await AdminAuthService.hashPassword('SuperSecure123!');
await AdminUserRepository.createAdminUser({
username: 'admin',
passwordHash: hash,
role: 'admin',
requiresPasswordChange: true
});
const result = await AdminAuthService.verifyCredentials('admin', 'SuperSecure123!');
expect(result).toEqual({
id: expect.any(Number),
username: 'admin',
role: 'admin',
requiresPasswordChange: true
});
});
});
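
For orientation, a middleware shaped as follows would satisfy the expectations of the first describe block; it is inferred from the tests, not the actual src/middlewares/auth.js:

```js
// Sketch of requireAdminAuth inferred from the unit tests above; the real
// implementation may differ (e.g. in logging or additional role checks).
function requireAdminAuth(req, res, next) {
  const user = req.session && req.session.user;
  if (!user || user.role !== 'admin') {
    return res.status(403).json({
      error: 'Zugriff verweigert',
      reason: 'SESSION_REQUIRED'
    });
  }
  // Expose the authenticated admin to downstream handlers
  res.locals.adminUser = user;
  next();
}
```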

View File

@ -1,153 +0,0 @@
const fs = require('fs');
const GroupRepository = require('../../src/repositories/GroupRepository');
const DeletionLogRepository = require('../../src/repositories/DeletionLogRepository');
const GroupCleanupService = require('../../src/services/GroupCleanupService');
describe('GroupCleanupService', () => {
beforeEach(() => {
jest.clearAllMocks();
});
afterEach(() => {
jest.restoreAllMocks();
});
describe('getDaysUntilDeletion', () => {
const NOW = new Date('2024-01-10T00:00:00Z');
beforeAll(() => {
jest.useFakeTimers();
jest.setSystemTime(NOW);
});
afterAll(() => {
jest.useRealTimers();
});
it('returns remaining days when future deletion date is ahead', () => {
const days = GroupCleanupService.getDaysUntilDeletion(new Date('2024-01-05T00:00:00Z'));
expect(days).toBe(2);
});
it('clamps negative differences to zero', () => {
const days = GroupCleanupService.getDaysUntilDeletion(new Date('2023-12-01T00:00:00Z'));
expect(days).toBe(0);
});
});
describe('deletePhysicalFiles', () => {
it('counts successful deletions and ignores missing files', async () => {
const unlinkMock = jest.spyOn(fs.promises, 'unlink');
unlinkMock
.mockResolvedValueOnce()
.mockRejectedValueOnce(Object.assign(new Error('missing'), { code: 'ENOENT' }))
.mockRejectedValueOnce(new Error('boom'))
.mockResolvedValueOnce();
const result = await GroupCleanupService.deletePhysicalFiles([
{ file_path: 'images/one.jpg', preview_path: 'previews/one.jpg' },
{ file_path: 'images/two.jpg', preview_path: 'previews/two.jpg' }
]);
expect(result).toEqual({ success: 2, failed: 1 });
expect(unlinkMock).toHaveBeenCalledTimes(4);
});
});
describe('findGroupsForDeletion', () => {
it('fetches unapproved groups older than default threshold', async () => {
const groups = [{ group_id: 'abc' }];
const findSpy = jest
.spyOn(GroupRepository, 'findUnapprovedGroupsOlderThan')
.mockResolvedValue(groups);
const result = await GroupCleanupService.findGroupsForDeletion();
expect(findSpy).toHaveBeenCalledWith(GroupCleanupService.CLEANUP_DAYS);
expect(result).toBe(groups);
});
});
describe('deleteGroupCompletely', () => {
it('returns null when statistics are missing', async () => {
jest.spyOn(GroupRepository, 'getGroupStatistics').mockResolvedValue(null);
const deleteSpy = jest.spyOn(GroupRepository, 'deleteGroupCompletely').mockResolvedValue({});
const result = await GroupCleanupService.deleteGroupCompletely('missing-group');
expect(result).toBeNull();
expect(deleteSpy).not.toHaveBeenCalled();
});
it('removes group, files and logs deletion', async () => {
jest.spyOn(GroupRepository, 'getGroupStatistics').mockResolvedValue({
groupId: 'group-1',
year: 2024,
imageCount: 3,
uploadDate: '2024-01-01',
totalFileSize: 1234
});
jest.spyOn(GroupRepository, 'deleteGroupCompletely').mockResolvedValue({
imagePaths: [{ file_path: 'images/a.jpg', preview_path: 'previews/a.jpg' }],
deletedImages: 3
});
const deleteFilesSpy = jest
.spyOn(GroupCleanupService, 'deletePhysicalFiles')
.mockResolvedValue({ success: 2, failed: 0 });
const logSpy = jest.spyOn(GroupCleanupService, 'logDeletion').mockResolvedValue();
const result = await GroupCleanupService.deleteGroupCompletely('group-1');
expect(deleteFilesSpy).toHaveBeenCalledWith([{ file_path: 'images/a.jpg', preview_path: 'previews/a.jpg' }]);
expect(logSpy).toHaveBeenCalledWith(
expect.objectContaining({ groupId: 'group-1', imageCount: 3, totalFileSize: 1234 })
);
expect(result).toEqual({ groupId: 'group-1', imagesDeleted: 3, filesDeleted: 2 });
});
});
describe('logDeletion', () => {
it('swallows repository errors so cleanup continues', async () => {
jest.spyOn(DeletionLogRepository, 'createDeletionEntry').mockRejectedValue(new Error('db down'));
await expect(
GroupCleanupService.logDeletion({ groupId: 'g1', year: 2024, imageCount: 1, uploadDate: '2024-01-01' })
).resolves.toBeUndefined();
});
});
describe('performScheduledCleanup', () => {
it('returns early when there is nothing to delete', async () => {
const findSpy = jest.spyOn(GroupCleanupService, 'findGroupsForDeletion').mockResolvedValue([]);
const result = await GroupCleanupService.performScheduledCleanup();
expect(findSpy).toHaveBeenCalled();
expect(result).toEqual({
success: true,
deletedGroups: 0,
message: 'No groups to delete'
});
});
it('keeps track of successes and failures', async () => {
const findSpy = jest
.spyOn(GroupCleanupService, 'findGroupsForDeletion')
.mockResolvedValue([{ group_id: 'g1' }, { group_id: 'g2' }]);
const deleteSpy = jest
.spyOn(GroupCleanupService, 'deleteGroupCompletely')
.mockResolvedValueOnce()
.mockRejectedValueOnce(new Error('boom'));
const result = await GroupCleanupService.performScheduledCleanup();
expect(findSpy).toHaveBeenCalled();
expect(deleteSpy).toHaveBeenCalledTimes(2);
expect(result.success).toBe(true);
expect(result.deletedGroups).toBe(1);
expect(result.failedGroups).toBe(1);
expect(result.duration).toBeDefined();
});
});
});
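
The fake-timer tests pin down the day arithmetic. A sketch that satisfies both expectations, assuming the argument is the upload date and CLEANUP_DAYS is 7 (implied by the 2024-01-05 → 2 days case):

```js
// Sketch inferred from the tests above; the constants are assumptions, not the real service.
const CLEANUP_DAYS = 7;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function getDaysUntilDeletion(uploadDate) {
  const deletionDate = new Date(uploadDate.getTime() + CLEANUP_DAYS * MS_PER_DAY);
  // Ceil so a partial remaining day still counts; clamp past dates to zero
  const remaining = Math.ceil((deletionDate.getTime() - Date.now()) / MS_PER_DAY);
  return Math.max(0, remaining);
}
```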

View File

@ -1,112 +0,0 @@
const { formatGroupListRow, formatGroupDetail } = require('../../src/utils/groupFormatter');
describe('groupFormatter', () => {
describe('formatGroupListRow', () => {
it('maps snake_case columns to camelCase dto', () => {
const row = {
group_id: 'foo',
year: 2024,
title: 'Title',
description: 'Desc',
name: 'Alice',
upload_date: '2024-01-01',
approved: 1,
image_count: '5',
preview_image: 'path/to/thumb.jpg'
};
expect(formatGroupListRow(row)).toEqual({
groupId: 'foo',
year: 2024,
title: 'Title',
description: 'Desc',
name: 'Alice',
uploadDate: '2024-01-01',
approved: true,
imageCount: 5,
previewImage: 'path/to/thumb.jpg'
});
});
it('provides sane defaults when optional values missing', () => {
const row = {
group_id: 'bar',
year: 2023,
title: 'Other',
description: null,
name: null,
upload_date: '2023-12-24',
approved: 0,
image_count: null,
preview_image: undefined
};
expect(formatGroupListRow(row)).toEqual({
groupId: 'bar',
year: 2023,
title: 'Other',
description: null,
name: null,
uploadDate: '2023-12-24',
approved: false,
imageCount: 0,
previewImage: null
});
});
});
describe('formatGroupDetail', () => {
it('maps nested image rows and flags', () => {
const group = {
group_id: 'foo',
year: 2024,
title: 'Title',
description: 'Desc',
name: 'Alice',
upload_date: '2024-01-01',
approved: 0,
display_in_workshop: 1,
consent_timestamp: null
};
const images = [
{
id: 1,
file_name: 'one.png',
original_name: 'one.png',
file_path: 'images/one.png',
preview_path: null,
upload_order: 1,
file_size: null,
mime_type: 'image/png',
image_description: 'desc'
}
];
expect(formatGroupDetail(group, images)).toEqual({
groupId: 'foo',
year: 2024,
title: 'Title',
description: 'Desc',
name: 'Alice',
uploadDate: '2024-01-01',
approved: false,
display_in_workshop: true,
consent_timestamp: null,
images: [
{
id: 1,
fileName: 'one.png',
originalName: 'one.png',
filePath: 'images/one.png',
previewPath: null,
uploadOrder: 1,
fileSize: null,
mimeType: 'image/png',
imageDescription: 'desc'
}
],
imageCount: 1
});
});
});
});

View File

@ -1,267 +0,0 @@
/**
* Unit tests for the hostGate middleware
* Tests host-based access control
*/
// Set the ENV BEFORE requiring the module
process.env.ENABLE_HOST_RESTRICTION = 'true';
process.env.PUBLIC_HOST = 'public.example.com';
process.env.INTERNAL_HOST = 'internal.example.com';
process.env.NODE_ENV = 'development';
let hostGate;
// Helper to create mock request with headers
const createMockRequest = (hostname, path = '/') => {
return {
path,
get: (headerName) => {
if (headerName.toLowerCase() === 'x-forwarded-host') {
return hostname;
}
if (headerName.toLowerCase() === 'host') {
return hostname;
}
return null;
}
};
};
describe('Host Gate Middleware', () => {
let req, res, next;
let originalEnv;
beforeAll(() => {
// Save the original env
originalEnv = { ...process.env };
// Load the module AFTER the env setup
hostGate = require('../../../src/middlewares/hostGate');
});
beforeEach(() => {
// Mock response object
res = {
status: jest.fn().mockReturnThis(),
json: jest.fn()
};
// Mock next function
next = jest.fn();
// Reset req for each test
req = null;
// Setup Environment
process.env.ENABLE_HOST_RESTRICTION = 'true';
process.env.PUBLIC_HOST = 'public.example.com';
process.env.INTERNAL_HOST = 'internal.example.com';
process.env.NODE_ENV = 'development'; // NOT 'test' to enable restrictions
});
afterEach(() => {
jest.clearAllMocks();
});
afterAll(() => {
// Restore the original env
process.env = originalEnv;
});
describe('Host Detection', () => {
test('should detect public host from X-Forwarded-Host header', () => {
req = createMockRequest('public.example.com');
hostGate(req, res, next);
expect(req.isPublicHost).toBe(true);
expect(req.isInternalHost).toBe(false);
expect(req.requestSource).toBe('public');
});
test('should detect internal host from X-Forwarded-Host header', () => {
req = createMockRequest('internal.example.com');
hostGate(req, res, next);
expect(req.isPublicHost).toBe(false);
expect(req.isInternalHost).toBe(true);
expect(req.requestSource).toBe('internal');
});
test('should fallback to Host header if X-Forwarded-Host not present', () => {
req = createMockRequest('public.example.com');
hostGate(req, res, next);
expect(req.isPublicHost).toBe(true);
});
test('should handle localhost as internal host', () => {
req = createMockRequest('localhost:3000');
hostGate(req, res, next);
expect(req.isInternalHost).toBe(true);
expect(req.isPublicHost).toBe(false);
});
test('should strip port from hostname', () => {
req = createMockRequest('public.example.com:8080');
hostGate(req, res, next);
expect(req.isPublicHost).toBe(true);
});
});
describe('Route Protection', () => {
test('should block admin routes on public host', () => {
req = createMockRequest('public.example.com', '/api/admin/deletion-log');
hostGate(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
expect(res.json).toHaveBeenCalledWith({
error: 'Not available on public host',
message: 'This endpoint is only available on the internal network'
});
expect(next).not.toHaveBeenCalled();
});
test('should block groups routes on public host', () => {
req = createMockRequest('public.example.com', '/api/groups');
hostGate(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
});
test('should block slideshow routes on public host', () => {
req = createMockRequest('public.example.com', '/api/slideshow');
hostGate(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
});
test('should block migration routes on public host', () => {
req = createMockRequest('public.example.com', '/api/migration/start');
hostGate(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
});
test('should block auth login on public host', () => {
req = createMockRequest('public.example.com', '/api/auth/login');
hostGate(req, res, next);
expect(res.status).toHaveBeenCalledWith(403);
});
});
describe('Allowed Routes', () => {
test('should allow upload route on public host', () => {
req = createMockRequest('public.example.com', '/api/upload');
hostGate(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
test('should allow manage routes on public host', () => {
req = createMockRequest('public.example.com', '/api/manage/abc-123');
hostGate(req, res, next);
expect(next).toHaveBeenCalled();
});
test('should allow preview routes on public host', () => {
req = createMockRequest('public.example.com', '/api/previews/image.jpg');
hostGate(req, res, next);
expect(next).toHaveBeenCalled();
});
test('should allow consent routes on public host', () => {
req = createMockRequest('public.example.com', '/api/consent');
hostGate(req, res, next);
expect(next).toHaveBeenCalled();
});
test('should allow all routes on internal host', () => {
req = createMockRequest('internal.example.com', '/api/admin/deletion-log');
hostGate(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
});
});
describe('Feature Flags', () => {
test('should bypass restriction when NODE_ENV is test and not explicitly enabled', () => {
// Reload module with test environment
delete require.cache[require.resolve('../../../src/middlewares/hostGate')];
process.env.NODE_ENV = 'test';
process.env.ENABLE_HOST_RESTRICTION = 'false'; // Explicitly disabled
const hostGateTest = require('../../../src/middlewares/hostGate');
req = createMockRequest('public.example.com', '/api/admin/test');
hostGateTest(req, res, next);
expect(next).toHaveBeenCalled();
expect(res.status).not.toHaveBeenCalled();
expect(req.isInternalHost).toBe(true);
// Restore
delete require.cache[require.resolve('../../../src/middlewares/hostGate')];
process.env.NODE_ENV = 'development';
process.env.ENABLE_HOST_RESTRICTION = 'true';
});
test('should work in test environment when explicitly enabled', () => {
// Reload module with test environment BUT explicitly enabled
delete require.cache[require.resolve('../../../src/middlewares/hostGate')];
process.env.NODE_ENV = 'test';
process.env.ENABLE_HOST_RESTRICTION = 'true'; // Explicitly enabled
const hostGateTest = require('../../../src/middlewares/hostGate');
req = createMockRequest('public.example.com', '/api/admin/test');
hostGateTest(req, res, next);
// Should block because explicitly enabled
expect(res.status).toHaveBeenCalledWith(403);
expect(next).not.toHaveBeenCalled();
// Restore
delete require.cache[require.resolve('../../../src/middlewares/hostGate')];
process.env.NODE_ENV = 'development';
process.env.ENABLE_HOST_RESTRICTION = 'true';
});
});
describe('Request Source Tracking', () => {
test('should set requestSource to "public" for public host', () => {
req = createMockRequest('public.example.com', '/api/upload');
hostGate(req, res, next);
expect(req.requestSource).toBe('public');
});
test('should set requestSource to "internal" for internal host', () => {
req = createMockRequest('internal.example.com', '/api/admin/test');
hostGate(req, res, next);
expect(req.requestSource).toBe('internal');
});
test('should set requestSource to "internal" when restrictions disabled', () => {
// Reload module with disabled restriction
delete require.cache[require.resolve('../../../src/middlewares/hostGate')];
process.env.ENABLE_HOST_RESTRICTION = 'false';
const hostGateDisabled = require('../../../src/middlewares/hostGate');
req = createMockRequest('anything.example.com', '/api/test');
hostGateDisabled(req, res, next);
expect(req.requestSource).toBe('internal');
// Restore
delete require.cache[require.resolve('../../../src/middlewares/hostGate')];
process.env.ENABLE_HOST_RESTRICTION = 'true';
});
});
});
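
Read as a specification, these tests pin down the middleware's observable behavior. A compact sketch consistent with them, with the allow-list and env handling inferred rather than copied from src/middlewares/hostGate.js:

```js
// Sketch inferred from the tests above; the real hostGate.js may differ in detail.
const PUBLIC_ALLOWED_PREFIXES = ['/api/upload', '/api/manage', '/api/previews', '/api/consent'];

module.exports = function hostGate(req, res, next) {
  if (process.env.ENABLE_HOST_RESTRICTION !== 'true') {
    req.isPublicHost = false;
    req.isInternalHost = true;
    req.requestSource = 'internal';
    return next();
  }

  // Prefer the proxy header, fall back to Host, and strip any port.
  const rawHost = req.get('X-Forwarded-Host') || req.get('Host') || '';
  const hostname = rawHost.split(':')[0];

  req.isPublicHost = hostname === process.env.PUBLIC_HOST;
  req.isInternalHost = !req.isPublicHost;
  req.requestSource = req.isPublicHost ? 'public' : 'internal';

  if (req.isPublicHost &&
      req.path.startsWith('/api') &&
      !PUBLIC_ALLOWED_PREFIXES.some(prefix => req.path.startsWith(prefix))) {
    return res.status(403).json({
      error: 'Not available on public host',
      message: 'This endpoint is only available on the internal network'
    });
  }
  next();
};
```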

View File

@ -1,73 +0,0 @@
const { getAgent } = require('../testServer');
const DEFAULT_CREDENTIALS = {
username: 'testadmin',
password: 'SuperSicher123!'
};
let cachedSession = null;
async function initializeSession() {
const agent = getAgent();
const statusResponse = await agent
.get('/auth/setup/status')
.expect(200);
let csrfToken;
if (statusResponse.body.needsSetup) {
const setupResponse = await agent
.post('/auth/setup/initial-admin')
.send(DEFAULT_CREDENTIALS)
.expect(201);
csrfToken = setupResponse.body?.csrfToken;
} else {
const loginResponse = await agent
.post('/auth/login')
.send(DEFAULT_CREDENTIALS);
if (loginResponse.status === 409 && loginResponse.body?.error === 'SETUP_REQUIRED') {
// Edge case: setup status may lag behind; perform the setup now
const setupResponse = await agent
.post('/auth/setup/initial-admin')
.send(DEFAULT_CREDENTIALS)
.expect(201);
csrfToken = setupResponse.body?.csrfToken;
} else if (loginResponse.status !== 200) {
throw new Error(
`Failed to log in test admin (status ${loginResponse.status}): ${JSON.stringify(loginResponse.body)}`
);
} else {
csrfToken = loginResponse.body?.csrfToken;
}
}
if (!csrfToken) {
const csrfResponse = await agent.get('/auth/csrf-token').expect(200);
csrfToken = csrfResponse.body.csrfToken;
}
cachedSession = { agent, csrfToken };
return cachedSession;
}
async function getAdminSession() {
if (cachedSession) {
return cachedSession;
}
return initializeSession();
}
async function refreshCsrfToken() {
const session = await getAdminSession();
const csrfResponse = await session.agent.get('/auth/csrf-token').expect(200);
session.csrfToken = csrfResponse.body.csrfToken;
return session.csrfToken;
}
module.exports = {
getAdminSession,
refreshCsrfToken
};
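
A consumer of this helper follows the same shape as the consent-management tests earlier in the diff; a minimal sketch with a hypothetical group id:

```js
const { getAdminSession, refreshCsrfToken } = require('../utils/adminSession');

describe('Example admin flow', () => {
  let adminSession;

  beforeAll(async () => {
    adminSession = await getAdminSession();
  });

  it('performs a mutating admin request', async () => {
    // Re-fetch the token in case a previous request rotated it
    const csrfToken = await refreshCsrfToken();

    await adminSession.agent
      .put('/api/admin/groups/some-group-id/approve') // hypothetical group id
      .set('X-CSRF-Token', csrfToken)
      .send({ approved: true });
    // Assert on the response as appropriate for the fixture data
  });
});
```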

Binary file not shown (before: 159 B).

40
dev.sh
View File

@ -1,40 +0,0 @@
#!/bin/bash
# Development Environment Startup Script
# Starts the Project Image Uploader in development mode
set -euo pipefail
echo "Starting Project Image Uploader - Development Environment"
echo " Frontend: http://localhost:3000"
echo " Backend: http://localhost:5001"
echo ""
# Check if production is running
if docker compose ps | grep -q "image-uploader-frontend.*Up"; then
echo "⚠️ Production environment is running (Port 80)"
echo " Development will run on Port 3000 (no conflict)"
echo ""
fi
# Start development environment
echo "Starting development containers..."
docker compose -f docker/dev/docker-compose.yml up -d
echo ""
echo "Development environment started!"
echo ""
echo "Container Status:"
docker compose -f docker/dev/docker-compose.yml ps
echo ""
echo "Access URLs:"
echo " Frontend (Development): http://localhost:3000"
echo " Backend API (Development): http://localhost:5001"
echo ""
echo "Useful Commands:"
echo " Show logs: docker compose -f docker/dev/docker-compose.yml logs -f"
echo " Stop: docker compose -f docker/dev/docker-compose.yml down"
echo " Restart: docker compose -f docker/dev/docker-compose.yml restart"
echo " Rebuild: docker compose -f docker/dev/docker-compose.yml build --no-cache"
echo ""

View File

@ -0,0 +1,68 @@
# Development override to mount the frontend source into a node container
# and run the React dev server with HMR so you can edit files locally
# without rebuilding images. This file is intended to be used together
# with the existing docker-compose.yml from the repository.
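#
# Typical invocation, assuming this override is saved as docker-compose.dev.yml
# next to the repository's docker-compose.yml (the file name is an assumption):
#   docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d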
services:
image-uploader-frontend:
container_name: image-uploader-frontend-dev
# For dev convenience, nginx needs to bind to port 80 and write its pid file,
# and we also adjust file permissions on the bind-mounted node_modules; run as root in dev.
user: root
# Build and run a development image that contains both nginx and the
# React dev server. nginx will act as a reverse proxy to the dev server
# so the app behaves more like production while HMR still works.
build:
context: ./frontend
dockerfile: Dockerfile.dev
working_dir: /app
# Map host port 3000 to the nginx listener (container:80) so you can open
# http://localhost:3000 and see the nginx-served dev site.
ports:
- "3000:80"
volumes:
- ./frontend:/app:cached
# Keep container node_modules separate so host node_modules doesn't conflict
- node_modules:/app/node_modules
environment:
# Use the backend service name so the dev frontend (running in the same
# compose project) can reach the backend via the internal docker network.
- CHOKIDAR_USEPOLLING=true
- HOST=0.0.0.0
- API_URL=http://image-uploader-backend:5000
- CLIENT_URL=http://localhost:3000
networks:
- npm-nw
- image-uploader-internal
depends_on:
- image-uploader-backend
image-uploader-backend:
container_name: image-uploader-backend-dev
build:
context: ./backend
dockerfile: Dockerfile
working_dir: /usr/src/app
ports:
- "5000:5000"
volumes:
- ./backend:/usr/src/app:cached
- backend_node_modules:/usr/src/app/node_modules
environment:
- NODE_ENV=development
networks:
- image-uploader-internal
command: [ "npm", "run", "server" ]
# The Dockerfile.dev provides a proper CMD that starts nginx and the
# react dev server; no ad-hoc command is required here.
networks:
npm-nw:
external: true
image-uploader-internal:
driver: bridge
volumes:
node_modules:
driver: local
backend_node_modules:
driver: local

40
docker-compose.yml Normal file
View File

@ -0,0 +1,40 @@
services:
image-uploader-frontend:
image: gitea.lan.hobbyhimmel.de/hobbyhimmel/image-uploader-frontend:latest
ports:
- "80:80"
build:
context: ./frontend
dockerfile: ./Dockerfile
depends_on:
- "image-uploader-backend"
environment:
- "API_URL=http://image-uploader-backend:5000"
- "CLIENT_URL=http://localhost"
container_name: "image-uploader-frontend"
networks:
- npm-nw
- image-uploader-internal
image-uploader-backend:
image: gitea.lan.hobbyhimmel.de/hobbyhimmel/image-uploader-backend:latest
ports:
- "5000:5000"
build:
context: ./backend
dockerfile: ./Dockerfile
container_name: "image-uploader-backend"
networks:
- image-uploader-internal
volumes:
- app-data:/usr/src/app/src/data
volumes:
app-data:
driver: local
networks:
npm-nw:
external: true
image-uploader-internal:
driver: bridge

View File

@ -1,31 +0,0 @@
# Backend Environment Variables
# Copy this file to .env and adjust values for local development
# Whether to remove images when starting the server (cleanup)
REMOVE_IMAGES=false
# Node.js environment (development, production, test)
NODE_ENV=development
# Port for the backend server
PORT=5000
# Admin Session Secret (IMPORTANT: Change in production!)
# Generate with: openssl rand -base64 32
ADMIN_SESSION_SECRET=change-me-in-production
# Telegram Bot Configuration (optional)
TELEGRAM_ENABLED=false
# Send test message on server start (development only)
TELEGRAM_SEND_TEST_ON_START=false
# Bot-Token from @BotFather
# Example: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz1234567890
TELEGRAM_BOT_TOKEN=your-bot-token-here
# Chat-ID of the Telegram group (negative for groups!)
# Get via: https://api.telegram.org/bot<TOKEN>/getUpdates
# Example: -1001234567890
TELEGRAM_CHAT_ID=your-chat-id-here
# Database settings (if needed in future)
# DB_HOST=localhost
# DB_PORT=3306

View File

@ -1,9 +0,0 @@
# Frontend Environment Variables
# These variables are used in both development and production containers
# Backend API URL - where the frontend should connect to the backend
# Development: http://backend-dev:5000 (container-to-container)
# Production: http://backend:5000 (container-to-container)
API_URL=http://backend:5000
# Public/Internal host separation (optional)

View File

@ -1,16 +0,0 @@
# Docker Compose Environment Variables for Development
# Copy this file to .env and adjust values
# Admin Session Secret (optional, has default: dev-session-secret-change-me)
#ADMIN_SESSION_SECRET=your-secret-here
# Telegram Bot Configuration (optional)
TELEGRAM_ENABLED=false
TELEGRAM_SEND_TEST_ON_START=false
# Bot-Token from @BotFather
# Example: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz1234567890
TELEGRAM_BOT_TOKEN=your-bot-token-here
# Chat-ID of the Telegram group (negative for groups!)
# Get via: https://api.telegram.org/bot<TOKEN>/getUpdates
# Example: -1001234567890
TELEGRAM_CHAT_ID=your-chat-id-here

View File

@ -1,22 +0,0 @@
FROM node:24
WORKDIR /usr/src/app
# Install SQLite for database operations
RUN apt-get update && apt-get install -y sqlite3 && rm -rf /var/lib/apt/lists/*
# Copy package files and install dependencies
COPY backend/package*.json ./
RUN npm install
# Copy backend source code
COPY backend/ .
# Note: Environment variables are set via docker-compose.yml
# No .env file needed in the image
# Expose port
EXPOSE 5000
# Development command (will be overridden by docker-compose)
CMD ["npm", "run", "server"]

View File

@ -1,78 +0,0 @@
# Development Environment
# Usage: docker compose -f docker/dev/docker-compose.yml up -d
# Or use: ./dev.sh
services:
frontend-dev:
container_name: image-uploader-frontend-dev
user: root
build:
context: ../../
dockerfile: docker/dev/frontend/Dockerfile
working_dir: /app
ports:
- "3000:80"
volumes:
- ../../frontend:/app:cached
- dev_frontend_node_modules:/app/node_modules
environment:
- CHOKIDAR_USEPOLLING=true
- API_URL=http://localhost:5001
- PUBLIC_HOST=public.test.local
- INTERNAL_HOST=internal.test.local
depends_on:
- backend-dev
networks:
- dev-internal
backend-dev:
container_name: image-uploader-backend-dev
user: "1000:1000"
build:
context: ../../
dockerfile: docker/dev/backend/Dockerfile
working_dir: /usr/src/app
ports:
- "5001:5000"
volumes:
- ../../backend:/usr/src/app:cached
- dev_backend_node_modules:/usr/src/app/node_modules
environment:
- NODE_ENV=development
- PORT=5000
- REMOVE_IMAGES=false
- ADMIN_SESSION_SECRET=${ADMIN_SESSION_SECRET:-dev-session-secret-change-me}
- PUBLIC_HOST=public.test.local
- INTERNAL_HOST=internal.test.local
- ENABLE_HOST_RESTRICTION=true
- TRUST_PROXY_HOPS=0
- PUBLIC_UPLOAD_RATE_LIMIT=20
- TELEGRAM_ENABLED=${TELEGRAM_ENABLED:-false}
- TELEGRAM_BOT_TOKEN=${TELEGRAM_BOT_TOKEN}
- TELEGRAM_CHAT_ID=${TELEGRAM_CHAT_ID}
- TELEGRAM_SEND_TEST_ON_START=${TELEGRAM_SEND_TEST_ON_START:-false}
networks:
- dev-internal
command: [ "npm", "run", "server" ]
sqliteweb:
image: tomdesinto/sqliteweb
ports:
- "8080:8080"
volumes:
- ../../backend/src/data:/usr/src/app/src/data:ro # same host path as in the backend container
command: /usr/src/app/src/data/db/image_uploader.db
networks:
- dev-internal
depends_on:
- backend-dev
networks:
dev-internal:
driver: bridge
volumes:
dev_frontend_node_modules:
driver: local
dev_backend_node_modules:
driver: local

Some files were not shown because too many files have changed in this diff.