Add services

parent 17414fee4a
commit 9bb47abc95
1  .gitignore  vendored
@@ -1,6 +1,7 @@
# Configuration files with sensitive data
backup.env
config/restic.conf
services/*/.env

# Logs
*.log
132  CLAUDE.md
@@ -1,132 +0,0 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Critical Convention

**Use the unified interface scripts that auto-source configuration:**

```bash
# Installation and setup
./install                        # Complete setup (config + repository)
./install config                 # Generate configuration only
./install repo                   # Initialize repository only

# All backup operations via manage
./manage list                    # List backup timers
./manage install <service>       # Install service timer
./manage run <service>           # Manual backup
./manage restore <service>       # Restore (test mode)
./manage restore-prod <service>  # Restore (production)
```

**NEVER call scripts in backup/ directly** - they are protected and will error. Always use `./manage` or `./install`.

## Common Commands

### Initial Setup
```bash
cp backup.env.sample backup.env  # Copy configuration template
# Edit backup.env to match environment
./install                        # Complete installation (config + repository)
```

### Service Management
```bash
./manage list                    # List all backup timers
./manage status <service>       # Service backup status
./manage run <service>          # Manual backup execution
./manage logs <service>         # View backup logs
./manage available              # List services with backup.sh
sudo ./manage install <service>  # Install systemd timer
```

### Backup Operations
```bash
./manage snapshots [service]     # List available snapshots
./manage restore <service>       # Test restoration
./manage restore-prod <service>  # Production restoration
```

### Configuration Testing
```bash
# Source configuration manually for utility functions
source backup.env
show_config     # Display current configuration
validate_paths  # Check directory existence
```

## Architecture

### Configuration System
- **`backup.env`**: Master configuration file at project root containing all environment variables
- **`config/restic.conf`**: Generated Restic-specific configuration (created by `gen-conf.sh`)
- **Variable substitution**: systemd templates use `${VAR}` placeholders replaced during installation

### Core Components
- **`./manage`**: Primary interface for all backup operations and service management (auto-sources config)
- **`./install`**: Installation script consolidating configuration generation and repository initialization
- **`backup/install-service`**: Systemd timer installer (called via `./manage install`)
- **`backup/restore`**: Advanced restoration tool (called via `./manage restore/restore-prod`)
- **Templates**: `service-backup@.service` and `service-backup@.timer` are systemd unit templates

### Variable Override Pattern
Configuration uses environment variable defaults with override capability:
```bash
BACKUP_USER="${BACKUP_USER:-citadel}"
PROJECT_ROOT="${PROJECT_ROOT:-/home/nicolas/dev/quantumrick}"
```

Users can customize by setting environment variables before sourcing `backup.env`.
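
For example (an illustrative override; any variable defined in `backup.env` works the same way):

```bash
# Override the default before sourcing, then verify
export BACKUP_USER=alice
source backup.env
show_config   # should now report BACKUP_USER=alice
```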

### Systemd Integration
- Templates in `backup/` directory contain `${VARIABLE}` placeholders
- `install-service` script performs `sed` substitution to generate final systemd units (sketched below)
- Generated units placed in `/etc/systemd/system/` with proper permissions
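
A minimal sketch of that substitution step (illustrative placeholder names; the real templates may use different variables):

```bash
# Render a unit template by replacing ${VAR} placeholders with config values
source backup.env
sed -e "s|\${BACKUP_USER}|${BACKUP_USER}|g" \
    -e "s|\${PROJECT_ROOT}|${PROJECT_ROOT}|g" \
    backup/service-backup@.service | sudo tee /etc/systemd/system/service-backup@.service > /dev/null
sudo systemctl daemon-reload
```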

### Security Model
- Restic passwords auto-generated with OpenSSL (see the sketch below)
- Configuration files have restricted permissions (600)
- Scripts validate directory existence before operations
- Restoration includes test mode for safety
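
A sketch of the password-generation and permissions pattern (assumed details; the key length and file layout in the actual scripts may differ):

```bash
# Generate a random Restic password and store it with owner-only access
umask 077
RESTIC_PASSWORD="$(openssl rand -base64 32)"
printf 'export RESTIC_PASSWORD=%s\n' "$RESTIC_PASSWORD" > config/restic.conf
chmod 600 config/restic.conf
```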

## Key Files and Their Roles

### Configuration Layer
- `backup.env`: Master configuration with all variables and utility functions
- `config/restic.conf`: Generated Restic authentication and repository settings

### Operational Scripts
- `./manage`: Main interface (list, status, run, logs commands) - auto-sources configuration
- `./install`: Consolidated installation script (config + repository)
- `backup/install-service`: Systemd timer installation with template substitution
- `backup/list-snapshots`: Snapshot browsing utility
- `backup/restore`: Production-grade restoration tool

### Templates and Restoration
- `backup/service-backup@.{service,timer}`: Systemd unit templates with variable placeholders

## Development Guidelines

### Script Protection
All scripts in `backup/` are protected against direct execution:
- Use `CALLED_FROM_MANAGE` environment variable check
- Scripts error with helpful message if called directly
- Always route through `./manage` interface

### Adding New Scripts
1. If adding scripts to `backup/`, include protection check:
```bash
if [ "${CALLED_FROM_MANAGE:-}" != "true" ]; then
    echo "ERROR: Use ./manage <command> instead"
    exit 1
fi
```
2. Add corresponding command to `./manage` with `CALLED_FROM_MANAGE=true` (see the sketch below)
3. Follow existing error handling and logging patterns
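
A minimal sketch of such a dispatch inside `./manage` (hypothetical structure, for illustration only; the real script may differ):

```bash
case "$1" in
    snapshots)
        # Mark the call as coming from the manage interface,
        # then delegate to the protected script
        CALLED_FROM_MANAGE=true ./backup/list-snapshots "${@:2}"
        ;;
esac
```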

### Configuration Changes
- All path variables should have environment override capability
- Maintain backward compatibility with default values
- Update both `backup.env` and this documentation if adding new variables
- Test with both `./manage` and `./install` interfaces
12  claude.md  Normal file
@@ -0,0 +1,12 @@
# My name is Nicolas

This file provides guidance to Claude Code when working with this repository.

This is a pack of scripts for managing a self-hosted server.

I interact with you in French; you may answer in French.
IMPORTANT: all code comments and documentation must be written in English.
IMPORTANT: do not analyse files mentioned in .gitignore.

Keep things simple.
Use shell scripts.
66  services/nextcloud/docker-compose.yml  Normal file
@@ -0,0 +1,66 @@
services:
  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    volumes:
      - /home/citadel/data/nextcloud/config:/config
      - /home/citadel/data/nextcloud/data:/data
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    networks:
      - services
      - nextcloud_internal
    restart: unless-stopped
    depends_on:
      - nextcloud_db
      - nextcloud_redis
    labels:
      - "com.docker.compose.project=nextcloud"
      - "backup.enable=true"
      - "backup.path=/config,/data"

  nextcloud_db:
    image: postgres:16-alpine
    container_name: nextcloud_db
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C
    volumes:
      - nextcloud_db_data:/var/lib/postgresql/data
    networks:
      - nextcloud_internal
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=nextcloud"
      - "backup.enable=true"

  nextcloud_redis:
    image: redis:7-alpine
    container_name: nextcloud_redis
    command: redis-server --requirepass ${REDIS_PASSWORD}
    volumes:
      - nextcloud_redis_data:/data
    networks:
      - nextcloud_internal
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=nextcloud"

volumes:
  nextcloud_db_data:
    name: nextcloud_db_data
  nextcloud_redis_data:
    name: nextcloud_redis_data

networks:
  services:
    external: true
  nextcloud_internal:
    driver: bridge
    name: nextcloud_internal
67  services/nextcloud/generate-secret.sh  Normal file
@@ -0,0 +1,67 @@
#!/bin/bash
set -e

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_header() {
    echo -e "${BLUE}$1${NC}"
}

# Function to generate secure password
generate_password() {
    openssl rand -base64 32 | tr -d "=+/" | cut -c1-25
}

log_header "Generating Nextcloud secrets"
echo "============================="

ENV_FILE=".env"

# Check if .env file exists
if [ ! -f "$ENV_FILE" ]; then
    log_warn ".env file not found. Please ensure it exists first."
    exit 1
fi

# Generate passwords
log_info "Generating secure passwords..."
DB_PASSWORD=$(generate_password)
REDIS_PASSWORD=$(generate_password)

# Backup existing .env file
log_info "Creating backup of existing .env file..."
cp "$ENV_FILE" "${ENV_FILE}.backup.$(date +%Y%m%d_%H%M%S)"

# Update .env file
log_info "Updating .env file with generated passwords..."
sed -i "s/DB_PASSWORD=.*/DB_PASSWORD=${DB_PASSWORD}/" "$ENV_FILE"
sed -i "s/REDIS_PASSWORD=.*/REDIS_PASSWORD=${REDIS_PASSWORD}/" "$ENV_FILE"

# Display generated passwords (for reference)
echo ""
log_header "Generated credentials:"
echo "======================"
echo "Database Password: $DB_PASSWORD"
echo "Redis Password: $REDIS_PASSWORD"
echo ""
log_info "Passwords have been saved to $ENV_FILE"
log_info "A backup of the previous .env file has been created"

# Security reminder
echo ""
log_warn "SECURITY REMINDER:"
echo "- Keep these passwords secure"
echo "- Do not share them in version control"
echo "- The backup file also contains these passwords"
80  services/paperless/.env.sample  Normal file
@@ -0,0 +1,80 @@
# Paperless-NGX Environment Configuration Sample
# Copy this file to .env and adapt the values according to your environment

# === Database Configuration ===
# PostgreSQL database name
POSTGRES_DB=paperless

# PostgreSQL username
POSTGRES_USER=paperless

# PostgreSQL password (change this to a secure password)
POSTGRES_PASSWORD=your_secure_database_password_here

# === Docker Container Configuration ===
# User ID mapping for container (use your user ID)
USERMAP_UID=1000

# Group ID mapping for container (use your group ID)
USERMAP_GID=1000

# === Data Directory Configuration ===
# Base directory for paperless data storage
# Adjust the path according to your system
PAPERLESS_DATA_DIR=/home/your_user/data/paperless

# === Paperless-NGX Configuration ===
# Public URL for accessing the paperless service
# Replace with your actual domain or IP address
PAPERLESS_URL=https://paperless.yourdomain.com

# CSRF trusted origins (add your domain if different from PAPERLESS_URL)
# Example: https://paperless.yourdomain.com,https://paperless.internal.com
PAPERLESS_CSRF_TRUSTED_ORIGINS=

# Time zone for the application
# See: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
PAPERLESS_TIME_ZONE=Europe/Brussels

# Primary OCR language (3-letter code)
# Common values: eng (English), fra/fre (French), deu (German), spa (Spanish)
PAPERLESS_OCR_LANGUAGE=eng

# Available OCR languages (space-separated list)
# Include all languages you want to support for document recognition
PAPERLESS_OCR_LANGUAGES="eng fra"

# === Optional Configuration ===
# Uncomment and configure as needed:

# Secret key for Django (generate a secure random string)
# PAPERLESS_SECRET_KEY=your_32_character_secret_key_here

# Admin user configuration (for initial setup)
# PAPERLESS_ADMIN_USER=admin
# PAPERLESS_ADMIN_PASSWORD=your_admin_password
# PAPERLESS_ADMIN_MAIL=admin@yourdomain.com

# Additional security settings
# PAPERLESS_ALLOWED_HOSTS=paperless.yourdomain.com,localhost,127.0.0.1
# PAPERLESS_CORS_ALLOWED_HOSTS=https://paperless.yourdomain.com

# Email configuration (for notifications)
# PAPERLESS_EMAIL_HOST=smtp.yourdomain.com
# PAPERLESS_EMAIL_PORT=587
# PAPERLESS_EMAIL_HOST_USER=paperless@yourdomain.com
# PAPERLESS_EMAIL_HOST_PASSWORD=your_email_password
# PAPERLESS_EMAIL_USE_TLS=true
# PAPERLESS_DEFAULT_FROM_EMAIL=paperless@yourdomain.com

# Consumer configuration
# PAPERLESS_CONSUMER_POLLING=0
# PAPERLESS_CONSUMER_DELETE_DUPLICATES=true
# PAPERLESS_CONSUMER_RECURSIVE=true

# === Notes ===
# 1. Ensure the PAPERLESS_DATA_DIR exists and has proper permissions
# 2. Change all default passwords to secure values
# 3. Adjust USERMAP_UID and USERMAP_GID to match your system user
# 4. Update PAPERLESS_URL to your actual domain or IP address
# 5. Configure OCR languages based on your document languages
149  services/paperless/backup.sh  Executable file
@@ -0,0 +1,149 @@
#!/bin/bash

# Generic Service Backup Script for Docker Compose services
# Location: /home/citadel/services/paperless/backup.sh
# Runs daily at 3 AM via systemd timer

set -e

# Service Configuration
SERVICE_NAME="paperless"
SERVICE_DIR="/home/citadel/services/$SERVICE_NAME"
DATA_DIR="/home/citadel/data/$SERVICE_NAME"
TEMP_BACKUP_DIR="/tmp/$SERVICE_NAME"
CONFIG_FILE="/home/citadel/backup/restic.conf"
COMPOSE_FILE="$SERVICE_DIR/docker-compose.yml"

# Logging
LOG_FILE="/var/log/$SERVICE_NAME-backup.log"
exec 1> >(tee -a "$LOG_FILE")
exec 2> >(tee -a "$LOG_FILE" >&2)

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}

cleanup() {
    log "Cleaning up temporary files..."
    rm -rf "$TEMP_BACKUP_DIR"

    # Ensure containers are running in case of error
    if docker compose -f "$COMPOSE_FILE" ps --services --filter "status=exited" | grep -q .; then
        log "Some containers are stopped, restarting..."
        cd "$SERVICE_DIR"
        docker compose up -d
    fi
}

# Set up cleanup on exit
trap cleanup EXIT

log "=== Starting $SERVICE_NAME Backup ==="

# Check if configuration exists
if [ ! -f "$CONFIG_FILE" ]; then
    log "ERROR: Configuration file $CONFIG_FILE not found!"
    exit 1
fi

# Source Restic configuration
source "$CONFIG_FILE"

# Create temporary backup directory
log "Creating temporary backup directory: $TEMP_BACKUP_DIR"
mkdir -p "$TEMP_BACKUP_DIR"

# Navigate to service directory
cd "$SERVICE_DIR"

# Check if compose file exists
if [ ! -f "$COMPOSE_FILE" ]; then
    log "ERROR: Docker compose file $COMPOSE_FILE not found!"
    exit 1
fi

log "Stopping $SERVICE_NAME containers..."
docker compose down

# Wait a moment for containers to fully stop
sleep 5

log "Creating PostgreSQL database dump..."
# Start only the database container for backup
docker compose up -d db

# Wait for database to be ready
sleep 10

# Get database credentials from .env
if [ -f "$SERVICE_DIR/.env" ]; then
    source "$SERVICE_DIR/.env"
else
    log "ERROR: .env file not found!"
    exit 1
fi

# Create database dump
DUMP_FILE="$TEMP_BACKUP_DIR/${SERVICE_NAME}_db_$(date +%Y%m%d_%H%M%S).sql"
log "Creating database dump: $DUMP_FILE"

# Wrap the dump in an if so the failure branch is reachable under `set -e`
if docker compose exec -T db pg_dump -U "$POSTGRES_USER" -d "$POSTGRES_DB" > "$DUMP_FILE"; then
    log "✅ Database dump created successfully"
else
    log "❌ Database dump failed!"
    exit 1
fi

# Stop database container
docker compose down

log "Copying application data to temporary directory..."
# Copy data directories
cp -r "$DATA_DIR"/* "$TEMP_BACKUP_DIR/" 2>/dev/null || true

# Copy service configuration
cp -r "$SERVICE_DIR" "$TEMP_BACKUP_DIR/service_config"

log "Creating Restic backup..."

# Same pattern: test the command directly so the error path survives `set -e`
if restic backup "$TEMP_BACKUP_DIR" \
    --tag "$SERVICE_NAME" \
    --tag "daily"; then
    log "✅ Restic backup completed successfully with tag: $SERVICE_NAME"
else
    log "❌ Restic backup failed!"
    exit 1
fi

log "Restarting $SERVICE_NAME containers..."
docker compose up -d

# Wait for services to be ready
sleep 15

# Check if services are running
if docker compose ps --services --filter "status=running" | grep -q "webserver"; then
    log "✅ $SERVICE_NAME containers restarted successfully"
else
    log "⚠️ Warning: Some containers may not be running properly"
fi

log "Running Restic maintenance (forget old snapshots)..."
# Keep: 7 daily, 4 weekly, 12 monthly, 2 yearly
restic forget \
    --tag "$SERVICE_NAME" \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 12 \
    --keep-yearly 2 \
    --prune

log "=== Backup completed successfully ==="

# Show backup statistics
log "Current repository stats:"
restic stats --mode raw-data
79  services/paperless/docker-compose.yml  Normal file
@@ -0,0 +1,79 @@
services:
  broker:
    image: docker.io/library/redis:8
    restart: unless-stopped
    volumes:
      - redisdata:/data
    networks:
      - paperless-internal

  db:
    image: postgres:13
    restart: unless-stopped
    volumes:
      - paperless-ngx_pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    networks:
      - paperless-internal

  webserver:
    image: ghcr.io/paperless-ngx/paperless-ngx:2.14.1
    restart: unless-stopped
    depends_on:
      - db
      - broker
      - gotenberg
      - tika
    volumes:
      - ${PAPERLESS_DATA_DIR}/data:/usr/src/paperless/data
      - ${PAPERLESS_DATA_DIR}/media:/usr/src/paperless/media
      - ${PAPERLESS_DATA_DIR}/export:/usr/src/paperless/export
      - ${PAPERLESS_DATA_DIR}/consume:/usr/src/paperless/consume
    env_file: .env
    environment:
      PAPERLESS_REDIS: redis://broker:6379
      PAPERLESS_DBHOST: db
      PAPERLESS_DBNAME: ${POSTGRES_DB}
      PAPERLESS_DBUSER: ${POSTGRES_USER}
      PAPERLESS_DBPASS: ${POSTGRES_PASSWORD}
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998
    networks:
      - paperless-internal
      - services
      - mail

  gotenberg:
    image: docker.io/gotenberg/gotenberg:8.20
    restart: unless-stopped
    command:
      - "gotenberg"
      - "--chromium-disable-javascript=true"
      - "--chromium-allow-list=file:///tmp/.*"
    networks:
      - paperless-internal

  tika:
    image: docker.io/apache/tika:latest
    restart: unless-stopped
    networks:
      - paperless-internal

volumes:
  paperless-ngx_pgdata:
    external: true
  redisdata:

networks:
  paperless-internal:
    name: paperless-internal
  services:
    external: true
    name: services
  mail:
    external: true
    name: mail
594  services/paperless/restore  Executable file
@@ -0,0 +1,594 @@
#!/bin/bash

# Paperless Restore Script
# Location: /home/citadel/services/paperless/restore
# Usage: ./restore <snapshot_id> --test|--production|--extract
#        ./restore --clean
#        ./restore --clean-all

set -euo pipefail

# Configuration
readonly SERVICE_NAME="paperless"
readonly SERVICE_DIR="/home/citadel/services/$SERVICE_NAME"
readonly TEST_DIR="$SERVICE_DIR/test-restore"
readonly CONFIG_FILE="/home/citadel/backup/restic.conf"
readonly SCRIPT_NAME=$(basename "$0")

# Colors
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly BLUE='\033[0;34m'
readonly NC='\033[0m'

# Logging functions
log() { echo -e "${BLUE}[$(date '+%H:%M:%S')]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1" >&2; exit 1; }
success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }

# Global variables
SNAPSHOT_ID=""
MODE=""
TEMP_DIR=""

# Show help
show_help() {
    cat << EOF
Usage: $SCRIPT_NAME <snapshot_id> --test|--production|--extract
       $SCRIPT_NAME --clean|--clean-all

Arguments:
  snapshot_id     The restic snapshot ID to restore from

Options:
  --test          Restore to test instance (isolated environment)
  --production    Restore to production instance (NOT YET IMPLEMENTED)
  --extract       Extract snapshot to /tmp directory only
  --clean         Clean test environment (stop containers and remove volumes)
  --clean-all     Clean test environment AND remove all temp directories
  -h, --help      Show this help message

Examples:
  $SCRIPT_NAME abc123 --test        # Restore snapshot abc123 to test instance
  $SCRIPT_NAME abc123 --extract     # Extract snapshot abc123 to /tmp only
  $SCRIPT_NAME latest --production  # Restore latest snapshot to production
  $SCRIPT_NAME --clean              # Clean test environment
  $SCRIPT_NAME --clean-all          # Clean test environment + temp directories

The test instance will be created in: $TEST_DIR
EOF
}

# Cleanup function
cleanup() {
    if [ -n "$TEMP_DIR" ] && [ -d "$TEMP_DIR" ] && [[ "$TEMP_DIR" == /tmp/* ]]; then
        log "Cleaning up temporary directory: $TEMP_DIR"
        rm -rf "$TEMP_DIR"
    fi
}

# Set up cleanup on exit (only for extract mode)
setup_cleanup() {
    if [ "$MODE" = "extract" ]; then
        trap cleanup EXIT
    fi
}

# Parse arguments
parse_arguments() {
    if [ $# -eq 0 ]; then
        show_help
        exit 0
    fi

    while [ $# -gt 0 ]; do
        case "$1" in
            --test)
                MODE="test"
                shift
                ;;
            --production)
                MODE="production"
                shift
                ;;
            --extract)
                MODE="extract"
                shift
                ;;
            --clean)
                MODE="clean"
                shift
                ;;
            --clean-all)
                MODE="clean-all"
                shift
                ;;
            -h|--help)
                show_help
                exit 0
                ;;
            -*)
                error "Unknown option: $1"
                ;;
            *)
                if [ -z "$SNAPSHOT_ID" ]; then
                    SNAPSHOT_ID="$1"
                else
                    error "Too many arguments"
                fi
                shift
                ;;
        esac
    done

    # Validate required arguments
    if [ "$MODE" != "clean" ] && [ "$MODE" != "clean-all" ]; then
        if [ -z "$SNAPSHOT_ID" ]; then
            error "Snapshot ID is required (except for --clean or --clean-all mode)"
        fi
    fi

    if [ -z "$MODE" ]; then
        error "Mode (--test, --production, --extract, --clean, or --clean-all) is required"
    fi

    # Clean modes don't need snapshot ID
    if [ "$MODE" = "clean" ] || [ "$MODE" = "clean-all" ]; then
        if [ -n "$SNAPSHOT_ID" ]; then
            error "Snapshot ID should not be provided with --clean or --clean-all mode"
        fi
    fi
}

# Check prerequisites
check_prerequisites() {
    # Skip restic checks for clean modes
    if [ "$MODE" = "clean" ] || [ "$MODE" = "clean-all" ]; then
        # Only check Docker for clean modes
        if ! command -v docker &>/dev/null; then
            error "Docker not found"
        fi

        if ! docker compose version &>/dev/null; then
            error "Docker Compose not found"
        fi

        log "Prerequisites OK (clean mode)"
        return
    fi

    # Check configuration file
    if [ ! -f "$CONFIG_FILE" ]; then
        error "Config file not found: $CONFIG_FILE"
    fi

    # Source config
    source "$CONFIG_FILE"

    # Check required variables
    if [ -z "${RESTIC_REPOSITORY:-}" ]; then
        error "RESTIC_REPOSITORY not defined in config"
    fi

    # Test restic access
    if ! restic snapshots &>/dev/null; then
        error "Cannot access Restic repository"
    fi

    # Check Docker (except for extract mode)
    if [ "$MODE" != "extract" ]; then
        if ! command -v docker &>/dev/null; then
            error "Docker not found"
        fi

        # Check docker compose
        if ! docker compose version &>/dev/null; then
            error "Docker Compose not found"
        fi
    fi

    log "Prerequisites OK"
}

# Create secure temporary directory
create_temp_dir() {
    if [ "$MODE" = "extract" ]; then
        # For extract mode, create in /tmp with readable name
        TEMP_DIR="/tmp/paperless-extract-$(date +%Y%m%d-%H%M%S)"
        mkdir -p "$TEMP_DIR"
        chmod 755 "$TEMP_DIR"  # More permissive for extract mode
    else
        # For other modes, use secure temp
        TEMP_DIR=$(mktemp -d "/tmp/paperless-restore-$(date +%Y%m%d-%H%M%S)-XXXXXX")
        chmod 700 "$TEMP_DIR"
    fi

    if [ ! -d "$TEMP_DIR" ]; then
        error "Failed to create temporary directory"
    fi

    log "Created temporary directory: $TEMP_DIR"
}

# Extract snapshot
extract_snapshot() {
    log "Extracting snapshot $SNAPSHOT_ID to temporary directory"

    if ! restic restore "$SNAPSHOT_ID" --target "$TEMP_DIR"; then
        error "Failed to extract snapshot $SNAPSHOT_ID"
    fi

    success "Snapshot extracted successfully"

    # Show what was extracted
    log "Extracted contents:"
    ls -la "$TEMP_DIR" | head -10

    # Show paperless data structure if exists
    if [ -d "$TEMP_DIR/tmp/paperless" ]; then
        echo ""
        log "Paperless data structure:"
        ls -la "$TEMP_DIR/tmp/paperless/" | head -10
    fi
}

# Extract only mode
extract_only() {
    log "Starting extract-only mode"

    # Create temp directory in /tmp
    create_temp_dir

    # Extract snapshot
    extract_snapshot

    # Display information about extracted content
    echo ""
    success "Snapshot $SNAPSHOT_ID extracted to: $TEMP_DIR"
    echo ""
    echo "📁 Extracted structure:"
    if [ -d "$TEMP_DIR/tmp/paperless" ]; then
        echo "   Main data location: $TEMP_DIR/tmp/paperless/"
        ls -la "$TEMP_DIR/tmp/paperless/" | head -10
    else
        find "$TEMP_DIR" -maxdepth 2 -type d | head -20
    fi
    echo ""
    echo "📊 Content summary:"
    echo "   Database dumps: $(find "$TEMP_DIR" -name "*.sql" | wc -l) files"
    echo "   Data directories: $(find "$TEMP_DIR/tmp/paperless" -maxdepth 1 -type d 2>/dev/null | grep -v "^$TEMP_DIR/tmp/paperless$" | wc -l) directories"
    echo "   Total size: $(du -sh "$TEMP_DIR" | cut -f1)"
    echo ""
    echo "💡 Manual inspection commands:"
    echo "   cd $TEMP_DIR/tmp/paperless"
    echo "   ls -la"
    echo "   find . -name '*.sql' -exec ls -lh {} +"
    echo ""
    echo "⚠️  The directory will remain until you delete it manually:"
    echo "   rm -rf $TEMP_DIR"
    echo ""

    # Don't run cleanup for extract mode - leave directory for user
    trap - EXIT
}

# Clean test environment
clean_test_environment() {
    log "Starting cleanup of test environment"

    # Navigate to service directory to ensure we have access to .env
    cd "$SERVICE_DIR"

    # Stop test containers
    log "Stopping test containers..."
    if [ -d "$TEST_DIR" ]; then
        cd "$TEST_DIR"
        docker compose --env-file "$SERVICE_DIR/.env" -p paperless-test down --remove-orphans 2>/dev/null || true
        cd "$SERVICE_DIR"
    else
        log "Test directory doesn't exist, checking for running containers anyway..."
    fi

    # Try to stop containers by name pattern (in case they're running from different location)
    log "Stopping any remaining paperless test containers..."
    docker stop paperless-webserver-test paperless-db-test paperless-broker-test paperless-gotenberg-test paperless-tika-test 2>/dev/null || true
    docker rm paperless-webserver-test paperless-db-test paperless-broker-test paperless-gotenberg-test paperless-tika-test 2>/dev/null || true

    # Remove test volumes
    log "Removing test volumes..."
    local volumes_to_remove=(
        "paperless-ngx_pgdata-test"
        "redisdata-test"
        "paperless-data-test"
        "paperless-media-test"
        "paperless-export-test"
        "paperless-consume-test"
    )

    for volume in "${volumes_to_remove[@]}"; do
        if docker volume ls -q | grep -q "^${volume}$"; then
            log "Removing volume: $volume"
            docker volume rm "$volume" 2>/dev/null || warning "Failed to remove volume: $volume"
        else
            log "Volume not found (already removed): $volume"
        fi
    done

    # Clean up any dangling images related to paperless
    log "Cleaning up dangling images..."
    docker image prune -f &>/dev/null || true

    success "Test environment cleaned successfully"
    echo ""
    echo "🧹 Cleanup completed!"
    echo "   ✅ Test containers stopped and removed"
    echo "   ✅ Test volumes removed"
    echo "   ✅ Dangling images cleaned"
    echo ""
    echo "💡 The production environment remains untouched."
    echo "💡 The test directory ($TEST_DIR) is preserved."
    echo "💡 You can now run a fresh test restore if needed."
}

# Clean all environment (test + temp directories)
clean_all_environment() {
    log "Starting cleanup of test environment and temp directories"

    # First, run the standard test environment cleanup
    clean_test_environment

    # Then clean temporary directories
    echo ""
    log "Cleaning temporary directories..."

    # Find and remove paperless restore temp directories
    local temp_dirs_restore=()
    local temp_dirs_extract=()

    # Use find to safely locate temp directories
    while IFS= read -r -d '' dir; do
        temp_dirs_restore+=("$dir")
    done < <(find /tmp -maxdepth 1 -type d -name "paperless-restore-*" -print0 2>/dev/null)

    while IFS= read -r -d '' dir; do
        temp_dirs_extract+=("$dir")
    done < <(find /tmp -maxdepth 1 -type d -name "paperless-extract-*" -print0 2>/dev/null)

    # Remove restore temp directories
    if [ ${#temp_dirs_restore[@]} -gt 0 ]; then
        log "Found ${#temp_dirs_restore[@]} restore temp directories to remove"
        for dir in "${temp_dirs_restore[@]}"; do
            if [ -d "$dir" ]; then
                log "Removing: $(basename "$dir")"
                rm -rf "$dir" || warning "Failed to remove: $dir"
            fi
        done
    else
        log "No restore temp directories found"
    fi

    # Remove extract temp directories
    if [ ${#temp_dirs_extract[@]} -gt 0 ]; then
        log "Found ${#temp_dirs_extract[@]} extract temp directories to remove"
        for dir in "${temp_dirs_extract[@]}"; do
            if [ -d "$dir" ]; then
                log "Removing: $(basename "$dir")"
                rm -rf "$dir" || warning "Failed to remove: $dir"
            fi
        done
    else
        log "No extract temp directories found"
    fi

    # Show final summary
    echo ""
    success "Complete cleanup finished!"
    echo ""
    echo "🧹 Full cleanup completed!"
    echo "   ✅ Test containers stopped and removed"
    echo "   ✅ Test volumes removed"
    echo "   ✅ Dangling images cleaned"
    echo "   ✅ Restore temp directories removed: ${#temp_dirs_restore[@]}"
    echo "   ✅ Extract temp directories removed: ${#temp_dirs_extract[@]}"
    echo ""
    echo "💡 The production environment remains untouched."
    echo "💡 The test directory ($TEST_DIR) is preserved."
    echo "💡 System is now clean and ready for fresh operations."
}

# Restore to test instance
restore_to_test() {
    log "Starting restore to test instance"

    # Create test directory if it doesn't exist
    mkdir -p "$TEST_DIR"

    # Navigate to test directory
    cd "$TEST_DIR"

    log "Stopping test containers (if running)..."
    docker compose --env-file "$SERVICE_DIR/.env" -p paperless-test down --remove-orphans 2>/dev/null || true

    # Remove existing test volumes to ensure clean restore
    log "Removing existing test volumes..."
    docker volume rm paperless-ngx_pgdata-test redisdata-test \
        paperless-data-test paperless-media-test \
        paperless-export-test paperless-consume-test 2>/dev/null || true

    # Find database dump
    local db_dump
    db_dump=$(find "$TEMP_DIR" -name "*_db_*.sql" | head -1)

    if [ -z "$db_dump" ]; then
        error "No database dump found in snapshot"
    fi

    log "Found database dump: $(basename "$db_dump")"

    # Create PostgreSQL volume before starting database container
    log "Creating PostgreSQL volume..."
    docker volume create paperless-ngx_pgdata-test

    # Start only the database container for restore
    log "Starting test database container..."
    docker compose --env-file "$SERVICE_DIR/.env" -p paperless-test up -d db-test

    # Wait for database to be ready
    log "Waiting for database to be ready..."
    sleep 15

    # Source environment variables
    if [ -f "$SERVICE_DIR/.env" ]; then
        source "$SERVICE_DIR/.env"
    else
        error "Environment file not found: $SERVICE_DIR/.env"
    fi

    # Restore database
    log "Restoring database from dump..."
    if docker compose --env-file "$SERVICE_DIR/.env" -p paperless-test exec -T db-test psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" < "$db_dump"; then
        success "Database restored successfully"
    else
        error "Database restore failed"
    fi

    # Stop database container
    docker compose --env-file "$SERVICE_DIR/.env" -p paperless-test down

    # Restore data volumes using temporary containers
    log "Restoring data volumes..."

    # Create ALL volumes (Docker Compose will not create them since they're external)
    log "Creating all required volumes..."
    docker volume create paperless-ngx_pgdata-test
    docker volume create redisdata-test
    docker volume create paperless-data-test
    docker volume create paperless-media-test
    docker volume create paperless-export-test
    docker volume create paperless-consume-test

    # Function to restore volume data
    restore_volume_data() {
        local source_path="$1"
        local volume_name="$2"
        local container_path="$3"

        if [ -d "$source_path" ]; then
            log "Restoring $volume_name from $source_path"
            docker run --rm \
                -v "$volume_name:$container_path" \
                -v "$source_path:$container_path-source:ro" \
                alpine:latest \
                sh -c "cp -rf $container_path-source/* $container_path/ 2>/dev/null || true"
        else
            warning "Source path not found: $source_path"
        fi
    }

    # Restore each data directory
    restore_volume_data "$TEMP_DIR/tmp/paperless/data" "paperless-data-test" "/data"
    restore_volume_data "$TEMP_DIR/tmp/paperless/media" "paperless-media-test" "/media"
    restore_volume_data "$TEMP_DIR/tmp/paperless/export" "paperless-export-test" "/export"
    restore_volume_data "$TEMP_DIR/tmp/paperless/consume" "paperless-consume-test" "/consume"

    # Start all test containers
    log "Starting test instance..."
    docker compose --env-file "$SERVICE_DIR/.env" -p paperless-test up -d

    # Wait for services to be ready
    log "Waiting for services to start..."
    sleep 30

    # Check if webserver is running
    if docker compose --env-file "$SERVICE_DIR/.env" -p paperless-test ps --filter "status=running" | grep -q "webserver-test"; then
        success "Test instance started successfully"
        echo ""
        echo "🎉 Test instance is ready!"
        echo "📍 Container name: paperless-webserver-test"
        echo "🌐 Access: Configure SWAG to redirect paperless.alouettes.jombi.fr temporarily"
        echo "🔧 Docker compose location: $TEST_DIR"
        echo ""
        echo "💡 To stop the test instance:"
        echo "   cd $TEST_DIR && docker compose --env-file $SERVICE_DIR/.env -p paperless-test down"
        echo ""
        echo "💡 To clean test environment completely:"
        echo "   $SERVICE_DIR/restore --clean"
    else
        error "Test instance failed to start properly"
    fi
}

# Restore to production (not implemented)
restore_to_production() {
    echo ""
    echo "🚧 PRODUCTION RESTORE NOT YET IMPLEMENTED 🚧"
    echo ""
    echo "Production restore functionality is not yet available."
    echo "This feature requires additional safety measures and validation."
    echo ""
    echo "For now, you can:"
    echo "1. Use --test mode to restore to the test instance"
    echo "2. Verify the restored data in the test environment"
    echo "3. Manually copy data from test to production if needed"
    echo ""
    echo "Production restore will be implemented in a future version."
    echo ""
    exit 1
}

# Main function
main() {
    echo "=== Paperless Restore Script ==="

    # Parse arguments
    parse_arguments "$@"

    # Display operation info (clean-all also has no snapshot ID)
    if [ "$MODE" = "clean" ] || [ "$MODE" = "clean-all" ]; then
        log "Mode: $MODE"
    else
        log "Snapshot ID: $SNAPSHOT_ID"
        log "Mode: $MODE"
    fi

    # Check prerequisites
    check_prerequisites

    # Setup cleanup if needed
    setup_cleanup

    # Execute based on mode
    case "$MODE" in
        clean)
            clean_test_environment
            ;;
        clean-all)
            clean_all_environment
            ;;
        extract)
            extract_only
            ;;
        test)
            # Create temporary directory and extract
            create_temp_dir
            extract_snapshot
            restore_to_test
            ;;
        production)
            # Create temporary directory and extract
            create_temp_dir
            extract_snapshot
            restore_to_production
            ;;
        *)
            error "Invalid mode: $MODE"
            ;;
    esac

    success "Operation completed!"
}

# Entry point
main "$@"
101  services/paperless/test-restore/docker-compose.yml  Normal file
@@ -0,0 +1,101 @@
services:
  broker-test:
    image: docker.io/library/redis:8
    restart: unless-stopped
    container_name: paperless-broker-test
    volumes:
      - redisdata-test:/data
    networks:
      - paperless-test-internal

  db-test:
    image: postgres:13
    restart: unless-stopped
    container_name: paperless-db-test
    volumes:
      - paperless-ngx_pgdata-test:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    networks:
      - paperless-test-internal

  webserver-test:
    image: ghcr.io/paperless-ngx/paperless-ngx:2.14.1
    restart: unless-stopped
    container_name: paperless-webserver-test
    depends_on:
      - db-test
      - broker-test
      - gotenberg-test
      - tika-test
    volumes:
      - paperless-data-test:/usr/src/paperless/data
      - paperless-media-test:/usr/src/paperless/media
      - paperless-export-test:/usr/src/paperless/export
      - paperless-consume-test:/usr/src/paperless/consume
    env_file: ../.env
    environment:
      PAPERLESS_REDIS: redis://broker-test:6379
      PAPERLESS_DBHOST: db-test
      PAPERLESS_DBNAME: ${POSTGRES_DB}
      PAPERLESS_DBUSER: ${POSTGRES_USER}
      PAPERLESS_DBPASS: ${POSTGRES_PASSWORD}
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg-test:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika-test:9998
      # Override URL for test instance
      PAPERLESS_URL: https://paperless.alouettes.jombi.fr
    networks:
      - paperless-test-internal
      - services
      - mail

  gotenberg-test:
    image: docker.io/gotenberg/gotenberg:8.20
    restart: unless-stopped
    container_name: paperless-gotenberg-test
    command:
      - "gotenberg"
      - "--chromium-disable-javascript=true"
      - "--chromium-allow-list=file:///tmp/.*"
    networks:
      - paperless-test-internal

  tika-test:
    image: docker.io/apache/tika:latest
    restart: unless-stopped
    container_name: paperless-tika-test
    networks:
      - paperless-test-internal

volumes:
  paperless-ngx_pgdata-test:
    external: true
    name: paperless-ngx_pgdata-test
  redisdata-test:
    external: true
    name: redisdata-test
  paperless-data-test:
    external: true
    name: paperless-data-test
  paperless-media-test:
    external: true
    name: paperless-media-test
  paperless-export-test:
    external: true
    name: paperless-export-test
  paperless-consume-test:
    external: true
    name: paperless-consume-test

networks:
  paperless-test-internal:
    name: paperless-test-internal
  services:
    external: true
    name: services
  mail:
    external: true
    name: mail
16  services/proton-bridge/docker-compose.yml  Normal file
@@ -0,0 +1,16 @@
services:
  protonmail-bridge:
    image: shenxn/protonmail-bridge
    restart: unless-stopped
    volumes:
      - protonmail:/root
    networks:
      - mail

volumes:
  protonmail:
    name: protonmail

networks:
  mail:
    external: true
35  services/swag/docker-compose.yml  Normal file
@@ -0,0 +1,35 @@
services:
  swag:
    image: ghcr.io/linuxserver/swag:latest
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
      - URL=${DOMAIN}
      - SUBDOMAINS=${SUBDOMAINS}
      - VALIDATION=dns
      - DNSPLUGIN=ovh
      - EMAIL=${EMAIL}
      - ONLY_SUBDOMAINS=false
      - STAGING=false
    volumes:
      - /home/citadel/data/swag:/config
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    ports:
      - "80:80"
      - "443:443"
    networks:
      - services
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=swag"
      - "backup.enable=true"
      - "backup.path=/config"

networks:
  services:
    external: true
76  services/swag/start.sh  Normal file
@@ -0,0 +1,76 @@
#!/bin/bash
set -e

# Configuration
SERVICE_NAME="swag"
SERVICE_DIR="/home/citadel/services/$SERVICE_NAME"
DATA_DIR="/home/citadel/data/$SERVICE_NAME"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

# Create necessary directories
log_info "Creating directories..."
mkdir -p "$SERVICE_DIR"
mkdir -p "$DATA_DIR"

# Set correct ownership
log_info "Setting directory ownership..."
sudo chown -R $(id -u):$(id -g) "$SERVICE_DIR" "$DATA_DIR"

# Check if Docker network exists
if ! docker network ls | grep -q "services"; then
    log_warn "Docker network 'services' not found. Creating..."
    docker network create services
fi

# Navigate to service directory
cd "$SERVICE_DIR"

# Check if .env file exists
if [ ! -f ".env" ]; then
    log_error ".env file not found in $SERVICE_DIR"
    log_info "Please create the .env file with required variables"
    exit 1
fi

# Update .env with current user IDs
log_info "Updating .env with current user IDs..."
sed -i "s/^PUID=.*/PUID=$(id -u)/" .env
sed -i "s/^PGID=.*/PGID=$(id -g)/" .env

# Check if OVH credentials are configured
if [ ! -f "$DATA_DIR/dns-conf/ovh.ini" ]; then
    log_warn "OVH DNS credentials not found at $DATA_DIR/dns-conf/ovh.ini"
    log_info "Remember to configure OVH API credentials after first start"
fi

# Start the service
log_info "Starting SWAG service..."
docker-compose up -d

# Check if service is running
sleep 5
if docker-compose ps | grep -q "Up"; then
    log_info "SWAG service started successfully"
    log_info "Check logs with: docker-compose logs -f"
else
    log_error "SWAG service failed to start"
    log_info "Check logs with: docker-compose logs"
    exit 1
fi
38  services/taiga/.env.backup.20250604_161336  Normal file
@@ -0,0 +1,38 @@
# Taiga Configuration for production with SWAG
# ==============================================

# Taiga's URLs - Variables to define where Taiga should be served
TAIGA_SCHEME=https
TAIGA_DOMAIN=taiga.alouettes.jombi.fr
SUBPATH=""
WEBSOCKETS_SCHEME=wss

# Taiga's Secret Key - Variable to provide cryptographic signing
# IMPORTANT: Change this to a secure random value!
SECRET_KEY="CHANGE_ME_TO_SECURE_SECRET_KEY"

# Taiga's Database settings - Variables to create the Taiga database and connect to it
POSTGRES_USER=taiga
POSTGRES_PASSWORD=CHANGE_ME_TO_SECURE_DB_PASSWORD

# Taiga's SMTP settings - Variables to send Taiga's emails to the users
EMAIL_BACKEND=console  # change to "smtp" when configuring email
EMAIL_HOST=smtp.host.example.com
EMAIL_PORT=587
EMAIL_HOST_USER=user
EMAIL_HOST_PASSWORD=password
EMAIL_DEFAULT_FROM=noreply@alouettes.jombi.fr
EMAIL_USE_TLS=True
EMAIL_USE_SSL=False

# Taiga's RabbitMQ settings - Variables to leave messages for the realtime and asynchronous events
RABBITMQ_USER=taiga
RABBITMQ_PASS=CHANGE_ME_TO_SECURE_RABBITMQ_PASSWORD
RABBITMQ_VHOST=taiga
RABBITMQ_ERLANG_COOKIE=CHANGE_ME_TO_SECURE_ERLANG_COOKIE

# Taiga's Attachments settings
ATTACHMENTS_MAX_AGE=360

# Taiga's Telemetry
ENABLE_TELEMETRY=False
38  services/taiga/.env.backup.20250604_161517  Normal file
@@ -0,0 +1,38 @@
# Taiga Configuration for production with SWAG
# ==============================================

# Taiga's URLs - Variables to define where Taiga should be served
TAIGA_SCHEME=https
TAIGA_DOMAIN=taiga.alouettes.jombi.fr
SUBPATH=""
WEBSOCKETS_SCHEME=wss

# Taiga's Secret Key - Variable to provide cryptographic signing
# IMPORTANT: Change this to a secure random value!
SECRET_KEY="CHANGE_ME_TO_SECURE_SECRET_KEY"

# Taiga's Database settings - Variables to create the Taiga database and connect to it
POSTGRES_USER=taiga
POSTGRES_PASSWORD=CHANGE_ME_TO_SECURE_DB_PASSWORD

# Taiga's SMTP settings - Variables to send Taiga's emails to the users
EMAIL_BACKEND=console  # change to "smtp" when configuring email
EMAIL_HOST=smtp.host.example.com
EMAIL_PORT=587
EMAIL_HOST_USER=user
EMAIL_HOST_PASSWORD=password
EMAIL_DEFAULT_FROM=noreply@alouettes.jombi.fr
EMAIL_USE_TLS=True
EMAIL_USE_SSL=False

# Taiga's RabbitMQ settings - Variables to leave messages for the realtime and asynchronous events
RABBITMQ_USER=taiga
RABBITMQ_PASS=CHANGE_ME_TO_SECURE_RABBITMQ_PASSWORD
RABBITMQ_VHOST=taiga
RABBITMQ_ERLANG_COOKIE=CHANGE_ME_TO_SECURE_ERLANG_COOKIE

# Taiga's Attachments settings
ATTACHMENTS_MAX_AGE=360

# Taiga's Telemetry
ENABLE_TELEMETRY=False
50  services/taiga/docker-compose-inits.yml  Normal file
@@ -0,0 +1,50 @@
x-environment: &default-back-environment
  # Database settings
  POSTGRES_DB: taiga
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  POSTGRES_HOST: taiga-db
  POSTGRES_PORT: 5432

  # Taiga settings
  TAIGA_SECRET_KEY: ${SECRET_KEY}
  TAIGA_SITES_SCHEME: ${TAIGA_SCHEME}
  TAIGA_SITES_DOMAIN: ${TAIGA_DOMAIN}
  TAIGA_SUBPATH: ${SUBPATH}

  # Email settings
  EMAIL_BACKEND: ${EMAIL_BACKEND}
  EMAIL_HOST: ${EMAIL_HOST}
  EMAIL_PORT: ${EMAIL_PORT}
  EMAIL_HOST_USER: ${EMAIL_HOST_USER}
  EMAIL_HOST_PASSWORD: ${EMAIL_HOST_PASSWORD}
  DEFAULT_FROM_EMAIL: ${EMAIL_DEFAULT_FROM}
  EMAIL_USE_TLS: ${EMAIL_USE_TLS}
  EMAIL_USE_SSL: ${EMAIL_USE_SSL}

  # RabbitMQ settings for events
  EVENTS_PUSH_BACKEND: "rabbitmq"
  EVENTS_PUSH_BACKEND_URL: "amqp://${RABBITMQ_USER}:${RABBITMQ_PASS}@taiga-events-rabbitmq:5672/${RABBITMQ_VHOST}"

  # RabbitMQ settings for async
  CELERY_BROKER_URL: "amqp://${RABBITMQ_USER}:${RABBITMQ_PASS}@taiga-async-rabbitmq:5672/${RABBITMQ_VHOST}"

  # Telemetry
  ENABLE_TELEMETRY: ${ENABLE_TELEMETRY}

x-volumes: &default-back-volumes
  - /home/citadel/data/taiga/static:/taiga-back/static
  - /home/citadel/data/taiga/media:/taiga-back/media

services:
  taiga-manage:
    image: taigaio/taiga-back:latest
    environment: *default-back-environment
    depends_on:
      - taiga-db
    entrypoint: "python manage.py"
    volumes: *default-back-volumes
    networks:
      - taiga_internal
      - services
      - mail
193  services/taiga/docker-compose.yml  Normal file
@@ -0,0 +1,193 @@
|
||||
x-environment: &default-back-environment
|
||||
# Database settings
|
||||
POSTGRES_DB: taiga
|
||||
POSTGRES_USER: ${POSTGRES_USER}
|
||||
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
|
||||
POSTGRES_HOST: taiga-db
|
||||
POSTGRES_PORT: 5432
|
||||
|
||||
# Taiga settings
|
||||
TAIGA_SECRET_KEY: ${SECRET_KEY}
|
||||
TAIGA_SITES_SCHEME: ${TAIGA_SCHEME}
|
||||
TAIGA_SITES_DOMAIN: ${TAIGA_DOMAIN}
|
||||
TAIGA_SUBPATH: ${SUBPATH}
|
||||
|
||||
# Email settings
|
||||
EMAIL_BACKEND: ${EMAIL_BACKEND}
|
||||
EMAIL_HOST: ${EMAIL_HOST}
|
||||
EMAIL_PORT: ${EMAIL_PORT}
|
||||
EMAIL_HOST_USER: ${EMAIL_HOST_USER}
|
||||
EMAIL_HOST_PASSWORD: ${EMAIL_HOST_PASSWORD}
|
||||
DEFAULT_FROM_EMAIL: ${EMAIL_DEFAULT_FROM}
|
||||
EMAIL_USE_TLS: ${EMAIL_USE_TLS}
|
||||
EMAIL_USE_SSL: ${EMAIL_USE_SSL}
|
||||
|
||||
# RabbitMQ settings for events
|
||||
EVENTS_PUSH_BACKEND: "rabbitmq"
|
||||
EVENTS_PUSH_BACKEND_URL: "amqp://${RABBITMQ_USER}:${RABBITMQ_PASS}@taiga-events-rabbitmq:5672/${RABBITMQ_VHOST}"
|
||||
|
||||
# RabbitMQ settings for async
|
||||
CELERY_BROKER_URL: "amqp://${RABBITMQ_USER}:${RABBITMQ_PASS}@taiga-async-rabbitmq:5672/${RABBITMQ_VHOST}"
|
||||
|
||||
# Telemetry
|
||||
ENABLE_TELEMETRY: ${ENABLE_TELEMETRY}
|
||||
|
||||
x-volumes: &default-back-volumes
|
||||
- /home/citadel/data/taiga/static:/taiga-back/static
|
||||
- /home/citadel/data/taiga/media:/taiga-back/media
|
||||
|
||||
services:
|
||||
taiga-db:
|
||||
image: postgres:12.3
|
||||
environment:
|
||||
POSTGRES_DB: taiga
|
||||
POSTGRES_USER: ${POSTGRES_USER}
|
||||
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
|
||||
interval: 2s
|
||||
timeout: 15s
|
||||
retries: 5
|
||||
start_period: 3s
|
||||
volumes:
|
||||
- taiga-db-data:/var/lib/postgresql/data
|
||||
networks:
|
||||
- taiga_internal
|
||||
restart: unless-stopped
|
||||
labels:
|
||||
- "com.docker.compose.project=taiga"
|
||||
|
||||
taiga-back:
|
||||
image: taigaio/taiga-back:latest
|
||||
environment: *default-back-environment
|
||||
volumes: *default-back-volumes
|
||||
networks:
|
||||
- taiga_internal
|
||||
- mail
|
||||
depends_on:
|
||||
taiga-db:
|
||||
condition: service_healthy
|
||||
taiga-events-rabbitmq:
|
||||
condition: service_started
|
||||
taiga-async-rabbitmq:
|
||||
condition: service_started
|
||||
restart: unless-stopped
|
||||
labels:
|
||||
- "com.docker.compose.project=taiga"
|
||||
- "backup.enable=true"
|
||||
|
||||
taiga-async:
|
||||
image: taigaio/taiga-back:latest
|
||||
entrypoint: ["/taiga-back/docker/async_entrypoint.sh"]
|
||||
environment: *default-back-environment
|
||||
volumes: *default-back-volumes
|
||||
networks:
|
||||
- taiga_internal
|
||||
depends_on:
|
||||
taiga-db:
|
||||
condition: service_healthy
|
||||
taiga-async-rabbitmq:
|
||||
condition: service_started
|
||||
restart: unless-stopped
|
||||
labels:
|
||||
- "com.docker.compose.project=taiga"
|
||||
|
||||
taiga-async-rabbitmq:
|
||||
image: rabbitmq:3.8-management-alpine
|
||||
environment:
|
||||
RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE}
|
||||
RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER}
|
||||
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS}
|
||||
RABBITMQ_DEFAULT_VHOST: ${RABBITMQ_VHOST}
|
||||
hostname: taiga-async-rabbitmq
|
||||
volumes:
|
||||
- taiga-async-rabbitmq-data:/var/lib/rabbitmq
|
||||
networks:
|
||||
- taiga_internal
|
||||
restart: unless-stopped
|
||||
labels:
|
||||
- "com.docker.compose.project=taiga"
|
||||
|
||||
  taiga-front:
    image: taigaio/taiga-front:latest
    environment:
      TAIGA_URL: "${TAIGA_SCHEME}://${TAIGA_DOMAIN}"
      TAIGA_WEBSOCKETS_URL: "${WEBSOCKETS_SCHEME}://${TAIGA_DOMAIN}"
      TAIGA_SUBPATH: "${SUBPATH}"
    networks:
      - taiga_internal
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=taiga"

  taiga-events:
    image: taigaio/taiga-events:latest
    environment:
      RABBITMQ_USER: ${RABBITMQ_USER}
      RABBITMQ_PASS: ${RABBITMQ_PASS}
      TAIGA_SECRET_KEY: ${SECRET_KEY}
    networks:
      - taiga_internal
    depends_on:
      - taiga-events-rabbitmq
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=taiga"

  taiga-events-rabbitmq:
    image: rabbitmq:3.8-management-alpine
    environment:
      RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE}
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS}
      RABBITMQ_DEFAULT_VHOST: ${RABBITMQ_VHOST}
    hostname: taiga-events-rabbitmq
    volumes:
      - taiga-events-rabbitmq-data:/var/lib/rabbitmq
    networks:
      - taiga_internal
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=taiga"

  taiga-protected:
    image: taigaio/taiga-protected:latest
    environment:
      MAX_AGE: ${ATTACHMENTS_MAX_AGE}
      SECRET_KEY: ${SECRET_KEY}
    networks:
      - taiga_internal
    restart: unless-stopped
    labels:
      - "com.docker.compose.project=taiga"

  taiga-gateway:
    image: nginx:1.19-alpine
    volumes:
      - /home/citadel/data/taiga/static:/taiga/static:ro
      - /home/citadel/data/taiga/media:/taiga/media:ro
      - ./taiga-gateway.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - taiga_internal
      - services
    depends_on:
      - taiga-front
      - taiga-back
      - taiga-events

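# taiga-gateway is the only service attached to the external "services"
# network, so it is the single entry point into the stack. The named volumes
# below persist Postgres and RabbitMQ state across container re-creation;
# "services" and "mail" must already exist (external: true).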
volumes:
  taiga-db-data:
    name: taiga-db-data
  taiga-async-rabbitmq-data:
    name: taiga-async-rabbitmq-data
  taiga-events-rabbitmq-data:
    name: taiga-events-rabbitmq-data

networks:
  services:
    external: true
  mail:
    external: true
  taiga_internal:
    driver: bridge
    name: taiga_internal
87
services/taiga/generate-secrets.sh
Normal file
@ -0,0 +1,87 @@
#!/bin/bash
set -e

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
RED='\033[0;31m'
NC='\033[0m' # No Color

log_info() {
    echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
    echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

log_header() {
    echo -e "${BLUE}$1${NC}"
}
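
# Each character is drawn from a 62-symbol alphabet (~5.95 bits of entropy),
# so a 32-char secret carries ~190 bits and the 50-char SECRET_KEY ~297 bits.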
# Function to generate a secure random string (alphanumeric only)
generate_secret() {
    local length=${1:-32}
    tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w "$length" | head -n 1
}

ENV_FILE=".env"

log_header "Taiga Secrets Generator (Alternative Method)"
echo "=============================================="

# Check if .env file exists
if [[ ! -f "$ENV_FILE" ]]; then
    log_error ".env file not found! Please create it first."
    exit 1
fi

# Create backup
log_info "Creating backup of .env file..."
cp "$ENV_FILE" "${ENV_FILE}.backup.$(date +%Y%m%d_%H%M%S)"

# Generate secrets
log_info "Generating secure secrets..."

SECRET_KEY=$(generate_secret 50)
DB_PASSWORD=$(generate_secret 32)
RABBITMQ_PASSWORD=$(generate_secret 32)
ERLANG_COOKIE=$(generate_secret 20)

# Create new .env file using awk (more robust than sed)
log_info "Updating .env file with new secrets..."

awk -v secret_key="$SECRET_KEY" \
    -v db_password="$DB_PASSWORD" \
    -v rabbitmq_password="$RABBITMQ_PASSWORD" \
    -v erlang_cookie="$ERLANG_COOKIE" '
{
    if ($0 ~ /^SECRET_KEY="CHANGE_ME_TO_SECURE_SECRET_KEY"/) {
        print "SECRET_KEY=\"" secret_key "\""
    } else if ($0 ~ /^POSTGRES_PASSWORD=CHANGE_ME_TO_SECURE_DB_PASSWORD/) {
        print "POSTGRES_PASSWORD=" db_password
    } else if ($0 ~ /^RABBITMQ_PASS=CHANGE_ME_TO_SECURE_RABBITMQ_PASSWORD/) {
        print "RABBITMQ_PASS=" rabbitmq_password
    } else if ($0 ~ /^RABBITMQ_ERLANG_COOKIE=CHANGE_ME_TO_SECURE_ERLANG_COOKIE/) {
        print "RABBITMQ_ERLANG_COOKIE=" erlang_cookie
    } else {
        print $0
    }
}' "$ENV_FILE" > "${ENV_FILE}.tmp" && mv "${ENV_FILE}.tmp" "$ENV_FILE"

log_info "Secrets generated and updated successfully!"
echo ""
log_warn "IMPORTANT: Keep these credentials secure!"
echo "- SECRET_KEY: $SECRET_KEY (50 chars)"
echo "- POSTGRES_PASSWORD: $DB_PASSWORD (32 chars)"
echo "- RABBITMQ_PASS: $RABBITMQ_PASSWORD (32 chars)"
echo "- RABBITMQ_ERLANG_COOKIE: $ERLANG_COOKIE (20 chars)"
echo ""
log_info "Original .env file backed up."
echo ""
log_warn "Next step: Review EMAIL settings in .env if you want to configure SMTP"
75
services/taiga/taiga-gateway.conf
Normal file
@ -0,0 +1,75 @@
server {
    listen 80 default_server;

    client_max_body_size 100M;
    charset utf-8;

    # Frontend
    location / {
        proxy_pass http://taiga-front/;
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
    }

    # API
    location /api/ {
        proxy_pass http://taiga-back:8000/api/;
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
    }

    # Admin
    location /admin/ {
        proxy_pass http://taiga-back:8000/admin/;
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
    }

    # Static
    location /static/ {
        alias /taiga/static/;
    }

    # Media
    location /_protected/ {
        internal;
        alias /taiga/media/;
        add_header Content-Disposition "attachment";
    }
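
    # The location above is "internal": reachable only via an nginx internal
    # redirect. Presumably taiga-protected validates the signed attachment
    # token and answers with an X-Accel-Redirect back into /_protected/.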
    # Unprotected section
    location /media/exports/ {
        alias /taiga/media/exports/;
        add_header Content-Disposition "attachment";
    }

    location /media/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://taiga-protected:8003/;
        proxy_redirect off;
    }

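    # WebSocket endpoint: proxying an Upgrade requires HTTP/1.1 plus the
    # Upgrade/Connection headers; the 7d timeouts keep long-lived event
    # streams from being closed by nginx.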
    # Events
    location /events {
        proxy_pass http://taiga-events:8888/events;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_connect_timeout 7d;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
    }
}
10
services/taiga/taiga-manage.sh
Normal file
@ -0,0 +1,10 @@
#!/usr/bin/env sh

# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Copyright (c) 2021-present Kaleidos INC

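# Runs a one-off taiga-manage container and forwards all arguments to it
# (presumably Django's manage.py), e.g.: ./taiga-manage.sh migrate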
set -x
exec docker compose -f docker-compose.yml -f docker-compose-inits.yml run --rm taiga-manage "$@"