Initial commit

Azwan Ngali 2025-12-17 16:51:09 +00:00
commit 1eba2eb205
84 changed files with 8182 additions and 0 deletions

gitea/.env Normal file

@@ -0,0 +1,11 @@
COMPOSE_PROJECT_NAME=ascidiia-bridoon
APP_NAME=gitea
SUBDOMAIN=ascidiia-bridoon
DOMAIN=merakit.my
URL=ascidiia-bridoon.merakit.my
GITEA_VERSION=1.21
POSTGRES_VERSION=16-alpine
DB_NAME=angali_27658dcb_gitea_ascidiia_bridoon
DB_USER=angali_27658dcb_gitea_ascidiia_bridoon
DB_PASSWORD=diapason-dukkha-munchausen
DISABLE_REGISTRATION=false

@@ -0,0 +1,11 @@
COMPOSE_PROJECT_NAME=gitea-template
APP_NAME=gitea
SUBDOMAIN=gitea-template
DOMAIN=merakit.my
URL=gitea-template.merakit.my
GITEA_VERSION=1.21
POSTGRES_VERSION=16-alpine
DB_NAME=gitea_db
DB_USER=gitea_user
DB_PASSWORD=change-me
DISABLE_REGISTRATION=false

@@ -0,0 +1,11 @@
COMPOSE_PROJECT_NAME=dodman-kuichua
APP_NAME=gitea
SUBDOMAIN=dodman-kuichua
DOMAIN=merakit.my
URL=dodman-kuichua.merakit.my
GITEA_VERSION=1.21
POSTGRES_VERSION=16-alpine
DB_NAME=angali_7675e8e6_gitea_dodman_kuichua
DB_USER=angali_7675e8e6_gitea_dodman_kuichua
DB_PASSWORD=viva-overheats-chusite
DISABLE_REGISTRATION=false

@@ -0,0 +1,11 @@
COMPOSE_PROJECT_NAME=artfully-copious
APP_NAME=gitea
SUBDOMAIN=artfully-copious
DOMAIN=merakit.my
URL=artfully-copious.merakit.my
GITEA_VERSION=1.21
POSTGRES_VERSION=16-alpine
DB_NAME=angali_2f2ec2eb_gitea_artfully_copious
DB_USER=angali_2f2ec2eb_gitea_artfully_copious
DB_PASSWORD=bannerer-tetchy-polyaxone
DISABLE_REGISTRATION=false

gitea/README.md Normal file

@@ -0,0 +1,251 @@
# Gitea Deployment Template
Production-ready Gitea deployment with automated DNS, environment generation, and health checking.
## Features
- **Automated Environment Generation**: Random subdomain and secure password generation
- **DNS Management**: Automatic Cloudflare DNS record creation
- **Health Checking**: Automated deployment verification
- **Rollback Support**: Automatic rollback on deployment failure
- **Webhook Notifications**: Optional webhook notifications for deployment events
- **Deployment Tracking**: Track and manage all deployments
- **Dry-Run Mode**: Preview changes before applying
## Architecture
```
gitea/
├── docker-compose.yml # Docker Compose configuration
├── .env # Environment variables (generated)
├── deploy.py # Main deployment script
├── destroy.py # Deployment destruction script
├── requirements.txt # Python dependencies
├── deployments/ # Deployment configuration tracking
├── logs/ # Deployment logs
│ ├── success/ # Successful deployment logs
│ └── failed/ # Failed deployment logs
└── gitea_deployer/ # Python deployment module
├── config.py # Configuration management
├── orchestrator.py # Deployment orchestration
├── env_generator.py # Environment generation
├── dns_manager.py # DNS management (Cloudflare)
├── docker_manager.py # Docker operations
├── health.py # Health checking
├── webhooks.py # Webhook notifications
├── deployment_logger.py # File logging
└── deployment_config_manager.py # Deployment tracking
```
## Prerequisites
- Docker and Docker Compose
- Python 3.9+
- Cloudflare account with API token
- Traefik reverse proxy running on `proxy` network
- `/usr/share/dict/words` file (install `words` package)
## Installation
1. Install Python dependencies:
```bash
pip3 install -r requirements.txt
```
2. Set environment variables:
```bash
export CLOUDFLARE_API_TOKEN="your-token"
export CLOUDFLARE_ZONE_ID="your-zone-id"
```
3. Ensure Docker proxy network exists:
```bash
docker network create proxy
```
## Usage
### Deploy Gitea
Basic deployment:
```bash
./deploy.py
```
With options:
```bash
# Dry-run mode (preview only)
./deploy.py --dry-run
# Debug mode
./deploy.py --log-level DEBUG
# With webhook notifications
./deploy.py --webhook-url https://hooks.slack.com/your-webhook
# Custom retry count for DNS conflicts
./deploy.py --max-retries 5
```
### List Deployments
```bash
./destroy.py --list
```
### Destroy Deployment
By subdomain:
```bash
./destroy.py --subdomain my-gitea-site
```
By URL:
```bash
./destroy.py --url my-gitea-site.merakit.my
```
With options:
```bash
# Skip confirmation
./destroy.py --subdomain my-gitea-site --yes
# Dry-run mode
./destroy.py --subdomain my-gitea-site --dry-run
# Keep deployment config file
./destroy.py --subdomain my-gitea-site --keep-config
```
## Environment Variables
### Required
- `CLOUDFLARE_API_TOKEN`: Cloudflare API token with DNS edit permissions
- `CLOUDFLARE_ZONE_ID`: Cloudflare zone ID for your domain
### Optional
- `DEPLOYMENT_WEBHOOK_URL`: Webhook URL for deployment notifications
- `DEPLOYMENT_MAX_RETRIES`: Max retries for DNS conflicts (default: 3)
- `DEPLOYMENT_HEALTHCHECK_TIMEOUT`: Health check timeout in seconds (default: 60)
- `DEPLOYMENT_HEALTHCHECK_INTERVAL`: Health check interval in seconds (default: 10)
## Configuration
### Docker Compose Services
- **postgres**: PostgreSQL 16 database
- **gitea**: Gitea 1.21 Git service
### Generated Values
The deployment automatically generates (see the sketch after this list):
- Random subdomain (e.g., `awesome-robot.merakit.my`)
- Database name following the pattern `angali_{random}_{app}_{subdomain}`, truncated to PostgreSQL's 63-character identifier limit
- Database user with the same pattern
- Secure memorable passwords (3-word format)
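As a rough illustration, the snippet below mirrors how `gitea_deployer/env_generator.py` derives these values; the real entry point is `EnvFileGenerator.generate_values()`:
```python
from pathlib import Path

from gitea_deployer.env_generator import PasswordGenerator, WordGenerator

words = WordGenerator(Path("/usr/share/dict/words"))
passwords = PasswordGenerator(words)

subdomain = f"{words.get_random_word()}-{words.get_random_word()}"   # e.g. awesome-robot
random_id = passwords.generate_random_string(8)                      # e.g. 27658dcb
db_name = f"angali_{random_id}_gitea_{subdomain.replace('-', '_')}"[:63]
db_password = passwords.generate_memorable_password(3)               # e.g. viva-overheats-chusite
```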
### Customization
Edit `.env` file to customize:
- `GITEA_VERSION`: Gitea version (default: 1.21)
- `POSTGRES_VERSION`: PostgreSQL version (default: 16-alpine)
- `DISABLE_REGISTRATION`: Disable user registration (default: false)
- `DOMAIN`: Base domain (default: merakit.my)
## Deployment Workflow
1. **Validation**: Check dependencies and configuration
2. **Environment Generation**: Generate random subdomain and credentials
3. **DNS Setup**: Create Cloudflare DNS record
4. **Container Deployment**: Pull images and start services
5. **Health Check**: Verify deployment is accessible
6. **Logging**: Record deployment success/failure
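For orientation, here is a condensed sketch of these steps using the modules from this commit. Validation, retries, health checking, and logging are omitted; the real sequencing lives in `gitea_deployer/orchestrator.py`:
```python
import os
from pathlib import Path

from gitea_deployer.dns_manager import DNSManager
from gitea_deployer.docker_manager import DockerManager
from gitea_deployer.env_generator import (
    EnvFileGenerator,
    PasswordGenerator,
    WordGenerator,
)

words = WordGenerator(Path("/usr/share/dict/words"))
env_gen = EnvFileGenerator(Path(".env"), words, PasswordGenerator(words))
values = env_gen.generate_values()                  # subdomain, URL, DB credentials

dns = DNSManager(os.environ["CLOUDFLARE_API_TOKEN"],
                 os.environ["CLOUDFLARE_ZONE_ID"])
record = dns.add_record(values.url, dns.get_public_ip())

docker = DockerManager(Path("docker-compose.yml"), Path(".env"))
docker.validate_compose_file()
docker.pull_images()
containers = docker.start_services()                # kept for rollback
```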
## Rollback
If a deployment fails at any stage, an automatic rollback occurs (sketched after this list):
1. Stop and remove containers
2. Remove DNS records
3. Restore previous `.env` file
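Continuing the sketch above, the rollback path maps onto the same modules, assuming a backup was taken with `EnvFileGenerator.backup_env_file()` before generation:
```python
import shutil

backup_path = env_gen.backup_env_file()             # taken before new values are written

# ... on failure:
docker.stop_services_and_remove_volumes()           # 1. containers and volumes
dns.remove_record_by_id(record.record_id)           # 2. DNS record
shutil.copy2(backup_path, Path(".env"))             # 3. restore the previous .env
```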
## Troubleshooting
### DNS Conflicts
If the subdomain is already taken, the script automatically retries with a new random subdomain (up to `max_retries` times).
### Health Check Failures
Health checks wait up to 60 seconds by default. Increase the timeout if needed:
```bash
export DEPLOYMENT_HEALTHCHECK_TIMEOUT=120
./deploy.py
```
### Missing Dictionary File
Install the words package:
```bash
# Ubuntu/Debian
sudo apt-get install wamerican
# RHEL/CentOS
sudo yum install words
```
## Logs
- Success logs: `logs/success/success_{url}_{timestamp}.txt`
- Failure logs: `logs/failed/failed_{url}_{timestamp}.txt`
## Deployment Tracking
Deployment configurations are saved in the `deployments/` directory:
- Format: `{subdomain}_{timestamp}.json`
- Contains: containers, volumes, networks, DNS records
- Used by `destroy.py` for cleanup
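These records are plain JSON, so they can also be inspected programmatically. A minimal lookup sketch using `DeploymentConfigManager` from this commit (the subdomain is a placeholder):
```python
from gitea_deployer.deployment_config_manager import DeploymentConfigManager

manager = DeploymentConfigManager()                    # defaults to ./deployments
config_path = manager.find_deployment_by_subdomain("my-gitea-site")
if config_path is not None:
    metadata = manager.load_deployment(config_path)
    print(metadata.url, metadata.dns_record_id, metadata.containers)
```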
## Security Notes
- Passwords are generated using cryptographically secure random generation
- API tokens are never logged or displayed
- SSL verification is enabled by default (use `--no-verify-ssl` only for testing)
- Database credentials are automatically generated per deployment
## Integration
### Webhook Notifications
The script can send webhook notifications for:
- `deployment_started`: When deployment begins
- `dns_added`: When DNS record is created
- `health_check_passed`: When health check succeeds
- `deployment_success`: When deployment completes
- `deployment_failed`: When deployment fails
Example webhook payload:
```json
{
"event_type": "deployment_success",
"timestamp": "2024-01-01T12:00:00Z",
"subdomain": "awesome-robot",
"url": "awesome-robot.merakit.my",
"message": "Deployment successful for awesome-robot.merakit.my",
"metadata": {
"duration": 45.2
}
}
```
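`gitea_deployer/webhooks.py` is not shown above; as a rough sketch of what delivering such a payload involves (the `send_webhook` name and retry-free shape are illustrative, not the module's actual API):
```python
import requests

def send_webhook(url: str, payload: dict, timeout: int = 10) -> bool:
    """Best-effort POST of a deployment event payload (illustrative only)."""
    try:
        response = requests.post(url, json=payload, timeout=timeout)
        response.raise_for_status()
        return True
    except requests.RequestException:
        return False
```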
## License
This deployment template is part of the infrastructure management system.

gitea/deploy.py Executable file

@@ -0,0 +1,202 @@
#!/usr/bin/env python3
"""
Production-ready Gitea deployment script
Combines environment generation and deployment with:
- Configuration validation
- Rollback capability
- Dry-run mode
- Monitoring hooks
"""
import argparse
import logging
import sys
from pathlib import Path
from typing import NoReturn
from rich.console import Console
from rich.logging import RichHandler
from gitea_deployer.config import ConfigurationError, DeploymentConfig
from gitea_deployer.orchestrator import DeploymentError, DeploymentOrchestrator
console = Console()
def setup_logging(log_level: str) -> None:
"""
Setup rich logging with colored output
Args:
log_level: Logging level (DEBUG, INFO, WARNING, ERROR)
"""
logging.basicConfig(
level=log_level.upper(),
format="%(message)s",
datefmt="[%X]",
handlers=[RichHandler(console=console, rich_tracebacks=True, show_path=False)]
)
# Reduce noise from urllib3/requests
logging.getLogger("urllib3").setLevel(logging.WARNING)
logging.getLogger("requests").setLevel(logging.WARNING)
def parse_args() -> argparse.Namespace:
"""
Parse CLI arguments
Returns:
argparse.Namespace with parsed arguments
"""
parser = argparse.ArgumentParser(
description="Deploy Gitea with automatic environment generation",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Normal deployment
./deploy.py
# Dry-run mode (preview only)
./deploy.py --dry-run
# With webhook notifications
./deploy.py --webhook-url https://hooks.slack.com/xxx
# Debug mode
./deploy.py --log-level DEBUG
# Custom retry count
./deploy.py --max-retries 5
Environment Variables:
CLOUDFLARE_API_TOKEN Cloudflare API token (required)
CLOUDFLARE_ZONE_ID Cloudflare zone ID (required)
DEPLOYMENT_WEBHOOK_URL Webhook URL for notifications (optional)
DEPLOYMENT_MAX_RETRIES Max retries for DNS conflicts (default: 3)
For more information, see the documentation at:
/infra/templates/gitea/README.md
"""
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Preview deployment without making changes"
)
parser.add_argument(
"--env-file",
type=Path,
default=Path(".env"),
help="Path to .env file (default: .env)"
)
parser.add_argument(
"--compose-file",
type=Path,
default=Path("docker-compose.yml"),
help="Path to docker-compose.yml (default: docker-compose.yml)"
)
parser.add_argument(
"--max-retries",
type=int,
default=3,
help="Max retries for DNS conflicts (default: 3)"
)
parser.add_argument(
"--webhook-url",
type=str,
help="Webhook URL for deployment notifications"
)
parser.add_argument(
"--log-level",
choices=["DEBUG", "INFO", "WARNING", "ERROR"],
default="INFO",
help="Logging level (default: INFO)"
)
parser.add_argument(
"--no-verify-ssl",
action="store_true",
help="Skip SSL verification for health checks (not recommended for production)"
)
return parser.parse_args()
def print_banner() -> None:
"""Print deployment banner"""
console.print("\n[bold cyan]╔══════════════════════════════════════════════╗[/bold cyan]")
console.print("[bold cyan]║[/bold cyan] [bold white]Gitea Production Deployment[/bold white] [bold cyan]║[/bold cyan]")
console.print("[bold cyan]╚══════════════════════════════════════════════╝[/bold cyan]\n")
def main() -> NoReturn:
"""
Main entry point
Exit codes:
0: Success
1: Deployment failure
130: User interrupt (Ctrl+C)
"""
args = parse_args()
setup_logging(args.log_level)
logger = logging.getLogger(__name__)
print_banner()
try:
# Load configuration
logger.debug("Loading configuration...")
config = DeploymentConfig.from_env_and_args(args)
config.validate()
logger.debug("Configuration loaded successfully")
if config.dry_run:
console.print("[bold yellow]━━━ DRY-RUN MODE: No changes will be made ━━━[/bold yellow]\n")
# Create orchestrator and deploy
orchestrator = DeploymentOrchestrator(config)
orchestrator.deploy()
console.print("\n[bold green]╔══════════════════════════════════════════════╗[/bold green]")
console.print("[bold green]║[/bold green] [bold white]✓ Deployment Successful![/bold white] [bold green]║[/bold green]")
console.print("[bold green]╚══════════════════════════════════════════════╝[/bold green]\n")
sys.exit(0)
except ConfigurationError as e:
logger.error(f"Configuration error: {e}")
console.print(f"\n[bold red]✗ Configuration error: {e}[/bold red]\n")
console.print("[yellow]Please check your environment variables and configuration.[/yellow]")
console.print("[yellow]Required: CLOUDFLARE_API_TOKEN, CLOUDFLARE_ZONE_ID[/yellow]\n")
sys.exit(1)
except DeploymentError as e:
logger.error(f"Deployment failed: {e}")
console.print(f"\n[bold red]✗ Deployment failed: {e}[/bold red]\n")
sys.exit(1)
except KeyboardInterrupt:
logger.warning("Deployment interrupted by user")
console.print("\n[bold yellow]✗ Deployment interrupted by user[/bold yellow]\n")
sys.exit(130)
except Exception as e:
logger.exception("Unexpected error")
console.print(f"\n[bold red]✗ Unexpected error: {e}[/bold red]\n")
console.print("[yellow]Please check the logs above for more details.[/yellow]\n")
sys.exit(1)
if __name__ == "__main__":
main()

@@ -0,0 +1,23 @@
{
"subdomain": "ascidiia-bridoon",
"url": "ascidiia-bridoon.merakit.my",
"domain": "merakit.my",
"compose_project_name": "ascidiia-bridoon",
"db_name": "angali_27658dcb_gitea_ascidiia_bridoon",
"db_user": "angali_27658dcb_gitea_ascidiia_bridoon",
"deployment_timestamp": "2025-12-17T16:01:55.543308",
"dns_record_id": "0e5fef38bac853f3e3c65b6bdbc62f2e",
"dns_ip": "64.120.92.151",
"containers": [
"ascidiia-bridoon_db",
"ascidiia-bridoon_gitea"
],
"volumes": [
"ascidiia-bridoon_db_data",
"ascidiia-bridoon_gitea_data"
],
"networks": [
"ascidiia-bridoon_internal"
],
"env_file_path": "/infra/templates/gitea/.env"
}

gitea/destroy.py Executable file

@@ -0,0 +1,529 @@
#!/usr/bin/env python3
"""
Gitea Deployment Destroyer
Destroys Gitea deployments based on saved deployment configurations
"""
import argparse
import logging
import os
import subprocess
import sys
from pathlib import Path
from typing import List, NoReturn, Optional
from rich.console import Console
from rich.logging import RichHandler
from rich.prompt import Confirm
from rich.table import Table
from gitea_deployer.deployment_config_manager import (
DeploymentConfigManager,
DeploymentMetadata
)
from gitea_deployer.dns_manager import DNSError, DNSManager
console = Console()
def setup_logging(log_level: str) -> None:
"""
Setup rich logging with colored output
Args:
log_level: Logging level (DEBUG, INFO, WARNING, ERROR)
"""
logging.basicConfig(
level=log_level.upper(),
format="%(message)s",
datefmt="[%X]",
handlers=[RichHandler(console=console, rich_tracebacks=True, show_path=False)]
)
def parse_args() -> argparse.Namespace:
"""
Parse CLI arguments
Returns:
argparse.Namespace with parsed arguments
"""
parser = argparse.ArgumentParser(
description="Destroy Gitea deployments",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# List all deployments
./destroy.py --list
# Destroy by subdomain
./destroy.py --subdomain my-site
# Destroy by URL
./destroy.py --url my-site.example.com
# Destroy by config file
./destroy.py --config deployments/my-site_20231215_120000.json
# Destroy without confirmation
./destroy.py --subdomain my-site --yes
# Dry-run mode (preview only)
./destroy.py --subdomain my-site --dry-run
Environment Variables:
CLOUDFLARE_API_TOKEN Cloudflare API token (required)
CLOUDFLARE_ZONE_ID Cloudflare zone ID (required)
"""
)
# Action group - mutually exclusive
action_group = parser.add_mutually_exclusive_group(required=True)
action_group.add_argument(
"--list",
action="store_true",
help="List all deployments"
)
action_group.add_argument(
"--subdomain",
type=str,
help="Subdomain to destroy"
)
action_group.add_argument(
"--url",
type=str,
help="Full URL to destroy"
)
action_group.add_argument(
"--config",
type=Path,
help="Path to deployment config file"
)
# Options
parser.add_argument(
"--yes", "-y",
action="store_true",
help="Skip confirmation prompts"
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Preview destruction without making changes"
)
parser.add_argument(
"--keep-config",
action="store_true",
help="Keep deployment config file after destruction"
)
parser.add_argument(
"--log-level",
choices=["DEBUG", "INFO", "WARNING", "ERROR"],
default="INFO",
help="Logging level (default: INFO)"
)
return parser.parse_args()
def print_banner() -> None:
"""Print destruction banner"""
console.print("\n[bold red]╔══════════════════════════════════════════════╗[/bold red]")
console.print("[bold red]║[/bold red] [bold white]Gitea Deployment Destroyer[/bold white] [bold red]║[/bold red]")
console.print("[bold red]╚══════════════════════════════════════════════╝[/bold red]\n")
def list_deployments(config_manager: DeploymentConfigManager) -> None:
"""
List all deployments
Args:
config_manager: DeploymentConfigManager instance
"""
deployments = config_manager.list_deployments()
if not deployments:
console.print("[yellow]No deployments found[/yellow]")
return
table = Table(title="Active Deployments")
table.add_column("Subdomain", style="cyan")
table.add_column("URL", style="green")
table.add_column("Deployed", style="yellow")
table.add_column("Config File", style="blue")
for config_file in deployments:
try:
metadata = config_manager.load_deployment(config_file)
table.add_row(
metadata.subdomain,
metadata.url,
metadata.deployment_timestamp,
config_file.name
)
except Exception as e:
console.print(f"[red]Error loading {config_file}: {e}[/red]")
console.print(table)
console.print(f"\n[bold]Total deployments: {len(deployments)}[/bold]\n")
def find_config(
args: argparse.Namespace,
config_manager: DeploymentConfigManager
) -> Optional[Path]:
"""
Find deployment config based on arguments
Args:
args: CLI arguments
config_manager: DeploymentConfigManager instance
Returns:
Path to config file or None
"""
if args.config:
return args.config if args.config.exists() else None
if args.subdomain:
return config_manager.find_deployment_by_subdomain(args.subdomain)
if args.url:
return config_manager.find_deployment_by_url(args.url)
return None
def run_command(cmd: List[str], dry_run: bool = False) -> bool:
"""
Run a shell command
Args:
cmd: Command and arguments
dry_run: If True, only print command
Returns:
True if successful, False otherwise
"""
cmd_str = " ".join(cmd)
if dry_run:
console.print(f"[dim]Would run: {cmd_str}[/dim]")
return True
try:
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=30
)
if result.returncode != 0:
logging.warning(f"Command failed: {cmd_str}")
logging.debug(f"Error: {result.stderr}")
return False
return True
except subprocess.TimeoutExpired:
logging.error(f"Command timed out: {cmd_str}")
return False
except Exception as e:
logging.error(f"Failed to run command: {e}")
return False
def destroy_containers(metadata: DeploymentMetadata, dry_run: bool = False) -> bool:
"""
Stop and remove containers
Args:
metadata: Deployment metadata
dry_run: If True, only preview
Returns:
True if successful
"""
console.print("\n[bold yellow]═══ Destroying Containers ═══[/bold yellow]")
success = True
if metadata.containers:
for container in metadata.containers:
console.print(f"Stopping container: [cyan]{container}[/cyan]")
if not run_command(["docker", "stop", container], dry_run):
success = False
console.print(f"Removing container: [cyan]{container}[/cyan]")
if not run_command(["docker", "rm", "-f", container], dry_run):
success = False
else:
# Try to stop by project name
console.print(f"Stopping docker-compose project: [cyan]{metadata.compose_project_name}[/cyan]")
if not run_command(
["docker", "compose", "-p", metadata.compose_project_name, "down"],
dry_run
):
success = False
return success
def destroy_volumes(metadata: DeploymentMetadata, dry_run: bool = False) -> bool:
"""
Remove Docker volumes
Args:
metadata: Deployment metadata
dry_run: If True, only preview
Returns:
True if successful
"""
console.print("\n[bold yellow]═══ Destroying Volumes ═══[/bold yellow]")
success = True
if metadata.volumes:
for volume in metadata.volumes:
console.print(f"Removing volume: [cyan]{volume}[/cyan]")
if not run_command(["docker", "volume", "rm", "-f", volume], dry_run):
success = False
else:
# Try with project name
volumes = [
f"{metadata.compose_project_name}_db_data",
f"{metadata.compose_project_name}_gitea_data"
]
for volume in volumes:
console.print(f"Removing volume: [cyan]{volume}[/cyan]")
run_command(["docker", "volume", "rm", "-f", volume], dry_run)
return success
def destroy_networks(metadata: DeploymentMetadata, dry_run: bool = False) -> bool:
"""
Remove Docker networks (except external ones)
Args:
metadata: Deployment metadata
dry_run: If True, only preview
Returns:
True if successful
"""
console.print("\n[bold yellow]═══ Destroying Networks ═══[/bold yellow]")
success = True
if metadata.networks:
for network in metadata.networks:
# Skip external networks
if network == "proxy":
console.print(f"Skipping external network: [cyan]{network}[/cyan]")
continue
console.print(f"Removing network: [cyan]{network}[/cyan]")
if not run_command(["docker", "network", "rm", network], dry_run):
# Networks might not exist or be in use, don't fail
pass
return success
def destroy_dns(
metadata: DeploymentMetadata,
    dns_manager: Optional[DNSManager],
dry_run: bool = False
) -> bool:
"""
Remove DNS record
Args:
metadata: Deployment metadata
dns_manager: DNSManager instance
dry_run: If True, only preview
Returns:
True if successful
"""
    console.print("\n[bold yellow]═══ Destroying DNS Record ═══[/bold yellow]")
    if dns_manager is None:
        console.print("[yellow]No DNS manager available, skipping DNS cleanup[/yellow]")
        return True
    if not metadata.url:
console.print("[yellow]No URL found in metadata, skipping DNS cleanup[/yellow]")
return True
console.print(f"Looking up DNS record: [cyan]{metadata.url}[/cyan]")
if dry_run:
console.print("[dim]Would remove DNS record[/dim]")
return True
try:
# Look up and remove by hostname to get the real record ID from Cloudflare
# This ensures we don't rely on potentially stale/fake IDs from the config
dns_manager.remove_record(metadata.url, dry_run=False)
console.print("[green]✓ DNS record removed[/green]")
return True
except DNSError as e:
console.print(f"[red]✗ Failed to remove DNS record: {e}[/red]")
return False
def destroy_deployment(
metadata: DeploymentMetadata,
config_path: Path,
args: argparse.Namespace,
dns_manager: DNSManager
) -> bool:
"""
Destroy a deployment
Args:
metadata: Deployment metadata
config_path: Path to config file
args: CLI arguments
dns_manager: DNSManager instance
Returns:
True if successful
"""
# Show deployment info
console.print("\n[bold]Deployment Information:[/bold]")
console.print(f" Subdomain: [cyan]{metadata.subdomain}[/cyan]")
console.print(f" URL: [cyan]{metadata.url}[/cyan]")
console.print(f" Project: [cyan]{metadata.compose_project_name}[/cyan]")
console.print(f" Deployed: [cyan]{metadata.deployment_timestamp}[/cyan]")
console.print(f" Containers: [cyan]{len(metadata.containers or [])}[/cyan]")
console.print(f" DNS Record ID: [cyan]{metadata.dns_record_id or 'N/A'}[/cyan]")
if args.dry_run:
console.print("\n[bold yellow]━━━ DRY-RUN MODE: No changes will be made ━━━[/bold yellow]")
# Confirm destruction
if not args.yes and not args.dry_run:
console.print()
if not Confirm.ask(
f"[bold red]Are you sure you want to destroy {metadata.url}?[/bold red]",
default=False
):
console.print("\n[yellow]Destruction cancelled[/yellow]\n")
return False
# Execute destruction
success = True
# 1. Destroy containers
if not destroy_containers(metadata, args.dry_run):
success = False
# 2. Destroy volumes
if not destroy_volumes(metadata, args.dry_run):
success = False
# 3. Destroy networks
if not destroy_networks(metadata, args.dry_run):
success = False
# 4. Destroy DNS
if not destroy_dns(metadata, dns_manager, args.dry_run):
success = False
# 5. Delete config file
if not args.keep_config and not args.dry_run:
console.print("\n[bold yellow]═══ Deleting Config File ═══[/bold yellow]")
console.print(f"Deleting: [cyan]{config_path}[/cyan]")
try:
config_path.unlink()
console.print("[green]✓ Config file deleted[/green]")
except Exception as e:
console.print(f"[red]✗ Failed to delete config: {e}[/red]")
success = False
return success
def main() -> NoReturn:
"""
Main entry point
Exit codes:
0: Success
1: Failure
2: Not found
"""
args = parse_args()
setup_logging(args.log_level)
print_banner()
config_manager = DeploymentConfigManager()
# Handle list command
if args.list:
list_deployments(config_manager)
sys.exit(0)
# Find deployment config
config_path = find_config(args, config_manager)
if not config_path:
console.print("[red]✗ Deployment not found[/red]")
console.print("\nUse --list to see all deployments\n")
sys.exit(2)
# Load deployment metadata
try:
metadata = config_manager.load_deployment(config_path)
except Exception as e:
console.print(f"[red]✗ Failed to load deployment config: {e}[/red]\n")
sys.exit(1)
# Initialize DNS manager
cloudflare_token = os.getenv("CLOUDFLARE_API_TOKEN")
cloudflare_zone = os.getenv("CLOUDFLARE_ZONE_ID")
if not cloudflare_token or not cloudflare_zone:
console.print("[yellow]⚠ Cloudflare credentials not found[/yellow]")
console.print("[yellow] DNS record will not be removed[/yellow]")
console.print("[yellow] Set CLOUDFLARE_API_TOKEN and CLOUDFLARE_ZONE_ID to enable DNS cleanup[/yellow]\n")
dns_manager = None
else:
dns_manager = DNSManager(cloudflare_token, cloudflare_zone)
# Destroy deployment
try:
success = destroy_deployment(metadata, config_path, args, dns_manager)
if success or args.dry_run:
console.print("\n[bold green]╔══════════════════════════════════════════════╗[/bold green]")
if args.dry_run:
console.print("[bold green]║[/bold green] [bold white]✓ Dry-Run Complete![/bold white] [bold green]║[/bold green]")
else:
console.print("[bold green]║[/bold green] [bold white]✓ Destruction Successful![/bold white] [bold green]║[/bold green]")
console.print("[bold green]╚══════════════════════════════════════════════╝[/bold green]\n")
sys.exit(0)
else:
console.print("\n[bold yellow]╔══════════════════════════════════════════════╗[/bold yellow]")
console.print("[bold yellow]║[/bold yellow] [bold white]⚠ Destruction Partially Failed[/bold white] [bold yellow]║[/bold yellow]")
console.print("[bold yellow]╚══════════════════════════════════════════════╝[/bold yellow]\n")
console.print("[yellow]Some resources may not have been cleaned up.[/yellow]")
console.print("[yellow]Check the logs above for details.[/yellow]\n")
sys.exit(1)
except KeyboardInterrupt:
console.print("\n[bold yellow]✗ Destruction interrupted by user[/bold yellow]\n")
sys.exit(130)
except Exception as e:
console.print(f"\n[bold red]✗ Unexpected error: {e}[/bold red]\n")
logging.exception("Unexpected error")
sys.exit(1)
if __name__ == "__main__":
main()

gitea/docker-compose.yml Normal file

@@ -0,0 +1,57 @@
services:
postgres:
image: postgres:${POSTGRES_VERSION}
container_name: ${SUBDOMAIN}_db
restart: unless-stopped
environment:
POSTGRES_DB: ${DB_NAME}
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASSWORD}
volumes:
- db_data:/var/lib/postgresql/data
networks:
- internal
gitea:
image: gitea/gitea:${GITEA_VERSION}
container_name: ${SUBDOMAIN}_gitea
restart: unless-stopped
depends_on:
- postgres
environment:
USER_UID: 1000
USER_GID: 1000
GITEA__database__DB_TYPE: postgres
GITEA__database__HOST: postgres:5432
GITEA__database__NAME: ${DB_NAME}
GITEA__database__USER: ${DB_USER}
GITEA__database__PASSWD: ${DB_PASSWORD}
GITEA__server__DOMAIN: ${URL}
GITEA__server__SSH_DOMAIN: ${URL}
GITEA__server__ROOT_URL: https://${URL}/
      GITEA__security__INSTALL_LOCK: "true"
GITEA__service__DISABLE_REGISTRATION: ${DISABLE_REGISTRATION}
volumes:
- gitea_data:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.${SUBDOMAIN}.rule=Host(`${URL}`)"
- "traefik.http.routers.${SUBDOMAIN}.entrypoints=https"
- "traefik.http.routers.${SUBDOMAIN}.tls=true"
- "traefik.http.routers.${SUBDOMAIN}.tls.certresolver=letsencrypt"
- "traefik.http.services.${SUBDOMAIN}.loadbalancer.server.port=3000"
networks:
- proxy
- internal
volumes:
db_data:
gitea_data:
networks:
proxy:
external: true
internal:
internal: true

gitea/gitea_deployer/__init__.py Normal file

@@ -0,0 +1,8 @@
"""
Gitea Deployment Automation
Production-ready deployment system for Gitea with automated DNS,
environment generation, and health checking.
"""
__version__ = "1.0.0"

Binary file not shown.

Binary file not shown.

gitea/gitea_deployer/config.py Normal file

@@ -0,0 +1,187 @@
"""
Configuration module for deployment settings
Centralized configuration with validation from environment variables and CLI arguments
"""
import logging
import os
from dataclasses import dataclass, field
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
class ConfigurationError(Exception):
"""Raised when configuration is invalid"""
pass
@dataclass
class DeploymentConfig:
"""Main deployment configuration loaded from environment and CLI args"""
# File paths (required - no defaults)
env_file: Path
docker_compose_file: Path
# Cloudflare credentials (required - no defaults)
cloudflare_api_token: str = field(repr=False) # Hide in logs
cloudflare_zone_id: str
# File paths (with defaults)
dict_file: Path = Path("/usr/share/dict/words")
# Domain settings
base_domain: str = "merakit.my"
app_name: Optional[str] = None
# Deployment options
dry_run: bool = False
max_retries: int = 3
healthcheck_timeout: int = 60 # seconds
healthcheck_interval: int = 10 # seconds
    verify_ssl: bool = True  # disabled only via --no-verify-ssl
# Webhook settings (optional)
webhook_url: Optional[str] = None
webhook_timeout: int = 10 # seconds
webhook_retries: int = 3
# Logging
log_level: str = "INFO"
@classmethod
def from_env_and_args(cls, args) -> "DeploymentConfig":
"""
Factory method to create config from environment and CLI args
Args:
args: argparse.Namespace with CLI arguments
Returns:
DeploymentConfig instance
Raises:
ConfigurationError: If required configuration is missing
"""
logger.debug("Loading configuration from environment and arguments")
# Get Cloudflare credentials from environment
cloudflare_api_token = os.getenv('CLOUDFLARE_API_TOKEN')
cloudflare_zone_id = os.getenv('CLOUDFLARE_ZONE_ID')
if not cloudflare_api_token:
raise ConfigurationError(
"CLOUDFLARE_API_TOKEN environment variable is required"
)
if not cloudflare_zone_id:
raise ConfigurationError(
"CLOUDFLARE_ZONE_ID environment variable is required"
)
# Get optional webhook URL from environment or args
webhook_url = (
getattr(args, 'webhook_url', None)
or os.getenv('DEPLOYMENT_WEBHOOK_URL')
)
# Get optional settings from environment with defaults
max_retries = int(os.getenv('DEPLOYMENT_MAX_RETRIES', args.max_retries))
healthcheck_timeout = int(
os.getenv('DEPLOYMENT_HEALTHCHECK_TIMEOUT', '60')
)
healthcheck_interval = int(
os.getenv('DEPLOYMENT_HEALTHCHECK_INTERVAL', '10')
)
config = cls(
env_file=args.env_file,
docker_compose_file=args.compose_file,
dict_file=Path("/usr/share/dict/words"),
cloudflare_api_token=cloudflare_api_token,
cloudflare_zone_id=cloudflare_zone_id,
base_domain="merakit.my",
app_name=None,
dry_run=args.dry_run,
max_retries=max_retries,
healthcheck_timeout=healthcheck_timeout,
healthcheck_interval=healthcheck_interval,
verify_ssl=not args.no_verify_ssl,
webhook_url=webhook_url,
webhook_timeout=10,
webhook_retries=3,
log_level=args.log_level
)
logger.debug(f"Configuration loaded: {config}")
return config
def validate(self) -> None:
"""
Validate configuration completeness and correctness
Raises:
ConfigurationError: If configuration is invalid
"""
logger.debug("Validating configuration")
# Validate file paths exist
if not self.env_file.exists():
raise ConfigurationError(f"Env file not found: {self.env_file}")
if not self.docker_compose_file.exists():
raise ConfigurationError(
f"Docker compose file not found: {self.docker_compose_file}"
)
if not self.dict_file.exists():
raise ConfigurationError(
f"Dictionary file not found: {self.dict_file}. "
"Install 'words' package or ensure /usr/share/dict/words exists."
)
# Validate numeric ranges
if self.max_retries < 1:
raise ConfigurationError(
f"max_retries must be >= 1, got: {self.max_retries}"
)
if self.healthcheck_timeout < 1:
raise ConfigurationError(
f"healthcheck_timeout must be >= 1, got: {self.healthcheck_timeout}"
)
if self.healthcheck_interval < 1:
raise ConfigurationError(
f"healthcheck_interval must be >= 1, got: {self.healthcheck_interval}"
)
if self.healthcheck_interval >= self.healthcheck_timeout:
raise ConfigurationError(
f"healthcheck_interval ({self.healthcheck_interval}) must be < "
f"healthcheck_timeout ({self.healthcheck_timeout})"
)
# Validate log level
valid_log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
if self.log_level.upper() not in valid_log_levels:
raise ConfigurationError(
f"Invalid log_level: {self.log_level}. "
f"Must be one of: {', '.join(valid_log_levels)}"
)
logger.debug("Configuration validation successful")
def __repr__(self) -> str:
"""String representation with masked sensitive values"""
return (
f"DeploymentConfig("
f"env_file={self.env_file}, "
f"dry_run={self.dry_run}, "
f"max_retries={self.max_retries}, "
f"cloudflare_api_token=*****, "
f"webhook_url={self.webhook_url})"
)
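
# Example (sketch): building a config without the CLI by faking the argparse
# namespace; the attribute names below are exactly what from_env_and_args reads.
#
#     from types import SimpleNamespace
#     args = SimpleNamespace(
#         env_file=Path(".env"), compose_file=Path("docker-compose.yml"),
#         dry_run=True, max_retries=3, webhook_url=None,
#         no_verify_ssl=False, log_level="INFO",
#     )
#     config = DeploymentConfig.from_env_and_args(args)
#     config.validate()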

gitea/gitea_deployer/deployment_config_manager.py Normal file

@@ -0,0 +1,153 @@
"""
Deployment Configuration Manager
Manages saving and loading deployment configurations for tracking and cleanup
"""
import json
import logging
from dataclasses import asdict, dataclass
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional
logger = logging.getLogger(__name__)
@dataclass
class DeploymentMetadata:
"""Metadata for a single deployment"""
subdomain: str
url: str
domain: str
compose_project_name: str
db_name: str
db_user: str
deployment_timestamp: str
dns_record_id: Optional[str] = None
dns_ip: Optional[str] = None
containers: Optional[List[str]] = None
volumes: Optional[List[str]] = None
networks: Optional[List[str]] = None
env_file_path: Optional[str] = None
class DeploymentConfigManager:
"""Manages deployment configuration persistence"""
def __init__(self, config_dir: Path = Path("deployments")):
"""
Initialize deployment config manager
Args:
config_dir: Directory to store deployment configs
"""
self.config_dir = config_dir
self.config_dir.mkdir(exist_ok=True)
self._logger = logging.getLogger(f"{__name__}.DeploymentConfigManager")
def save_deployment(self, metadata: DeploymentMetadata) -> Path:
"""
Save deployment configuration to disk
Args:
metadata: DeploymentMetadata instance
Returns:
Path to saved config file
"""
# Create filename based on subdomain and timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"{metadata.subdomain}_{timestamp}.json"
config_path = self.config_dir / filename
# Convert to dict and save as JSON
config_data = asdict(metadata)
with open(config_path, 'w') as f:
json.dump(config_data, f, indent=2)
self._logger.info(f"Saved deployment config: {config_path}")
return config_path
def load_deployment(self, config_file: Path) -> DeploymentMetadata:
"""
Load deployment configuration from disk
Args:
config_file: Path to config file
Returns:
DeploymentMetadata instance
Raises:
FileNotFoundError: If config file doesn't exist
ValueError: If config file is invalid
"""
if not config_file.exists():
raise FileNotFoundError(f"Config file not found: {config_file}")
with open(config_file, 'r') as f:
config_data = json.load(f)
return DeploymentMetadata(**config_data)
def list_deployments(self) -> List[Path]:
"""
List all deployment config files
Returns:
List of config file paths sorted by modification time (newest first)
"""
config_files = list(self.config_dir.glob("*.json"))
return sorted(config_files, key=lambda p: p.stat().st_mtime, reverse=True)
def find_deployment_by_subdomain(self, subdomain: str) -> Optional[Path]:
"""
Find the most recent deployment config for a subdomain
Args:
subdomain: Subdomain to search for
Returns:
Path to config file or None if not found
"""
matching_files = list(self.config_dir.glob(f"{subdomain}_*.json"))
if not matching_files:
return None
# Return most recent
return max(matching_files, key=lambda p: p.stat().st_mtime)
def find_deployment_by_url(self, url: str) -> Optional[Path]:
"""
Find deployment config by URL
Args:
url: Full URL to search for
Returns:
Path to config file or None if not found
"""
for config_file in self.list_deployments():
try:
metadata = self.load_deployment(config_file)
if metadata.url == url:
return config_file
except (ValueError, json.JSONDecodeError) as e:
self._logger.warning(f"Failed to load config {config_file}: {e}")
continue
return None
def delete_deployment_config(self, config_file: Path) -> None:
"""
Delete deployment config file
Args:
config_file: Path to config file
"""
if config_file.exists():
config_file.unlink()
self._logger.info(f"Deleted deployment config: {config_file}")

gitea/gitea_deployer/deployment_logger.py Normal file

@@ -0,0 +1,218 @@
"""
Deployment logging module
Handles writing deployment logs to success/failed directories
"""
import logging
from datetime import datetime
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
class DeploymentFileLogger:
"""Logs deployment results to files"""
def __init__(self, logs_dir: Path = Path("logs")):
"""
Initialize deployment file logger
Args:
logs_dir: Base directory for logs (default: logs/)
"""
self._logs_dir = logs_dir
self._success_dir = logs_dir / "success"
self._failed_dir = logs_dir / "failed"
self._logger = logging.getLogger(f"{__name__}.DeploymentFileLogger")
# Ensure directories exist
self._ensure_directories()
def _ensure_directories(self) -> None:
"""Create log directories if they don't exist"""
for directory in [self._success_dir, self._failed_dir]:
directory.mkdir(parents=True, exist_ok=True)
self._logger.debug(f"Ensured directory exists: {directory}")
def _sanitize_url(self, url: str) -> str:
"""
Sanitize URL for use in filename
Args:
url: URL to sanitize
Returns:
Sanitized URL safe for filename
"""
# Remove protocol if present
url = url.replace("https://", "").replace("http://", "")
# Replace invalid filename characters
return url.replace("/", "_").replace(":", "_")
def _generate_filename(self, status: str, url: str, timestamp: datetime) -> str:
"""
Generate log filename
Format: success_url_date.txt or failed_url_date.txt
Args:
status: 'success' or 'failed'
url: Deployment URL
timestamp: Deployment timestamp
Returns:
Filename string
"""
sanitized_url = self._sanitize_url(url)
date_str = timestamp.strftime("%Y%m%d_%H%M%S")
return f"{status}_{sanitized_url}_{date_str}.txt"
def log_success(
self,
url: str,
subdomain: str,
duration: float,
timestamp: Optional[datetime] = None
) -> Path:
"""
Log successful deployment
Args:
url: Deployment URL
subdomain: Subdomain used
duration: Deployment duration in seconds
timestamp: Deployment timestamp (default: now)
Returns:
Path to created log file
"""
if timestamp is None:
timestamp = datetime.now()
filename = self._generate_filename("success", url, timestamp)
log_file = self._success_dir / filename
log_content = self._format_success_log(
url, subdomain, duration, timestamp
)
log_file.write_text(log_content)
self._logger.info(f"✓ Success log written: {log_file}")
return log_file
def log_failure(
self,
url: str,
subdomain: str,
error: str,
timestamp: Optional[datetime] = None
) -> Path:
"""
Log failed deployment
Args:
url: Deployment URL (may be empty if failed early)
subdomain: Subdomain used (may be empty if failed early)
error: Error message
timestamp: Deployment timestamp (default: now)
Returns:
Path to created log file
"""
if timestamp is None:
timestamp = datetime.now()
# Handle case where URL is empty (failed before URL generation)
log_url = url if url else "unknown"
filename = self._generate_filename("failed", log_url, timestamp)
log_file = self._failed_dir / filename
log_content = self._format_failure_log(
url, subdomain, error, timestamp
)
log_file.write_text(log_content)
self._logger.info(f"✓ Failure log written: {log_file}")
return log_file
def _format_success_log(
self,
url: str,
subdomain: str,
duration: float,
timestamp: datetime
) -> str:
"""
Format success log content
Args:
url: Deployment URL
subdomain: Subdomain used
duration: Deployment duration in seconds
timestamp: Deployment timestamp
Returns:
Formatted log content
"""
return f"""╔══════════════════════════════════════════════╗
DEPLOYMENT SUCCESS LOG
Timestamp: {timestamp.strftime("%Y-%m-%d %H:%M:%S")}
Status: SUCCESS
URL: https://{url}
Subdomain: {subdomain}
Duration: {duration:.2f} seconds
Deployment completed successfully.
All services are running and health checks passed.
"""
def _format_failure_log(
self,
url: str,
subdomain: str,
error: str,
timestamp: datetime
) -> str:
"""
Format failure log content
Args:
url: Deployment URL (may be empty)
subdomain: Subdomain used (may be empty)
error: Error message
timestamp: Deployment timestamp
Returns:
Formatted log content
"""
url_display = f"https://{url}" if url else "N/A (failed before URL generation)"
subdomain_display = subdomain if subdomain else "N/A"
return f"""╔══════════════════════════════════════════════╗
DEPLOYMENT FAILURE LOG
Timestamp: {timestamp.strftime("%Y-%m-%d %H:%M:%S")}
Status: FAILED
URL: {url_display}
Subdomain: {subdomain_display}
ERROR:
{error}
Deployment failed. See error details above.
All changes have been rolled back.
"""

gitea/gitea_deployer/dns_manager.py Normal file

@@ -0,0 +1,286 @@
"""
DNS management module with Cloudflare API integration
Direct Python API calls replacing cloudflare-add.sh and cloudflare-remove.sh
"""
import logging
from dataclasses import dataclass
from typing import Dict, Optional
import requests
logger = logging.getLogger(__name__)
class DNSError(Exception):
"""Raised when DNS operations fail"""
pass
@dataclass
class DNSRecord:
"""Represents a DNS record"""
record_id: str
hostname: str
ip: str
record_type: str
class DNSManager:
"""Python wrapper for Cloudflare DNS operations"""
def __init__(self, api_token: str, zone_id: str):
"""
Initialize DNS manager
Args:
api_token: Cloudflare API token
zone_id: Cloudflare zone ID
"""
self._api_token = api_token
self._zone_id = zone_id
self._base_url = f"https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records"
self._headers = {
"Authorization": f"Bearer {api_token}",
"Content-Type": "application/json"
}
self._logger = logging.getLogger(f"{__name__}.DNSManager")
def check_record_exists(self, hostname: str) -> bool:
"""
Check if DNS record exists using Cloudflare API
Args:
hostname: Fully qualified domain name
Returns:
True if record exists, False otherwise
Raises:
DNSError: If API call fails
"""
self._logger.debug(f"Checking if DNS record exists: {hostname}")
try:
params = {"name": hostname}
response = requests.get(
self._base_url,
headers=self._headers,
params=params,
timeout=30
)
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
records = data.get("result", [])
exists = len(records) > 0
if exists:
self._logger.debug(f"DNS record exists: {hostname}")
else:
self._logger.debug(f"DNS record does not exist: {hostname}")
return exists
except requests.RequestException as e:
raise DNSError(f"Failed to check DNS record existence: {e}") from e
def add_record(
self,
hostname: str,
ip: str,
dry_run: bool = False
) -> DNSRecord:
"""
Add DNS A record
Args:
hostname: Fully qualified domain name
ip: IP address for A record
dry_run: If True, only log what would be done
Returns:
DNSRecord with record_id for rollback
Raises:
DNSError: If API call fails
"""
if dry_run:
self._logger.info(
f"[DRY-RUN] Would add DNS record: {hostname} -> {ip}"
)
return DNSRecord(
record_id="dry-run-id",
hostname=hostname,
ip=ip,
record_type="A"
)
self._logger.info(f"Adding DNS record: {hostname} -> {ip}")
try:
payload = {
"type": "A",
"name": hostname,
"content": ip,
"ttl": 1, # Automatic TTL
"proxied": False # DNS only, not proxied through Cloudflare
}
response = requests.post(
self._base_url,
headers=self._headers,
json=payload,
timeout=30
)
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
result = data.get("result", {})
record_id = result.get("id")
if not record_id:
raise DNSError("No record ID returned from Cloudflare API")
self._logger.info(f"DNS record added successfully: {record_id}")
return DNSRecord(
record_id=record_id,
hostname=hostname,
ip=ip,
record_type="A"
)
except requests.RequestException as e:
raise DNSError(f"Failed to add DNS record: {e}") from e
def remove_record(self, hostname: str, dry_run: bool = False) -> None:
"""
Remove DNS record by hostname
Args:
hostname: Fully qualified domain name
dry_run: If True, only log what would be done
Raises:
DNSError: If API call fails
"""
if dry_run:
self._logger.info(f"[DRY-RUN] Would remove DNS record: {hostname}")
return
self._logger.info(f"Removing DNS record: {hostname}")
try:
# First, get the record ID
params = {"name": hostname}
response = requests.get(
self._base_url,
headers=self._headers,
params=params,
timeout=30
)
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
records = data.get("result", [])
if not records:
self._logger.warning(f"No DNS record found for: {hostname}")
return
# Remove all matching records (typically just one)
for record in records:
record_id = record.get("id")
if record_id:
self.remove_record_by_id(record_id, dry_run=False)
except requests.RequestException as e:
raise DNSError(f"Failed to remove DNS record: {e}") from e
def remove_record_by_id(self, record_id: str, dry_run: bool = False) -> None:
"""
Remove DNS record by ID (more reliable for rollback)
Args:
record_id: Cloudflare DNS record ID
dry_run: If True, only log what would be done
Raises:
DNSError: If API call fails
"""
if dry_run:
self._logger.info(
f"[DRY-RUN] Would remove DNS record by ID: {record_id}"
)
return
self._logger.info(f"Removing DNS record by ID: {record_id}")
try:
url = f"{self._base_url}/{record_id}"
response = requests.delete(
url,
headers=self._headers,
timeout=30
)
# Handle 404/405 gracefully - record doesn't exist or can't be deleted
if response.status_code in [404, 405]:
self._logger.warning(
f"DNS record {record_id} not found or cannot be deleted (may already be removed)"
)
return
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
self._logger.info(f"DNS record removed successfully: {record_id}")
except requests.RequestException as e:
raise DNSError(f"Failed to remove DNS record: {e}") from e
def get_public_ip(self) -> str:
"""
Get public IP address from external service
Returns:
Public IP address as string
Raises:
DNSError: If IP retrieval fails
"""
self._logger.debug("Retrieving public IP address")
try:
response = requests.get("https://ipv4.icanhazip.com", timeout=10)
response.raise_for_status()
ip = response.text.strip()
self._logger.debug(f"Public IP: {ip}")
return ip
except requests.RequestException as e:
raise DNSError(f"Failed to retrieve public IP: {e}") from e
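
# Example (sketch, assuming valid Cloudflare credentials in the environment
# and a hypothetical hostname):
#
#     import os
#     dns = DNSManager(os.environ["CLOUDFLARE_API_TOKEN"],
#                      os.environ["CLOUDFLARE_ZONE_ID"])
#     record = dns.add_record("demo.merakit.my", dns.get_public_ip())
#     dns.remove_record_by_id(record.record_id)   # reliable rollback by ID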

gitea/gitea_deployer/docker_manager.py Normal file

@@ -0,0 +1,276 @@
"""
Docker management module
Wrapper for Docker Compose operations with validation and error handling
"""
import logging
import subprocess
from dataclasses import dataclass
from pathlib import Path
from typing import List
logger = logging.getLogger(__name__)
class DockerError(Exception):
"""Raised when Docker operations fail"""
pass
@dataclass
class ContainerInfo:
"""Information about a running container"""
container_id: str
name: str
status: str
class DockerManager:
"""Docker Compose operations wrapper"""
def __init__(self, compose_file: Path, env_file: Path):
"""
Initialize Docker manager
Args:
compose_file: Path to docker-compose.yml
env_file: Path to .env file
"""
self._compose_file = compose_file
self._env_file = env_file
self._logger = logging.getLogger(f"{__name__}.DockerManager")
def _run_command(
self,
cmd: List[str],
check: bool = True,
capture_output: bool = True
) -> subprocess.CompletedProcess:
"""
Run docker compose command
Args:
cmd: Command list to execute
check: Whether to raise on non-zero exit
capture_output: Whether to capture stdout/stderr
Returns:
CompletedProcess instance
Raises:
DockerError: If command fails and check=True
"""
self._logger.debug(f"Running: {' '.join(cmd)}")
try:
result = subprocess.run(
cmd,
check=check,
capture_output=capture_output,
text=True,
cwd=self._compose_file.parent
)
return result
except subprocess.CalledProcessError as e:
error_msg = f"Docker command failed: {e.stderr or e.stdout or str(e)}"
self._logger.error(error_msg)
raise DockerError(error_msg) from e
except FileNotFoundError as e:
raise DockerError(
f"Docker command not found. Is Docker installed? {e}"
) from e
def validate_compose_file(self) -> None:
"""
Validate docker-compose.yml syntax
Raises:
DockerError: If compose file is invalid
"""
self._logger.debug("Validating docker-compose.yml")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"config", "--quiet"
]
try:
self._run_command(cmd)
self._logger.debug("docker-compose.yml is valid")
except DockerError as e:
raise DockerError(f"Invalid docker-compose.yml: {e}") from e
def pull_images(self, dry_run: bool = False) -> None:
"""
Pull required Docker images
Args:
dry_run: If True, only log what would be done
Raises:
DockerError: If pull fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would pull Docker images")
return
self._logger.info("Pulling Docker images")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"pull"
]
self._run_command(cmd)
self._logger.info("Docker images pulled successfully")
def start_services(self, dry_run: bool = False) -> List[ContainerInfo]:
"""
Start Docker Compose services
Args:
dry_run: If True, only log what would be done
Returns:
List of created containers for rollback
Raises:
DockerError: If start fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would start Docker services")
return []
self._logger.info("Starting Docker services")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"up", "-d"
]
self._run_command(cmd)
# Get container info for rollback
containers = self.get_container_status()
self._logger.info(
f"Docker services started successfully: {len(containers)} containers"
)
return containers
def stop_services(self, dry_run: bool = False) -> None:
"""
Stop Docker Compose services
Args:
dry_run: If True, only log what would be done
Raises:
DockerError: If stop fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would stop Docker services")
return
self._logger.info("Stopping Docker services")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"down"
]
self._run_command(cmd)
self._logger.info("Docker services stopped successfully")
def stop_services_and_remove_volumes(self, dry_run: bool = False) -> None:
"""
Stop services and remove volumes (full cleanup)
Args:
dry_run: If True, only log what would be done
Raises:
DockerError: If stop fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would stop Docker services and remove volumes")
return
self._logger.info("Stopping Docker services and removing volumes")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"down", "-v"
]
self._run_command(cmd)
self._logger.info("Docker services stopped and volumes removed")
def get_container_status(self) -> List[ContainerInfo]:
"""
Get status of containers for this project
Returns:
List of ContainerInfo objects
Raises:
DockerError: If status check fails
"""
self._logger.debug("Getting container status")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"ps", "-q"
]
result = self._run_command(cmd)
container_ids = [
cid.strip()
for cid in result.stdout.strip().split('\n')
if cid.strip()
]
containers = []
for container_id in container_ids:
# Get container details
inspect_cmd = ["docker", "inspect", container_id, "--format", "{{.Name}}:{{.State.Status}}"]
try:
inspect_result = self._run_command(inspect_cmd)
name_status = inspect_result.stdout.strip()
if ':' in name_status:
name, status = name_status.split(':', 1)
# Remove leading slash from container name
name = name.lstrip('/')
containers.append(ContainerInfo(
container_id=container_id,
name=name,
status=status
))
except DockerError:
# If inspect fails, just record the ID
containers.append(ContainerInfo(
container_id=container_id,
name="unknown",
status="unknown"
))
self._logger.debug(f"Found {len(containers)} containers")
return containers
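
# Example (sketch): the deploy/rollback pairing the orchestrator builds on.
#
#     docker = DockerManager(Path("docker-compose.yml"), Path(".env"))
#     docker.validate_compose_file()
#     docker.pull_images()
#     containers = docker.start_services()        # ContainerInfo list for rollback
#     docker.stop_services_and_remove_volumes()   # full cleanup on failure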

gitea/gitea_deployer/env_generator.py Normal file

@@ -0,0 +1,390 @@
"""
Environment generation module - replaces generate-env.sh
Provides pure Python implementations for:
- Random word selection from dictionary
- Memorable password generation
- Environment file generation and manipulation
"""
import logging
import os
import random
import re
import secrets
import shutil
from dataclasses import asdict, dataclass
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional
logger = logging.getLogger(__name__)
@dataclass
class EnvValues:
"""Container for generated environment values"""
subdomain: str
domain: str
url: str
db_name: str
db_user: str
db_password: str
compose_project_name: str
class WordGenerator:
"""Pure Python implementation of dictionary word selection"""
def __init__(self, dict_file: Path):
"""
Initialize word generator
Args:
dict_file: Path to dictionary file (e.g., /usr/share/dict/words)
"""
self._dict_file = dict_file
self._words_cache: Optional[List[str]] = None
self._logger = logging.getLogger(f"{__name__}.WordGenerator")
def _load_and_filter_words(self) -> List[str]:
"""
Load dictionary and filter to 4-10 char lowercase words
Returns:
List of filtered words
Raises:
FileNotFoundError: If dictionary file doesn't exist
ValueError: If no valid words found
"""
if not self._dict_file.exists():
raise FileNotFoundError(f"Dictionary file not found: {self._dict_file}")
self._logger.debug(f"Loading words from {self._dict_file}")
# Read and filter words matching pattern: ^[a-z]{4,10}$
pattern = re.compile(r'^[a-z]{4,10}$')
words = []
with open(self._dict_file, 'r', encoding='utf-8') as f:
for line in f:
word = line.strip()
if pattern.match(word):
words.append(word)
if not words:
raise ValueError(f"No valid words found in {self._dict_file}")
self._logger.debug(f"Loaded {len(words)} valid words")
return words
def get_random_word(self) -> str:
"""
Get single random word from filtered list
Returns:
Random word (4-10 chars, lowercase)
"""
# Load and cache words on first use
if self._words_cache is None:
self._words_cache = self._load_and_filter_words()
        # secrets.choice gives cryptographically secure selection, matching
        # the security notes in the README (random.choice is not secure)
        return secrets.choice(self._words_cache)
def get_random_words(self, count: int) -> List[str]:
"""
Get multiple random words efficiently
Args:
count: Number of words to retrieve
Returns:
List of random words
"""
# Load and cache words on first use
if self._words_cache is None:
self._words_cache = self._load_and_filter_words()
        return [secrets.choice(self._words_cache) for _ in range(count)]
class PasswordGenerator:
"""Generate memorable passwords from dictionary words"""
def __init__(self, word_generator: WordGenerator):
"""
Initialize password generator
Args:
word_generator: WordGenerator instance for word selection
"""
self._word_generator = word_generator
self._logger = logging.getLogger(f"{__name__}.PasswordGenerator")
def generate_memorable_password(self, word_count: int = 3) -> str:
"""
Generate password from N random nouns joined by hyphens
Args:
word_count: Number of words to use (default: 3)
Returns:
Password string like "templon-infantly-yielding"
"""
words = self._word_generator.get_random_words(word_count)
password = '-'.join(words)
self._logger.debug(f"Generated {word_count}-word password")
return password
def generate_random_string(self, length: int = 8) -> str:
"""
Generate alphanumeric random string using secrets module
Args:
length: Length of string to generate (default: 8)
Returns:
Random alphanumeric string
"""
# Use secrets for cryptographically secure random generation
# Generate hex and convert to lowercase alphanumeric
return secrets.token_hex(length // 2 + 1)[:length]
class EnvFileGenerator:
"""Pure Python .env file manipulation (replaces bash sed logic)"""
def __init__(
self,
env_file: Path,
word_generator: WordGenerator,
password_generator: PasswordGenerator,
base_domain: str = "merakit.my",
app_name: Optional[str] = None
):
"""
Initialize environment file generator
Args:
env_file: Path to .env file
word_generator: WordGenerator instance
password_generator: PasswordGenerator instance
base_domain: Base domain for URL generation (default: "merakit.my")
app_name: Application name (default: read from .env or "gitea")
"""
self._env_file = env_file
self._word_generator = word_generator
self._password_generator = password_generator
self._base_domain = base_domain
self._app_name = app_name
self._logger = logging.getLogger(f"{__name__}.EnvFileGenerator")
def generate_values(self) -> EnvValues:
"""
Generate all environment values
Returns:
EnvValues dataclass with all generated values
"""
self._logger.info("Generating environment values")
# Read current .env to get app_name if not provided
current_env = self.read_current_env()
app_name = self._app_name or current_env.get('APP_NAME', 'gitea')
# 1. Generate subdomain: two random words
word1 = self._word_generator.get_random_word()
word2 = self._word_generator.get_random_word()
subdomain = f"{word1}-{word2}"
# 2. Construct URL
url = f"{subdomain}.{self._base_domain}"
# 3. Generate random string for DB identifiers
random_str = self._password_generator.generate_random_string(8)
# 4. Generate DB identifiers with truncation logic
db_name = self._generate_db_name(random_str, app_name, subdomain)
db_user = self._generate_db_user(random_str, app_name, subdomain)
# 5. Generate password
db_password = self._password_generator.generate_memorable_password(3)
self._logger.info(f"Generated values for subdomain: {subdomain}")
self._logger.debug(f"URL: {url}")
self._logger.debug(f"DB_NAME: {db_name}")
self._logger.debug(f"DB_USER: {db_user}")
return EnvValues(
subdomain=subdomain,
domain=self._base_domain,
url=url,
db_name=db_name,
db_user=db_user,
db_password=db_password,
compose_project_name=subdomain
)
def _generate_db_name(self, random_str: str, app_name: str, subdomain: str) -> str:
"""
        Format: angali_{random8}_{app}_{subdomain}, truncated to 63 chars
Args:
random_str: Random 8-char string
app_name: Application name
subdomain: Subdomain with hyphens
Returns:
            Database name (max 63 chars, the PostgreSQL identifier limit)
"""
# Replace hyphens with underscores for DB compatibility
subdomain_safe = subdomain.replace('-', '_')
db_name = f"angali_{random_str}_{app_name}_{subdomain_safe}"
# Truncate to PostgreSQL limit of 63 chars (64 - 1 for null terminator)
return db_name[:63]
def _generate_db_user(self, random_str: str, app_name: str, subdomain: str) -> str:
"""
Format: angali_{random8}_{app}_{subdomain}, truncate to 63 chars
Args:
random_str: Random 8-char string
app_name: Application name
subdomain: Subdomain with hyphens
Returns:
Database username (max 63 chars)
"""
# Replace hyphens with underscores for DB compatibility
subdomain_safe = subdomain.replace('-', '_')
db_user = f"angali_{random_str}_{app_name}_{subdomain_safe}"
# Truncate to PostgreSQL limit of 63 chars
return db_user[:63]
def read_current_env(self) -> Dict[str, str]:
"""
Parse existing .env file into dict
Returns:
Dictionary of environment variables
"""
env_dict = {}
if not self._env_file.exists():
self._logger.warning(f"Env file not found: {self._env_file}")
return env_dict
with open(self._env_file, 'r') as f:
for line in f:
line = line.strip()
# Skip empty lines and comments
if not line or line.startswith('#'):
continue
# Parse KEY=VALUE format
if '=' in line:
key, value = line.split('=', 1)
# Remove quotes if present
value = value.strip('"').strip("'")
env_dict[key.strip()] = value
self._logger.debug(f"Read {len(env_dict)} variables from {self._env_file}")
return env_dict
def backup_env_file(self) -> Path:
"""
Create timestamped backup of .env file
Returns:
Path to backup file
Raises:
FileNotFoundError: If .env file doesn't exist
"""
if not self._env_file.exists():
raise FileNotFoundError(f"Cannot backup non-existent file: {self._env_file}")
# Create backup with timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_path = self._env_file.parent / f"{self._env_file.name}.backup.{timestamp}"
shutil.copy2(self._env_file, backup_path)
self._logger.info(f"Created backup: {backup_path}")
return backup_path
def update_env_file(self, values: EnvValues, dry_run: bool = False) -> None:
"""
Update .env file with new values (Python dict manipulation)
Uses atomic write pattern: write to temp file, then rename
Args:
values: EnvValues to write
dry_run: If True, only log what would be done
Raises:
FileNotFoundError: If .env file doesn't exist
"""
if not self._env_file.exists():
raise FileNotFoundError(f"Env file not found: {self._env_file}")
if dry_run:
self._logger.info(f"[DRY-RUN] Would update {self._env_file} with:")
for key, value in asdict(values).items():
if 'password' in key.lower():
self._logger.info(f" {key.upper()}=********")
else:
self._logger.info(f" {key.upper()}={value}")
return
# Read current env
current_env = self.read_current_env()
# Update with new values
current_env.update({
'COMPOSE_PROJECT_NAME': values.compose_project_name,
'SUBDOMAIN': values.subdomain,
'DOMAIN': values.domain,
'URL': values.url,
'DB_NAME': values.db_name,
'DB_USER': values.db_user,
'DB_PASSWORD': values.db_password
})
# Write atomically: write to temp file, then rename
temp_file = self._env_file.parent / f"{self._env_file.name}.tmp"
try:
with open(temp_file, 'w') as f:
for key, value in current_env.items():
f.write(f"{key}={value}\n")
# Atomic rename
os.replace(temp_file, self._env_file)
self._logger.info(f"Updated {self._env_file} successfully")
except Exception as e:
# Cleanup temp file on error
if temp_file.exists():
temp_file.unlink()
raise RuntimeError(f"Failed to update env file: {e}") from e
def restore_env_file(self, backup_path: Path) -> None:
"""
Restore .env from backup
Args:
backup_path: Path to backup file
Raises:
FileNotFoundError: If backup file doesn't exist
"""
if not backup_path.exists():
raise FileNotFoundError(f"Backup file not found: {backup_path}")
shutil.copy2(backup_path, self._env_file)
self._logger.info(f"Restored {self._env_file} from {backup_path}")

@@ -0,0 +1,128 @@
"""
Health check module
HTTP health checking with retry logic and logged progress
"""
import logging
import time
import requests
logger = logging.getLogger(__name__)
class HealthCheckError(Exception):
"""Raised when health check fails"""
pass
class HealthChecker:
"""HTTP health check with retry logic"""
def __init__(
self,
timeout: int,
interval: int,
verify_ssl: bool
):
"""
Initialize health checker
Args:
timeout: Total timeout in seconds
interval: Check interval in seconds
verify_ssl: Whether to verify SSL certificates
"""
self._timeout = timeout
self._interval = interval
self._verify_ssl = verify_ssl
self._logger = logging.getLogger(f"{__name__}.HealthChecker")
def check_health(self, url: str, dry_run: bool = False) -> bool:
"""
Perform health check with retries
Args:
url: URL to check (e.g., https://example.com)
dry_run: If True, only log what would be done
Returns:
True if health check passed, False otherwise
"""
if dry_run:
self._logger.info(f"[DRY-RUN] Would check health of {url}")
return True
self._logger.info(
f"Checking health of {url} for up to {self._timeout} seconds"
)
start_time = time.time()
attempt = 0
while True:
attempt += 1
elapsed = time.time() - start_time
if elapsed > self._timeout:
self._logger.error(
f"Health check timed out after {elapsed:.1f} seconds "
f"({attempt} attempts)"
)
return False
# Perform single check
if self._single_check(url):
self._logger.info(
f"Health check passed after {elapsed:.1f} seconds "
f"({attempt} attempts)"
)
return True
# Wait before next attempt
remaining = self._timeout - elapsed
if remaining > 0:
wait_time = min(self._interval, remaining)
self._logger.debug(
f"Attempt {attempt} failed, retrying in {wait_time:.1f}s "
f"(elapsed: {elapsed:.1f}s, timeout: {self._timeout}s)"
)
time.sleep(wait_time)
else:
# No time remaining
self._logger.error(f"Health check timed out after {attempt} attempts")
return False
def _single_check(self, url: str) -> bool:
"""
Single health check attempt
Args:
url: URL to check
Returns:
True if valid HTTP response (2xx or 3xx) received, False otherwise
"""
try:
response = requests.get(
url,
timeout=5,
verify=self._verify_ssl,
allow_redirects=True
)
# Accept any 2xx or 3xx status code as valid
if 200 <= response.status_code < 400:
self._logger.debug(f"Health check successful: HTTP {response.status_code}")
return True
else:
self._logger.debug(
f"Health check failed: HTTP {response.status_code}"
)
return False
except requests.RequestException as e:
self._logger.debug(f"Health check failed: {type(e).__name__}: {e}")
return False
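
# Usage sketch (timeout/interval values are illustrative; real values come
# from DeploymentConfig):
#
#   checker = HealthChecker(timeout=120, interval=5, verify_ssl=True)
#   if not checker.check_health("https://example.merakit.my"):
#       raise HealthCheckError("service did not become healthy in time")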

@@ -0,0 +1,626 @@
"""
Deployment orchestration module
Main deployment workflow with rollback tracking and execution
"""
import logging
import shutil
import time
from dataclasses import asdict, dataclass
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List
from .config import DeploymentConfig
from .deployment_config_manager import DeploymentConfigManager, DeploymentMetadata
from .deployment_logger import DeploymentFileLogger
from .dns_manager import DNSError, DNSManager, DNSRecord
from .docker_manager import DockerError, DockerManager
from .env_generator import EnvFileGenerator, EnvValues, PasswordGenerator, WordGenerator
from .health import HealthCheckError, HealthChecker
from .webhooks import WebhookNotifier
logger = logging.getLogger(__name__)
class DeploymentError(Exception):
"""Base exception for deployment errors"""
pass
class ValidationError(DeploymentError):
"""Validation failed"""
pass
@dataclass
class DeploymentAction:
"""Represents a single deployment action"""
action_type: str # 'dns_added', 'containers_started', 'env_updated'
timestamp: datetime
details: Dict[str, Any]
rollback_data: Dict[str, Any]
class DeploymentTracker:
"""Track deployment actions for rollback"""
def __init__(self):
"""Initialize deployment tracker"""
self._actions: List[DeploymentAction] = []
self._logger = logging.getLogger(f"{__name__}.DeploymentTracker")
def record_action(self, action: DeploymentAction) -> None:
"""
Record a deployment action
Args:
action: DeploymentAction to record
"""
self._actions.append(action)
self._logger.debug(f"Recorded action: {action.action_type}")
def get_actions(self) -> List[DeploymentAction]:
"""
Get all recorded actions
Returns:
List of DeploymentAction objects
"""
return self._actions.copy()
def clear(self) -> None:
"""Clear tracking history"""
self._actions.clear()
self._logger.debug("Cleared action history")
class DeploymentOrchestrator:
"""Main orchestrator coordinating all deployment steps"""
def __init__(self, config: DeploymentConfig):
"""
Initialize deployment orchestrator
Args:
config: DeploymentConfig instance
"""
self._config = config
self._logger = logging.getLogger(f"{__name__}.DeploymentOrchestrator")
# Initialize components
self._word_generator = WordGenerator(config.dict_file)
self._password_generator = PasswordGenerator(self._word_generator)
self._env_generator = EnvFileGenerator(
config.env_file,
self._word_generator,
self._password_generator,
config.base_domain,
config.app_name
)
self._dns_manager = DNSManager(
config.cloudflare_api_token,
config.cloudflare_zone_id
)
self._docker_manager = DockerManager(
config.docker_compose_file,
config.env_file
)
self._webhook_notifier = WebhookNotifier(
config.webhook_url,
config.webhook_timeout,
config.webhook_retries
)
self._health_checker = HealthChecker(
config.healthcheck_timeout,
config.healthcheck_interval,
config.verify_ssl
)
self._tracker = DeploymentTracker()
self._deployment_logger = DeploymentFileLogger()
self._config_manager = DeploymentConfigManager()
def deploy(self) -> None:
"""
Main deployment workflow
Raises:
DeploymentError: If deployment fails
"""
start_time = time.time()
env_values = None
dns_record_id = None
dns_ip = None
containers = []
try:
# Phase 1: Validation
self._phase_validate()
# Phase 2: Environment Generation (with retry on DNS conflicts)
env_values = self._phase_generate_env_with_retries()
# Send deployment_started webhook
self._webhook_notifier.deployment_started(
env_values.subdomain,
env_values.url
)
# Phase 3: DNS Setup
dns_record_id, dns_ip = self._phase_setup_dns(env_values)
# Phase 4: Container Deployment
containers = self._phase_deploy_containers()
# Phase 5: Health Check
self._phase_health_check(env_values.url)
# Success
duration = time.time() - start_time
self._webhook_notifier.deployment_success(
env_values.subdomain,
env_values.url,
duration
)
self._logger.info(
f"✓ Deployment successful! URL: https://{env_values.url} "
f"(took {duration:.1f}s)"
)
# Log success to file
self._deployment_logger.log_success(
env_values.url,
env_values.subdomain,
duration
)
# Save deployment configuration
self._save_deployment_config(
env_values,
dns_record_id,
dns_ip,
containers
)
except Exception as e:
self._logger.error(f"✗ Deployment failed: {e}")
# Send failure webhook
if env_values:
self._webhook_notifier.deployment_failed(
env_values.subdomain,
str(e),
env_values.url
)
else:
self._webhook_notifier.deployment_failed("", str(e), "")
# Log failure to file
if env_values:
self._deployment_logger.log_failure(
env_values.url,
env_values.subdomain,
str(e)
)
else:
self._deployment_logger.log_failure(
"",
"",
str(e)
)
# Rollback
self._logger.info("Starting rollback...")
self._rollback_all()
raise DeploymentError(f"Deployment failed: {e}") from e
def _phase_validate(self) -> None:
"""
Phase 1: Pre-deployment validation
Raises:
ValidationError: If validation fails
"""
self._logger.info("═══ Phase 1: Validation ═══")
# Check system dependencies
self._validate_dependencies()
# Validate environment file
if not self._config.env_file.exists():
raise ValidationError(f"Env file not found: {self._config.env_file}")
# Validate Docker Compose file
try:
self._docker_manager.validate_compose_file()
except DockerError as e:
raise ValidationError(f"Invalid docker-compose.yml: {e}") from e
# Check external Docker network exists
self._validate_docker_network("proxy")
self._logger.info("✓ Validation complete")
def _validate_dependencies(self) -> None:
"""
Validate system dependencies
Raises:
ValidationError: If dependencies are missing
"""
        required_commands = ["docker", "curl"]
        for cmd in required_commands:
            if not shutil.which(cmd):
raise ValidationError(
f"Required command not found: {cmd}. "
f"Please install {cmd} and try again."
)
# Check Docker daemon is running
try:
import subprocess
result = subprocess.run(
["docker", "info"],
capture_output=True,
timeout=5
)
if result.returncode != 0:
raise ValidationError(
"Docker daemon is not running. Please start Docker."
)
except (subprocess.TimeoutExpired, FileNotFoundError) as e:
raise ValidationError(f"Failed to check Docker daemon: {e}") from e
def _validate_docker_network(self, network_name: str) -> None:
"""
Check external Docker network exists
Args:
network_name: Network name to check
Raises:
ValidationError: If network doesn't exist
"""
import subprocess
try:
result = subprocess.run(
["docker", "network", "inspect", network_name],
capture_output=True,
timeout=5
)
if result.returncode != 0:
raise ValidationError(
f"Docker network '{network_name}' not found. "
f"Please create it with: docker network create {network_name}"
)
except (subprocess.TimeoutExpired, FileNotFoundError) as e:
raise ValidationError(
f"Failed to check Docker network: {e}"
) from e
def _phase_generate_env_with_retries(self) -> EnvValues:
"""
Phase 2: Generate environment with DNS conflict retry
Returns:
EnvValues with generated values
Raises:
DeploymentError: If unable to generate unique subdomain
"""
self._logger.info("═══ Phase 2: Environment Generation ═══")
for attempt in range(1, self._config.max_retries + 1):
# Generate new values
env_values = self._env_generator.generate_values()
self._logger.info(f"Generated subdomain: {env_values.subdomain}")
# Check DNS conflict
try:
if not self._dns_manager.check_record_exists(env_values.url):
# No conflict, proceed
self._logger.info(f"✓ Subdomain available: {env_values.subdomain}")
# Create backup
backup_path = self._env_generator.backup_env_file()
# Update .env file
self._env_generator.update_env_file(
env_values,
dry_run=self._config.dry_run
)
# Track for rollback
self._tracker.record_action(DeploymentAction(
action_type="env_updated",
timestamp=datetime.now(),
details={"env_values": asdict(env_values)},
rollback_data={"backup_path": str(backup_path)}
))
return env_values
else:
self._logger.warning(
f"✗ DNS conflict for {env_values.url}, "
f"regenerating... (attempt {attempt}/{self._config.max_retries})"
)
except DNSError as e:
self._logger.warning(
f"DNS check failed: {e}. "
f"Assuming no conflict and proceeding..."
)
# If DNS check fails, proceed anyway (fail open)
backup_path = self._env_generator.backup_env_file()
self._env_generator.update_env_file(
env_values,
dry_run=self._config.dry_run
)
self._tracker.record_action(DeploymentAction(
action_type="env_updated",
timestamp=datetime.now(),
details={"env_values": asdict(env_values)},
rollback_data={"backup_path": str(backup_path)}
))
return env_values
raise DeploymentError(
f"Failed to generate unique subdomain after {self._config.max_retries} attempts"
)
    def _phase_setup_dns(self, env_values: EnvValues) -> tuple[str, str]:
"""
Phase 3: Add DNS record
Args:
env_values: EnvValues with subdomain and URL
Returns:
Tuple of (record_id, ip)
Raises:
DNSError: If DNS setup fails
"""
self._logger.info("═══ Phase 3: DNS Setup ═══")
# Get public IP
ip = self._dns_manager.get_public_ip()
self._logger.info(f"Public IP: {ip}")
# Add DNS record
dns_record = self._dns_manager.add_record(
env_values.url,
ip,
dry_run=self._config.dry_run
)
self._logger.info(f"✓ DNS record added: {env_values.url} -> {ip}")
# Track for rollback
self._tracker.record_action(DeploymentAction(
action_type="dns_added",
timestamp=datetime.now(),
details={"hostname": env_values.url, "ip": ip},
rollback_data={"record_id": dns_record.record_id}
))
# Send webhook notification
self._webhook_notifier.dns_added(env_values.url, ip)
return dns_record.record_id, ip
def _phase_deploy_containers(self) -> List:
"""
Phase 4: Start Docker containers
Returns:
List of container information
Raises:
DockerError: If container deployment fails
"""
self._logger.info("═══ Phase 4: Container Deployment ═══")
# Pull images
self._logger.info("Pulling Docker images...")
self._docker_manager.pull_images(dry_run=self._config.dry_run)
# Start services
self._logger.info("Starting Docker services...")
containers = self._docker_manager.start_services(
dry_run=self._config.dry_run
)
self._logger.info(
f"✓ Docker services started: {len(containers)} containers"
)
# Track for rollback
self._tracker.record_action(DeploymentAction(
action_type="containers_started",
timestamp=datetime.now(),
details={"containers": [asdict(c) for c in containers]},
rollback_data={}
))
return containers
def _phase_health_check(self, url: str) -> None:
"""
Phase 5: Health check
Args:
url: URL to check (without https://)
Raises:
HealthCheckError: If health check fails
"""
self._logger.info("═══ Phase 5: Health Check ═══")
health_url = f"https://{url}"
start_time = time.time()
if not self._health_checker.check_health(
health_url,
dry_run=self._config.dry_run
):
raise HealthCheckError(f"Health check failed for {health_url}")
duration = time.time() - start_time
self._logger.info(f"✓ Health check passed (took {duration:.1f}s)")
# Send webhook notification
self._webhook_notifier.health_check_passed(url, duration)
def _rollback_all(self) -> None:
"""Rollback all tracked actions in reverse order"""
actions = list(reversed(self._tracker.get_actions()))
if not actions:
self._logger.info("No actions to rollback")
return
self._logger.info(f"Rolling back {len(actions)} actions...")
for action in actions:
try:
self._rollback_action(action)
except Exception as e:
# Log but don't fail rollback
self._logger.error(
f"Failed to rollback action {action.action_type}: {e}"
)
self._logger.info("Rollback complete")
def _rollback_action(self, action: DeploymentAction) -> None:
"""
Rollback single action based on type
Args:
action: DeploymentAction to rollback
"""
if action.action_type == "dns_added":
self._rollback_dns(action)
elif action.action_type == "containers_started":
self._rollback_containers(action)
elif action.action_type == "env_updated":
self._rollback_env(action)
else:
self._logger.warning(f"Unknown action type: {action.action_type}")
def _rollback_dns(self, action: DeploymentAction) -> None:
"""
Rollback DNS changes
Args:
action: DeploymentAction with DNS details
"""
record_id = action.rollback_data.get("record_id")
if record_id:
self._logger.info(f"Rolling back DNS record: {record_id}")
try:
self._dns_manager.remove_record_by_id(
record_id,
dry_run=self._config.dry_run
)
self._logger.info("✓ DNS record removed")
except DNSError as e:
self._logger.error(f"Failed to remove DNS record: {e}")
def _rollback_containers(self, action: DeploymentAction) -> None:
"""
Stop and remove containers
Args:
action: DeploymentAction with container details
"""
self._logger.info("Rolling back Docker containers")
try:
self._docker_manager.stop_services(dry_run=self._config.dry_run)
self._logger.info("✓ Docker services stopped")
except DockerError as e:
self._logger.error(f"Failed to stop Docker services: {e}")
def _rollback_env(self, action: DeploymentAction) -> None:
"""
Restore .env file from backup
Args:
action: DeploymentAction with backup path
"""
backup_path_str = action.rollback_data.get("backup_path")
if backup_path_str:
backup_path = Path(backup_path_str)
if backup_path.exists():
self._logger.info(f"Rolling back .env file from {backup_path}")
try:
self._env_generator.restore_env_file(backup_path)
self._logger.info("✓ .env file restored")
except Exception as e:
self._logger.error(f"Failed to restore .env file: {e}")
else:
self._logger.warning(f"Backup file not found: {backup_path}")
def _save_deployment_config(
self,
env_values: EnvValues,
dns_record_id: str,
dns_ip: str,
containers: List
) -> None:
"""
Save deployment configuration for later cleanup
Args:
env_values: EnvValues with deployment info
dns_record_id: Cloudflare DNS record ID
dns_ip: IP address used in DNS
containers: List of container information
"""
try:
# Extract container names, volumes, and networks
container_names = [c.name for c in containers if hasattr(c, 'name')]
            # Volume and network names follow the compose project naming convention
volumes = [
f"{env_values.compose_project_name}_db_data",
f"{env_values.compose_project_name}_gitea_data"
]
networks = [
f"{env_values.compose_project_name}_internal"
]
# Create metadata
metadata = DeploymentMetadata(
subdomain=env_values.subdomain,
url=env_values.url,
domain=env_values.domain,
compose_project_name=env_values.compose_project_name,
db_name=env_values.db_name,
db_user=env_values.db_user,
deployment_timestamp=datetime.now().isoformat(),
dns_record_id=dns_record_id,
dns_ip=dns_ip,
containers=container_names,
volumes=volumes,
networks=networks,
env_file_path=str(self._config.env_file.absolute())
)
# Save configuration
config_path = self._config_manager.save_deployment(metadata)
self._logger.info(f"✓ Deployment config saved: {config_path}")
except Exception as e:
self._logger.warning(f"Failed to save deployment config: {e}")

@@ -0,0 +1,199 @@
"""
Webhook notifications module
Send deployment event notifications with retry logic
"""
import logging
import time
from dataclasses import asdict, dataclass
from datetime import datetime
from typing import Any, Dict, Optional
import requests
logger = logging.getLogger(__name__)
@dataclass
class WebhookEvent:
"""Webhook event data"""
event_type: str # deployment_started, deployment_success, etc.
timestamp: str
subdomain: str
url: str
message: str
metadata: Dict[str, Any]
class WebhookNotifier:
"""Send webhook notifications with retry logic"""
def __init__(
self,
webhook_url: Optional[str],
timeout: int,
max_retries: int
):
"""
Initialize webhook notifier
Args:
webhook_url: Webhook URL to send notifications to (None to disable)
timeout: Request timeout in seconds
max_retries: Maximum number of retry attempts
"""
self._webhook_url = webhook_url
self._timeout = timeout
self._max_retries = max_retries
self._logger = logging.getLogger(f"{__name__}.WebhookNotifier")
if not webhook_url:
self._logger.debug("Webhook notifications disabled (no URL configured)")
def notify(self, event: WebhookEvent) -> None:
"""
Send webhook notification with retry
Args:
event: WebhookEvent to send
Note:
Failures are logged but don't raise exceptions to avoid
failing deployments due to webhook issues
"""
if not self._webhook_url:
return
payload = asdict(event)
self._logger.debug(f"Sending webhook: {event.event_type}")
for attempt in range(1, self._max_retries + 1):
try:
response = requests.post(
self._webhook_url,
json=payload,
timeout=self._timeout
)
response.raise_for_status()
self._logger.debug(
f"Webhook sent successfully: {event.event_type} "
f"(attempt {attempt})"
)
return
except requests.RequestException as e:
self._logger.warning(
f"Webhook delivery failed (attempt {attempt}/{self._max_retries}): {e}"
)
if attempt < self._max_retries:
# Exponential backoff: 1s, 2s, 4s, etc.
backoff = 2 ** (attempt - 1)
self._logger.debug(f"Retrying in {backoff}s...")
time.sleep(backoff)
self._logger.error(
f"Failed to deliver webhook after {self._max_retries} attempts: "
f"{event.event_type}"
)
def deployment_started(self, subdomain: str, url: str) -> None:
"""
Convenience method for deployment_started event
Args:
subdomain: Subdomain being deployed
url: Full URL being deployed
"""
event = WebhookEvent(
event_type="deployment_started",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=subdomain,
url=url,
message=f"Deployment started for {url}",
metadata={}
)
self.notify(event)
def deployment_success(
self,
subdomain: str,
url: str,
duration: float
) -> None:
"""
Convenience method for deployment_success event
Args:
subdomain: Subdomain that was deployed
url: Full URL that was deployed
duration: Deployment duration in seconds
"""
event = WebhookEvent(
event_type="deployment_success",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=subdomain,
url=url,
message=f"Deployment successful for {url}",
metadata={"duration": round(duration, 2)}
)
self.notify(event)
def deployment_failed(self, subdomain: str, error: str, url: str = "") -> None:
"""
Convenience method for deployment_failed event
Args:
subdomain: Subdomain that failed to deploy
error: Error message
url: Full URL (may be empty if deployment failed early)
"""
event = WebhookEvent(
event_type="deployment_failed",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=subdomain,
url=url,
message=f"Deployment failed: {error}",
metadata={"error": error}
)
self.notify(event)
def dns_added(self, hostname: str, ip: str) -> None:
"""
Convenience method for dns_added event
Args:
hostname: Hostname that was added to DNS
ip: IP address the hostname points to
"""
event = WebhookEvent(
event_type="dns_added",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=hostname.split('.')[0], # Extract subdomain
url=hostname,
message=f"DNS record added for {hostname}",
metadata={"ip": ip}
)
self.notify(event)
def health_check_passed(self, url: str, duration: float) -> None:
"""
Convenience method for health_check_passed event
Args:
url: URL that passed health check
duration: Time taken for health check in seconds
"""
event = WebhookEvent(
event_type="health_check_passed",
timestamp=datetime.utcnow().isoformat() + "Z",
            subdomain=url.replace('https://', '').replace('http://', '').split('.')[0],  # strip scheme, then take the subdomain label
url=url,
message=f"Health check passed for {url}",
metadata={"duration": round(duration, 2)}
)
self.notify(event)
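
# Usage sketch (the URL is a placeholder; every notify() is a no-op when the
# webhook URL is unset, so deployments never fail on notification errors):
#
#   notifier = WebhookNotifier("https://hooks.example.com/deploy",
#                              timeout=10, max_retries=3)
#   notifier.deployment_started("my-site", "my-site.example.com")
#   notifier.deployment_success("my-site", "my-site.example.com", duration=42.0)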

@@ -0,0 +1,14 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT SUCCESS LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 09:21:43
Status: SUCCESS
URL: https://artfully-copious.merakit.my
Subdomain: artfully-copious
Duration: 56.29 seconds
═══════════════════════════════════════════════
Deployment completed successfully.
All services are running and health checks passed.

@@ -0,0 +1,14 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT SUCCESS LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 16:01:55
Status: SUCCESS
URL: https://ascidiia-bridoon.merakit.my
Subdomain: ascidiia-bridoon
Duration: 56.02 seconds
═══════════════════════════════════════════════
Deployment completed successfully.
All services are running and health checks passed.

4
gitea/requirements.txt Normal file
@@ -0,0 +1,4 @@
# Core dependencies
requests>=2.31.0
rich>=13.7.0
python-dotenv>=1.0.0

@@ -0,0 +1,18 @@
{
"permissions": {
"allow": [
"Bash(chmod:*)",
"Bash(echo:*)",
"Bash(curl:*)",
"Bash(if [ -z \"$CLOUDFLARE_API_TOKEN\" ])",
"Bash(then echo \"Token is empty\")",
"Bash(else echo \"Token exists with length: $#CLOUDFLARE_API_TOKEN\")",
"Bash(fi)",
"Bash(tee:*)",
"Bash(printf:*)",
"Bash(env)",
"Bash(./cloudflare-remove.sh:*)",
"Bash(bash:*)"
]
}
}

229
scripts/cloudflare-add.sh Executable file
@@ -0,0 +1,229 @@
#!/bin/bash
set -euo pipefail
# Cloudflare API credentials
CF_API_TOKEN="${CLOUDFLARE_API_TOKEN:-}"
CF_ZONE_ID="${CLOUDFLARE_ZONE_ID:-}"
# Dictionary files
DICT_FILE="/usr/share/dict/words"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
usage() {
echo "Usage: $0 --hostname <hostname> --ip <ip_address>"
echo " $0 --random --domain <domain> --ip <ip_address>"
echo ""
echo "Options:"
echo " --hostname Specific hostname to add (e.g., test.example.com)"
echo " --random Generate random hostname"
echo " --domain Base domain for random hostname (e.g., example.org)"
echo " --ip IP address for A record"
echo ""
echo "Environment variables required:"
echo " CLOUDFLARE_API_TOKEN"
echo " CLOUDFLARE_ZONE_ID"
exit 1
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1" >&2
}
log_info() {
echo -e "${YELLOW}[INFO]${NC} $1" >&2
}
check_requirements() {
if [[ -z "$CF_API_TOKEN" ]]; then
log_error "CLOUDFLARE_API_TOKEN environment variable not set"
exit 1
fi
if [[ -z "$CF_ZONE_ID" ]]; then
log_error "CLOUDFLARE_ZONE_ID environment variable not set"
exit 1
fi
if ! command -v curl &> /dev/null; then
log_error "curl is required but not installed"
exit 1
fi
if ! command -v jq &> /dev/null; then
log_error "jq is required but not installed"
exit 1
fi
}
get_random_word() {
if [[ ! -f "$DICT_FILE" ]]; then
log_error "Dictionary file not found: $DICT_FILE"
exit 1
fi
# Get random word: lowercase, letters only, 3-10 characters
grep -E '^[a-z]{3,10}$' "$DICT_FILE" | shuf -n 1
}
generate_random_hostname() {
local domain=$1
local word1=$(get_random_word)
local word2=$(get_random_word)
echo "${word1}-${word2}.${domain}"
}
check_dns_exists() {
local hostname=$1
log_info "Checking if DNS record exists for: $hostname"
local response=$(curl -s -X GET \
"https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records?name=${hostname}" \
-H "Authorization: Bearer ${CF_API_TOKEN}" \
-H "Content-Type: application/json")
local success=$(echo "$response" | jq -r '.success')
if [[ "$success" != "true" ]]; then
log_error "Cloudflare API request failed"
echo "$response" | jq '.'
exit 1
fi
local count=$(echo "$response" | jq -r '.result | length')
if [[ "$count" -gt 0 ]]; then
return 0 # Record exists
else
return 1 # Record does not exist
fi
}
add_dns_record() {
local hostname=$1
local ip=$2
log_info "Adding DNS record: $hostname -> $ip"
local response=$(curl -s -X POST \
"https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \
-H "Authorization: Bearer ${CF_API_TOKEN}" \
-H "Content-Type: application/json" \
--data "{
\"type\": \"A\",
\"name\": \"${hostname}\",
\"content\": \"${ip}\",
\"ttl\": 1,
\"proxied\": false
}")
local success=$(echo "$response" | jq -r '.success')
if [[ "$success" == "true" ]]; then
log_success "DNS record added successfully: $hostname -> $ip"
echo "$response" | jq -r '.result | "Record ID: \(.id)"'
return 0
else
log_error "Failed to add DNS record"
echo "$response" | jq '.'
return 1
fi
}
# Parse arguments
HOSTNAME=""
IP=""
RANDOM_MODE=false
DOMAIN=""
while [[ $# -gt 0 ]]; do
case $1 in
--hostname)
HOSTNAME="$2"
shift 2
;;
--ip)
IP="$2"
shift 2
;;
--random)
RANDOM_MODE=true
shift
;;
--domain)
DOMAIN="$2"
shift 2
;;
-h|--help)
usage
;;
*)
log_error "Unknown option: $1"
usage
;;
esac
done
# Validate arguments
if [[ -z "$IP" ]]; then
log_error "IP address is required"
usage
fi
if [[ "$RANDOM_MODE" == true ]]; then
if [[ -z "$DOMAIN" ]]; then
log_error "Domain is required when using --random mode"
usage
fi
else
if [[ -z "$HOSTNAME" ]]; then
log_error "Hostname is required"
usage
fi
fi
# Check requirements
check_requirements
# Generate or use provided hostname
if [[ "$RANDOM_MODE" == true ]]; then
MAX_ATTEMPTS=50
attempt=1
while [[ $attempt -le $MAX_ATTEMPTS ]]; do
HOSTNAME=$(generate_random_hostname "$DOMAIN")
log_info "Generated hostname (attempt $attempt): $HOSTNAME"
if ! check_dns_exists "$HOSTNAME"; then
log_success "Hostname is available: $HOSTNAME"
break
else
log_info "Hostname already exists, generating new one..."
attempt=$((attempt + 1))
fi
done
if [[ $attempt -gt $MAX_ATTEMPTS ]]; then
log_error "Failed to generate unique hostname after $MAX_ATTEMPTS attempts"
exit 1
fi
else
if check_dns_exists "$HOSTNAME"; then
log_error "DNS record already exists for: $HOSTNAME"
exit 1
fi
fi
# Add the DNS record
add_dns_record "$HOSTNAME" "$IP"

327
scripts/cloudflare-remove.sh Executable file
@@ -0,0 +1,327 @@
#!/bin/bash
set -euo pipefail
# Cloudflare API credentials
CF_API_TOKEN="${CLOUDFLARE_API_TOKEN:-}"
CF_ZONE_ID="${CLOUDFLARE_ZONE_ID:-}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
usage() {
echo "Usage: $0 --hostname <hostname>"
echo " $0 --record-id <record_id>"
echo " $0 --all-matching <pattern>"
echo ""
echo "Options:"
echo " --hostname Remove DNS record by hostname (e.g., test.example.com)"
echo " --record-id Remove DNS record by Cloudflare record ID"
echo " --all-matching Remove all DNS records matching pattern (e.g., '*.example.com')"
echo ""
echo "Environment variables required:"
echo " CLOUDFLARE_API_TOKEN"
echo " CLOUDFLARE_ZONE_ID"
exit 1
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1" >&2
}
log_info() {
echo -e "${YELLOW}[INFO]${NC} $1" >&2
}
check_requirements() {
if [[ -z "$CF_API_TOKEN" ]]; then
log_error "CLOUDFLARE_API_TOKEN environment variable not set"
exit 1
fi
if [[ -z "$CF_ZONE_ID" ]]; then
log_error "CLOUDFLARE_ZONE_ID environment variable not set"
exit 1
fi
if ! command -v curl &> /dev/null; then
log_error "curl is required but not installed"
exit 1
fi
if ! command -v jq &> /dev/null; then
log_error "jq is required but not installed"
exit 1
fi
}
get_dns_records_by_hostname() {
local hostname=$1
log_info "Looking up DNS records for: $hostname"
local response=$(curl -s -X GET \
"https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records?name=${hostname}" \
-H "Authorization: Bearer ${CF_API_TOKEN}" \
-H "Content-Type: application/json")
local success=$(echo "$response" | jq -r '.success')
if [[ "$success" != "true" ]]; then
log_error "Cloudflare API request failed"
echo "$response" | jq '.'
exit 1
fi
echo "$response"
}
get_all_dns_records() {
log_info "Fetching all DNS records in zone"
local response=$(curl -s -X GET \
"https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records?per_page=1000" \
-H "Authorization: Bearer ${CF_API_TOKEN}" \
-H "Content-Type: application/json")
local success=$(echo "$response" | jq -r '.success')
if [[ "$success" != "true" ]]; then
log_error "Cloudflare API request failed"
echo "$response" | jq '.'
exit 1
fi
echo "$response"
}
delete_dns_record() {
local record_id=$1
local hostname=$2
log_info "Deleting DNS record: $hostname (ID: $record_id)"
local response=$(curl -s -X DELETE \
"https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records/${record_id}" \
-H "Authorization: Bearer ${CF_API_TOKEN}" \
-H "Content-Type: application/json")
local success=$(echo "$response" | jq -r '.success')
if [[ "$success" == "true" ]]; then
log_success "DNS record deleted successfully: $hostname (ID: $record_id)"
return 0
else
log_error "Failed to delete DNS record: $hostname (ID: $record_id)"
echo "$response" | jq '.'
return 1
fi
}
delete_by_hostname() {
local hostname=$1
local response=$(get_dns_records_by_hostname "$hostname")
local count=$(echo "$response" | jq -r '.result | length')
if [[ "$count" -eq 0 ]]; then
log_error "No DNS records found for: $hostname"
exit 1
fi
log_info "Found $count record(s) for: $hostname"
local deleted=0
local failed=0
while IFS= read -r record; do
local record_id=$(echo "$record" | jq -r '.id')
local record_name=$(echo "$record" | jq -r '.name')
local record_type=$(echo "$record" | jq -r '.type')
local record_content=$(echo "$record" | jq -r '.content')
log_info "Found: $record_name ($record_type) -> $record_content"
if delete_dns_record "$record_id" "$record_name"; then
deleted=$((deleted + 1))
else
failed=$((failed + 1))
fi
done < <(echo "$response" | jq -c '.result[]')
log_info "Summary: $deleted deleted, $failed failed"
if [[ $failed -gt 0 ]]; then
exit 1
fi
}
delete_by_record_id() {
local record_id=$1
# First, get the record details
log_info "Fetching record details for ID: $record_id"
local response=$(curl -s -X GET \
"https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records/${record_id}" \
-H "Authorization: Bearer ${CF_API_TOKEN}" \
-H "Content-Type: application/json")
local success=$(echo "$response" | jq -r '.success')
if [[ "$success" != "true" ]]; then
log_error "Record not found or API request failed"
echo "$response" | jq '.'
exit 1
fi
local hostname=$(echo "$response" | jq -r '.result.name')
local record_type=$(echo "$response" | jq -r '.result.type')
local content=$(echo "$response" | jq -r '.result.content')
log_info "Record found: $hostname ($record_type) -> $content"
delete_dns_record "$record_id" "$hostname"
}
delete_all_matching() {
local pattern=$1
log_info "Searching for records matching pattern: $pattern"
local response=$(get_all_dns_records)
local all_records=$(echo "$response" | jq -c '.result[]')
local matching_records=()
while IFS= read -r record; do
local record_name=$(echo "$record" | jq -r '.name')
# Simple pattern matching (supports * wildcard)
if [[ "$pattern" == *"*"* ]]; then
# Convert pattern to regex
local regex="${pattern//\*/.*}"
if [[ "$record_name" =~ ^${regex}$ ]]; then
matching_records+=("$record")
fi
else
# Exact match
if [[ "$record_name" == "$pattern" ]]; then
matching_records+=("$record")
fi
fi
done < <(echo "$all_records")
local count=${#matching_records[@]}
if [[ $count -eq 0 ]]; then
log_error "No DNS records found matching pattern: $pattern"
exit 1
fi
log_info "Found $count record(s) matching pattern: $pattern"
# List matching records
for record in "${matching_records[@]}"; do
local record_name=$(echo "$record" | jq -r '.name')
local record_type=$(echo "$record" | jq -r '.type')
local content=$(echo "$record" | jq -r '.content')
log_info " - $record_name ($record_type) -> $content"
done
# Confirm deletion
echo ""
read -p "Delete all $count record(s)? [y/N] " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
log_info "Deletion cancelled"
exit 0
fi
local deleted=0
local failed=0
for record in "${matching_records[@]}"; do
local record_id=$(echo "$record" | jq -r '.id')
local record_name=$(echo "$record" | jq -r '.name')
if delete_dns_record "$record_id" "$record_name"; then
deleted=$((deleted + 1))
else
failed=$((failed + 1))
fi
done
log_info "Summary: $deleted deleted, $failed failed"
if [[ $failed -gt 0 ]]; then
exit 1
fi
}
# Parse arguments
HOSTNAME=""
RECORD_ID=""
PATTERN=""
MODE=""
while [[ $# -gt 0 ]]; do
case $1 in
--hostname)
HOSTNAME="$2"
MODE="hostname"
shift 2
;;
--record-id)
RECORD_ID="$2"
MODE="record-id"
shift 2
;;
--all-matching)
PATTERN="$2"
MODE="pattern"
shift 2
;;
-h|--help)
usage
;;
*)
log_error "Unknown option: $1"
usage
;;
esac
done
# Validate arguments
if [[ -z "$MODE" ]]; then
log_error "No deletion mode specified"
usage
fi
# Check requirements
check_requirements
# Execute based on mode
case $MODE in
hostname)
delete_by_hostname "$HOSTNAME"
;;
record-id)
delete_by_record_id "$RECORD_ID"
;;
pattern)
delete_all_matching "$PATTERN"
;;
*)
log_error "Invalid mode: $MODE"
exit 1
;;
esac

@@ -0,0 +1,37 @@
{
"permissions": {
"allow": [
"Bash(chmod:*)",
"Bash(test:*)",
"Bash(python3:*)",
"Bash(docker network create:*)",
"Bash(bash:*)",
"Bash(cat:*)",
"Bash(docker compose config:*)",
"Bash(docker compose:*)",
"Bash(docker ps:*)",
"Bash(docker volume:*)",
"Bash(docker network:*)",
"Bash(docker exec:*)",
"Bash(docker inspect:*)",
"Bash(curl:*)",
"Bash(nslookup:*)",
"Bash(dig:*)",
"Bash(tree:*)",
"Bash(ls:*)",
"Bash(pip3 install:*)",
"Bash(find:*)",
"Bash(pip install:*)",
"Bash(python -m json.tool:*)",
"Bash(pkill:*)",
"Bash(python test_integration.py:*)",
"Bash(docker run:*)",
"Bash(redis-cli ping:*)",
"Bash(mkdir:*)",
"Bash(./destroy.py:*)",
"Bash(lsof:*)",
"Bash(netstat:*)",
"Bash(kill:*)"
]
}
}

14
wordpress/.env Normal file
@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=daidle-allotrylic
APP_NAME=wordpress
SUBDOMAIN=daidle-allotrylic
DOMAIN=merakit.my
URL=daidle-allotrylic.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_ddc6c26a_wordpress_daidle_allotrylic
DB_USER=angali_ddc6c26a_wordpress_daidle
DB_PASSWORD=emblazer-stairway-sweety
DB_ROOT_PASSWORD=idaein-silkgrower-tariffism
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

22
wordpress/.env.backup Normal file
@@ -0,0 +1,22 @@
# App
COMPOSE_PROJECT_NAME=emyd-tartarian
APP_NAME=wordpress
SUBDOMAIN=emyd-tartarian
DOMAIN=merakit.my
URL=emyd-tartarian.merakit.my
# Versions
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
# Database
DB_NAME=angali_guzagmpc_wordpress_emyd_tartarian
DB_USER=angali_guzagmpc_wordpress_emyd_t
DB_PASSWORD=creditrix-lutein-discolors
DB_ROOT_PASSWORD=sixtieths-murines-rabbling
# WordPress
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,22 @@
# App
COMPOSE_PROJECT_NAME=litterers-apotropaic
APP_NAME=wordpress
SUBDOMAIN=litterers-apotropaic
DOMAIN=merakit.my
URL=litterers-apotropaic.merakit.my
# Versions
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
# Database
DB_NAME=angali_xewzeu15_wordpress_litterers_apotropaic
DB_USER=angali_xewzeu15_wordpress_litter
DB_PASSWORD=templon-infantly-yielding
DB_ROOT_PASSWORD=beplumed-falus-tendry
# WordPress
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=modif-sporidial
APP_NAME=wordpress
SUBDOMAIN=modif-sporidial
DOMAIN=merakit.my
URL=modif-sporidial.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_a08f84d9_wordpress_modif_sporidial
DB_USER=angali_a08f84d9_wordpress_modif_
DB_PASSWORD=fumeroot-rummest-tiltboard
DB_ROOT_PASSWORD=unalike-prologizer-axonic
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=modif-sporidial
APP_NAME=wordpress
SUBDOMAIN=modif-sporidial
DOMAIN=merakit.my
URL=modif-sporidial.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_a08f84d9_wordpress_modif_sporidial
DB_USER=angali_a08f84d9_wordpress_modif_
DB_PASSWORD=fumeroot-rummest-tiltboard
DB_ROOT_PASSWORD=unalike-prologizer-axonic
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=modif-sporidial
APP_NAME=wordpress
SUBDOMAIN=modif-sporidial
DOMAIN=merakit.my
URL=modif-sporidial.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_a08f84d9_wordpress_modif_sporidial
DB_USER=angali_a08f84d9_wordpress_modif_
DB_PASSWORD=fumeroot-rummest-tiltboard
DB_ROOT_PASSWORD=unalike-prologizer-axonic
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=dtente-yali
APP_NAME=wordpress
SUBDOMAIN=dtente-yali
DOMAIN=merakit.my
URL=dtente-yali.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_1fc30955_wordpress_dtente_yali
DB_USER=angali_1fc30955_wordpress_dtente
DB_PASSWORD=chronic-urophanic-subminimal
DB_ROOT_PASSWORD=determiner-reaks-cochleated
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=rappini-misseated
APP_NAME=wordpress
SUBDOMAIN=rappini-misseated
DOMAIN=merakit.my
URL=rappini-misseated.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_d6646fab_wordpress_rappini_misseated
DB_USER=angali_d6646fab_wordpress_rappin
DB_PASSWORD=painterish-tayir-mentalist
DB_ROOT_PASSWORD=venemous-haymow-overbend
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=emetic-fuglemen
APP_NAME=wordpress
SUBDOMAIN=emetic-fuglemen
DOMAIN=merakit.my
URL=emetic-fuglemen.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_a8c12895_wordpress_emetic_fuglemen
DB_USER=angali_a8c12895_wordpress_emetic
DB_PASSWORD=heteroside-budder-chipyard
DB_ROOT_PASSWORD=overkeen-gangliated-describer
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=exing-calcinator
APP_NAME=wordpress
SUBDOMAIN=exing-calcinator
DOMAIN=merakit.my
URL=exing-calcinator.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_f9404c19_wordpress_exing_calcinator
DB_USER=angali_f9404c19_wordpress_exing_
DB_PASSWORD=blencorn-raniform-sectism
DB_ROOT_PASSWORD=florilege-haya-thin
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=exing-calcinator
APP_NAME=wordpress
SUBDOMAIN=exing-calcinator
DOMAIN=merakit.my
URL=exing-calcinator.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_f9404c19_wordpress_exing_calcinator
DB_USER=angali_f9404c19_wordpress_exing_
DB_PASSWORD=blencorn-raniform-sectism
DB_ROOT_PASSWORD=florilege-haya-thin
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=exing-calcinator
APP_NAME=wordpress
SUBDOMAIN=exing-calcinator
DOMAIN=merakit.my
URL=exing-calcinator.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_f9404c19_wordpress_exing_calcinator
DB_USER=angali_f9404c19_wordpress_exing_
DB_PASSWORD=blencorn-raniform-sectism
DB_ROOT_PASSWORD=florilege-haya-thin
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=ankylotic-unactable
APP_NAME=wordpress
SUBDOMAIN=ankylotic-unactable
DOMAIN=merakit.my
URL=ankylotic-unactable.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_6aa981f6_wordpress_ankylotic_unactable
DB_USER=angali_6aa981f6_wordpress_ankylo
DB_PASSWORD=mesoskelic-leopard-libertines
DB_ROOT_PASSWORD=lavature-barmkin-slipsoles
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=slenderly-spareable
APP_NAME=wordpress
SUBDOMAIN=slenderly-spareable
DOMAIN=merakit.my
URL=slenderly-spareable.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_94934db7_wordpress_slenderly_spareable
DB_USER=angali_94934db7_wordpress_slende
DB_PASSWORD=chaped-toothwort-transform
DB_ROOT_PASSWORD=outearn-testar-platinise
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

@@ -0,0 +1,14 @@
COMPOSE_PROJECT_NAME=slenderly-spareable
APP_NAME=wordpress
SUBDOMAIN=slenderly-spareable
DOMAIN=merakit.my
URL=slenderly-spareable.merakit.my
WORDPRESS_VERSION=6.5-php8.2-apache
MARIADB_VERSION=11.3
DB_NAME=angali_94934db7_wordpress_slenderly_spareable
DB_USER=angali_94934db7_wordpress_slende
DB_PASSWORD=chaped-toothwort-transform
DB_ROOT_PASSWORD=outearn-testar-platinise
WP_TABLE_PREFIX=wp_
WP_MEMORY_LIMIT=256M
WP_MAX_MEMORY_LIMIT=256M

354
wordpress/DESTROY.md Normal file
@@ -0,0 +1,354 @@
# WordPress Deployment Destruction Guide
This document explains how to destroy WordPress deployments using the config-based destruction system.
## Overview
The WordPress deployment system automatically saves a configuration file for each successful deployment in the `deployments/` directory. These configurations can be used to cleanly destroy environments, removing all associated resources.
## Deployment Config Repository
Each successful deployment creates a JSON config file in `deployments/` containing:
- **Subdomain and URL**: Deployment identifiers
- **Docker Resources**: Container names, volumes, networks
- **DNS Information**: Cloudflare record ID and IP address
- **Database Details**: Database name and user
- **Timestamps**: When the deployment was created
Example config file: `deployments/my-site_20251217_120000.json`
```json
{
"subdomain": "my-site",
"url": "my-site.example.com",
"domain": "example.com",
"compose_project_name": "my-site",
"db_name": "wp_db_my_site",
"db_user": "wp_user_my_site",
"deployment_timestamp": "2025-12-17T12:00:00",
"dns_record_id": "abc123xyz",
"dns_ip": "203.0.113.1",
"containers": ["my-site_wp", "my-site_db"],
"volumes": ["my-site_db_data", "my-site_wp_data"],
"networks": ["my-site_internal"],
"env_file_path": "/path/to/.env"
}
```
## Using the Destroy Script
### Prerequisites
Set the following environment variables (required for DNS cleanup):
```bash
export CLOUDFLARE_API_TOKEN="your_token"
export CLOUDFLARE_ZONE_ID="your_zone_id"
```
If these are not set, the script will still work but DNS records won't be removed.
### List All Deployments
View all tracked deployments:
```bash
./destroy.py --list
```
This displays a table with:
- Subdomain
- URL
- Deployment timestamp
- Config file name
### Destroy a Deployment
#### By Subdomain (Recommended)
```bash
./destroy.py --subdomain my-site
```
#### By URL
```bash
./destroy.py --url my-site.example.com
```
#### By Config File
```bash
./destroy.py --config deployments/my-site_20251217_120000.json
```
### Options
#### Skip Confirmation
Use `-y` or `--yes` to skip the confirmation prompt:
```bash
./destroy.py --subdomain my-site --yes
```
#### Dry Run
Preview what would be destroyed without making changes:
```bash
./destroy.py --subdomain my-site --dry-run
```
#### Keep Config File
By default, the config file is deleted after destruction. To keep it:
```bash
./destroy.py --subdomain my-site --keep-config
```
#### Debug Mode
Enable verbose logging:
```bash
./destroy.py --subdomain my-site --log-level DEBUG
```
## What Gets Destroyed
The destroy script removes the following resources in order:
1. **Docker Containers**
- Stops all containers
- Removes containers forcefully
2. **Docker Volumes**
- Removes database volume (e.g., `project_db_data`)
- Removes WordPress volume (e.g., `project_wp_data`)
3. **Docker Networks**
- Removes internal networks
- Skips external networks like `proxy`
4. **DNS Records**
- Removes the Cloudflare DNS record using the saved record ID
- Requires Cloudflare credentials
5. **Config File**
- Deletes the deployment config file (unless `--keep-config` is used)
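
For orientation, the manual equivalent of steps 1-3 looks roughly like this sketch (container, volume, and network names follow the compose project naming convention from the example config above; substitute your own project name):

```bash
docker stop my-site_wp my-site_db
docker rm -f my-site_wp my-site_db
docker volume rm my-site_db_data my-site_wp_data
docker network rm my-site_internal   # external networks such as 'proxy' are never removed
```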
## Safety Features
### Confirmation Prompt
By default, the script asks for confirmation before destroying:
```
Are you sure you want to destroy my-site.example.com? [y/N]
```
### Dry-Run Mode
Test the destruction process without making changes:
```bash
./destroy.py --subdomain my-site --dry-run
```
This shows exactly what commands would be executed.
### Graceful Failures
- If DNS credentials are missing, the script continues and skips DNS cleanup
- If a resource doesn't exist, the script logs a warning and continues
- Partial failures are reported, allowing manual cleanup of remaining resources
## Exit Codes
- `0`: Success
- `1`: Failure (partial or complete)
- `2`: Deployment not found
- `130`: User cancelled (Ctrl+C)
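
These codes make the script easy to drive from automation. A minimal wrapper (the subdomain is illustrative) might branch on them like this:

```bash
./destroy.py --subdomain my-site --yes
case $? in
  0)   echo "destroyed cleanly" ;;
  2)   echo "no such deployment" ;;
  130) echo "cancelled by user" ;;
  *)   echo "destruction failed; check the logs" ;;
esac
```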
## Examples
### Example 1: Clean Destruction
```bash
# List deployments
./destroy.py --list
# Destroy with confirmation
./destroy.py --subdomain test-site
# Output:
# Deployment Information:
# Subdomain: test-site
# URL: test-site.example.com
# Project: test-site
# Deployed: 2025-12-17T12:00:00
# Containers: 2
# DNS Record ID: abc123
#
# Are you sure you want to destroy test-site.example.com? [y/N]: y
#
# ═══ Destroying Containers ═══
# Stopping container: test-site_wp
# Removing container: test-site_wp
# ...
#
# ✓ Destruction Successful!
```
### Example 2: Batch Destruction
Destroy multiple deployments in one command:
```bash
#!/bin/bash
# destroy_all.sh - Destroy all test deployments
for subdomain in test-1 test-2 test-3; do
./destroy.py --subdomain "$subdomain" --yes
done
```
### Example 3: Conditional Destruction
Destroy deployments older than 7 days:
```bash
#!/bin/bash
# cleanup_old.sh
for config in deployments/*.json; do
age=$(( ($(date +%s) - $(stat -c %Y "$config")) / 86400 ))
if [ $age -gt 7 ]; then
echo "Destroying $config (age: $age days)"
./destroy.py --config "$config" --yes
fi
done
```
## Troubleshooting
### "Deployment not found"
The deployment config doesn't exist. Check available deployments:
```bash
./destroy.py --list
```
### "Failed to remove DNS record"
Possible causes:
- Cloudflare credentials not set
- DNS record already deleted
- Invalid record ID in config
The script will continue and clean up other resources.
### "Command failed: docker stop"
Container might already be stopped. The script continues with removal.
### Containers Still Running
If containers aren't removed, manually stop them:
```bash
docker ps | grep my-site
docker stop my-site_wp my-site_db
docker rm my-site_wp my-site_db
```
### Volumes Not Removed
Volumes may be in use by other containers:
```bash
docker volume ls | grep my-site
docker volume rm my-site_db_data my-site_wp_data
```
## Integration with Deployment
The deployment orchestrator automatically saves configs after successful deployments. The config is saved in `deployments/` with the format:
```
deployments/{subdomain}_{timestamp}.json
```
This happens automatically in `wordpress_deployer/orchestrator.py` after Phase 5 (Health Check) completes successfully.
## Advanced Usage
### Manual Config Creation
If you need to create a config manually for an existing deployment:
```python
from wordpress_deployer.deployment_config_manager import (
DeploymentConfigManager,
DeploymentMetadata
)
manager = DeploymentConfigManager()
metadata = DeploymentMetadata(
subdomain="my-site",
url="my-site.example.com",
domain="example.com",
compose_project_name="my-site",
db_name="wp_db",
db_user="wp_user",
deployment_timestamp="2025-12-17T12:00:00",
dns_record_id="abc123",
dns_ip="203.0.113.1",
containers=["my-site_wp", "my-site_db"],
volumes=["my-site_db_data", "my-site_wp_data"],
networks=["my-site_internal"],
env_file_path="/path/to/.env"
)
manager.save_deployment(metadata)
```
### Programmatic Destruction
Use the destroy script in Python:
```python
import subprocess
import sys
result = subprocess.run(
["./destroy.py", "--subdomain", "my-site", "--yes"],
capture_output=True,
text=True
)
if result.returncode == 0:
print("Destruction successful")
else:
print(f"Destruction failed: {result.stderr}")
sys.exit(1)
```
## Best Practices
1. **Always Test with Dry-Run**: Use `--dry-run` first to preview destruction
2. **Keep Config Backups**: Use `--keep-config` for audit trails
3. **Verify Before Batch Operations**: List deployments before bulk destruction
4. **Monitor Partial Failures**: Check logs for resources that weren't cleaned up
5. **Set Cloudflare Credentials**: Always configure DNS credentials to ensure complete cleanup
## See Also
- [Main README](README.md) - Deployment documentation
- [deploy.py](deploy.py) - Deployment script
- [wordpress_deployer/](wordpress_deployer/) - Core deployment modules

202
wordpress/deploy.py Executable file
@@ -0,0 +1,202 @@
#!/usr/bin/env python3
"""
Production-ready WordPress deployment script
Combines environment generation and deployment with:
- Configuration validation
- Rollback capability
- Dry-run mode
- Monitoring hooks
"""
import argparse
import logging
import sys
from pathlib import Path
from typing import NoReturn
from rich.console import Console
from rich.logging import RichHandler
from wordpress_deployer.config import ConfigurationError, DeploymentConfig
from wordpress_deployer.orchestrator import DeploymentError, DeploymentOrchestrator
console = Console()
def setup_logging(log_level: str) -> None:
"""
Setup rich logging with colored output
Args:
log_level: Logging level (DEBUG, INFO, WARNING, ERROR)
"""
logging.basicConfig(
level=log_level.upper(),
format="%(message)s",
datefmt="[%X]",
handlers=[RichHandler(console=console, rich_tracebacks=True, show_path=False)]
)
# Reduce noise from urllib3/requests
logging.getLogger("urllib3").setLevel(logging.WARNING)
logging.getLogger("requests").setLevel(logging.WARNING)
def parse_args() -> argparse.Namespace:
"""
Parse CLI arguments
Returns:
argparse.Namespace with parsed arguments
"""
parser = argparse.ArgumentParser(
description="Deploy WordPress with automatic environment generation",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Normal deployment
./deploy.py
# Dry-run mode (preview only)
./deploy.py --dry-run
# With webhook notifications
./deploy.py --webhook-url https://hooks.slack.com/xxx
# Debug mode
./deploy.py --log-level DEBUG
# Custom retry count
./deploy.py --max-retries 5
Environment Variables:
CLOUDFLARE_API_TOKEN Cloudflare API token (required)
CLOUDFLARE_ZONE_ID Cloudflare zone ID (required)
DEPLOYMENT_WEBHOOK_URL Webhook URL for notifications (optional)
DEPLOYMENT_MAX_RETRIES Max retries for DNS conflicts (default: 3)
For more information, see the documentation at:
/infra/templates/wordpress/README.md
"""
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Preview deployment without making changes"
)
parser.add_argument(
"--env-file",
type=Path,
default=Path(".env"),
help="Path to .env file (default: .env)"
)
parser.add_argument(
"--compose-file",
type=Path,
default=Path("docker-compose.yml"),
help="Path to docker-compose.yml (default: docker-compose.yml)"
)
parser.add_argument(
"--max-retries",
type=int,
default=3,
help="Max retries for DNS conflicts (default: 3)"
)
parser.add_argument(
"--webhook-url",
type=str,
help="Webhook URL for deployment notifications"
)
parser.add_argument(
"--log-level",
choices=["DEBUG", "INFO", "WARNING", "ERROR"],
default="INFO",
help="Logging level (default: INFO)"
)
parser.add_argument(
"--no-verify-ssl",
action="store_true",
help="Skip SSL verification for health checks (not recommended for production)"
)
return parser.parse_args()
def print_banner() -> None:
"""Print deployment banner"""
console.print("\n[bold cyan]╔══════════════════════════════════════════════╗[/bold cyan]")
console.print("[bold cyan]║[/bold cyan] [bold white]WordPress Production Deployment[/bold white] [bold cyan]║[/bold cyan]")
console.print("[bold cyan]╚══════════════════════════════════════════════╝[/bold cyan]\n")
def main() -> NoReturn:
"""
Main entry point
Exit codes:
0: Success
1: Deployment failure
130: User interrupt (Ctrl+C)
"""
args = parse_args()
setup_logging(args.log_level)
logger = logging.getLogger(__name__)
print_banner()
try:
# Load configuration
logger.debug("Loading configuration...")
config = DeploymentConfig.from_env_and_args(args)
config.validate()
logger.debug("Configuration loaded successfully")
if config.dry_run:
console.print("[bold yellow]━━━ DRY-RUN MODE: No changes will be made ━━━[/bold yellow]\n")
# Create orchestrator and deploy
orchestrator = DeploymentOrchestrator(config)
orchestrator.deploy()
console.print("\n[bold green]╔══════════════════════════════════════════════╗[/bold green]")
console.print("[bold green]║[/bold green] [bold white]✓ Deployment Successful![/bold white] [bold green]║[/bold green]")
console.print("[bold green]╚══════════════════════════════════════════════╝[/bold green]\n")
sys.exit(0)
except ConfigurationError as e:
logger.error(f"Configuration error: {e}")
console.print(f"\n[bold red]✗ Configuration error: {e}[/bold red]\n")
console.print("[yellow]Please check your environment variables and configuration.[/yellow]")
console.print("[yellow]Required: CLOUDFLARE_API_TOKEN, CLOUDFLARE_ZONE_ID[/yellow]\n")
sys.exit(1)
except DeploymentError as e:
logger.error(f"Deployment failed: {e}")
console.print(f"\n[bold red]✗ Deployment failed: {e}[/bold red]\n")
sys.exit(1)
except KeyboardInterrupt:
logger.warning("Deployment interrupted by user")
console.print("\n[bold yellow]✗ Deployment interrupted by user[/bold yellow]\n")
sys.exit(130)
except Exception as e:
logger.exception("Unexpected error")
console.print(f"\n[bold red]✗ Unexpected error: {e}[/bold red]\n")
console.print("[yellow]Please check the logs above for more details.[/yellow]\n")
sys.exit(1)
if __name__ == "__main__":
main()

wordpress/destroy.py Executable file
@@ -0,0 +1,529 @@
#!/usr/bin/env python3
"""
WordPress Deployment Destroyer
Destroys WordPress deployments based on saved deployment configurations
"""
import argparse
import logging
import subprocess
import sys
from pathlib import Path
from typing import List, NoReturn, Optional
from rich.console import Console
from rich.logging import RichHandler
from rich.prompt import Confirm
from rich.table import Table
from wordpress_deployer.deployment_config_manager import (
DeploymentConfigManager,
DeploymentMetadata
)
from wordpress_deployer.dns_manager import DNSError, DNSManager
console = Console()
def setup_logging(log_level: str) -> None:
"""
Setup rich logging with colored output
Args:
log_level: Logging level (DEBUG, INFO, WARNING, ERROR)
"""
logging.basicConfig(
level=log_level.upper(),
format="%(message)s",
datefmt="[%X]",
handlers=[RichHandler(console=console, rich_tracebacks=True, show_path=False)]
)
def parse_args() -> argparse.Namespace:
"""
Parse CLI arguments
Returns:
argparse.Namespace with parsed arguments
"""
parser = argparse.ArgumentParser(
description="Destroy WordPress deployments",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# List all deployments
./destroy.py --list
# Destroy by subdomain
./destroy.py --subdomain my-site
# Destroy by URL
./destroy.py --url my-site.example.com
# Destroy by config file
./destroy.py --config deployments/my-site_20231215_120000.json
# Destroy without confirmation
./destroy.py --subdomain my-site --yes
# Dry-run mode (preview only)
./destroy.py --subdomain my-site --dry-run
Environment Variables:
CLOUDFLARE_API_TOKEN Cloudflare API token (required)
CLOUDFLARE_ZONE_ID Cloudflare zone ID (required)
"""
)
# Action group - mutually exclusive
action_group = parser.add_mutually_exclusive_group(required=True)
action_group.add_argument(
"--list",
action="store_true",
help="List all deployments"
)
action_group.add_argument(
"--subdomain",
type=str,
help="Subdomain to destroy"
)
action_group.add_argument(
"--url",
type=str,
help="Full URL to destroy"
)
action_group.add_argument(
"--config",
type=Path,
help="Path to deployment config file"
)
# Options
parser.add_argument(
"--yes", "-y",
action="store_true",
help="Skip confirmation prompts"
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Preview destruction without making changes"
)
parser.add_argument(
"--keep-config",
action="store_true",
help="Keep deployment config file after destruction"
)
parser.add_argument(
"--log-level",
choices=["DEBUG", "INFO", "WARNING", "ERROR"],
default="INFO",
help="Logging level (default: INFO)"
)
return parser.parse_args()
def print_banner() -> None:
"""Print destruction banner"""
console.print("\n[bold red]╔══════════════════════════════════════════════╗[/bold red]")
console.print("[bold red]║[/bold red] [bold white]WordPress Deployment Destroyer[/bold white] [bold red]║[/bold red]")
console.print("[bold red]╚══════════════════════════════════════════════╝[/bold red]\n")
def list_deployments(config_manager: DeploymentConfigManager) -> None:
"""
List all deployments
Args:
config_manager: DeploymentConfigManager instance
"""
deployments = config_manager.list_deployments()
if not deployments:
console.print("[yellow]No deployments found[/yellow]")
return
table = Table(title="Active Deployments")
table.add_column("Subdomain", style="cyan")
table.add_column("URL", style="green")
table.add_column("Deployed", style="yellow")
table.add_column("Config File", style="blue")
for config_file in deployments:
try:
metadata = config_manager.load_deployment(config_file)
table.add_row(
metadata.subdomain,
metadata.url,
metadata.deployment_timestamp,
config_file.name
)
except Exception as e:
console.print(f"[red]Error loading {config_file}: {e}[/red]")
console.print(table)
console.print(f"\n[bold]Total deployments: {len(deployments)}[/bold]\n")
def find_config(
args: argparse.Namespace,
config_manager: DeploymentConfigManager
) -> Optional[Path]:
"""
Find deployment config based on arguments
Args:
args: CLI arguments
config_manager: DeploymentConfigManager instance
Returns:
Path to config file or None
"""
if args.config:
return args.config if args.config.exists() else None
if args.subdomain:
return config_manager.find_deployment_by_subdomain(args.subdomain)
if args.url:
return config_manager.find_deployment_by_url(args.url)
return None
def run_command(cmd: List[str], dry_run: bool = False) -> bool:
"""
Run a shell command
Args:
cmd: Command and arguments
dry_run: If True, only print command
Returns:
True if successful, False otherwise
"""
cmd_str = " ".join(cmd)
if dry_run:
console.print(f"[dim]Would run: {cmd_str}[/dim]")
return True
try:
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=30
)
if result.returncode != 0:
logging.warning(f"Command failed: {cmd_str}")
logging.debug(f"Error: {result.stderr}")
return False
return True
except subprocess.TimeoutExpired:
logging.error(f"Command timed out: {cmd_str}")
return False
except Exception as e:
logging.error(f"Failed to run command: {e}")
return False
def destroy_containers(metadata: DeploymentMetadata, dry_run: bool = False) -> bool:
"""
Stop and remove containers
Args:
metadata: Deployment metadata
dry_run: If True, only preview
Returns:
True if successful
"""
console.print("\n[bold yellow]═══ Destroying Containers ═══[/bold yellow]")
success = True
if metadata.containers:
for container in metadata.containers:
console.print(f"Stopping container: [cyan]{container}[/cyan]")
if not run_command(["docker", "stop", container], dry_run):
success = False
console.print(f"Removing container: [cyan]{container}[/cyan]")
if not run_command(["docker", "rm", "-f", container], dry_run):
success = False
else:
# Try to stop by project name
console.print(f"Stopping docker-compose project: [cyan]{metadata.compose_project_name}[/cyan]")
if not run_command(
["docker", "compose", "-p", metadata.compose_project_name, "down"],
dry_run
):
success = False
return success
def destroy_volumes(metadata: DeploymentMetadata, dry_run: bool = False) -> bool:
"""
Remove Docker volumes
Args:
metadata: Deployment metadata
dry_run: If True, only preview
Returns:
True if successful
"""
console.print("\n[bold yellow]═══ Destroying Volumes ═══[/bold yellow]")
success = True
if metadata.volumes:
for volume in metadata.volumes:
console.print(f"Removing volume: [cyan]{volume}[/cyan]")
if not run_command(["docker", "volume", "rm", "-f", volume], dry_run):
success = False
else:
# Try with project name
volumes = [
f"{metadata.compose_project_name}_db_data",
f"{metadata.compose_project_name}_wp_data"
]
for volume in volumes:
console.print(f"Removing volume: [cyan]{volume}[/cyan]")
run_command(["docker", "volume", "rm", "-f", volume], dry_run)
return success
def destroy_networks(metadata: DeploymentMetadata, dry_run: bool = False) -> bool:
"""
Remove Docker networks (except external ones)
Args:
metadata: Deployment metadata
dry_run: If True, only preview
Returns:
True if successful
"""
console.print("\n[bold yellow]═══ Destroying Networks ═══[/bold yellow]")
success = True
if metadata.networks:
for network in metadata.networks:
# Skip external networks
if network == "proxy":
console.print(f"Skipping external network: [cyan]{network}[/cyan]")
continue
console.print(f"Removing network: [cyan]{network}[/cyan]")
if not run_command(["docker", "network", "rm", network], dry_run):
# Networks might not exist or be in use, don't fail
pass
return success
def destroy_dns(
metadata: DeploymentMetadata,
dns_manager: Optional[DNSManager],
dry_run: bool = False
) -> bool:
"""
Remove DNS record
Args:
metadata: Deployment metadata
dns_manager: DNSManager instance, or None to skip DNS cleanup
dry_run: If True, only preview
Returns:
True if successful
"""
console.print("\n[bold yellow]═══ Destroying DNS Record ═══[/bold yellow]")
# Guard against missing credentials: main() passes None when
# CLOUDFLARE_API_TOKEN / CLOUDFLARE_ZONE_ID are not set
if dns_manager is None:
console.print("[yellow]No DNS manager available, skipping DNS cleanup[/yellow]")
return True
if not metadata.url:
console.print("[yellow]No URL found in metadata, skipping DNS cleanup[/yellow]")
return True
console.print(f"Looking up DNS record: [cyan]{metadata.url}[/cyan]")
if dry_run:
console.print("[dim]Would remove DNS record[/dim]")
return True
try:
# Look up and remove by hostname to get the real record ID from Cloudflare
# This ensures we don't rely on potentially stale/fake IDs from the config
dns_manager.remove_record(metadata.url, dry_run=False)
console.print("[green]✓ DNS record removed[/green]")
return True
except DNSError as e:
console.print(f"[red]✗ Failed to remove DNS record: {e}[/red]")
return False
def destroy_deployment(
metadata: DeploymentMetadata,
config_path: Path,
args: argparse.Namespace,
dns_manager: Optional[DNSManager]
) -> bool:
"""
Destroy a deployment
Args:
metadata: Deployment metadata
config_path: Path to config file
args: CLI arguments
dns_manager: DNSManager instance
Returns:
True if successful
"""
# Show deployment info
console.print("\n[bold]Deployment Information:[/bold]")
console.print(f" Subdomain: [cyan]{metadata.subdomain}[/cyan]")
console.print(f" URL: [cyan]{metadata.url}[/cyan]")
console.print(f" Project: [cyan]{metadata.compose_project_name}[/cyan]")
console.print(f" Deployed: [cyan]{metadata.deployment_timestamp}[/cyan]")
console.print(f" Containers: [cyan]{len(metadata.containers or [])}[/cyan]")
console.print(f" DNS Record ID: [cyan]{metadata.dns_record_id or 'N/A'}[/cyan]")
if args.dry_run:
console.print("\n[bold yellow]━━━ DRY-RUN MODE: No changes will be made ━━━[/bold yellow]")
# Confirm destruction
if not args.yes and not args.dry_run:
console.print()
if not Confirm.ask(
f"[bold red]Are you sure you want to destroy {metadata.url}?[/bold red]",
default=False
):
console.print("\n[yellow]Destruction cancelled[/yellow]\n")
return False
# Execute destruction
success = True
# 1. Destroy containers
if not destroy_containers(metadata, args.dry_run):
success = False
# 2. Destroy volumes
if not destroy_volumes(metadata, args.dry_run):
success = False
# 3. Destroy networks
if not destroy_networks(metadata, args.dry_run):
success = False
# 4. Destroy DNS
if not destroy_dns(metadata, dns_manager, args.dry_run):
success = False
# 5. Delete config file
if not args.keep_config and not args.dry_run:
console.print("\n[bold yellow]═══ Deleting Config File ═══[/bold yellow]")
console.print(f"Deleting: [cyan]{config_path}[/cyan]")
try:
config_path.unlink()
console.print("[green]✓ Config file deleted[/green]")
except Exception as e:
console.print(f"[red]✗ Failed to delete config: {e}[/red]")
success = False
return success
def main() -> NoReturn:
"""
Main entry point
Exit codes:
0: Success
1: Failure
2: Not found
"""
args = parse_args()
setup_logging(args.log_level)
print_banner()
config_manager = DeploymentConfigManager()
# Handle list command
if args.list:
list_deployments(config_manager)
sys.exit(0)
# Find deployment config
config_path = find_config(args, config_manager)
if not config_path:
console.print("[red]✗ Deployment not found[/red]")
console.print("\nUse --list to see all deployments\n")
sys.exit(2)
# Load deployment metadata
try:
metadata = config_manager.load_deployment(config_path)
except Exception as e:
console.print(f"[red]✗ Failed to load deployment config: {e}[/red]\n")
sys.exit(1)
# Initialize DNS manager
import os
cloudflare_token = os.getenv("CLOUDFLARE_API_TOKEN")
cloudflare_zone = os.getenv("CLOUDFLARE_ZONE_ID")
if not cloudflare_token or not cloudflare_zone:
console.print("[yellow]⚠ Cloudflare credentials not found[/yellow]")
console.print("[yellow] DNS record will not be removed[/yellow]")
console.print("[yellow] Set CLOUDFLARE_API_TOKEN and CLOUDFLARE_ZONE_ID to enable DNS cleanup[/yellow]\n")
dns_manager = None
else:
dns_manager = DNSManager(cloudflare_token, cloudflare_zone)
# Destroy deployment
try:
success = destroy_deployment(metadata, config_path, args, dns_manager)
if success or args.dry_run:
console.print("\n[bold green]╔══════════════════════════════════════════════╗[/bold green]")
if args.dry_run:
console.print("[bold green]║[/bold green] [bold white]✓ Dry-Run Complete![/bold white] [bold green]║[/bold green]")
else:
console.print("[bold green]║[/bold green] [bold white]✓ Destruction Successful![/bold white] [bold green]║[/bold green]")
console.print("[bold green]╚══════════════════════════════════════════════╝[/bold green]\n")
sys.exit(0)
else:
console.print("\n[bold yellow]╔══════════════════════════════════════════════╗[/bold yellow]")
console.print("[bold yellow]║[/bold yellow] [bold white]⚠ Destruction Partially Failed[/bold white] [bold yellow]║[/bold yellow]")
console.print("[bold yellow]╚══════════════════════════════════════════════╝[/bold yellow]\n")
console.print("[yellow]Some resources may not have been cleaned up.[/yellow]")
console.print("[yellow]Check the logs above for details.[/yellow]\n")
sys.exit(1)
except KeyboardInterrupt:
console.print("\n[bold yellow]✗ Destruction interrupted by user[/bold yellow]\n")
sys.exit(130)
except Exception as e:
console.print(f"\n[bold red]✗ Unexpected error: {e}[/bold red]\n")
logging.exception("Unexpected error")
sys.exit(1)
if __name__ == "__main__":
main()

@@ -0,0 +1,56 @@
services:
mariadb:
image: mariadb:${MARIADB_VERSION}
container_name: ${SUBDOMAIN}_db
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_NAME}
MYSQL_USER: ${DB_USER}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
volumes:
- db_data:/var/lib/mysql
networks:
- internal
wordpress:
image: wordpress:${WORDPRESS_VERSION}
container_name: ${SUBDOMAIN}_wp
restart: unless-stopped
depends_on:
- mariadb
environment:
WORDPRESS_DB_HOST: mariadb:3306
WORDPRESS_DB_NAME: ${DB_NAME}
WORDPRESS_DB_USER: ${DB_USER}
WORDPRESS_DB_PASSWORD: ${DB_PASSWORD}
WORDPRESS_TABLE_PREFIX: ${WP_TABLE_PREFIX}
WORDPRESS_CONFIG_EXTRA: |
define('WP_MEMORY_LIMIT', '${WP_MEMORY_LIMIT}');
define('WP_MAX_MEMORY_LIMIT', '${WP_MAX_MEMORY_LIMIT}');
define('DISALLOW_FILE_EDIT', true);
define('AUTOMATIC_UPDATER_DISABLED', true);
define('FS_METHOD', 'direct');
volumes:
- wp_data:/var/www/html
labels:
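# Traefik routing: serve Host(`${URL}`) over HTTPS with a Let's Encrypt certificate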
- "traefik.enable=true"
- "traefik.http.routers.${SUBDOMAIN}.rule=Host(`${URL}`)"
- "traefik.http.routers.${SUBDOMAIN}.entrypoints=https"
- "traefik.http.routers.${SUBDOMAIN}.tls=true"
- "traefik.http.routers.${SUBDOMAIN}.tls.certresolver=letsencrypt"
- "traefik.http.services.${SUBDOMAIN}.loadbalancer.server.port=80"
networks:
- proxy
- internal
volumes:
db_data:
wp_data:
networks:
proxy:
external: true
internal:
internal: true

@@ -0,0 +1,18 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT FAILURE LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 07:08:05
Status: FAILED
URL: https://caimito-hedgeless.merakit.my
Subdomain: caimito-hedgeless
═══════════════════════════════════════════════
ERROR:
Health check failed for https://caimito-hedgeless.merakit.my
═══════════════════════════════════════════════
Deployment failed. See error details above.
All changes have been rolled back.

@@ -0,0 +1,18 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT FAILURE LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 06:12:37
Status: FAILED
URL: https://insuring-refocuses.merakit.my
Subdomain: insuring-refocuses
═══════════════════════════════════════════════
ERROR:
Failed to add DNS record: 401 Client Error: Unauthorized for url: https://api.cloudflare.com/client/v4/zones/7eb0d48b7e396e0cc8b06ac1a7fe667a/dns_records
═══════════════════════════════════════════════
Deployment failed. See error details above.
All changes have been rolled back.

@@ -0,0 +1,18 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT FAILURE LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 06:12:13
Status: FAILED
URL: https://juslted-doodlebug.merakit.my
Subdomain: juslted-doodlebug
═══════════════════════════════════════════════
ERROR:
Failed to add DNS record: 401 Client Error: Unauthorized for url: https://api.cloudflare.com/client/v4/zones/7eb0d48b7e396e0cc8b06ac1a7fe667a/dns_records
═══════════════════════════════════════════════
Deployment failed. See error details above.
All changes have been rolled back.

@@ -0,0 +1,14 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT SUCCESS LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 06:16:35
Status: SUCCESS
URL: https://ankylotic-unactable.merakit.my
Subdomain: ankylotic-unactable
Duration: 70.30 seconds
═══════════════════════════════════════════════
Deployment completed successfully.
All services are running and health checks passed.

@@ -0,0 +1,14 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT SUCCESS LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 07:11:35
Status: SUCCESS
URL: https://daidle-allotrylic.merakit.my
Subdomain: daidle-allotrylic
Duration: 57.28 seconds
═══════════════════════════════════════════════
Deployment completed successfully.
All services are running and health checks passed.

@@ -0,0 +1,14 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT SUCCESS LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-16 17:07:09
Status: SUCCESS
URL: https://emetic-fuglemen.merakit.my
Subdomain: emetic-fuglemen
Duration: 58.80 seconds
═══════════════════════════════════════════════
Deployment completed successfully.
All services are running and health checks passed.

@@ -0,0 +1,14 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT SUCCESS LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-16 18:47:25
Status: SUCCESS
URL: https://exing-calcinator.merakit.my
Subdomain: exing-calcinator
Duration: 57.69 seconds
═══════════════════════════════════════════════
Deployment completed successfully.
All services are running and health checks passed.

@@ -0,0 +1,14 @@
╔══════════════════════════════════════════════╗
║ DEPLOYMENT SUCCESS LOG ║
╚══════════════════════════════════════════════╝
Timestamp: 2025-12-17 06:53:02
Status: SUCCESS
URL: https://slenderly-spareable.merakit.my
Subdomain: slenderly-spareable
Duration: 58.05 seconds
═══════════════════════════════════════════════
Deployment completed successfully.
All services are running and health checks passed.

@@ -0,0 +1,4 @@
# Core dependencies
requests>=2.31.0
rich>=13.7.0
python-dotenv>=1.0.0

@@ -0,0 +1,187 @@
"""
Configuration module for deployment settings
Centralized configuration with validation from environment variables and CLI arguments
"""
import logging
import os
from dataclasses import dataclass, field
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
class ConfigurationError(Exception):
"""Raised when configuration is invalid"""
pass
@dataclass
class DeploymentConfig:
"""Main deployment configuration loaded from environment and CLI args"""
# File paths (required - no defaults)
env_file: Path
docker_compose_file: Path
# Cloudflare credentials (required - no defaults)
cloudflare_api_token: str = field(repr=False) # Hide in logs
cloudflare_zone_id: str
# File paths (with defaults)
dict_file: Path = Path("/usr/share/dict/words")
# Domain settings
base_domain: str = "merakit.my"
app_name: Optional[str] = None
# Deployment options
dry_run: bool = False
max_retries: int = 3
healthcheck_timeout: int = 60 # seconds
healthcheck_interval: int = 10 # seconds
verify_ssl: bool = True  # verify certificates unless --no-verify-ssl is passed
# Webhook settings (optional)
webhook_url: Optional[str] = None
webhook_timeout: int = 10 # seconds
webhook_retries: int = 3
# Logging
log_level: str = "INFO"
@classmethod
def from_env_and_args(cls, args) -> "DeploymentConfig":
"""
Factory method to create config from environment and CLI args
Args:
args: argparse.Namespace with CLI arguments
Returns:
DeploymentConfig instance
Raises:
ConfigurationError: If required configuration is missing
"""
logger.debug("Loading configuration from environment and arguments")
# Get Cloudflare credentials from environment
cloudflare_api_token = os.getenv('CLOUDFLARE_API_TOKEN')
cloudflare_zone_id = os.getenv('CLOUDFLARE_ZONE_ID')
if not cloudflare_api_token:
raise ConfigurationError(
"CLOUDFLARE_API_TOKEN environment variable is required"
)
if not cloudflare_zone_id:
raise ConfigurationError(
"CLOUDFLARE_ZONE_ID environment variable is required"
)
# Get optional webhook URL from environment or args
webhook_url = (
getattr(args, 'webhook_url', None)
or os.getenv('DEPLOYMENT_WEBHOOK_URL')
)
# Get optional settings from environment with defaults
max_retries = int(os.getenv('DEPLOYMENT_MAX_RETRIES', args.max_retries))
healthcheck_timeout = int(
os.getenv('DEPLOYMENT_HEALTHCHECK_TIMEOUT', '60')
)
healthcheck_interval = int(
os.getenv('DEPLOYMENT_HEALTHCHECK_INTERVAL', '10')
)
config = cls(
env_file=args.env_file,
docker_compose_file=args.compose_file,
dict_file=Path("/usr/share/dict/words"),
cloudflare_api_token=cloudflare_api_token,
cloudflare_zone_id=cloudflare_zone_id,
base_domain="merakit.my",
app_name=None,
dry_run=args.dry_run,
max_retries=max_retries,
healthcheck_timeout=healthcheck_timeout,
healthcheck_interval=healthcheck_interval,
verify_ssl=not args.no_verify_ssl,
webhook_url=webhook_url,
webhook_timeout=10,
webhook_retries=3,
log_level=args.log_level
)
logger.debug(f"Configuration loaded: {config}")
return config
def validate(self) -> None:
"""
Validate configuration completeness and correctness
Raises:
ConfigurationError: If configuration is invalid
"""
logger.debug("Validating configuration")
# Validate file paths exist
if not self.env_file.exists():
raise ConfigurationError(f"Env file not found: {self.env_file}")
if not self.docker_compose_file.exists():
raise ConfigurationError(
f"Docker compose file not found: {self.docker_compose_file}"
)
if not self.dict_file.exists():
raise ConfigurationError(
f"Dictionary file not found: {self.dict_file}. "
"Install 'words' package or ensure /usr/share/dict/words exists."
)
# Validate numeric ranges
if self.max_retries < 1:
raise ConfigurationError(
f"max_retries must be >= 1, got: {self.max_retries}"
)
if self.healthcheck_timeout < 1:
raise ConfigurationError(
f"healthcheck_timeout must be >= 1, got: {self.healthcheck_timeout}"
)
if self.healthcheck_interval < 1:
raise ConfigurationError(
f"healthcheck_interval must be >= 1, got: {self.healthcheck_interval}"
)
if self.healthcheck_interval >= self.healthcheck_timeout:
raise ConfigurationError(
f"healthcheck_interval ({self.healthcheck_interval}) must be < "
f"healthcheck_timeout ({self.healthcheck_timeout})"
)
# Validate log level
valid_log_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
if self.log_level.upper() not in valid_log_levels:
raise ConfigurationError(
f"Invalid log_level: {self.log_level}. "
f"Must be one of: {', '.join(valid_log_levels)}"
)
logger.debug("Configuration validation successful")
def __repr__(self) -> str:
"""String representation with masked sensitive values"""
return (
f"DeploymentConfig("
f"env_file={self.env_file}, "
f"dry_run={self.dry_run}, "
f"max_retries={self.max_retries}, "
f"cloudflare_api_token=*****, "
f"webhook_url={self.webhook_url})"
)

@@ -0,0 +1,153 @@
"""
Deployment Configuration Manager
Manages saving and loading deployment configurations for tracking and cleanup
"""
import json
import logging
from dataclasses import asdict, dataclass
from datetime import datetime
from pathlib import Path
from typing import List, Optional
logger = logging.getLogger(__name__)
@dataclass
class DeploymentMetadata:
"""Metadata for a single deployment"""
subdomain: str
url: str
domain: str
compose_project_name: str
db_name: str
db_user: str
deployment_timestamp: str
dns_record_id: Optional[str] = None
dns_ip: Optional[str] = None
containers: Optional[List[str]] = None
volumes: Optional[List[str]] = None
networks: Optional[List[str]] = None
env_file_path: Optional[str] = None
class DeploymentConfigManager:
"""Manages deployment configuration persistence"""
def __init__(self, config_dir: Path = Path("deployments")):
"""
Initialize deployment config manager
Args:
config_dir: Directory to store deployment configs
"""
self.config_dir = config_dir
self.config_dir.mkdir(parents=True, exist_ok=True)
self._logger = logging.getLogger(f"{__name__}.DeploymentConfigManager")
def save_deployment(self, metadata: DeploymentMetadata) -> Path:
"""
Save deployment configuration to disk
Args:
metadata: DeploymentMetadata instance
Returns:
Path to saved config file
"""
# Create filename based on subdomain and timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"{metadata.subdomain}_{timestamp}.json"
config_path = self.config_dir / filename
# Convert to dict and save as JSON
config_data = asdict(metadata)
with open(config_path, 'w') as f:
json.dump(config_data, f, indent=2)
self._logger.info(f"Saved deployment config: {config_path}")
return config_path
def load_deployment(self, config_file: Path) -> DeploymentMetadata:
"""
Load deployment configuration from disk
Args:
config_file: Path to config file
Returns:
DeploymentMetadata instance
Raises:
FileNotFoundError: If config file doesn't exist
ValueError: If config file is invalid
"""
if not config_file.exists():
raise FileNotFoundError(f"Config file not found: {config_file}")
with open(config_file, 'r') as f:
config_data = json.load(f)
return DeploymentMetadata(**config_data)
def list_deployments(self) -> List[Path]:
"""
List all deployment config files
Returns:
List of config file paths sorted by modification time (newest first)
"""
config_files = list(self.config_dir.glob("*.json"))
return sorted(config_files, key=lambda p: p.stat().st_mtime, reverse=True)
def find_deployment_by_subdomain(self, subdomain: str) -> Optional[Path]:
"""
Find the most recent deployment config for a subdomain
Args:
subdomain: Subdomain to search for
Returns:
Path to config file or None if not found
"""
matching_files = list(self.config_dir.glob(f"{subdomain}_*.json"))
if not matching_files:
return None
# Return most recent
return max(matching_files, key=lambda p: p.stat().st_mtime)
def find_deployment_by_url(self, url: str) -> Optional[Path]:
"""
Find deployment config by URL
Args:
url: Full URL to search for
Returns:
Path to config file or None if not found
"""
for config_file in self.list_deployments():
try:
metadata = self.load_deployment(config_file)
if metadata.url == url:
return config_file
except (ValueError, json.JSONDecodeError) as e:
self._logger.warning(f"Failed to load config {config_file}: {e}")
continue
return None
def delete_deployment_config(self, config_file: Path) -> None:
"""
Delete deployment config file
Args:
config_file: Path to config file
"""
if config_file.exists():
config_file.unlink()
self._logger.info(f"Deleted deployment config: {config_file}")

@@ -0,0 +1,218 @@
"""
Deployment logging module
Handles writing deployment logs to success/failed directories
"""
import logging
from datetime import datetime
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
class DeploymentFileLogger:
"""Logs deployment results to files"""
def __init__(self, logs_dir: Path = Path("logs")):
"""
Initialize deployment file logger
Args:
logs_dir: Base directory for logs (default: logs/)
"""
self._logs_dir = logs_dir
self._success_dir = logs_dir / "success"
self._failed_dir = logs_dir / "failed"
self._logger = logging.getLogger(f"{__name__}.DeploymentFileLogger")
# Ensure directories exist
self._ensure_directories()
def _ensure_directories(self) -> None:
"""Create log directories if they don't exist"""
for directory in [self._success_dir, self._failed_dir]:
directory.mkdir(parents=True, exist_ok=True)
self._logger.debug(f"Ensured directory exists: {directory}")
def _sanitize_url(self, url: str) -> str:
"""
Sanitize URL for use in filename
Args:
url: URL to sanitize
Returns:
Sanitized URL safe for filename
"""
# Remove protocol if present
url = url.replace("https://", "").replace("http://", "")
# Replace invalid filename characters
return url.replace("/", "_").replace(":", "_")
def _generate_filename(self, status: str, url: str, timestamp: datetime) -> str:
"""
Generate log filename
Format: success_url_date.txt or failed_url_date.txt
Args:
status: 'success' or 'failed'
url: Deployment URL
timestamp: Deployment timestamp
Returns:
Filename string
"""
sanitized_url = self._sanitize_url(url)
date_str = timestamp.strftime("%Y%m%d_%H%M%S")
return f"{status}_{sanitized_url}_{date_str}.txt"
def log_success(
self,
url: str,
subdomain: str,
duration: float,
timestamp: Optional[datetime] = None
) -> Path:
"""
Log successful deployment
Args:
url: Deployment URL
subdomain: Subdomain used
duration: Deployment duration in seconds
timestamp: Deployment timestamp (default: now)
Returns:
Path to created log file
"""
if timestamp is None:
timestamp = datetime.now()
filename = self._generate_filename("success", url, timestamp)
log_file = self._success_dir / filename
log_content = self._format_success_log(
url, subdomain, duration, timestamp
)
log_file.write_text(log_content)
self._logger.info(f"✓ Success log written: {log_file}")
return log_file
def log_failure(
self,
url: str,
subdomain: str,
error: str,
timestamp: Optional[datetime] = None
) -> Path:
"""
Log failed deployment
Args:
url: Deployment URL (may be empty if failed early)
subdomain: Subdomain used (may be empty if failed early)
error: Error message
timestamp: Deployment timestamp (default: now)
Returns:
Path to created log file
"""
if timestamp is None:
timestamp = datetime.now()
# Handle case where URL is empty (failed before URL generation)
log_url = url if url else "unknown"
filename = self._generate_filename("failed", log_url, timestamp)
log_file = self._failed_dir / filename
log_content = self._format_failure_log(
url, subdomain, error, timestamp
)
log_file.write_text(log_content)
self._logger.info(f"✓ Failure log written: {log_file}")
return log_file
def _format_success_log(
self,
url: str,
subdomain: str,
duration: float,
timestamp: datetime
) -> str:
"""
Format success log content
Args:
url: Deployment URL
subdomain: Subdomain used
duration: Deployment duration in seconds
timestamp: Deployment timestamp
Returns:
Formatted log content
"""
return f"""╔══════════════════════════════════════════════╗
DEPLOYMENT SUCCESS LOG
Timestamp: {timestamp.strftime("%Y-%m-%d %H:%M:%S")}
Status: SUCCESS
URL: https://{url}
Subdomain: {subdomain}
Duration: {duration:.2f} seconds
Deployment completed successfully.
All services are running and health checks passed.
"""
def _format_failure_log(
self,
url: str,
subdomain: str,
error: str,
timestamp: datetime
) -> str:
"""
Format failure log content
Args:
url: Deployment URL (may be empty)
subdomain: Subdomain used (may be empty)
error: Error message
timestamp: Deployment timestamp
Returns:
Formatted log content
"""
url_display = f"https://{url}" if url else "N/A (failed before URL generation)"
subdomain_display = subdomain if subdomain else "N/A"
return f"""╔══════════════════════════════════════════════╗
DEPLOYMENT FAILURE LOG
Timestamp: {timestamp.strftime("%Y-%m-%d %H:%M:%S")}
Status: FAILED
URL: {url_display}
Subdomain: {subdomain_display}
ERROR:
{error}
Deployment failed. See error details above.
All changes have been rolled back.
"""

@@ -0,0 +1,286 @@
"""
DNS management module with Cloudflare API integration
Direct Python API calls replacing cloudflare-add.sh and cloudflare-remove.sh
"""
import logging
from dataclasses import dataclass
import requests
logger = logging.getLogger(__name__)
class DNSError(Exception):
"""Raised when DNS operations fail"""
pass
@dataclass
class DNSRecord:
"""Represents a DNS record"""
record_id: str
hostname: str
ip: str
record_type: str
class DNSManager:
"""Python wrapper for Cloudflare DNS operations"""
def __init__(self, api_token: str, zone_id: str):
"""
Initialize DNS manager
Args:
api_token: Cloudflare API token
zone_id: Cloudflare zone ID
"""
self._api_token = api_token
self._zone_id = zone_id
self._base_url = f"https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records"
self._headers = {
"Authorization": f"Bearer {api_token}",
"Content-Type": "application/json"
}
self._logger = logging.getLogger(f"{__name__}.DNSManager")
def check_record_exists(self, hostname: str) -> bool:
"""
Check if DNS record exists using Cloudflare API
Args:
hostname: Fully qualified domain name
Returns:
True if record exists, False otherwise
Raises:
DNSError: If API call fails
"""
self._logger.debug(f"Checking if DNS record exists: {hostname}")
try:
params = {"name": hostname}
response = requests.get(
self._base_url,
headers=self._headers,
params=params,
timeout=30
)
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
records = data.get("result", [])
exists = len(records) > 0
if exists:
self._logger.debug(f"DNS record exists: {hostname}")
else:
self._logger.debug(f"DNS record does not exist: {hostname}")
return exists
except requests.RequestException as e:
raise DNSError(f"Failed to check DNS record existence: {e}") from e
def add_record(
self,
hostname: str,
ip: str,
dry_run: bool = False
) -> DNSRecord:
"""
Add DNS A record
Args:
hostname: Fully qualified domain name
ip: IP address for A record
dry_run: If True, only log what would be done
Returns:
DNSRecord with record_id for rollback
Raises:
DNSError: If API call fails
"""
if dry_run:
self._logger.info(
f"[DRY-RUN] Would add DNS record: {hostname} -> {ip}"
)
return DNSRecord(
record_id="dry-run-id",
hostname=hostname,
ip=ip,
record_type="A"
)
self._logger.info(f"Adding DNS record: {hostname} -> {ip}")
try:
payload = {
"type": "A",
"name": hostname,
"content": ip,
"ttl": 1, # Automatic TTL
"proxied": False # DNS only, not proxied through Cloudflare
}
response = requests.post(
self._base_url,
headers=self._headers,
json=payload,
timeout=30
)
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
result = data.get("result", {})
record_id = result.get("id")
if not record_id:
raise DNSError("No record ID returned from Cloudflare API")
self._logger.info(f"DNS record added successfully: {record_id}")
return DNSRecord(
record_id=record_id,
hostname=hostname,
ip=ip,
record_type="A"
)
except requests.RequestException as e:
raise DNSError(f"Failed to add DNS record: {e}") from e
def remove_record(self, hostname: str, dry_run: bool = False) -> None:
"""
Remove DNS record by hostname
Args:
hostname: Fully qualified domain name
dry_run: If True, only log what would be done
Raises:
DNSError: If API call fails
"""
if dry_run:
self._logger.info(f"[DRY-RUN] Would remove DNS record: {hostname}")
return
self._logger.info(f"Removing DNS record: {hostname}")
try:
# First, get the record ID
params = {"name": hostname}
response = requests.get(
self._base_url,
headers=self._headers,
params=params,
timeout=30
)
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
records = data.get("result", [])
if not records:
self._logger.warning(f"No DNS record found for: {hostname}")
return
# Remove all matching records (typically just one)
for record in records:
record_id = record.get("id")
if record_id:
self.remove_record_by_id(record_id, dry_run=False)
except requests.RequestException as e:
raise DNSError(f"Failed to remove DNS record: {e}") from e
def remove_record_by_id(self, record_id: str, dry_run: bool = False) -> None:
"""
Remove DNS record by ID (more reliable for rollback)
Args:
record_id: Cloudflare DNS record ID
dry_run: If True, only log what would be done
Raises:
DNSError: If API call fails
"""
if dry_run:
self._logger.info(
f"[DRY-RUN] Would remove DNS record by ID: {record_id}"
)
return
self._logger.info(f"Removing DNS record by ID: {record_id}")
try:
url = f"{self._base_url}/{record_id}"
response = requests.delete(
url,
headers=self._headers,
timeout=30
)
# Handle 404/405 gracefully - record doesn't exist or can't be deleted
if response.status_code in [404, 405]:
self._logger.warning(
f"DNS record {record_id} not found or cannot be deleted (may already be removed)"
)
return
response.raise_for_status()
data = response.json()
if not data.get("success", False):
errors = data.get("errors", [])
raise DNSError(f"Cloudflare API error: {errors}")
self._logger.info(f"DNS record removed successfully: {record_id}")
except requests.RequestException as e:
raise DNSError(f"Failed to remove DNS record: {e}") from e
def get_public_ip(self) -> str:
"""
Get public IP address from external service
Returns:
Public IP address as string
Raises:
DNSError: If IP retrieval fails
"""
self._logger.debug("Retrieving public IP address")
try:
response = requests.get("https://ipv4.icanhazip.com", timeout=10)
response.raise_for_status()
ip = response.text.strip()
self._logger.debug(f"Public IP: {ip}")
return ip
except requests.RequestException as e:
raise DNSError(f"Failed to retrieve public IP: {e}") from e

@@ -0,0 +1,276 @@
"""
Docker management module
Wrapper for Docker Compose operations with validation and error handling
"""
import logging
import subprocess
from dataclasses import dataclass
from pathlib import Path
from typing import List
logger = logging.getLogger(__name__)
class DockerError(Exception):
"""Raised when Docker operations fail"""
pass
@dataclass
class ContainerInfo:
"""Information about a running container"""
container_id: str
name: str
status: str
class DockerManager:
"""Docker Compose operations wrapper"""
def __init__(self, compose_file: Path, env_file: Path):
"""
Initialize Docker manager
Args:
compose_file: Path to docker-compose.yml
env_file: Path to .env file
"""
self._compose_file = compose_file
self._env_file = env_file
self._logger = logging.getLogger(f"{__name__}.DockerManager")
def _run_command(
self,
cmd: List[str],
check: bool = True,
capture_output: bool = True
) -> subprocess.CompletedProcess:
"""
Run docker compose command
Args:
cmd: Command list to execute
check: Whether to raise on non-zero exit
capture_output: Whether to capture stdout/stderr
Returns:
CompletedProcess instance
Raises:
DockerError: If command fails and check=True
"""
self._logger.debug(f"Running: {' '.join(cmd)}")
try:
result = subprocess.run(
cmd,
check=check,
capture_output=capture_output,
text=True,
cwd=self._compose_file.parent
)
return result
except subprocess.CalledProcessError as e:
error_msg = f"Docker command failed: {e.stderr or e.stdout or str(e)}"
self._logger.error(error_msg)
raise DockerError(error_msg) from e
except FileNotFoundError as e:
raise DockerError(
f"Docker command not found. Is Docker installed? {e}"
) from e
def validate_compose_file(self) -> None:
"""
Validate docker-compose.yml syntax
Raises:
DockerError: If compose file is invalid
"""
self._logger.debug("Validating docker-compose.yml")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"config", "--quiet"
]
try:
self._run_command(cmd)
self._logger.debug("docker-compose.yml is valid")
except DockerError as e:
raise DockerError(f"Invalid docker-compose.yml: {e}") from e
def pull_images(self, dry_run: bool = False) -> None:
"""
Pull required Docker images
Args:
dry_run: If True, only log what would be done
Raises:
DockerError: If pull fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would pull Docker images")
return
self._logger.info("Pulling Docker images")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"pull"
]
self._run_command(cmd)
self._logger.info("Docker images pulled successfully")
def start_services(self, dry_run: bool = False) -> List[ContainerInfo]:
"""
Start Docker Compose services
Args:
dry_run: If True, only log what would be done
Returns:
List of created containers for rollback
Raises:
DockerError: If start fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would start Docker services")
return []
self._logger.info("Starting Docker services")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"up", "-d"
]
self._run_command(cmd)
# Get container info for rollback
containers = self.get_container_status()
self._logger.info(
f"Docker services started successfully: {len(containers)} containers"
)
return containers
def stop_services(self, dry_run: bool = False) -> None:
"""
Stop Docker Compose services
Args:
dry_run: If True, only log what would be done
Raises:
DockerError: If stop fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would stop Docker services")
return
self._logger.info("Stopping Docker services")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"down"
]
self._run_command(cmd)
self._logger.info("Docker services stopped successfully")
def stop_services_and_remove_volumes(self, dry_run: bool = False) -> None:
"""
Stop services and remove volumes (full cleanup)
Args:
dry_run: If True, only log what would be done
Raises:
DockerError: If stop fails
"""
if dry_run:
self._logger.info("[DRY-RUN] Would stop Docker services and remove volumes")
return
self._logger.info("Stopping Docker services and removing volumes")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"down", "-v"
]
self._run_command(cmd)
self._logger.info("Docker services stopped and volumes removed")
def get_container_status(self) -> List[ContainerInfo]:
"""
Get status of containers for this project
Returns:
List of ContainerInfo objects
Raises:
DockerError: If status check fails
"""
self._logger.debug("Getting container status")
cmd = [
"docker", "compose",
"-f", str(self._compose_file),
"--env-file", str(self._env_file),
"ps", "-q"
]
result = self._run_command(cmd)
container_ids = [
cid.strip()
for cid in result.stdout.strip().split('\n')
if cid.strip()
]
containers = []
for container_id in container_ids:
# Get container details
inspect_cmd = ["docker", "inspect", container_id, "--format", "{{.Name}}:{{.State.Status}}"]
try:
inspect_result = self._run_command(inspect_cmd)
name_status = inspect_result.stdout.strip()
if ':' in name_status:
name, status = name_status.split(':', 1)
# Remove leading slash from container name
name = name.lstrip('/')
containers.append(ContainerInfo(
container_id=container_id,
name=name,
status=status
))
except DockerError:
# If inspect fails, just record the ID
containers.append(ContainerInfo(
container_id=container_id,
name="unknown",
status="unknown"
))
self._logger.debug(f"Found {len(containers)} containers")
return containers

@@ -0,0 +1,394 @@
"""
Environment generation module - replaces generate-env.sh
Provides pure Python implementations for:
- Random word selection from dictionary
- Memorable password generation
- Environment file generation and manipulation
"""
import logging
import os
import re
import secrets
import shutil
from dataclasses import asdict, dataclass
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Optional
logger = logging.getLogger(__name__)
@dataclass
class EnvValues:
"""Container for generated environment values"""
subdomain: str
domain: str
url: str
db_name: str
db_user: str
db_password: str
db_root_password: str
compose_project_name: str
class WordGenerator:
"""Pure Python implementation of dictionary word selection"""
def __init__(self, dict_file: Path):
"""
Initialize word generator
Args:
dict_file: Path to dictionary file (e.g., /usr/share/dict/words)
"""
self._dict_file = dict_file
self._words_cache: Optional[List[str]] = None
self._logger = logging.getLogger(f"{__name__}.WordGenerator")
def _load_and_filter_words(self) -> List[str]:
"""
Load dictionary and filter to 4-10 char lowercase words
Returns:
List of filtered words
Raises:
FileNotFoundError: If dictionary file doesn't exist
ValueError: If no valid words found
"""
if not self._dict_file.exists():
raise FileNotFoundError(f"Dictionary file not found: {self._dict_file}")
self._logger.debug(f"Loading words from {self._dict_file}")
# Read and filter words matching pattern: ^[a-z]{4,10}$
pattern = re.compile(r'^[a-z]{4,10}$')
words = []
with open(self._dict_file, 'r', encoding='utf-8') as f:
for line in f:
word = line.strip()
if pattern.match(word):
words.append(word)
if not words:
raise ValueError(f"No valid words found in {self._dict_file}")
self._logger.debug(f"Loaded {len(words)} valid words")
return words
def get_random_word(self) -> str:
"""
Get single random word from filtered list
Returns:
Random word (4-10 chars, lowercase)
"""
# Load and cache words on first use
if self._words_cache is None:
self._words_cache = self._load_and_filter_words()
# secrets.choice draws from the OS CSPRNG, suitable for passwords
return secrets.choice(self._words_cache)
def get_random_words(self, count: int) -> List[str]:
"""
Get multiple random words efficiently
Args:
count: Number of words to retrieve
Returns:
List of random words
"""
# Load and cache words on first use
if self._words_cache is None:
self._words_cache = self._load_and_filter_words()
# Sample with replacement using the CSPRNG (words may repeat)
return [secrets.choice(self._words_cache) for _ in range(count)]
class PasswordGenerator:
"""Generate memorable passwords from dictionary words"""
def __init__(self, word_generator: WordGenerator):
"""
Initialize password generator
Args:
word_generator: WordGenerator instance for word selection
"""
self._word_generator = word_generator
self._logger = logging.getLogger(f"{__name__}.PasswordGenerator")
def generate_memorable_password(self, word_count: int = 3) -> str:
"""
Generate password from N random nouns joined by hyphens
Args:
word_count: Number of words to use (default: 3)
Returns:
Password string like "templon-infantly-yielding"
"""
words = self._word_generator.get_random_words(word_count)
password = '-'.join(words)
self._logger.debug(f"Generated {word_count}-word password")
return password
def generate_random_string(self, length: int = 8) -> str:
"""
Generate a random lowercase hex string using the secrets module
Args:
length: Length of string to generate (default: 8)
Returns:
Random lowercase hex string
"""
# Use secrets for cryptographically secure random generation;
# token_hex yields two hex chars per byte, so over-request and slice
return secrets.token_hex(length // 2 + 1)[:length]
class EnvFileGenerator:
"""Pure Python .env file manipulation (replaces bash sed logic)"""
def __init__(
self,
env_file: Path,
word_generator: WordGenerator,
password_generator: PasswordGenerator,
base_domain: str = "merakit.my",
app_name: Optional[str] = None
):
"""
Initialize environment file generator
Args:
env_file: Path to .env file
word_generator: WordGenerator instance
password_generator: PasswordGenerator instance
base_domain: Base domain for URL generation (default: "merakit.my")
app_name: Application name (default: read from .env or "wordpress")
"""
self._env_file = env_file
self._word_generator = word_generator
self._password_generator = password_generator
self._base_domain = base_domain
self._app_name = app_name
self._logger = logging.getLogger(f"{__name__}.EnvFileGenerator")
def generate_values(self) -> EnvValues:
"""
Generate all environment values
Returns:
EnvValues dataclass with all generated values
"""
self._logger.info("Generating environment values")
# Read current .env to get app_name if not provided
current_env = self.read_current_env()
app_name = self._app_name or current_env.get('APP_NAME', 'wordpress')
# 1. Generate subdomain: two random words
word1 = self._word_generator.get_random_word()
word2 = self._word_generator.get_random_word()
subdomain = f"{word1}-{word2}"
# 2. Construct URL
url = f"{subdomain}.{self._base_domain}"
# 3. Generate random string for DB identifiers
random_str = self._password_generator.generate_random_string(8)
# 4. Generate DB identifiers with truncation logic
db_name = self._generate_db_name(random_str, app_name, subdomain)
db_user = self._generate_db_user(random_str, app_name, subdomain)
# 5. Generate passwords
db_password = self._password_generator.generate_memorable_password(3)
db_root_password = self._password_generator.generate_memorable_password(3)
self._logger.info(f"Generated values for subdomain: {subdomain}")
self._logger.debug(f"URL: {url}")
self._logger.debug(f"DB_NAME: {db_name}")
self._logger.debug(f"DB_USER: {db_user}")
return EnvValues(
subdomain=subdomain,
domain=self._base_domain,
url=url,
db_name=db_name,
db_user=db_user,
db_password=db_password,
db_root_password=db_root_password,
compose_project_name=subdomain
)
def _generate_db_name(self, random_str: str, app_name: str, subdomain: str) -> str:
"""
Format: angali_{random8}_{app}_{subdomain}, truncate to 64 chars
Args:
random_str: Random 8-char string
app_name: Application name
subdomain: Subdomain with hyphens
Returns:
Database name (max 64 chars)
"""
# Replace hyphens with underscores for DB compatibility
subdomain_safe = subdomain.replace('-', '_')
db_name = f"angali_{random_str}_{app_name}_{subdomain_safe}"
# Truncate to MySQL limit of 64 chars
return db_name[:64]
def _generate_db_user(self, random_str: str, app_name: str, subdomain: str) -> str:
"""
Format: angali_{random8}_{app}_{subdomain}, truncate to 32 chars
Args:
random_str: Random 8-char string
app_name: Application name
subdomain: Subdomain with hyphens
Returns:
Database username (max 32 chars)
"""
# Replace hyphens with underscores for DB compatibility
subdomain_safe = subdomain.replace('-', '_')
db_user = f"angali_{random_str}_{app_name}_{subdomain_safe}"
# Truncate to MySQL limit of 32 chars for usernames
return db_user[:32]
def read_current_env(self) -> Dict[str, str]:
"""
Parse existing .env file into dict
Returns:
Dictionary of environment variables
"""
env_dict = {}
if not self._env_file.exists():
self._logger.warning(f"Env file not found: {self._env_file}")
return env_dict
with open(self._env_file, 'r') as f:
for line in f:
line = line.strip()
# Skip empty lines and comments
if not line or line.startswith('#'):
continue
# Parse KEY=VALUE format
if '=' in line:
key, value = line.split('=', 1)
# Remove quotes if present
value = value.strip('"').strip("'")
env_dict[key.strip()] = value
self._logger.debug(f"Read {len(env_dict)} variables from {self._env_file}")
return env_dict
def backup_env_file(self) -> Path:
"""
Create timestamped backup of .env file
Returns:
Path to backup file
Raises:
FileNotFoundError: If .env file doesn't exist
"""
if not self._env_file.exists():
raise FileNotFoundError(f"Cannot backup non-existent file: {self._env_file}")
# Create backup with timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
backup_path = self._env_file.parent / f"{self._env_file.name}.backup.{timestamp}"
shutil.copy2(self._env_file, backup_path)
self._logger.info(f"Created backup: {backup_path}")
return backup_path
def update_env_file(self, values: EnvValues, dry_run: bool = False) -> None:
"""
Update .env file with new values (Python dict manipulation)
Uses atomic write pattern: write to temp file, then rename
Args:
values: EnvValues to write
dry_run: If True, only log what would be done
Raises:
FileNotFoundError: If .env file doesn't exist
"""
if not self._env_file.exists():
raise FileNotFoundError(f"Env file not found: {self._env_file}")
if dry_run:
self._logger.info(f"[DRY-RUN] Would update {self._env_file} with:")
for key, value in asdict(values).items():
if 'password' in key.lower():
self._logger.info(f" {key.upper()}=********")
else:
self._logger.info(f" {key.upper()}={value}")
return
# Read current env
current_env = self.read_current_env()
# Update with new values
current_env.update({
'COMPOSE_PROJECT_NAME': values.compose_project_name,
'SUBDOMAIN': values.subdomain,
'DOMAIN': values.domain,
'URL': values.url,
'DB_NAME': values.db_name,
'DB_USER': values.db_user,
'DB_PASSWORD': values.db_password,
'DB_ROOT_PASSWORD': values.db_root_password
})
# Write atomically: write to temp file, then rename
temp_file = self._env_file.parent / f"{self._env_file.name}.tmp"
try:
with open(temp_file, 'w') as f:
for key, value in current_env.items():
f.write(f"{key}={value}\n")
# Atomic rename
os.replace(temp_file, self._env_file)
self._logger.info(f"Updated {self._env_file} successfully")
except Exception as e:
# Cleanup temp file on error
if temp_file.exists():
temp_file.unlink()
raise RuntimeError(f"Failed to update env file: {e}") from e
def restore_env_file(self, backup_path: Path) -> None:
"""
Restore .env from backup
Args:
backup_path: Path to backup file
Raises:
FileNotFoundError: If backup file doesn't exist
"""
if not backup_path.exists():
raise FileNotFoundError(f"Backup file not found: {backup_path}")
shutil.copy2(backup_path, self._env_file)
self._logger.info(f"Restored {self._env_file} from {backup_path}")

@@ -0,0 +1,128 @@
"""
Health check module
HTTP health checking with retry logic and progress indicators
"""
import logging
import time
import requests
logger = logging.getLogger(__name__)
class HealthCheckError(Exception):
"""Raised when health check fails"""
pass
class HealthChecker:
"""HTTP health check with retry logic"""
def __init__(
self,
timeout: int,
interval: int,
verify_ssl: bool
):
"""
Initialize health checker
Args:
timeout: Total timeout in seconds
interval: Check interval in seconds
verify_ssl: Whether to verify SSL certificates
"""
self._timeout = timeout
self._interval = interval
self._verify_ssl = verify_ssl
self._logger = logging.getLogger(f"{__name__}.HealthChecker")
def check_health(self, url: str, dry_run: bool = False) -> bool:
"""
Perform health check with retries
Args:
url: URL to check (e.g., https://example.com)
dry_run: If True, only log what would be done
Returns:
True if health check passed, False otherwise
"""
if dry_run:
self._logger.info(f"[DRY-RUN] Would check health of {url}")
return True
self._logger.info(
f"Checking health of {url} for up to {self._timeout} seconds"
)
start_time = time.time()
attempt = 0
while True:
attempt += 1
elapsed = time.time() - start_time
if elapsed > self._timeout:
self._logger.error(
f"Health check timed out after {elapsed:.1f} seconds "
f"({attempt} attempts)"
)
return False
# Perform single check
if self._single_check(url):
self._logger.info(
f"Health check passed after {elapsed:.1f} seconds "
f"({attempt} attempts)"
)
return True
# Wait before next attempt
remaining = self._timeout - elapsed
if remaining > 0:
wait_time = min(self._interval, remaining)
self._logger.debug(
f"Attempt {attempt} failed, retrying in {wait_time:.1f}s "
f"(elapsed: {elapsed:.1f}s, timeout: {self._timeout}s)"
)
time.sleep(wait_time)
else:
# No time remaining
self._logger.error(f"Health check timed out after {attempt} attempts")
return False
def _single_check(self, url: str) -> bool:
"""
Single health check attempt
Args:
url: URL to check
Returns:
True if valid HTTP response (2xx or 3xx) received, False otherwise
"""
try:
response = requests.get(
url,
timeout=5,
verify=self._verify_ssl,
allow_redirects=True
)
# Accept any 2xx or 3xx status code as valid
if 200 <= response.status_code < 400:
self._logger.debug(f"Health check successful: HTTP {response.status_code}")
return True
else:
self._logger.debug(
f"Health check failed: HTTP {response.status_code}"
)
return False
except requests.RequestException as e:
self._logger.debug(f"Health check failed: {type(e).__name__}: {e}")
return False
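
Driving the checker directly looks roughly like this (the timeout, interval, and hostname are illustrative, not module defaults):

```python
from gitea_deployer.health import HealthChecker

# 120 s total budget, probing every 5 s; each probe itself times out after 5 s
checker = HealthChecker(timeout=120, interval=5, verify_ssl=True)
if not checker.check_health("https://example-sub.merakit.my"):
    raise SystemExit("service did not become healthy in time")
```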

626
gitea/gitea_deployer/orchestrator.py Normal file
View File

@ -0,0 +1,626 @@
"""
Deployment orchestration module
Main deployment workflow with rollback tracking and execution
"""
import logging
import shutil
import subprocess
import time
from dataclasses import asdict, dataclass
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Tuple
from .config import DeploymentConfig
from .deployment_config_manager import DeploymentConfigManager, DeploymentMetadata
from .deployment_logger import DeploymentFileLogger
from .dns_manager import DNSError, DNSManager, DNSRecord
from .docker_manager import DockerError, DockerManager
from .env_generator import EnvFileGenerator, EnvValues, PasswordGenerator, WordGenerator
from .health import HealthCheckError, HealthChecker
from .webhooks import WebhookNotifier
logger = logging.getLogger(__name__)
class DeploymentError(Exception):
"""Base exception for deployment errors"""
pass
class ValidationError(DeploymentError):
"""Validation failed"""
pass
@dataclass
class DeploymentAction:
"""Represents a single deployment action"""
action_type: str # 'dns_added', 'containers_started', 'env_updated'
timestamp: datetime
details: Dict[str, Any]
rollback_data: Dict[str, Any]
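# Illustrative instance, mirroring what _phase_setup_dns() records below
# (hostname, IP, and record ID are example values):
# DeploymentAction(
#     action_type="dns_added",
#     timestamp=datetime.now(),
#     details={"hostname": "example-sub.merakit.my", "ip": "203.0.113.10"},
#     rollback_data={"record_id": "<cloudflare-record-id>"},
# )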
class DeploymentTracker:
"""Track deployment actions for rollback"""
def __init__(self):
"""Initialize deployment tracker"""
self._actions: List[DeploymentAction] = []
self._logger = logging.getLogger(f"{__name__}.DeploymentTracker")
def record_action(self, action: DeploymentAction) -> None:
"""
Record a deployment action
Args:
action: DeploymentAction to record
"""
self._actions.append(action)
self._logger.debug(f"Recorded action: {action.action_type}")
def get_actions(self) -> List[DeploymentAction]:
"""
Get all recorded actions
Returns:
List of DeploymentAction objects
"""
return self._actions.copy()
def clear(self) -> None:
"""Clear tracking history"""
self._actions.clear()
self._logger.debug("Cleared action history")
class DeploymentOrchestrator:
"""Main orchestrator coordinating all deployment steps"""
def __init__(self, config: DeploymentConfig):
"""
Initialize deployment orchestrator
Args:
config: DeploymentConfig instance
"""
self._config = config
self._logger = logging.getLogger(f"{__name__}.DeploymentOrchestrator")
# Initialize components
self._word_generator = WordGenerator(config.dict_file)
self._password_generator = PasswordGenerator(self._word_generator)
self._env_generator = EnvFileGenerator(
config.env_file,
self._word_generator,
self._password_generator,
config.base_domain,
config.app_name
)
self._dns_manager = DNSManager(
config.cloudflare_api_token,
config.cloudflare_zone_id
)
self._docker_manager = DockerManager(
config.docker_compose_file,
config.env_file
)
self._webhook_notifier = WebhookNotifier(
config.webhook_url,
config.webhook_timeout,
config.webhook_retries
)
self._health_checker = HealthChecker(
config.healthcheck_timeout,
config.healthcheck_interval,
config.verify_ssl
)
self._tracker = DeploymentTracker()
self._deployment_logger = DeploymentFileLogger()
self._config_manager = DeploymentConfigManager()
def deploy(self) -> None:
"""
Main deployment workflow
Raises:
DeploymentError: If deployment fails
"""
start_time = time.time()
env_values = None
dns_record_id = None
dns_ip = None
containers = []
try:
# Phase 1: Validation
self._phase_validate()
# Phase 2: Environment Generation (with retry on DNS conflicts)
env_values = self._phase_generate_env_with_retries()
# Send deployment_started webhook
self._webhook_notifier.deployment_started(
env_values.subdomain,
env_values.url
)
# Phase 3: DNS Setup
dns_record_id, dns_ip = self._phase_setup_dns(env_values)
# Phase 4: Container Deployment
containers = self._phase_deploy_containers()
# Phase 5: Health Check
self._phase_health_check(env_values.url)
# Success
duration = time.time() - start_time
self._webhook_notifier.deployment_success(
env_values.subdomain,
env_values.url,
duration
)
self._logger.info(
f"✓ Deployment successful! URL: https://{env_values.url} "
f"(took {duration:.1f}s)"
)
# Log success to file
self._deployment_logger.log_success(
env_values.url,
env_values.subdomain,
duration
)
# Save deployment configuration
self._save_deployment_config(
env_values,
dns_record_id,
dns_ip,
containers
)
except Exception as e:
self._logger.error(f"✗ Deployment failed: {e}")
# Send failure webhook
if env_values:
self._webhook_notifier.deployment_failed(
env_values.subdomain,
str(e),
env_values.url
)
else:
self._webhook_notifier.deployment_failed("", str(e), "")
# Log failure to file
if env_values:
self._deployment_logger.log_failure(
env_values.url,
env_values.subdomain,
str(e)
)
else:
self._deployment_logger.log_failure(
"",
"",
str(e)
)
# Rollback
self._logger.info("Starting rollback...")
self._rollback_all()
raise DeploymentError(f"Deployment failed: {e}") from e
def _phase_validate(self) -> None:
"""
Phase 1: Pre-deployment validation
Raises:
ValidationError: If validation fails
"""
self._logger.info("═══ Phase 1: Validation ═══")
# Check system dependencies
self._validate_dependencies()
# Validate environment file
if not self._config.env_file.exists():
raise ValidationError(f"Env file not found: {self._config.env_file}")
# Validate Docker Compose file
try:
self._docker_manager.validate_compose_file()
except DockerError as e:
raise ValidationError(f"Invalid docker-compose.yml: {e}") from e
# Check external Docker network exists
self._validate_docker_network("proxy")
self._logger.info("✓ Validation complete")
def _validate_dependencies(self) -> None:
"""
Validate system dependencies
Raises:
ValidationError: If dependencies are missing
"""
required_commands = ["docker", "curl"]
for cmd in required_commands:
if not shutil.which(cmd):
raise ValidationError(
f"Required command not found: {cmd}. "
f"Please install {cmd} and try again."
)
# Check Docker daemon is running
try:
result = subprocess.run(
["docker", "info"],
capture_output=True,
timeout=5
)
if result.returncode != 0:
raise ValidationError(
"Docker daemon is not running. Please start Docker."
)
except (subprocess.TimeoutExpired, FileNotFoundError) as e:
raise ValidationError(f"Failed to check Docker daemon: {e}") from e
def _validate_docker_network(self, network_name: str) -> None:
"""
Check external Docker network exists
Args:
network_name: Network name to check
Raises:
ValidationError: If network doesn't exist
"""
try:
result = subprocess.run(
["docker", "network", "inspect", network_name],
capture_output=True,
timeout=5
)
if result.returncode != 0:
raise ValidationError(
f"Docker network '{network_name}' not found. "
f"Please create it with: docker network create {network_name}"
)
except (subprocess.TimeoutExpired, FileNotFoundError) as e:
raise ValidationError(
f"Failed to check Docker network: {e}"
) from e
def _phase_generate_env_with_retries(self) -> EnvValues:
"""
Phase 2: Generate environment with DNS conflict retry
Returns:
EnvValues with generated values
Raises:
DeploymentError: If unable to generate unique subdomain
"""
self._logger.info("═══ Phase 2: Environment Generation ═══")
for attempt in range(1, self._config.max_retries + 1):
# Generate new values
env_values = self._env_generator.generate_values()
self._logger.info(f"Generated subdomain: {env_values.subdomain}")
# Check DNS conflict
try:
if not self._dns_manager.check_record_exists(env_values.url):
# No conflict, proceed
self._logger.info(f"✓ Subdomain available: {env_values.subdomain}")
# Create backup
backup_path = self._env_generator.backup_env_file()
# Update .env file
self._env_generator.update_env_file(
env_values,
dry_run=self._config.dry_run
)
# Track for rollback
self._tracker.record_action(DeploymentAction(
action_type="env_updated",
timestamp=datetime.now(),
details={"env_values": asdict(env_values)},
rollback_data={"backup_path": str(backup_path)}
))
return env_values
else:
self._logger.warning(
f"✗ DNS conflict for {env_values.url}, "
f"regenerating... (attempt {attempt}/{self._config.max_retries})"
)
except DNSError as e:
self._logger.warning(
f"DNS check failed: {e}. "
f"Assuming no conflict and proceeding..."
)
# If DNS check fails, proceed anyway (fail open)
backup_path = self._env_generator.backup_env_file()
self._env_generator.update_env_file(
env_values,
dry_run=self._config.dry_run
)
self._tracker.record_action(DeploymentAction(
action_type="env_updated",
timestamp=datetime.now(),
details={"env_values": asdict(env_values)},
rollback_data={"backup_path": str(backup_path)}
))
return env_values
raise DeploymentError(
f"Failed to generate unique subdomain after {self._config.max_retries} attempts"
)
def _phase_setup_dns(self, env_values: EnvValues) -> Tuple[str, str]:
"""
Phase 3: Add DNS record
Args:
env_values: EnvValues with subdomain and URL
Returns:
Tuple of (record_id, ip)
Raises:
DNSError: If DNS setup fails
"""
self._logger.info("═══ Phase 3: DNS Setup ═══")
# Get public IP
ip = self._dns_manager.get_public_ip()
self._logger.info(f"Public IP: {ip}")
# Add DNS record
dns_record = self._dns_manager.add_record(
env_values.url,
ip,
dry_run=self._config.dry_run
)
self._logger.info(f"✓ DNS record added: {env_values.url} -> {ip}")
# Track for rollback
self._tracker.record_action(DeploymentAction(
action_type="dns_added",
timestamp=datetime.now(),
details={"hostname": env_values.url, "ip": ip},
rollback_data={"record_id": dns_record.record_id}
))
# Send webhook notification
self._webhook_notifier.dns_added(env_values.url, ip)
return dns_record.record_id, ip
def _phase_deploy_containers(self) -> List:
"""
Phase 4: Start Docker containers
Returns:
List of container information
Raises:
DockerError: If container deployment fails
"""
self._logger.info("═══ Phase 4: Container Deployment ═══")
# Pull images
self._logger.info("Pulling Docker images...")
self._docker_manager.pull_images(dry_run=self._config.dry_run)
# Start services
self._logger.info("Starting Docker services...")
containers = self._docker_manager.start_services(
dry_run=self._config.dry_run
)
self._logger.info(
f"✓ Docker services started: {len(containers)} containers"
)
# Track for rollback
self._tracker.record_action(DeploymentAction(
action_type="containers_started",
timestamp=datetime.now(),
details={"containers": [asdict(c) for c in containers]},
rollback_data={}
))
return containers
def _phase_health_check(self, url: str) -> None:
"""
Phase 5: Health check
Args:
url: URL to check (without https://)
Raises:
HealthCheckError: If health check fails
"""
self._logger.info("═══ Phase 5: Health Check ═══")
health_url = f"https://{url}"
start_time = time.time()
if not self._health_checker.check_health(
health_url,
dry_run=self._config.dry_run
):
raise HealthCheckError(f"Health check failed for {health_url}")
duration = time.time() - start_time
self._logger.info(f"✓ Health check passed (took {duration:.1f}s)")
# Send webhook notification
self._webhook_notifier.health_check_passed(url, duration)
def _rollback_all(self) -> None:
"""Rollback all tracked actions in reverse order"""
actions = list(reversed(self._tracker.get_actions()))
if not actions:
self._logger.info("No actions to rollback")
return
self._logger.info(f"Rolling back {len(actions)} actions...")
for action in actions:
try:
self._rollback_action(action)
except Exception as e:
# Log but don't fail rollback
self._logger.error(
f"Failed to rollback action {action.action_type}: {e}"
)
self._logger.info("Rollback complete")
def _rollback_action(self, action: DeploymentAction) -> None:
"""
Rollback single action based on type
Args:
action: DeploymentAction to rollback
"""
if action.action_type == "dns_added":
self._rollback_dns(action)
elif action.action_type == "containers_started":
self._rollback_containers(action)
elif action.action_type == "env_updated":
self._rollback_env(action)
else:
self._logger.warning(f"Unknown action type: {action.action_type}")
def _rollback_dns(self, action: DeploymentAction) -> None:
"""
Rollback DNS changes
Args:
action: DeploymentAction with DNS details
"""
record_id = action.rollback_data.get("record_id")
if record_id:
self._logger.info(f"Rolling back DNS record: {record_id}")
try:
self._dns_manager.remove_record_by_id(
record_id,
dry_run=self._config.dry_run
)
self._logger.info("✓ DNS record removed")
except DNSError as e:
self._logger.error(f"Failed to remove DNS record: {e}")
def _rollback_containers(self, action: DeploymentAction) -> None:
"""
Stop and remove containers
Args:
action: DeploymentAction with container details
"""
self._logger.info("Rolling back Docker containers")
try:
self._docker_manager.stop_services(dry_run=self._config.dry_run)
self._logger.info("✓ Docker services stopped")
except DockerError as e:
self._logger.error(f"Failed to stop Docker services: {e}")
def _rollback_env(self, action: DeploymentAction) -> None:
"""
Restore .env file from backup
Args:
action: DeploymentAction with backup path
"""
backup_path_str = action.rollback_data.get("backup_path")
if backup_path_str:
backup_path = Path(backup_path_str)
if backup_path.exists():
self._logger.info(f"Rolling back .env file from {backup_path}")
try:
self._env_generator.restore_env_file(backup_path)
self._logger.info("✓ .env file restored")
except Exception as e:
self._logger.error(f"Failed to restore .env file: {e}")
else:
self._logger.warning(f"Backup file not found: {backup_path}")
def _save_deployment_config(
self,
env_values: EnvValues,
dns_record_id: str,
dns_ip: str,
containers: List
) -> None:
"""
Save deployment configuration for later cleanup
Args:
env_values: EnvValues with deployment info
dns_record_id: Cloudflare DNS record ID
dns_ip: IP address used in DNS
containers: List of container information
"""
try:
# Extract container names, volumes, and networks
container_names = [c.name for c in containers if hasattr(c, 'name')]
# Volume and network names are hard-coded here and must match docker-compose.yml
volumes = [
f"{env_values.compose_project_name}_db_data",
f"{env_values.compose_project_name}_wp_data"
]
networks = [
f"{env_values.compose_project_name}_internal"
]
# Create metadata
metadata = DeploymentMetadata(
subdomain=env_values.subdomain,
url=env_values.url,
domain=env_values.domain,
compose_project_name=env_values.compose_project_name,
db_name=env_values.db_name,
db_user=env_values.db_user,
deployment_timestamp=datetime.now().isoformat(),
dns_record_id=dns_record_id,
dns_ip=dns_ip,
containers=container_names,
volumes=volumes,
networks=networks,
env_file_path=str(self._config.env_file.absolute())
)
# Save configuration
config_path = self._config_manager.save_deployment(metadata)
self._logger.info(f"✓ Deployment config saved: {config_path}")
except Exception as e:
self._logger.warning(f"Failed to save deployment config: {e}")

199
gitea/gitea_deployer/webhooks.py Normal file
View File

@ -0,0 +1,199 @@
"""
Webhook notifications module
Send deployment event notifications with retry logic
"""
import logging
import time
from dataclasses import asdict, dataclass
from datetime import datetime
from typing import Any, Dict, Optional
import requests
logger = logging.getLogger(__name__)
@dataclass
class WebhookEvent:
"""Webhook event data"""
event_type: str # deployment_started, deployment_success, etc.
timestamp: str
subdomain: str
url: str
message: str
metadata: Dict[str, Any]
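# The POSTed JSON body is simply asdict(event); an illustrative
# deployment_success payload:
# {
#     "event_type": "deployment_success",
#     "timestamp": "2025-01-01T00:00:00Z",
#     "subdomain": "example-sub",
#     "url": "example-sub.merakit.my",
#     "message": "Deployment successful for example-sub.merakit.my",
#     "metadata": {"duration": 42.3}
# }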
class WebhookNotifier:
"""Send webhook notifications with retry logic"""
def __init__(
self,
webhook_url: Optional[str],
timeout: int,
max_retries: int
):
"""
Initialize webhook notifier
Args:
webhook_url: Webhook URL to send notifications to (None to disable)
timeout: Request timeout in seconds
max_retries: Maximum number of retry attempts
"""
self._webhook_url = webhook_url
self._timeout = timeout
self._max_retries = max_retries
self._logger = logging.getLogger(f"{__name__}.WebhookNotifier")
if not webhook_url:
self._logger.debug("Webhook notifications disabled (no URL configured)")
def notify(self, event: WebhookEvent) -> None:
"""
Send webhook notification with retry
Args:
event: WebhookEvent to send
Note:
Failures are logged but don't raise exceptions to avoid
failing deployments due to webhook issues
"""
if not self._webhook_url:
return
payload = asdict(event)
self._logger.debug(f"Sending webhook: {event.event_type}")
for attempt in range(1, self._max_retries + 1):
try:
response = requests.post(
self._webhook_url,
json=payload,
timeout=self._timeout
)
response.raise_for_status()
self._logger.debug(
f"Webhook sent successfully: {event.event_type} "
f"(attempt {attempt})"
)
return
except requests.RequestException as e:
self._logger.warning(
f"Webhook delivery failed (attempt {attempt}/{self._max_retries}): {e}"
)
if attempt < self._max_retries:
# Exponential backoff: 1s, 2s, 4s, etc.
backoff = 2 ** (attempt - 1)
self._logger.debug(f"Retrying in {backoff}s...")
time.sleep(backoff)
self._logger.error(
f"Failed to deliver webhook after {self._max_retries} attempts: "
f"{event.event_type}"
)
def deployment_started(self, subdomain: str, url: str) -> None:
"""
Convenience method for deployment_started event
Args:
subdomain: Subdomain being deployed
url: Full URL being deployed
"""
event = WebhookEvent(
event_type="deployment_started",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=subdomain,
url=url,
message=f"Deployment started for {url}",
metadata={}
)
self.notify(event)
def deployment_success(
self,
subdomain: str,
url: str,
duration: float
) -> None:
"""
Convenience method for deployment_success event
Args:
subdomain: Subdomain that was deployed
url: Full URL that was deployed
duration: Deployment duration in seconds
"""
event = WebhookEvent(
event_type="deployment_success",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=subdomain,
url=url,
message=f"Deployment successful for {url}",
metadata={"duration": round(duration, 2)}
)
self.notify(event)
def deployment_failed(self, subdomain: str, error: str, url: str = "") -> None:
"""
Convenience method for deployment_failed event
Args:
subdomain: Subdomain that failed to deploy
error: Error message
url: Full URL (may be empty if deployment failed early)
"""
event = WebhookEvent(
event_type="deployment_failed",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=subdomain,
url=url,
message=f"Deployment failed: {error}",
metadata={"error": error}
)
self.notify(event)
def dns_added(self, hostname: str, ip: str) -> None:
"""
Convenience method for dns_added event
Args:
hostname: Hostname that was added to DNS
ip: IP address the hostname points to
"""
event = WebhookEvent(
event_type="dns_added",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=hostname.split('.')[0], # Extract subdomain
url=hostname,
message=f"DNS record added for {hostname}",
metadata={"ip": ip}
)
self.notify(event)
def health_check_passed(self, url: str, duration: float) -> None:
"""
Convenience method for health_check_passed event
Args:
url: URL that passed health check
duration: Time taken for health check in seconds
"""
event = WebhookEvent(
event_type="health_check_passed",
timestamp=datetime.utcnow().isoformat() + "Z",
subdomain=url.replace('https://', '').replace('http://', '').split('.')[0],  # strip scheme, then take subdomain
url=url,
message=f"Health check passed for {url}",
metadata={"duration": round(duration, 2)}
)
self.notify(event)
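
Used standalone, the notifier degrades gracefully: passing `webhook_url=None` disables delivery entirely. A short sketch (the endpoint is a placeholder):

```python
from gitea_deployer.webhooks import WebhookNotifier

notifier = WebhookNotifier(
    webhook_url="https://hooks.example.com/deploys",  # placeholder endpoint
    timeout=10,
    max_retries=3,
)
notifier.deployment_started("example-sub", "example-sub.merakit.my")
```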