Environment: Zone Universe Main Dev on web-03
"{\"env\": \"dev\", \"zone\": \"universe\", \"branch\": \"main\", \"db_app\": \"postgresql\", \"service\": \"zone\", \"es_nodes\": 1, \"db_enabled\": true, \"pg_standby\": 1, \"pg_workers\": 1, \"search_app\": \"elasticsearch\", \"description\": \"\", \"iam_enabled\": false, \"worker_1_ip\": \"10.100.1.42\", \"eventbus_app\": \"kafka\", \"es_https_mode\": \"direct\", \"service_es_ip\": \"10.100.1.4\", \"worker_1_fqdn\": \"db-zone-universe-main-dev-postgresql-worker-01.fastorder.com\", \"search_enabled\": true, \"service_app_ip\": \"10.100.1.2\", \"service_obs_ip\": \"10.100.1.18\", \"service_es_fqdn\": \"search-zone-universe-main-dev-elasticsearch-coordinator.fastorder.com\", \"service_otlp_ip\": \"10.100.1.30\", \"eventbus_enabled\": true, \"service_app_fqdn\": \"app-zone-universe-main-dev.fastorder.com\", \"service_audit_ip\": \"10.100.1.32\", \"service_obs_fqdn\": \"obs-zone-universe-main-dev.fastorder.com\", \"service_tempo_ip\": \"10.100.1.28\", \"service_endpoints\": \"[{\\\"ip\\\":\\\"10.100.1.3\\\",\\\"fqdn\\\":\\\"app-zone-universe-main-dev.fastorder.com\\\",\\\"service\\\":\\\"app\\\"},{\\\"ip\\\":\\\"10.100.1.5\\\",\\\"fqdn\\\":\\\"search-zone-universe-main-dev-elasticsearch-coordinator.fastorder.com\\\",\\\"service\\\":\\\"es_coordinator\\\"},{\\\"ip\\\":\\\"10.100.1.7\\\",\\\"fqdn\\\":\\\"search-zone-universe-main-dev-elasticsearch-node-01.fastorder.com\\\",\\\"service\\\":\\\"es_node_1\\\"},{\\\"ip\\\":\\\"10.100.1.9\\\",\\\"fqdn\\\":\\\"eventbus-zone-universe-main-dev-kafka-broker-01.fastorder.com\\\",\\\"service\\\":\\\"kafka_broker_1\\\"},{\\\"ip\\\":\\\"10.100.1.11\\\",\\\"fqdn\\\":\\\"eventbus-zone-universe-main-dev-kafka-connect.fastorder.com\\\",\\\"service\\\":\\\"kafka_connect\\\"},{\\\"ip\\\":\\\"10.100.1.13\\\",\\\"fqdn\\\":\\\"schema-zone-universe-main-dev-kafka-registry.fastorder.com\\\",\\\"service\\\":\\\"kafka_registry\\\"},{\\\"ip\\\":\\\"10.100.1.15\\\",\\\"fqdn\\\":\\\"db-zone-universe-main-dev-postgresql-coordinator.fastorder.com\\\",\\\"service\\\":\\\"pg_coordinator\\\"},{\\\"ip\\\":\\\"10.100.1.17\\\",\\\"fqdn\\\":\\\"db-zone-universe-main-dev-postgresql-bouncer.fastorder.com\\\",\\\"service\\\":\\\"pgbouncer\\\"},{\\\"ip\\\":\\\"10.100.1.19\\\",\\\"fqdn\\\":\\\"obs-zone-universe-main-dev.fastorder.com\\\",\\\"service\\\":\\\"obs\\\"},{\\\"ip\\\":\\\"10.100.1.21\\\",\\\"fqdn\\\":\\\"metrics-zone-universe-main-dev-prometheus.fastorder.com\\\",\\\"service\\\":\\\"metrics\\\"},{\\\"ip\\\":\\\"10.100.1.23\\\",\\\"fqdn\\\":\\\"dashboards-zone-universe-main-dev-grafana.fastorder.com\\\",\\\"service\\\":\\\"dashboards\\\"},{\\\"ip\\\":\\\"10.100.1.25\\\",\\\"fqdn\\\":\\\"alerts-zone-universe-main-dev-alertmanager.fastorder.com\\\",\\\"service\\\":\\\"alerts\\\"},{\\\"ip\\\":\\\"10.100.1.27\\\",\\\"fqdn\\\":\\\"logstore-zone-universe-main-dev-clickhouse.fastorder.com\\\",\\\"service\\\":\\\"logs\\\"},{\\\"ip\\\":\\\"10.100.1.29\\\",\\\"fqdn\\\":\\\"traces-zone-universe-main-dev-tempo.fastorder.com\\\",\\\"service\\\":\\\"traces\\\"},{\\\"ip\\\":\\\"10.100.1.31\\\",\\\"fqdn\\\":\\\"telemetry-zone-universe-main-dev-opentelemetry.fastorder.com\\\",\\\"service\\\":\\\"telemetry\\\"},{\\\"ip\\\":\\\"10.100.1.33\\\",\\\"fqdn\\\":\\\"audit-zone-universe-main-dev.fastorder.com\\\",\\\"service\\\":\\\"audit\\\"},{\\\"ip\\\":\\\"10.100.1.35\\\",\\\"fqdn\\\":\\\"backup-zone-universe-main-dev-db-postgresql.fastorder.com\\\",\\\"service\\\":\\\"backup_pg\\\"},{\\\"ip\\\":\\\"10.100.1.37\\\",\\\"fqdn\\\":\\\"backup-zone-universe-main-dev-eventbus-kafka.fastorder.com\\\",\\\"s
ervice\\\":\\\"backup_kafka\\\"},{\\\"ip\\\":\\\"10.100.1.39\\\",\\\"fqdn\\\":\\\"backup-zone-universe-main-dev-search-elasticsearch.fastorder.com\\\",\\\"service\\\":\\\"backup_es\\\"},{\\\"ip\\\":\\\"10.100.1.41\\\",\\\"fqdn\\\":\\\"backup-zone-universe-main-dev-orchestrator.fastorder.com\\\",\\\"service\\\":\\\"backup_orchestrator\\\"}]\", \"service_otlp_fqdn\": \"telemetry-zone-universe-main-dev-opentelemetry.fastorder.com\", \"postgresql_enabled\": true, \"service_audit_fqdn\": \"audit-zone-universe-main-dev.fastorder.com\", \"service_grafana_ip\": \"10.100.1.22\", \"service_tempo_fqdn\": \"traces-zone-universe-main-dev-tempo.fastorder.com\", \"service_backup_es_ip\": \"10.100.1.38\", \"service_backup_pg_ip\": \"10.100.1.34\", \"service_es_node_1_ip\": \"10.100.1.6\", \"service_grafana_fqdn\": \"dashboards-zone-universe-main-dev-grafana.fastorder.com\", \"service_pgbouncer_ip\": \"10.100.1.16\", \"service_prometheus_ip\": \"10.100.1.20\", \"worker_1_standby_1_ip\": \"10.100.1.43\", \"service_backup_es_fqdn\": \"backup-zone-universe-main-dev-search-elasticsearch.fastorder.com\", \"service_backup_pg_fqdn\": \"backup-zone-universe-main-dev-db-postgresql.fastorder.com\", \"service_es_node_1_fqdn\": \"search-zone-universe-main-dev-elasticsearch-node-01.fastorder.com\", \"service_log_backend_ip\": \"10.100.1.26\", \"service_pgbouncer_fqdn\": \"db-zone-universe-main-dev-postgresql-bouncer.fastorder.com\", \"service_alertmanager_ip\": \"10.100.1.24\", \"service_backup_kafka_ip\": \"10.100.1.36\", \"service_prometheus_fqdn\": \"metrics-zone-universe-main-dev-prometheus.fastorder.com\", \"worker_1_standby_1_fqdn\": \"db-zone-universe-main-dev-postgresql-worker-01-standby-01.fastorder.com\", \"service_kafka_connect_ip\": \"10.100.1.10\", \"service_log_backend_fqdn\": \"logstore-zone-universe-main-dev-clickhouse.fastorder.com\", \"service_alertmanager_fqdn\": \"alerts-zone-universe-main-dev-alertmanager.fastorder.com\", \"service_backup_kafka_fqdn\": \"backup-zone-universe-main-dev-eventbus-kafka.fastorder.com\", \"service_kafka_broker_1_ip\": \"10.100.1.8\", \"service_kafka_registry_ip\": \"10.100.1.12\", \"service_pg_coordinator_ip\": \"10.100.1.14\", \"service_kafka_connect_fqdn\": \"eventbus-zone-universe-main-dev-kafka-connect.fastorder.com\", \"postgresql_run_verification\": true, \"service_kafka_broker_1_fqdn\": \"eventbus-zone-universe-main-dev-kafka-broker-01.fastorder.com\", \"service_kafka_registry_fqdn\": \"schema-zone-universe-main-dev-kafka-registry.fastorder.com\", \"service_pg_coordinator_fqdn\": \"db-zone-universe-main-dev-postgresql-coordinator.fastorder.com\", \"service_backup_orchestrator_ip\": \"10.100.1.40\", \"service_backup_orchestrator_fqdn\": \"backup-zone-universe-main-dev-orchestrator.fastorder.com\"}"
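Note: the payload above is doubly encoded - the whole form submission is a JSON string, and its service_endpoints field is itself a JSON-encoded array. A minimal sketch for decoding it with jq (the file name payload.json is hypothetical; any file holding the quoted string above works the same way):

    # Parse the outer JSON string, then the nested service_endpoints string (illustrative only).
    jq -r 'fromjson | .service_endpoints | fromjson | .[] | "\(.service)\t\(.ip)\t\(.fqdn)"' payload.json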
This job encountered an error and has been restarted; you are viewing an older attempt.
This job failed at one of the steps below. You can resume from where it failed to save time and avoid re-running successful steps.
════════════════════════════════════════════════════════════════
 FastOrder Pre-Flight Validation Checks
════════════════════════════════════════════════════════════════
[INFO] Checking SSH connectivity to target host...
[✓] Target is localhost, skipping SSH check
[INFO] Checking available disk space...
[INFO] Checking /data disk (mounted separately for data storage)
[✓] Disk space sufficient: 287GB available (required: 50GB)
[INFO] Checking available memory...
[⚠] Memory limited: 15GB (recommended: 16GB)
    → Consider reducing Elasticsearch nodes or PostgreSQL workers
[INFO] Checking critical port availability...
[✓] Port 5432 in use on specific IP (10.100.1.189:5432) - OK, can use different IP
[✓] Port 9200 in use on specific IP ([::ffff:10.100.1.179]) - OK, can use different IP
[✓] Port 9300 in use on specific IP ([::ffff:10.100.1.186]) - OK, can use different IP
[✓] Port 9092 in use on specific IP ([::ffff:10.100.1.225]) - OK, can use different IP
[✓] Port 2181 available (Zookeeper)
[INFO] Checking DNS resolution...
[✓] DNS resolution working: google.com
[✓] DNS resolution working: github.com
[✓] DNS resolution working: archive.ubuntu.com
[INFO] Checking required system commands...
[✓] Command available: curl
[✓] Command available: wget
[✓] Command available: git
[✓] Command available: sudo
[✓] Command available: systemctl
[✓] Command available: apt-get
[INFO] Checking current system load...
[⚠] System load elevated: 4.10 (4 CPUs)
    → Provisioning may be slower than expected
[INFO] Checking for existing environment conflicts...
[✓] No conflicting services found for: zone-universe-main-dev
════════════════════════════════════════════════════════════════
 Pre-Flight Check Summary
════════════════════════════════════════════════════════════════
[⚠] 2 warning(s) detected
⚠️  Environment can proceed with caution
    Review warnings above and consider remediation
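Note: the checks above treat a port as acceptable when it is bound only to a specific IP. A minimal sketch of a simpler ss-based port probe is shown below; it is illustrative only (the real pre-flight script and its port list are not part of this log) and flags any binding, wildcard or not:

    # Hypothetical port pre-flight check, not the actual FastOrder script.
    for port in 5432 6432 9200 9300 9092 2181; do
      if ss -ltn "( sport = :$port )" | grep -q ":$port"; then
        echo "[WARN] Port $port already bound on this host"
      else
        echo "[OK]   Port $port available"
      fi
    done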
[INFO] Using web-provided environment: zone-universe-main-dev
[INFO] Auto-creating state directory for zone-universe-main-dev...
[ OK ] Created topology.json for zone-universe-main-dev
[INFO] Loaded environment: zone-universe-main-dev (svc=zone zone=universe env=dev ip=10.100.1.51)
[2026-02-05_09:27:30] Starting Terraform provisioning step
[2026-02-05_09:27:30] Service: zone
[2026-02-05_09:27:30] Zone: universe
[2026-02-05_09:27:30] Environment: dev
[2026-02-05_09:27:30] Resource: web-03
[2026-02-05_09:27:30] Terraform binary: /home/ab/bin/terraform
[2026-02-05_09:27:30] HOME: /home/www-data
[2026-02-05_09:27:30] AWS Config: /home/ab/.aws/config
[2026-02-05_09:27:30] AWS Credentials: /home/ab/.aws/credentials
[2026-02-05_09:27:30] Terraform directory: /opt/fastorder/cli/terraform/examples/citus-production
[2026-02-05_09:27:30] Running terraform init...
Initializing the backend...
Upgrading modules...
- citus_cluster in ../../modules/citus_cluster
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.0"...
- Using previously-installed hashicorp/aws v5.100.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

[2026-02-05_09:27:36] ✓ Terraform init succeeded
[2026-02-05_09:27:36] Running terraform validate...

Success! The configuration is valid.

[2026-02-05_09:27:41] ✓ Terraform validate succeeded
[2026-02-05_09:27:41] Running terraform plan...
module.citus_cluster.data.aws_caller_identity.current: Reading...
module.citus_cluster.data.aws_caller_identity.current: Read complete after 0s [id=464621692046]

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.citus_cluster.aws_iam_instance_profile.citus will be created
  + resource "aws_iam_instance_profile" "citus" {
      + arn = (known after apply)
      + create_date = (known after apply)
      + id = (known after apply)
      + name = (known after apply)
      + name_prefix = "citus-prod-"
      + path = "/"
      + role = (known after apply)
      + tags = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "Name" = "citus-prod"
        }
      + tags_all = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "ManagedBy" = "Terraform"
          + "Name" = "citus-prod"
          + "Owner" = "Platform Team"
          + "Project" = "FastOrder"
        }
      + unique_id = (known after apply)
    }
  # module.citus_cluster.aws_iam_role.citus will be created
  + resource "aws_iam_role" "citus" {
      + arn = (known after apply)
      + assume_role_policy = jsonencode(
            {
              + Statement = [
                  + {
                      + Action = "sts:AssumeRole"
                      + Effect = "Allow"
                      + Principal = {
                          + Service = "ec2.amazonaws.com"
                        }
                    },
                ]
              + Version = "2012-10-17"
            }
        )
      + create_date = (known after apply)
      + force_detach_policies = false
      + id = (known after apply)
      + managed_policy_arns = (known after apply)
      + max_session_duration = 3600
      + name = (known after apply)
      + name_prefix = "citus-prod-"
      + path = "/"
      + tags = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "Name" = "citus-prod"
        }
      + tags_all = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "ManagedBy" = "Terraform"
          + "Name" = "citus-prod"
          + "Owner" = "Platform Team"
          + "Project" = "FastOrder"
        }
      + unique_id = (known after apply)
    }
  # module.citus_cluster.aws_iam_role_policy.secrets_manager[0] will be created
  + resource "aws_iam_role_policy" "secrets_manager" {
      + id = (known after apply)
      + name = (known after apply)
      + name_prefix = "secrets-access-"
      + policy = jsonencode(
            {
              + Statement = [
                  + {
                      + Action = [
                          + "secretsmanager:GetSecretValue",
                          + "secretsmanager:DescribeSecret",
                        ]
                      + Effect = "Allow"
                      + Resource = "arn:aws:secretsmanager:me-central-1:464621692046:secret:fastorder/db/web/ksa/main/dev/postgresqladmin/ksa/prod*"
                    },
                ]
              + Version = "2012-10-17"
            }
        )
      + role = (known after apply)
    }

  # module.citus_cluster.aws_iam_role_policy_attachment.cloudwatch will be created
  + resource "aws_iam_role_policy_attachment" "cloudwatch" {
      + id = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
      + role = (known after apply)
    }

  # module.citus_cluster.aws_iam_role_policy_attachment.ssm will be created
  + resource "aws_iam_role_policy_attachment" "ssm" {
      + id = (known after apply)
      + policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
      + role = (known after apply)
    }
  # module.citus_cluster.aws_instance.coordinator will be created
  + resource "aws_instance" "coordinator" {
      + ami = "ami-0b2aae5f4283c0df2"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_stop = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + enable_primary_ipv6 = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + host_resource_group_arn = (known after apply)
      + iam_instance_profile = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "r6i.2xlarge"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = (known after apply)
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + source_dest_check = true
      + spot_instance_request_id = (known after apply)
      + subnet_id = "subnet-0a1f5a9a74ed030cf"
      + tags = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "Name" = "citus-coordinator-prod"
          + "Role" = "coordinator"
          + "Service" = "citus"
        }
      + tags_all = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "ManagedBy" = "Terraform"
          + "Name" = "citus-coordinator-prod"
          + "Owner" = "Platform Team"
          + "Project" = "FastOrder"
          + "Role" = "coordinator"
          + "Service" = "citus"
        }
      + tenancy = (known after apply)
      + user_data = "2a9e41ea765dcf3b3046ee10d2f458c18f00e430"
      + user_data_base64 = (known after apply)
      + user_data_replace_on_change = false
      + vpc_security_group_ids = (known after apply)

      + ebs_block_device {
          + delete_on_termination = false
          + device_name = "/dev/sdf"
          + encrypted = true
          + iops = 3000
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = {
              + "Backup" = "Required"
              + "CostCenter" = "Platform"
              + "Environment" = "prod"
              + "Name" = "citus-coordinator-prod-data"
            }
          + tags_all = (known after apply)
          + throughput = 125
          + volume_id = (known after apply)
          + volume_size = 500
          + volume_type = "gp3"
        }

      + root_block_device {
          + delete_on_termination = false
          + device_name = (known after apply)
          + encrypted = true
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = {
              + "Backup" = "Required"
              + "CostCenter" = "Platform"
              + "Environment" = "prod"
              + "Name" = "citus-coordinator-prod-root"
            }
          + tags_all = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = 100
          + volume_type = "gp3"
        }
    }
  # module.citus_cluster.aws_instance.workers[0] will be created
  + resource "aws_instance" "workers" {
      + ami = "ami-0b2aae5f4283c0df2"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_stop = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + enable_primary_ipv6 = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + host_resource_group_arn = (known after apply)
      + iam_instance_profile = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "r6i.2xlarge"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = (known after apply)
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + source_dest_check = true
      + spot_instance_request_id = (known after apply)
      + subnet_id = "subnet-0a1f5a9a74ed030cf"
      + tags = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "Name" = "citus-worker-0-prod"
          + "Role" = "worker"
          + "Service" = "citus"
          + "WorkerIndex" = "0"
        }
      + tags_all = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "ManagedBy" = "Terraform"
          + "Name" = "citus-worker-0-prod"
          + "Owner" = "Platform Team"
          + "Project" = "FastOrder"
          + "Role" = "worker"
          + "Service" = "citus"
          + "WorkerIndex" = "0"
        }
      + tenancy = (known after apply)
      + user_data = "7b4bd87c9982aab7fa463c8d12e99399661f8bde"
      + user_data_base64 = (known after apply)
      + user_data_replace_on_change = false
      + vpc_security_group_ids = (known after apply)

      + ebs_block_device {
          + delete_on_termination = false
          + device_name = "/dev/sdf"
          + encrypted = true
          + iops = 3000
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = {
              + "Backup" = "Required"
              + "CostCenter" = "Platform"
              + "Environment" = "prod"
              + "Name" = "citus-worker-0-prod-data"
            }
          + tags_all = (known after apply)
          + throughput = 125
          + volume_id = (known after apply)
          + volume_size = 500
          + volume_type = "gp3"
        }

      + root_block_device {
          + delete_on_termination = false
          + device_name = (known after apply)
          + encrypted = true
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = {
              + "Backup" = "Required"
              + "CostCenter" = "Platform"
              + "Environment" = "prod"
              + "Name" = "citus-worker-0-prod-root"
            }
          + tags_all = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = 100
          + volume_type = "gp3"
        }
    }
  # module.citus_cluster.aws_instance.workers[1] will be created
  + resource "aws_instance" "workers" {
      + ami = "ami-0b2aae5f4283c0df2"
      + arn = (known after apply)
      + associate_public_ip_address = (known after apply)
      + availability_zone = (known after apply)
      + cpu_core_count = (known after apply)
      + cpu_threads_per_core = (known after apply)
      + disable_api_stop = (known after apply)
      + disable_api_termination = (known after apply)
      + ebs_optimized = (known after apply)
      + enable_primary_ipv6 = (known after apply)
      + get_password_data = false
      + host_id = (known after apply)
      + host_resource_group_arn = (known after apply)
      + iam_instance_profile = (known after apply)
      + id = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_lifecycle = (known after apply)
      + instance_state = (known after apply)
      + instance_type = "r6i.2xlarge"
      + ipv6_address_count = (known after apply)
      + ipv6_addresses = (known after apply)
      + key_name = (known after apply)
      + monitoring = (known after apply)
      + outpost_arn = (known after apply)
      + password_data = (known after apply)
      + placement_group = (known after apply)
      + placement_partition_number = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns = (known after apply)
      + private_ip = (known after apply)
      + public_dns = (known after apply)
      + public_ip = (known after apply)
      + secondary_private_ips = (known after apply)
      + security_groups = (known after apply)
      + source_dest_check = true
      + spot_instance_request_id = (known after apply)
      + subnet_id = "subnet-02c930351cde1e9c3"
      + tags = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "Name" = "citus-worker-1-prod"
          + "Role" = "worker"
          + "Service" = "citus"
          + "WorkerIndex" = "1"
        }
      + tags_all = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "ManagedBy" = "Terraform"
          + "Name" = "citus-worker-1-prod"
          + "Owner" = "Platform Team"
          + "Project" = "FastOrder"
          + "Role" = "worker"
          + "Service" = "citus"
          + "WorkerIndex" = "1"
        }
      + tenancy = (known after apply)
      + user_data = "7b4bd87c9982aab7fa463c8d12e99399661f8bde"
      + user_data_base64 = (known after apply)
      + user_data_replace_on_change = false
      + vpc_security_group_ids = (known after apply)

      + ebs_block_device {
          + delete_on_termination = false
          + device_name = "/dev/sdf"
          + encrypted = true
          + iops = 3000
          + kms_key_id = (known after apply)
          + snapshot_id = (known after apply)
          + tags = {
              + "Backup" = "Required"
              + "CostCenter" = "Platform"
              + "Environment" = "prod"
              + "Name" = "citus-worker-1-prod-data"
            }
          + tags_all = (known after apply)
          + throughput = 125
          + volume_id = (known after apply)
          + volume_size = 500
          + volume_type = "gp3"
        }

      + root_block_device {
          + delete_on_termination = false
          + device_name = (known after apply)
          + encrypted = true
          + iops = (known after apply)
          + kms_key_id = (known after apply)
          + tags = {
              + "Backup" = "Required"
              + "CostCenter" = "Platform"
              + "Environment" = "prod"
              + "Name" = "citus-worker-1-prod-root"
            }
          + tags_all = (known after apply)
          + throughput = (known after apply)
          + volume_id = (known after apply)
          + volume_size = 100
          + volume_type = "gp3"
        }
    }
  # module.citus_cluster.aws_security_group.citus will be created
  + resource "aws_security_group" "citus" {
      + arn = (known after apply)
      + description = "Security group for Citus cluster"
      + egress = [
          + {
              + cidr_blocks = [
                  + "0.0.0.0/0",
                ]
              + description = "Allow all outbound"
              + from_port = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids = []
              + protocol = "-1"
              + security_groups = []
              + self = false
              + to_port = 0
            },
        ]
      + id = (known after apply)
      + ingress = [
          + {
              + cidr_blocks = [
                  + "10.0.0.0/8",
                ]
              + description = "PgBouncer access"
              + from_port = 6432
              + ipv6_cidr_blocks = []
              + prefix_list_ids = []
              + protocol = "tcp"
              + security_groups = []
              + self = false
              + to_port = 6432
            },
          + {
              + cidr_blocks = [
                  + "10.0.0.0/8",
                ]
              + description = "PostgreSQL access"
              + from_port = 5432
              + ipv6_cidr_blocks = []
              + prefix_list_ids = []
              + protocol = "tcp"
              + security_groups = []
              + self = false
              + to_port = 5432
            },
          + {
              + cidr_blocks = [
                  + "10.0.0.0/8",
                ]
              + description = "SSH access"
              + from_port = 22
              + ipv6_cidr_blocks = []
              + prefix_list_ids = []
              + protocol = "tcp"
              + security_groups = []
              + self = false
              + to_port = 22
            },
          + {
              + cidr_blocks = []
              + description = "Internal cluster communication"
              + from_port = 0
              + ipv6_cidr_blocks = []
              + prefix_list_ids = []
              + protocol = "tcp"
              + security_groups = []
              + self = true
              + to_port = 65535
            },
        ]
      + name = (known after apply)
      + name_prefix = "citus-prod-"
      + owner_id = (known after apply)
      + revoke_rules_on_delete = false
      + tags = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "Name" = "citus-prod"
          + "Service" = "citus"
        }
      + tags_all = {
          + "Backup" = "Required"
          + "CostCenter" = "Platform"
          + "Environment" = "prod"
          + "ManagedBy" = "Terraform"
          + "Name" = "citus-prod"
          + "Owner" = "Platform Team"
          + "Project" = "FastOrder"
          + "Service" = "citus"
        }
      + vpc_id = "vpc-0af7da1e7d94d62bd"
    }
Plan: 9 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + connection_string = (sensitive value)
  + coordinator_ip = (known after apply)
  + worker_ips = [
      + (known after apply),
      + (known after apply),
    ]

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

[2026-02-05_09:27:48] ✓ Terraform plan succeeded
[2026-02-05_09:27:48] Generating plan JSON...
[2026-02-05_09:27:51] ✓ Terraform provisioning step completed successfully
Next step: Review the plan and apply with 'terraform apply tfplan'
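Note: this step stops after terraform plan; applying is left as a manual follow-up. The sequence, as it appears from the log, is roughly the following (the terraform show -json command is an assumption inferred from the "Generating plan JSON..." message, not confirmed by the log):

    cd /opt/fastorder/cli/terraform/examples/citus-production
    terraform init
    terraform validate
    terraform plan -out=tfplan
    terraform show -json tfplan > tfplan.json   # assumed source of "Generating plan JSON..."
    # after reviewing the plan:
    terraform apply "tfplan"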
[INFO] FastOrder Environment Preparation
[INFO] Service: zone
[INFO] Zone: universe
[INFO] Environment: dev
[INFO] Branch: main
[INFO] State Directory: /opt/fastorder/bash/scripts/env_app_setup/state
[INFO] Library: /opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator
[INFO] IP: 142.93.238.16 (specified)
[INFO] Creating environment using fo-env...
[INFO] Creating new FastOrder environment (v1 topology)
[INFO] Generated environment ID: zone-universe-main-dev
[INFO] Using provided IP: 142.93.238.16
[INFO] Allocated interface: eth0:16
[INFO] Configuring network interface for VM IP: 142.93.238.16
[INFO] VM IP 142.93.238.16 is already configured on eth0:16
[CONFIG] No web configuration found for environment: zone-universe-main-dev
[CONFIG] Using defaults: ES_NODES=1, PG_WORKERS=1
[INFO] Service enabled flags: db=yes, eventbus=yes, search=yes
[ OK ] Created topology.json at /opt/fastorder/bash/scripts/env_app_setup/state/zone-universe-main-dev/topology.json
[ OK ] Generated overlay configurations in /opt/fastorder/bash/scripts/env_app_setup/state/zone-universe-main-dev/generated/
[ OK ] Updated environments.json
[ OK ] Updated setup.json
[ OK ] Environment created successfully!
[INFO]
[INFO] Environment Details:
[INFO] ID: zone-universe-main-dev
[INFO] Service: zone
[INFO] zone: universe
[INFO] Environment: dev
[INFO] Branch: main
[INFO] IP: 142.93.238.16
[INFO] Interface: eth0:16
[INFO]
[INFO] Configuration files:
[INFO] Topology: /opt/fastorder/bash/scripts/env_app_setup/state/zone-universe-main-dev/topology.json
[INFO] Generated: /opt/fastorder/bash/scripts/env_app_setup/state/zone-universe-main-dev/generated/*.env
[INFO] Overrides: /opt/fastorder/bash/scripts/env_app_setup/state/zone-universe-main-dev/overrides/*.env
[INFO]
[INFO] To use this environment:
[INFO] export ENV_ID="zone-universe-main-dev"
[INFO] source /opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator/lib/config_management.sh
[INFO] init_environment
[ OK ] Environment preparation completed successfully!
[INFO] Creating topology from web form submission...
[INFO] Using environment from web interface: zone-universe-main-dev
[2026-02-05 09:27:53] Using web-provided environment: zone-universe-main-dev
[2026-02-05 09:27:53] Service: zone, Zone: universe, Branch: main, Env: dev
[ OK ] Environment initialized successfully (mode: general)
[INFO] Creating topology.json from web form submission...
[INFO] DEBUG: Service enabled flags...
[INFO] DB_ENABLED=yes
[INFO] EVENTBUS_ENABLED=yes
[INFO] SEARCH_ENABLED=yes
[INFO] DEBUG: Checking for form submission variables...
[INFO] service_es_ip=10.100.1.4
[INFO] service_es_fqdn=search-zone-universe-main-dev-elasticsearch-coordinator.fastorder.com
[INFO] service_pg_coordinator_ip=10.100.1.14
[WARN] IP 10.100.1.4 is already allocated, allocating new IP for search
[INFO] Adding search: search-zone-universe-main-dev-elasticsearch-coordinator.fastorder.com (10.100.1.249) [reallocated from 10.100.1.4]
[WARN] IP 10.100.1.6 is already allocated, allocating new IP for search-node-01
[INFO] Adding search-node-01: search-zone-universe-main-dev-elasticsearch-node-01.fastorder.com (10.100.1.250) [reallocated from 10.100.1.6]
[WARN] IP 10.100.1.8 is already allocated, allocating new IP for eventbus-broker-01
/opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator/lib/common.sh: line 261: echo: write error: Broken pipe
[INFO] Adding eventbus-broker-01: eventbus-zone-universe-main-dev-kafka-broker-01.fastorder.com (10.100.1.71) [reallocated from 10.100.1.8]
[WARN] IP 10.100.1.10 is already allocated, allocating new IP for eventbus-connect
/opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator/lib/common.sh: line 261: echo: write error: Broken pipe
[INFO] Adding eventbus-connect: eventbus-zone-universe-main-dev-kafka-connect.fastorder.com (10.100.1.120) [reallocated from 10.100.1.10]
[WARN] IP 10.100.1.12 is already allocated, allocating new IP for schema-registry
[ERROR] No available IPs in range 10.100.1.50-250
[WARN] Skipping schema-registry - could not allocate IP
[WARN] IP 10.100.1.14 is already allocated, allocating new IP for pg-coordinator
/opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator/lib/common.sh: line 261: echo: write error: Broken pipe
[INFO] Adding pg-coordinator: db-zone-universe-main-dev-postgresql-coordinator.fastorder.com (10.100.1.114) [reallocated from 10.100.1.14]
[WARN] IP 10.100.1.16 is already allocated, allocating new IP for pgbouncer
/opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator/lib/common.sh: line 261: echo: write error: Broken pipe
[INFO] Adding pgbouncer: db-zone-universe-main-dev-postgresql-bouncer.fastorder.com (10.100.1.78) [reallocated from 10.100.1.16]
[WARN] IP 10.100.1.18 is already allocated, allocating new IP for obs
/opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator/lib/common.sh: line 261: echo: write error: Broken pipe
[INFO] Adding obs: obs-zone-universe-main-dev.fastorder.com (10.100.1.73) [reallocated from 10.100.1.18]
[ OK ] Topology created from form data
[INFO] Applications registered:
  • eventbus-broker-01: eventbus-zone-universe-main-dev-kafka-broker-01.fastorder.com (10.100.1.71)
  • eventbus-connect: eventbus-zone-universe-main-dev-kafka-connect.fastorder.com (10.100.1.120)
  • obs: obs-zone-universe-main-dev.fastorder.com (10.100.1.73)
  • pg-coordinator: db-zone-universe-main-dev-postgresql-coordinator.fastorder.com (10.100.1.114)
  • pgbouncer: db-zone-universe-main-dev-postgresql-bouncer.fastorder.com (10.100.1.78)
  • search: search-zone-universe-main-dev-elasticsearch-coordinator.fastorder.com (10.100.1.249)
  • search-node-01: search-zone-universe-main-dev-elasticsearch-node-01.fastorder.com (10.100.1.250)
[ OK ] Topology created from form data
[INFO] Next steps:
[INFO] 1. Review the generated topology.json and configurations
[INFO] 2. Customize overrides/*.env files if needed
[INFO] 3. Run subsequent installation steps (02-install-postgresql, etc.)
[INFO] To use this environment in other scripts:
[INFO] export ENV_ID="$(fo-env list | tail -n1 | awk '{print $1}')"
[INFO] source /opt/fastorder/bash/scripts/env_app_setup/lib/env-orchestrator/lib/config_management.sh
[INFO] init_environment
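Note: for step 1, the generated topology.json can be reviewed without knowing its exact schema, for example:

    TOPOLOGY=/opt/fastorder/bash/scripts/env_app_setup/state/zone-universe-main-dev/topology.json
    jq 'keys' "$TOPOLOGY"    # list the top-level sections the generator produced
    jq . "$TOPOLOGY"         # full review of hosts, IPs and FQDNs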
⏳ This step is pending and will execute after the previous steps complete successfully.
[INFO] ──────────────────────────────────────────────────
[INFO] OBSERVABILITY CELL PROVISIONING STARTED
[INFO] ──────────────────────────────────────────────────
[INFO] Script: 02-observability-cell/run.sh
[INFO] Timestamp: 2026-02-05 09:28:09 UTC
[INFO] ──────────────────────────────────────────────────
[INFO] Ensuring correct permissions for observability deployment...
[2026-02-05 09:28:09 UTC] USER=www-data EUID=0 PID=543332 ACTION=fsop ARGS=chmod 775 /var/log/fastorder
[2026-02-05 09:28:09 UTC] USER=www-data EUID=0 PID=543341 ACTION=fsop ARGS=chown www-data:www-data /var/log/fastorder
[2026-02-05 09:28:09 UTC] USER=www-data EUID=0 PID=543350 ACTION=fsop ARGS=touch /var/log/fastorder/provisioning-elevated.log
[2026-02-05 09:28:09 UTC] USER=www-data EUID=0 PID=543368 ACTION=fsop ARGS=chown www-data:www-data /var/log/fastorder/provisioning-elevated.log
[OK] Log directory: /var/log/fastorder (775)
[OK] Log file: provisioning-elevated.log (666)
[2026-02-05 09:28:09 UTC] USER=www-data EUID=0 PID=543377 ACTION=fsop ARGS=chmod 775 /opt/fastorder/bash/scripts/env_app_setup/state
[OK] State directory: 775
[2026-02-05 09:28:09 UTC] USER=www-data EUID=0 PID=543386 ACTION=fsop ARGS=mkdir -p /etc/fastorder/observability/certs
[2026-02-05 09:28:09 UTC] USER=www-data EUID=0 PID=543395 ACTION=fsop ARGS=chmod 750 /etc/fastorder/observability/certs
[OK] Cert directory: /etc/fastorder/observability/certs (750 - secure)
[OK] Lib scripts: executable (755)
[OK] All deployment scripts: executable (755)
[OK] All directories: accessible (755)
[OK] ✓ All permissions verified and fixed
[CREDS] Using AWS credentials from: /var/www/.aws/credentials
[CREDS] Credential management library loaded (region: me-central-1)
[INFO] Using web-provided environment: zone-universe-main-dev
[INFO] Loaded environment: zone-universe-main-dev (svc=zone zone=universe env=dev ip=142.93.238.16)
═══════════════════════════════════════════════════════════════════════════════
OBSERVABILITY CELL PROVISIONING
═══════════════════════════════════════════════════════════════════════════════
[INFO] Application Cell: zone-universe-main-dev
[INFO] Observability Cell: obs-zone-universe-main-dev
[INFO] Service: zone | Zone: universe | Env: dev
[INFO] Step 1/10: Provisioning network infrastructure...
[INFO] Using existing IP for obs: 10.100.1.73
/opt/fastorder/bash/scripts/env_app_setup/setup/02-observability-cell/run.sh: line 250: echo: write error: Broken pipe
[INFO] Allocated new IP for metrics: 10.100.1.53
[2026-02-05 09:28:11 UTC] USER=www-data EUID=0 PID=543608 ACTION=fsop ARGS=cp /tmp/tmp.17Z8iiBsK0 /opt/fastorder/bash/scripts/env_app_setup/state/zone-universe-main-dev/topology.json
/opt/fastorder/bash/scripts/env_app_setup/setup/02-observability-cell/run.sh: line 250: echo: write error: Broken pipe
[INFO] Allocated new IP for dashboards: 10.100.1.194
[ERROR] Failed to allocate IP for logstore - no available IPs in range
[ERROR] Failed to allocate observability IPs
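Note: both failures above report the pool 10.100.1.50-250 as exhausted. A rough, illustrative cross-check of how many addresses in that range are already bound on this host (this inspects live interfaces only, not the orchestrator's state files, so the allocator's own bookkeeping may differ):

    # List addresses in the allocator's range that are currently configured on this host.
    ip -o -4 addr show | awk '{print $4}' | cut -d/ -f1 \
      | grep -E '^10\.100\.1\.' \
      | awk -F. '$4 >= 50 && $4 <= 250' | sort -V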
⏳ This step is pending and will execute after the previous steps complete successfully.
⏳ This step is pending and will execute after the previous steps complete successfully.
⏳ This step is pending and will execute after the previous steps complete successfully.
⏳ This step is pending and will execute after the previous steps complete successfully.