Tag Configuration
Learn how to apply settings via cloud provider tags.
Sedai automatically reads resource tags and Kubernetes annotations. You can optionally use them to define feature settings and compute action controls, as well as to configure Infrastructure as Code (IaC) changes.
Features
When you define settings via tags, Sedai treats them as overrides of the settings a resource would otherwise inherit from its parent (such as a group, account, or Kubernetes cluster).
Tags must include the prefix `settings.sedai.io` followed by a period (`.`); for example: `settings.sedai.io.optimization.setting.configMode`. For feature settings, the accepted values `MANUAL` and `AUTO` correspond to Recommend Mode and Autonomous Mode, respectively.
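As an illustration, here is a minimal sketch of how such a tag might be applied to a Terraform-managed Lambda function; the function name, role ARN, handler, and package path are placeholders, not required values.

```hcl
# Hypothetical Terraform-managed Lambda function; all values are placeholders.
resource "aws_lambda_function" "checkout" {
  function_name = "prod-sls-1"
  role          = "arn:aws:iam::123456789012:role/lambda-exec"
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = "build/checkout.zip"

  tags = {
    # Run Sedai optimization for this function in Autonomous Mode
    "settings.sedai.io.optimization.setting.configMode" = "AUTO"
  }
}
```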
For Kubernetes workloads, add the prefix `settings.sedai.io` followed by a slash (`/`); for example: `settings.sedai.io/optimization.setting.configMode`. For feature settings, the accepted values `MANUAL` and `AUTO` correspond to Copilot mode and Autopilot mode, respectively.
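For Kubernetes, the same setting can be applied as a workload annotation. Below is a sketch using the Terraform kubernetes provider's `kubernetes_annotations` resource; the Deployment name and namespace are placeholders, and you could equally add the annotation directly in the workload manifest.

```hcl
# Hypothetical: annotate an existing Deployment with a Sedai setting.
resource "kubernetes_annotations" "sedai_settings" {
  api_version = "apps/v1"
  kind        = "Deployment"

  metadata {
    name      = "checkout"
    namespace = "prod"
  }

  annotations = {
    # Run Sedai optimization for this workload in Autopilot mode
    "settings.sedai.io/optimization.setting.configMode" = "AUTO"
  }
}
```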
Optimization
Learn more about optimization settings.
| Setting | Tag | Accepted values |
| --- | --- | --- |
| Optimization Setting | `optimization.setting.configMode` | OFF, MANUAL, AUTO |
| Goal | `optimization.optimizationFocus.focus` | COST, DURATION, COST_AND_DURATION |
| If the goal is to improve performance, the allowed percent increase in cost from memory | `optimization.optimizationFocus.maxMemoryIncreasePct` | Integer between 0 and 100 |
| If the goal is to improve performance, the allowed percent increase in cost from CPU | `optimization.optimizationFocus.maxCPUIncreasePct` | Integer between 0 and 100 |
| If the goal is to reduce cost, the allowed percent increase in latency | `optimization.optimizationFocus.maxLatencyIncreasePct` | Integer between 0 and 100 |
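For example, a cost-focused workload that tolerates a modest latency increase might combine these tags as sketched below; the 10% threshold is an arbitrary placeholder.

```hcl
# Hypothetical tag set: autonomous optimization focused on cost,
# allowing up to a 10% latency increase. Attach these to the resource's
# tags (or, with the slash-delimited prefix, as Kubernetes annotations).
locals {
  sedai_optimization_tags = {
    "settings.sedai.io.optimization.setting.configMode"                      = "AUTO"
    "settings.sedai.io.optimization.optimizationFocus.focus"                 = "COST"
    "settings.sedai.io.optimization.optimizationFocus.maxLatencyIncreasePct" = "10"
  }
}
```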
Availability
Learn more about availability settings.
| Setting | Tag | Accepted values |
| --- | --- | --- |
| Availability | `availability.configMode` | OFF, MANUAL, AUTO |
| Telemetry logging (AWS Lambda only) | `telemetryLogging.enabled` | Boolean |
Release Intelligence
Learn more about release intelligence settings.
| Setting | Tag | Accepted values |
| --- | --- | --- |
| Release Intelligence | `releaseIntelligence.configMode` | OFF, MANUAL |
Compute Actions
The following settings only apply to AWS ECS/Fargate and Kubernetes stateless workloads.
Learn more about container and virtual machine compute actions.
| Setting | Tag | Accepted values |
| --- | --- | --- |
| Vertical Scaling | `enableVerticalScaling.enabled` | Boolean |
| ↳ Minimum CPU in MiB (optional) | `enableVerticalScaling.minCpu` | Integer |
| ↳ Minimum memory in GB (optional) | `enableVerticalScaling.minMemory` | Integer |
| Horizontal Scaling | `enableHorizontalScaling.enabled` | Boolean |
| ↳ Minimum replica count (optional; defaults to 2) | `enableHorizontalScaling.minReplicas` | Integer |
| ↳ Maximum replica count (optional) | `enableHorizontalScaling.maxReplicas` | Integer |
| ↳ ECS only: Replica multiplier (optional) | `enableHorizontalScaling.replicaMultiplier` | Integer |
| ↳ ECS only: Replica increment count (optional) | `enableHorizontalScaling.replicaIncrement` | Integer |
| Auto Scaling (ECS only) | `enableServiceAutoscalingConfiguration.enabled` | Boolean |
| Autonomous Action without Traffic | `autonomousActionWithoutTraffic.enabled` | Boolean |
| Pre-production/Production | `isProd.enabled` | Boolean |
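As an illustration, the horizontal scaling controls might be set on a Terraform-managed ECS service as sketched below; the service, cluster, and task definition names are placeholders.

```hcl
# Hypothetical ECS service with Sedai compute action controls as tags.
resource "aws_ecs_service" "checkout" {
  name            = "prod-app-1"
  cluster         = "prod"
  task_definition = "prod-app-1:1"
  desired_count   = 2

  tags = {
    "settings.sedai.io.enableHorizontalScaling.enabled"     = "true"
    "settings.sedai.io.enableHorizontalScaling.minReplicas" = "2"
    "settings.sedai.io.enableHorizontalScaling.maxReplicas" = "10"
  }
}
```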
IaC Configurations
Requires an IaC integration — learn more.
To apply IaC configurations, tags should be formatted with the prefix `configs:sedai.io:`. For Kubernetes workloads, annotations should be formatted with the prefix `configs.sedai.io/`.
For example:

| Parameter | Example tag |
| --- | --- |
| `default_repo_path` | `configs:sedai.io:default_repo_path = 412335` |
| `variables_file_path` | `configs:sedai.io:variables_file_path = terraform/prod/prod-sls-1.tfvars` |
If you use GitLab, the `variables_file_path` does not need to include the project name. For example, if the project name is A and the variables file C is at location A/B/C, then you only need to include B/C as the value for `variables_file_path`.
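For context, the file referenced by `variables_file_path` is an ordinary Terraform variables file. A hypothetical `terraform/prod/prod-sls-1.tfvars` might look like the sketch below; the variable names and values are placeholders that line up with the Lambda examples that follow.

```hcl
# Hypothetical contents of terraform/prod/prod-sls-1.tfvars
memory_size = {
  "prod-sls-1" = 512
}

timeout = {
  "prod-sls-1" = 30
}
```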
AWS Lambda
| Parameter | Example tag |
| --- | --- |
| `memory_size` (MB) | `configs:sedai.io:memory_size = var.memory_size["prod-sls-1"]` |
| `timeout` (seconds) | `configs:sedai.io:timeout = var.timeout["prod-sls-1"]` |
| `reserved_concurrency` | `configs:sedai.io:reserved_concurrency = var.reserved_concurrency["prod-sls-1"]` |
| `provisioned_concurrency` | `configs:sedai.io:provisioned_concurrency = var.provisioned_concurrency["prod-sls-1"]` |
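To show how those tag values line up with the Terraform code, here is a sketch that assumes map-typed variables keyed by function name; the variable layout, role ARN, and package path are placeholders, not a required structure.

```hcl
# Hypothetical Terraform module that the tag values above point at.
variable "memory_size" {
  type = map(number)
}

variable "timeout" {
  type = map(number)
}

resource "aws_lambda_function" "prod_sls_1" {
  function_name = "prod-sls-1"
  role          = "arn:aws:iam::123456789012:role/lambda-exec"
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = "build/prod-sls-1.zip"

  # Values driven by the variables file referenced via variables_file_path
  memory_size = var.memory_size["prod-sls-1"]
  timeout     = var.timeout["prod-sls-1"]

  tags = {
    # Tell Sedai which Terraform expressions control each parameter
    "configs:sedai.io:memory_size" = "var.memory_size[\"prod-sls-1\"]"
    "configs:sedai.io:timeout"     = "var.timeout[\"prod-sls-1\"]"
  }
}
```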
AWS ECS
| Parameter | Example tag |
| --- | --- |
| `task_cpu` (units) | `configs:sedai.io:task_cpu = var.task_cpu["prod-app-1"]` |
| `task_memory` (MiB) | `configs:sedai.io:task_memory = var.task_memory["prod-app-1"]` |
| `desired_count` | `configs:sedai.io:desired_count = var.desired_count["prod-app-1"]` |
| `container.soft_memory` (MiB) | `configs:sedai.io:container.soft_memory = var.soft_memory["prod-app-1"]` |
| `container.hard_memory` (MiB) | `configs:sedai.io:container.hard_memory = var.hard_memory["prod-app-1"]` |
| `container.soft_cpu` (units) | `configs:sedai.io:container.soft_cpu = var.soft_cpu["prod-app-1"]` |
| `autoscaler_config.as_min_task` | `configs:sedai.io:autoscaler_config.as_min_task = var.as_min_task["prod-app-1"]` |
| `autoscaler_config.as_max_task` | `configs:sedai.io:autoscaler_config.as_max_task = var.as_max_task["prod-app-1"]` |
| `autoscaler_config.metric` | `configs:sedai.io:autoscaler_config.metric = var.metric["prod-app-1"]` |
| `autoscaler_config.target_value` | `configs:sedai.io:autoscaler_config.target_value = var.target_value["prod-app-1"]` |
Kubernetes
| Parameter | Example annotation |
| --- | --- |
| `replica_count` | `configs.sedai.io/replica_count = var.replica_count["prod-app-1"]` |
| `container.cpu_request` | `configs.sedai.io/container.cpu_request = var.cpu_request["prod-app-1"]` |
| `container.cpu_limit` | `configs.sedai.io/container.cpu_limit = var.cpu_limit["prod-app-1"]` |
| `container.memory_request` | `configs.sedai.io/container.memory_request = var.memory_request["prod-app-1"]` |
| `container.memory_limit` | `configs.sedai.io/container.memory_limit = var.memory_limit["prod-app-1"]` |
| `hpa_min_count` | `configs.sedai.io/hpa_min_count = var.hpa_min_count["prod-app-1"]` |
| `hpa_max_count` | `configs.sedai.io/hpa_max_count = var.hpa_max_count["prod-app-1"]` |
| `hpa_metric` | `configs.sedai.io/hpa_metric = var.hpa_metric["prod-app-1"]` |
| `hpa_target_value` | `configs.sedai.io/hpa_target_value = var.hpa_target_value["prod-app-1"]` |
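As a final sketch, a Deployment whose resource requests come from map-typed Terraform variables could carry matching annotations as shown below; the names, namespace, image, and variable layout are placeholders, not a required structure.

```hcl
# Hypothetical Deployment managed with the Terraform kubernetes provider.
variable "cpu_request" {
  type = map(string)
}

variable "memory_request" {
  type = map(string)
}

resource "kubernetes_deployment" "prod_app_1" {
  metadata {
    name      = "prod-app-1"
    namespace = "prod"
    annotations = {
      # Tell Sedai which Terraform expressions control each parameter
      "configs.sedai.io/container.cpu_request"    = "var.cpu_request[\"prod-app-1\"]"
      "configs.sedai.io/container.memory_request" = "var.memory_request[\"prod-app-1\"]"
    }
  }

  spec {
    selector {
      match_labels = { app = "prod-app-1" }
    }

    template {
      metadata {
        labels = { app = "prod-app-1" }
      }

      spec {
        container {
          name  = "app"
          image = "app:1.0"

          resources {
            requests = {
              cpu    = var.cpu_request["prod-app-1"]
              memory = var.memory_request["prod-app-1"]
            }
          }
        }
      }
    }
  }
}
```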