Optimization
A comprehensive guide to configuring optimization features in Sedai's platform. Improve cloud performance and reduce costs with automatic optimization settings.
Sedai analyzes traffic and usage data to generate optimal configurations for your cloud resources. Sedai offers different optimization mechanisms based on a resource's context, but generally identifies optimal configurations to reduce cloud spend and/or improve performance.
When configuring optimization settings, you can choose a setting to either view opportunities or allow Sedai to continuously optimize configurations with reinforcement learning. You can also fine tune optimization by defining specific goals based on a resource's purpose and context.
By default, optimization is set to Datapilot for all resource types.
Sedai supports optimization for the following resource types:
Serverless (AWS Lambda): Configures memory allocation based on goal; manages provisioned concurrency for $LATEST and versioned Lambdas to reduce the cost of cold starts (disabled by default)
AWS Containers (ECS/Fargate): Configures memory and CPU allocations (at the task and container level) based on goal
Kubernetes (Stateless Workloads): Configures memory and CPU allocations for workloads based on goal; manages Horizontal and Vertical Pod Autoscalers; configures node machines based on workload needs
Virtual Machines (AWS EC2, Azure VMs): Configures machine size and type to generate optimal CPU and memory allocations; support includes instances grouped by load balancers or tags
Storage (AWS EBS, AWS S3): Manages storage class, intelligent tiering, capacity provisioning, and volume sizing
Streaming (GCP Dataflow): Configures compute capacity at each stage of the data pipeline
Learn more about how Sedai optimizes each resource type in its dedicated documentation.
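As a rough illustration of the serverless actions listed above, the sketch below shows the equivalent changes made directly with boto3, the AWS SDK for Python. The function name, memory size, and concurrency values are placeholders; Sedai determines the actual values from observed traffic and your goal, and applies them through your cloud integration rather than through hand-written scripts.

```python
import boto3

# Placeholder values for illustration only; Sedai derives the actual values
# from observed traffic and your optimization goal.
FUNCTION_NAME = "orders-api"      # hypothetical function name
TARGET_MEMORY_MB = 1024           # rightsized memory allocation
PROVISIONED_CONCURRENCY = 5       # warm instances that reduce cold starts

lambda_client = boto3.client("lambda")

# Rightsize the function's memory allocation (on Lambda, CPU scales with memory).
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=TARGET_MEMORY_MB,
)
lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)

# Provisioned concurrency is attached to a published version (or alias),
# so publish the current code and configuration first.
version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)["Version"]
lambda_client.put_provisioned_concurrency_config(
    FunctionName=FUNCTION_NAME,
    Qualifier=version,
    ProvisionedConcurrentExecutions=PROVISIONED_CONCURRENCY,
)
```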
In addition to configuring memory and CPU allocations, Sedai also supports vertical and horizontal scaling for container-based resources such as AWS ECS/Fargate and Kubernetes workloads. These action types can be controlled independently from the Compute Actions section within settings.
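To make the distinction concrete, the sketch below shows what a vertical action (resizing CPU and memory requests) and a horizontal action (changing replica count) look like on a Kubernetes Deployment using the official Python client. The namespace, workload name, and values are hypothetical; Sedai executes equivalent changes for you based on the Compute Actions you enable.

```python
from kubernetes import client, config

# Hypothetical workload; Sedai derives the actual values from observed usage.
NAMESPACE = "default"
DEPLOYMENT = "checkout-service"

config.load_kube_config()
apps = client.AppsV1Api()

# Vertical action: resize CPU/memory requests and limits on a container.
vertical_patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{
                    "name": "app",  # must match the container name in the pod spec
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "512Mi"},
                        "limits": {"cpu": "500m", "memory": "1Gi"},
                    },
                }]
            }
        }
    }
}
apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, vertical_patch)

# Horizontal action: change the replica count.
apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, {"spec": {"replicas": 4}})
```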
Sedai also allows you to define parameters for node counts in your Kubernetes clusters. When generating optimization opportunities for clusters, Sedai takes into consideration any predefined minimum node count per node group and maximum node count across all node groups. These parameters can be defined under Cluster Constraints in settings.
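The snippet below is a simplified sketch of how such constraints could bound a node-count recommendation. The group names and limits are made up, and the proportional scale-down is just one possible strategy, not necessarily Sedai's exact behavior.

```python
# Hypothetical constraints; in Sedai these are set under Cluster Constraints.
MIN_NODES_PER_GROUP = 2   # minimum node count per node group
MAX_NODES_TOTAL = 20      # maximum node count across all node groups

def apply_cluster_constraints(recommended: dict) -> dict:
    """Clamp a per-node-group recommendation to the configured constraints."""
    # Enforce the per-group minimum first.
    bounded = {group: max(count, MIN_NODES_PER_GROUP)
               for group, count in recommended.items()}

    # If the cluster-wide total exceeds the maximum, scale the groups down
    # proportionally without dropping below the per-group minimum.
    total = sum(bounded.values())
    if total > MAX_NODES_TOTAL:
        scale = MAX_NODES_TOTAL / total
        bounded = {group: max(MIN_NODES_PER_GROUP, int(count * scale))
                   for group, count in bounded.items()}
    return bounded

print(apply_cluster_constraints({"general": 14, "memory-optimized": 10}))
# {'general': 11, 'memory-optimized': 8} -- under the 20-node cap, each group >= 2
```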
Sedai currently supports Autopilot for compute resources such as AWS Lambda, AWS ECS/Fargate, and Kubernetes stateless workloads. Support for storage and virtual machines is coming soon.
Sedai does not generate optimization opportunities for AWS Lambda in Datapilot. Because serverless functions are invoked on demand, their optimization requires reinforcement learning.
Autopilot: Automatically identifies and executes optimal configuration based on goal. Best for optimal results; allows Sedai to use reinforcement learning to continuously analyze resource usage and its impact on cost and performance.
Copilot: Analyzes usage data to identify opportunities to rightsize. Best for learning how Sedai works and exploring estimated savings; allows you to view potential configurations and optionally try them out.
By default, optimization is set to Datapilot for connected accounts and Kubernetes clusters. You can also modify this setting at the group and resource level.
We recommend connecting accounts and Kubernetes clusters with read-write access so that you can try out autonomous executions with optimization opportunities in Datapilot. If you start with read-only access, you will only be able to view potential configurations and will have to manually apply changes. View integration details for more on Sedai's read-write access.
Sedai currently supports goal-based optimization for AWS ECS/Fargate, AWS Lambda, and Kubernetes stateless workloads.
When it comes to optimization, your organization or team may have different goals for different cloud resources. You can configure goals at the account, cluster, group, or individual resource level. Goals are based on either optimally configuring the current build, intentionally reducing cost, or improving performance:
Balance: Balances cost and performance based on current configuration and observed usage and traffic (default for serverless functions)
Decrease cost: Identifies most cost-effective configuration based on observed usage and traffic (default for containers)
Improve performance: Identifies optimal configuration for lowest latency based on observed usage and traffic
For cost- or performance-based goals, you additionally need to define an increase allowance, since cost and performance inversely impact one another. This boundary determines how far Sedai can push toward either goal without negatively impacting the other.
Boundaries are defined as percent increases. For example, if you want to reduce cost for a Lambda function, you can direct Sedai to find the optimal configuration without increasing its current average latency by more than 10%. Or if you want to prioritize performance for an ECS service, you can ask Sedai to optimize for performance without increasing the service's current average monthly spend by more than 20%. This allows you to fine tune optimizations based on your team's context and needs.
Enter a boundary of 0% if you want to optimize your goal without impacting the current cost or performance.
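The sketch below walks through the arithmetic with made-up numbers: under a cost-reduction goal with a 10% latency allowance, the cheapest candidate configuration whose average latency stays within 10% of the current baseline is selected. The candidate table is purely illustrative; Sedai generates real candidates from observed usage and traffic.

```python
# Made-up candidates for a Lambda function with a "decrease cost" goal and a
# 10% latency increase allowance.
BASELINE_LATENCY_MS = 120.0
LATENCY_INCREASE_ALLOWANCE = 0.10   # "no more than a 10% latency increase"

candidates = [
    {"memory_mb": 512,  "monthly_cost": 18.0, "avg_latency_ms": 150.0},
    {"memory_mb": 768,  "monthly_cost": 24.0, "avg_latency_ms": 128.0},
    {"memory_mb": 1024, "monthly_cost": 31.0, "avg_latency_ms": 119.0},  # current
]

latency_ceiling = BASELINE_LATENCY_MS * (1 + LATENCY_INCREASE_ALLOWANCE)  # 132 ms

# Cheapest configuration whose average latency stays within the allowance.
eligible = [c for c in candidates if c["avg_latency_ms"] <= latency_ceiling]
best = min(eligible, key=lambda c: c["monthly_cost"])
print(best)  # 768 MB wins: 512 MB is cheaper but breaches the 132 ms ceiling
```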
Goals can be defined in Datapilot and Autopilot. Datapilot will help you get an idea of potential savings and improvements, but for best results we recommend using Autopilot.
You can optionally configure optimization settings using tags.
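For example, cloud resources can carry tags that map to optimization settings. The tag key and value below are placeholders only (refer to Sedai's documentation for the actual tag conventions); the snippet simply shows how such a tag would be applied to a Lambda function with boto3.

```python
import boto3

lambda_client = boto3.client("lambda")

# The tag key and value are placeholders -- consult Sedai's documentation for
# the actual tag conventions used to control optimization settings.
lambda_client.tag_resource(
    Resource="arn:aws:lambda:us-east-1:123456789012:function:orders-api",
    Tags={"example-optimization-setting": "autopilot"},
)
```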