Learn more about the autonomous cloud platform and how to set up your account and connect to Sedai for the first time.
Sedai is an autonomous cloud management platform that acts as an intelligent autopilot for SREs and DevOps teams. It proactively manages production environments to prevent availability issues and improve performance and cloud costs.
Using an agentless approach, Sedai connects to your cloud and independently detects, prioritizes, and analyzes data to identify opportunities to act safely in production, as well as to provide deep contextual performance insights. The platform continuously learns from production behavior to refine its intelligence models and doesn't require manual thresholds.


If you need help during the onboarding process, contact our team at [email protected] or join Sedai on Slack to chat with the community.
Sedai supports AWS Lambda and AWS Elastic Kubernetes Service (EKS). Setup takes a few minutes and requires the following:
  1. Set up IAM for Sedai (EKS & Lambda)
  2. Connect Sedai to AWS
  3. Connect Sedai to monitoring data
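As a sketch of the first step, agentless platforms of this kind are typically granted access through a cross-account IAM role that trusts the vendor's AWS account and requires an external ID. The account ID and external ID below are placeholders, and the exact trust policy and permissions to attach come from Sedai's setup instructions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "example-external-id" }
      }
    }
  ]
}
```

The external ID condition prevents the confused-deputy problem: only sessions that present the agreed ID can assume the role.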
By default, Sedai collects CloudWatch data once you connect your AWS account, but it also supports the following monitoring providers:
  • Prometheus
  • Datadog
  • AppDynamics
  • New Relic
  • Dynatrace
  • Splunk / Signal FX
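For example, a service that exposes Prometheus-style metrics can be read over HTTP with no agent installed. The sketch below uses the Python `prometheus_client` library to render a metric in the exposition format a scraper would fetch; the metric name is illustrative:

```python
from prometheus_client import Counter, CollectorRegistry, generate_latest

# Use a dedicated registry so the example is self-contained.
registry = CollectorRegistry()

# A counter that a scraping platform could read from a /metrics endpoint.
requests_total = Counter(
    "app_requests_total", "Total HTTP requests served", registry=registry
)
requests_total.inc(3)

# generate_latest renders the registry in the Prometheus text format.
print(generate_latest(registry).decode())
```

In a real service you would serve this text from an HTTP endpoint (e.g. via `prometheus_client.start_http_server`) rather than printing it.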
The free account includes a limited number of monthly Lambda invocations as well as Kubernetes pods. Learn more about Sedai's pricing options for larger production environments.

How it works

Sedai's intelligence is based on a continuous learning loop.


Sedai uses Identity and Access Management (IAM) to access your cloud accounts so that it can understand your infrastructure and detect your production environment's topology. It securely connects to your monitoring data via endpoints or APIs, and automatically identifies and prioritizes metrics.
During initial setup, the discovery process takes at least half an hour, depending on the size of your production environment. Once complete, Sedai displays all identified resources: your serverless functions and applications. Sedai continuously uses discovery to monitor your topology and detect changes to the infrastructure.
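As a toy illustration (not Sedai's internal representation), a discovered topology can be modeled as resources plus their dependency edges, which reduces change detection between discovery runs to a set comparison:

```python
# Hypothetical, simplified topology model: resource names mapped to
# the set of resources they depend on.
def diff_topology(old: dict[str, set[str]], new: dict[str, set[str]]) -> dict:
    """Compare two discovery snapshots and report what changed."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    rewired = sorted(
        name for name in set(old) & set(new) if old[name] != new[name]
    )
    return {"added": added, "removed": removed, "rewired": rewired}

before = {"checkout-fn": {"orders-db"}, "api": {"checkout-fn"}}
after = {
    "checkout-fn": {"orders-db", "cache"},
    "api": {"checkout-fn"},
    "audit-fn": set(),
}

print(diff_topology(before, after))
```

Here the second snapshot reports a new function and a rewired dependency, the kind of infrastructure change continuous discovery is meant to surface.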


Sedai intelligently analyzes monitoring metrics, traffic patterns, and resource dependencies to deeply understand performance in production. By overlaying this data, Sedai independently determines expected resource performance on a granular, seasonality-based level without manual thresholds. This in-depth analysis informs its decision models so that Sedai can detect unhealthy symptoms early on.
When Sedai initially digests raw production data, it takes about 14 days for the system to fully understand performance trends and build its predictive analytics. As it evolves its intelligence models, Sedai automatically defines service level objective (SLO) targets based on its observations.
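As a simplified illustration of seasonality-based baselining (Sedai's actual models are far more sophisticated), expected behavior can be derived per time slot from history rather than from a single static threshold:

```python
from statistics import mean, stdev

def seasonal_baseline(samples: list[tuple[int, float]]) -> dict[int, tuple[float, float]]:
    """Compute per-hour-of-day mean and standard deviation of a metric."""
    by_hour: dict[int, list[float]] = {}
    for hour, value in samples:
        by_hour.setdefault(hour % 24, []).append(value)
    return {h: (mean(vs), stdev(vs) if len(vs) > 1 else 0.0)
            for h, vs in by_hour.items()}

def is_anomalous(baseline, hour, value, k=3.0):
    """Flag values more than k standard deviations from that hour's norm."""
    mu, sigma = baseline[hour % 24]
    return abs(value - mu) > k * max(sigma, 1e-9)

# Latency samples (hour of day, milliseconds): nightly traffic is quiet,
# daytime traffic is busier, so one static threshold would fit neither.
history = [(2, 40.0), (2, 42.0), (2, 41.0),
           (14, 120.0), (14, 118.0), (14, 123.0)]
baseline = seasonal_baseline(history)
print(is_anomalous(baseline, 2, 95.0))  # anomalous at 2am, normal at 2pm
```

The same 95 ms reading would be unremarkable during the 2pm peak but stands out against the quiet 2am baseline, which is the point of seasonality-aware detection.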


Sedai leverages its in-depth analysis of performance behavior to identify opportunities to optimize a resource for availability, performance, and cost.
Sedai generates Proactive Actions only when it can guarantee safe execution in production. By default, Sedai recommends Actions so you can review them first and decide whether Sedai should execute them, but you can also give Sedai permission to execute Actions autonomously on your behalf. Actions may be generated in response to a time-specific availability issue, or when Sedai determines it can improve a resource's overall duration or cost. If Sedai detects unusual behavior, it will also alert you to manually review a resource's performance.
In addition to Actions, Sedai also generates Insights based on trend analysis. For example, Sedai automatically detects code deployments and creates a scorecard that grades the resource's performance post-release.
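As a toy version of that idea (the grading bands here are hypothetical, not Sedai's), a post-release scorecard can compare a metric's distribution before and after a detected deployment:

```python
from statistics import mean

def release_grade(before_ms: list[float], after_ms: list[float]) -> str:
    """Grade a release by relative change in mean latency (illustrative bands)."""
    change = (mean(after_ms) - mean(before_ms)) / mean(before_ms)
    if change <= 0.0:
        return "A"   # latency improved or held steady
    if change <= 0.10:
        return "B"   # mild regression
    return "C"       # significant regression, worth a manual review

print(release_grade([100.0, 105.0, 95.0], [92.0, 94.0, 90.0]))  # improved -> "A"
```

A real scorecard would combine several signals (errors, duration, cost) rather than a single metric, but the before/after comparison keyed to a detected deployment is the core mechanic.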

Learning Loop

Sedai continuously learns from production behavior to evolve its intelligence models. If Sedai is granted permission to act in production, it evaluates the outcome and determines if and how the system can continually improve its symptom detection and Actions.