Connect Monitoring Data
Learn how to connect monitoring data to Sedai's autonomous cloud management platform. Monitor, manage, and optimize your cloud infrastructure effortlessly.
Sedai requires access to your APM/observability providers in order to learn resource behavior and train its machine learning models. Each cloud integration needs a monitoring source in order for Sedai to generate recommendations or optionally operate on your resources. You will be prompted to add a monitoring source when integrating a new account or Kubernetes cluster, but you can add or modify a monitoring source at any time by navigating to the Settings > Integrations page and then selecting the corresponding account or cluster.
By default, Sedai automatically detects and imports standard metrics for CPU, memory, traffic, performance, and errors (view a sample list of default metrics here). The system primarily relies on performance metrics; however, each category is required for the system to run effectively and produce the best recommendations. You can view a list of imported metrics from the Settings > Metrics page. You can filter the list by the parent account/Kubernetes cluster as well as by the monitoring provider.
It typically takes 7 to 14 days for Sedai to examine monitoring data and generate its preliminary analysis; however, you can optionally backfill metric data to see insights sooner. You can backfill data via a checkbox when adding a new monitoring integration or from its edit screen.
If you don't want to pull specific metrics Sedai imported, you can toggle them off from the Metrics page. If you don't want to pull metrics for all resources, you can optionally disable feature settings for specific resources. Learn more about configuring settings.
If your organization uses different metrics than the defined defaults (or custom metrics), contact support@sedai.io for help with advanced setup to map your metrics into Sedai.
In addition to learning resource behavior, Sedai also leverages APM data to decide if it can safely execute operations on resources. If Sedai is unable to detect relevant performance metrics from a monitoring integration, you can optionally configure settings to use an expedited operations safety workflow to safely relax Sedai's safety checks (learn more).
Select a provider to view integration details:
Controller Name
Controller Endpoint
Client ID
Client Secret Key
Once the controller name and endpoint URL are added, an API client is created to provide secure access to the Controller; the Client ID and Client Secret are then issued.
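Behind the scenes, this follows AppDynamics' OAuth client-credentials flow, in which the API client exchanges its Client ID and Client Secret for an access token. The sketch below shows how such a token request is assembled; the controller URL, account name, and credential values are placeholders, and Sedai performs this handshake for you.

```python
from urllib.parse import urlencode

def build_token_request(controller_endpoint, account_name, client_id, client_secret):
    """Build an OAuth token request for an AppDynamics API client.

    AppDynamics expects the client ID in the form <client>@<account>.
    """
    url = f"{controller_endpoint.rstrip('/')}/controller/api/oauth/access_token"
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": f"{client_id}@{account_name}",
        "client_secret": client_secret,
    })
    return url, headers, body

# Placeholder values for illustration only:
url, headers, body = build_token_request(
    "https://example.saas.appdynamics.com", "myaccount", "sedai-client", "s3cret")
```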
Load Balancer
load_balancer_name
Application
application_id
Instance ID
instance_id
Contact support@sedai.io for help integrating.
By default, Sedai connects to CloudWatch using the same credentials as the corresponding AWS account. You can alternatively connect CloudWatch independently of the account credentials by configuring IAM access.
Load Balancer
load_balancer_name
Application
application_id
Instance ID
instance_id
By default, memory metrics are not available for EC2. We recommend enabling these metrics so Sedai can determine optimal EC2 configurations. You can enable them by installing the CloudWatch agent via the command line or AWS Systems Manager, or by installing the agent on new instances using a CloudFormation template. To learn more about using the CloudWatch agent for memory metrics, visit the CloudWatch Agent docs.
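For reference, a minimal CloudWatch agent configuration that collects memory utilization looks roughly like this; it is a sketch of the agent's config file, and you may want additional measurements or a custom collection interval:

```json
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}
```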
Sedai uses these metrics to generate optimal configurations for cost savings or performance improvements.
You will need to authenticate your Chronosphere account using one of the following methods:
Basic Auth (requires username and password)
JWT Credentials (requires token)
OIDC Client Provider (requires Token URL, Client ID, and Client Secret)
No authentication
Endpoint
Certificate Authority
(Optional) PEM format accepted; chain of certificates supported
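Sedai performs the corresponding handshake for whichever method you choose. For reference, the Basic Auth and JWT methods boil down to standard HTTP Authorization headers, sketched below; this is illustrative only, not Sedai's internal implementation.

```python
import base64

def basic_auth_header(username, password):
    """Standard HTTP Basic Auth header: base64 of "username:password"."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def bearer_auth_header(jwt_token):
    """Bearer header used for JWT (and OIDC-issued) tokens."""
    return {"Authorization": f"Bearer {jwt_token}"}
```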
If you connect multiple AWS accounts or Kubernetes clusters to Sedai, you will need to add Datadog to each from its respective integration page.
Endpoint
Based on your site's location, select one of the following:
API Key
Application Key
Cluster
cluster
Namespace
destination_service_namespace
, kube_namespace
, namespace
Load Balancer
load_balancer_name
Application
destination_workload
, service
, kube_app_name
Container
container_name
Pod
pod_name
If you're not sure how tag values have been configured within your account, you can view your unique metrics with the following APIs:
To learn more about configuring custom tags, please visit Datadog Docs.
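One such API is Datadog's list-active-metrics endpoint (GET /api/v1/metrics), which returns every metric actively reported since a given timestamp. A minimal sketch of the request follows; the site endpoint and key values are placeholders.

```python
import time
from urllib.parse import urlencode

def build_active_metrics_request(site_endpoint, api_key, app_key, hours=24):
    """Build a request for Datadog's list-active-metrics endpoint,
    which returns metrics reported since the `from` timestamp."""
    params = urlencode({"from": int(time.time()) - hours * 3600})
    url = f"{site_endpoint.rstrip('/')}/api/v1/metrics?{params}"
    headers = {"DD-API-KEY": api_key, "DD-APPLICATION-KEY": app_key}
    return url, headers

# Placeholder keys for illustration only:
url, headers = build_active_metrics_request(
    "https://api.datadoghq.com", "API_KEY", "APP_KEY")
```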
Project ID
Service Account JSON
Region
location
Load Balancer
load_balancer_name
Application
application_id
Instance
instance_id
Requires an endpoint.
Namespace
namespace
Load Balancer
load_balancer_name
Application
application_id
Pod
pod
Container
container
API Server
API Server URL
Namespace
exporter_namespace
, namespace
Load Balancer
load_balancer_name
, entity.name
Application
application_id
, appName
, entity.name
Pod
instance_id
, k8s.podName
, k8s.pod.name
Container
container
, k8s.containerName
, k8s.container.name
If Prometheus is your primary monitoring source, you will need to connect it to each Kubernetes cluster added to Sedai.
Sedai can receive monitoring data from multiple Prometheus instances running on-premise or public cloud.
To connect to Prometheus, you will need to provide its Endpoint. By default, Sedai does not require authentication since it assumes it can connect to Prometheus within the Kubernetes control plane via the Sedai Smart Agent.
You can authenticate Prometheus using the following methods:
Basic Auth (requires username and password)
JWT Credentials (requires token)
OIDC Client Credentials (requires token endpoint, client ID, and client secret)
OIDC Resource Owner Password (requires token, client ID, client secret, username, and password)
You can optionally provide a custom Certificate Authority (CA) if you use an https connection while communicating with your Prometheus endpoint.
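If you want to confirm the endpoint is reachable before adding it, you can issue an instant query against the standard Prometheus HTTP API yourself. A minimal sketch follows; the in-cluster service URL is an assumed example.

```python
from urllib.parse import urlencode

def build_instant_query(endpoint, promql):
    """Build an instant-query URL against the Prometheus HTTP API
    (GET /api/v1/query)."""
    return f"{endpoint.rstrip('/')}/api/v1/query?{urlencode({'query': promql})}"

# e.g. check that scrape targets report as up (example in-cluster URL):
url = build_instant_query("http://prometheus.monitoring.svc:9090", "up")
```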
Namespace
exporter_namespace
, envoy
, namespace
, destination_service_namespace
Load Balancer
service
Application
application_id
Pod
pod
Container
container
Region
region
Availability Zones
availability_zone
Operating System
os
Architecture
architecture
Instance Type
instance_type
API Server URL
Example format: https://api.YOUR_SIGNALFX_REALM.signalfx.com
API Key
To generate an API access token, please visit Splunk Docs.
Namespace
namespace
Load Balancer
load_balancer_name
Application
application_id
Pod
kubernetes_pod_name
Container
container_spec_name
You can additionally map tags for region, availability zone, SignalFlow Programs, and metric name wildcards.
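The realm-based endpoint format shown above can be assembled programmatically. A small sketch, where the realm and path values are placeholders:

```python
def build_signalfx_api_url(realm, path):
    """Build a Splunk Observability (SignalFx) API URL using the
    realm-based format https://api.<realm>.signalfx.com."""
    return f"https://api.{realm}.signalfx.com/{path.lstrip('/')}"
```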
API Key
API Server
This is your <wavefront_instance> URL (such as https://longboard.wavefront.com/).
Testing the Wavefront connection is not currently supported within Sedai. However, if the API key is valid, the integration will connect.
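If you want to validate the key outside of Sedai, the Wavefront REST API accepts it as a bearer token. A minimal sketch, with the instance URL taken from the example above and an assumed API path:

```python
def build_wavefront_request(instance_url, path, api_key):
    """Build an authenticated request for the Wavefront REST API,
    which uses the API key as a bearer token."""
    url = f"{instance_url.rstrip('/')}/api/v2/{path.lstrip('/')}"
    headers = {"Authorization": f"Bearer {api_key}"}
    return url, headers

# Example with the instance URL shown above and a placeholder key:
url, headers = build_wavefront_request(
    "https://longboard.wavefront.com/", "source", "API_KEY")
```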
Wavefront dimensions can vary by product version. To find your unique Wavefront dimensions, go to the Browse tab in your Wavefront platform and search for kubernetes.pod_container.cpu.usage_rate metrics in the table view.
This will give an output similar to the example below. Note the following code lines for examples of different dimension values:
Line 6: Container dimension: container_name
Line 10: Application (Kubernetes Workload) dimension: label.app.kubernetes.io/name
, label.app
, label.name
Line 17: Load balancer (typically service) dimension: label.io.kompose.service
Line 28: Pod dimension: pod_name
To add custom metrics such as latency or traffic, you will also need to define their dimensions. For example, if NGINX or Envoy is the ingress controller, Sedai supports their ingress metrics.
In the example below, the load balancer dimension for these metrics is service
in the first code block (line 16) and envoy_cluster_name
in the second code block (line 4).
Namespace
namespace_name
Load Balancer
load_balancer_name
, envoy_cluster_name
, label.io.kompose.service
, service
Application
application_id
, label.app.kubernetes.io/name
, label.app
, label.name
Pod
pod
, pod_name
Container
container
, container_name
Sedai makes regular API calls to your monitoring providers to inform its ML models. To view the charges these calls incur, go to the System > API Meter page and select a timeframe. By default, charges are broken down by account/Kubernetes cluster. Select one to view total calls and cost for the account/cluster's connected monitoring source.
US1:
EU1:
Go to your . From the projects list, select Manage all projects. The names and IDs for all the projects you're a member of are displayed.
You will need to create a new service account.