Connect Monitoring Data

Sedai will automatically discover your cloud topology, but additionally needs access to your APM/observability providers in order to learn resource behavior.
Each cloud account or Kubernetes cluster integration needs at least one monitoring data source so Sedai can learn and analyze performance behavior. You will be prompted to add a monitoring source when integrating a new account or Kubernetes cluster, but you can add or modify a monitoring source at any time by navigating to the Settings > Integrations page and selecting the corresponding account or cluster.
Once you integrate a monitoring source, Sedai automatically identifies your Golden Signals and prioritizes metric data so it can start analyzing resource behavior. It typically takes 7-14 days for Sedai to examine monitoring data and generate its preliminary analysis.
If your organization has configured custom metrics, contact [email protected] for help importing.
Select a provider to view integration details:

Advanced Setup

Tag Mapping

Sedai uses tags to map metrics to your infrastructure so it can determine which metrics relate to the resources discovered within a connected account or cluster.
By default, Sedai pre-populates standard tag values based on industry best practices for naming conventions. However, if you have configured custom tags, you will need to provide the exact tag values used.
You can view default tags under each monitoring provider on this page.

Instance ID Pattern

If your instance IDs do not follow standard Kubernetes patterns, you will need to define the unique identifier or prefix used. Supported patterns include:
  • ${instanceID}
  • ${instanceName}
  • ${appID}
  • ${appName}
  • ${regionID}
  • ${loadBalancerId}
  • ${kubeNamespace}
  • ${kubeClusterName}
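As an illustration, Python's string.Template happens to use the same ${...} syntax, so a minimal sketch of how a custom pattern built from the variables above resolves (the pattern and values below are hypothetical, not Sedai defaults) looks like this:

from string import Template

# Hypothetical custom pattern built from the supported variables above.
pattern = Template("${kubeClusterName}-${kubeNamespace}-${instanceID}")

# Example values -- substitute the identifiers your monitoring provider reports.
resolved = pattern.substitute(
    kubeClusterName="prod-cluster",
    kubeNamespace="payments",
    instanceID="i-0abc123def456",
)
print(resolved)  # prod-cluster-payments-i-0abc123def456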

Supported Providers

AppDynamics

  • Controller Name
  • Controller Endpoint
  • Client ID
  • Client Secret Key
Once the controller name and endpoint URL are added, an API client is created to provide secure access to the Controller; the Client ID and Client Secret are then issued for that client.
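If you want to confirm the API client works before adding it to Sedai, you can request an OAuth token from the Controller yourself. The sketch below assumes the standard AppDynamics client-credentials flow; the controller URL, client name, account name, and secret are placeholders.

import requests

CONTROLLER = "https://mycompany.saas.appdynamics.com"  # Controller Endpoint
CLIENT_ID = "sedai-client@mycompany"                   # <apiClientName>@<accountName>
CLIENT_SECRET = "<your_client_secret>"                 # Client Secret Key

# Request a bearer token; a 200 response confirms the client credentials are valid.
resp = requests.post(
    f"{CONTROLLER}/controller/api/oauth/access_token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=30,
)
resp.raise_for_status()
print("Token received, expires in", resp.json().get("expires_in"), "seconds")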

Tag Mapping

Cloud Account
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Instance ID: instance_id

Kubernetes Cluster
  • Namespace: namespace
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Container: container_name
  • Pod: pod_name

Azure Monitor

Contact [email protected] for help integrating.

CloudWatch

By default, Sedai connects to CloudWatch using the same credentials as the corresponding AWS account. You can alternatively connect CloudWatch independently of the account credentials by configuring IAM access.

Tag Mapping

  • Load Balancer: load_balancer_name
  • Application: application_id
  • Instance ID: instance_id
By default, memory metrics are not available for EC2. We recommend enabling these metrics so Sedai can identify optimal EC2 configurations. You can enable them by installing the CloudWatch agent using the command line or AWS Systems Manager, or by installing the agent on new instances using a CloudFormation template. To learn more about using the CloudWatch agent for memory metrics, visit the CloudWatch Agent docs.
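If you are unsure whether the agent is already publishing memory metrics, a quick way to check is to list them with boto3. This sketch assumes the agent's default CWAgent namespace and mem_used_percent metric name; adjust the region and names if you customized the agent configuration.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# List metrics published by the CloudWatch agent under its default namespace.
paginator = cloudwatch.get_paginator("list_metrics")
found = []
for page in paginator.paginate(Namespace="CWAgent", MetricName="mem_used_percent"):
    found.extend(page["Metrics"])

if found:
    print(f"Memory metrics reported for {len(found)} dimension sets.")
else:
    print("No CWAgent memory metrics found -- install or enable the CloudWatch agent.")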

Chronosphere

You will need to authenticate your Chronosphere account using one of the following methods:
  • Basic Auth (requires username and password)
  • JWT Credentials (requires token)
  • OIDC Client Provider (requires Token URL, Client ID, and Client Secret)
  • No authentication
  • Endpoint
  • Certificate Authority (optional): PEM format accepted; certificate chains are supported

Datadog

If you connect multiple AWS accounts or Kubernetes clusters to Sedai, you will need to add Datadog to each from its respective integration page.
  • Endpoint: select the endpoint that matches your Datadog site's location (for example, https://api.datadoghq.com for US1 or https://api.datadoghq.eu for EU)
  • API Key
  • Application Key

Tag Mapping

Kubernetes Cluster
  • Cluster: cluster
  • Namespace: destination_service_namespace, kube_namespace, namespace
  • Load Balancer: load_balancer_name
  • Application: destination_workload, service, kube_app_name
  • Container: container_name
  • Pod: pod_name

Cloud Account
  • Load Balancer: load_balancer_name, targetgroup
  • Application: application_id
  • Instance: instance_id
If you're not sure how tag values have been configured within your account, you can view your unique metrics with the following APIs:
  • APM Metrics: https://api.datadoghq.com/api/v2/metrics/trace.http.request.duration/all-tags
  • CPU Metrics (ECS only): https://api.datadoghq.com/api/v2/metrics/ecs.fargate.cpu.usage/all-tags
To learn more about configuring custom tags, please visit Datadog Docs.
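If you prefer to script the check, the sketch below calls the APM metric endpoint above with the requests library. The API and application keys are placeholders; the response shape follows Datadog's documented v2 metrics API at the time of writing.

import requests

resp = requests.get(
    "https://api.datadoghq.com/api/v2/metrics/trace.http.request.duration/all-tags",
    headers={
        "DD-API-KEY": "<your_api_key>",
        "DD-APPLICATION-KEY": "<your_application_key>",
    },
    timeout=30,
)
resp.raise_for_status()

# Each entry is a tag attached to the metric, e.g. "kube_namespace:payments".
tags = resp.json().get("data", {}).get("attributes", {}).get("tags", [])
for tag in tags:
    print(tag)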

Google Monitoring

  • Project ID: go to your API Console and select Manage all projects from the projects list; the names and IDs of all the projects you're a member of are displayed.
  • Service Account JSON: you will need to create a new service account.
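Before handing the service account JSON to Sedai, you can verify it has monitoring read access. This sketch uses the google-cloud-monitoring client library; the project ID and key path are placeholders.

from google.oauth2 import service_account
from google.cloud import monitoring_v3

PROJECT_ID = "my-gcp-project"
KEY_PATH = "service-account.json"

creds = service_account.Credentials.from_service_account_file(KEY_PATH)
client = monitoring_v3.MetricServiceClient(credentials=creds)

# Listing metric descriptors succeeds only if the key can read monitoring data.
descriptors = client.list_metric_descriptors(name=f"projects/{PROJECT_ID}")
for i, descriptor in enumerate(descriptors):
    print(descriptor.type)
    if i >= 4:
        break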

Tag Mapping

Cloud Account
  • Region: location
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Instance: instance_id

Kubernetes Cluster
  • Cluster: cluster_name
  • Region: location
  • Namespace: namespace_name
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Pod: pod_name
  • Container: container_name

Netdata

Requires an endpoint.
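To confirm the endpoint is reachable before adding it, you can hit the Netdata agent's info API. The endpoint below is a placeholder; /api/v1/info is the agent's standard info route.

import requests

ENDPOINT = "http://netdata.example.internal:19999"  # your Netdata endpoint

resp = requests.get(f"{ENDPOINT}/api/v1/info", timeout=15)
resp.raise_for_status()
print("Netdata version:", resp.json().get("version"))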

Tag Mapping

Kubernetes Cluster
  • Namespace: namespace
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Pod: pod
  • Container: container

Cloud Account
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Instance: instance_id

New Relic

  • API Server
  • API Server URL

Tag Mapping

Kubernetes Cluster
  • Namespace: exporter_namespace, namespace
  • Load Balancer: load_balancer_name, entity.name
  • Application: application_id, appName, entity.name
  • Pod: instance_id, k8s.podName, k8s.pod.name
  • Container: container, k8s.containerName, k8s.container.name

Cloud Account
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Instance: instance_id
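If you want to validate the New Relic API key yourself before adding it to Sedai, a small NerdGraph query is one way to do it. The sketch below assumes a US account (EU accounts use https://api.eu.newrelic.com/graphql) and a placeholder user key.

import requests

resp = requests.post(
    "https://api.newrelic.com/graphql",
    headers={"API-Key": "<your_user_api_key>", "Content-Type": "application/json"},
    json={"query": "{ actor { user { name email } } }"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("data"))  # a valid key returns the calling user's details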

Prometheus

If Prometheus is your primary monitoring source, you will need to connect it to each Kubernetes cluster added to Sedai.
Sedai can receive monitoring data from multiple Prometheus instances running on premises or in the public cloud.
To connect to Prometheus, you will need to provide its Endpoint. By default, Sedai does not require authentication since it assumes it can connect to Prometheus within the Kubernetes control plane via the Sedai Smart Agent.
You can authenticate Prometheus using the following methods:
  • Basic Auth (requires username and password)
  • JWT Credentials (requires token)
  • OIDC Client Credentials (requires token endpoint, client ID, and client secret)
  • OIDC Resource Owner Password (requires token, client ID, client secret, username, and password)

Certificate Authority

You can optionally provide a custom Certificate Authority (CA) if you use an HTTPS connection to communicate with your Prometheus endpoint.
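You can verify the endpoint, credentials, and CA bundle you plan to give Sedai with a query against Prometheus's standard HTTP API. The endpoint, credentials, and CA path below are placeholders; drop the auth argument if your Prometheus is unauthenticated.

import requests

ENDPOINT = "https://prometheus.example.internal:9090"
CA_BUNDLE = "custom-ca.pem"  # path to your custom CA; use verify=True for a public CA

resp = requests.get(
    f"{ENDPOINT}/api/v1/query",
    params={"query": "up"},
    auth=("sedai", "<password>"),  # Basic Auth example; remove if not required
    verify=CA_BUNDLE,
    timeout=30,
)
resp.raise_for_status()
body = resp.json()
print(body["status"], "-", len(body["data"]["result"]), "series returned")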

Tag Mapping

  • Namespace: exporter_namespace, envoy, namespace, destination_service_namespace
  • Load Balancer: service
  • Application: application_id
  • Pod: pod
  • Container: container
  • Region: region
  • Availability Zones: availability_zone
  • Operating System: os
  • Architecture: architecture
  • Instance Type: instance_type

Splunk (previously SignalFx)

  • API Server URL: example format: https://api.YOUR_SIGNALFX_REALM.signalfx.com
  • API Key: to generate an API access token, please visit the Splunk docs.
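To sanity-check the token outside Sedai, you can call the metadata API on your realm's API server. The realm and token below are placeholders; X-SF-TOKEN is the standard SignalFx auth header.

import requests

API_SERVER = "https://api.us1.signalfx.com"  # https://api.YOUR_SIGNALFX_REALM.signalfx.com
ACCESS_TOKEN = "<your_access_token>"

resp = requests.get(
    f"{API_SERVER}/v2/metric",
    headers={"X-SF-TOKEN": ACCESS_TOKEN},
    params={"limit": 5},
    timeout=30,
)
print(resp.status_code)  # 200 means the token can read metric metadata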

Tag Mapping

Kubernetes Cluster
  • Namespace: namespace
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Pod: kubernetes_pod_name
  • Container: container_spec_name
You can additionally map tags for region, availability zone, SignalFlow Programs, and metric name wildcards.

Cloud Account
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Instance: instance_id

Wavefront

  • API Key
  • API Server: this is your <wavefront_instance> URL, such as https://longboard.wavefront.com/
Testing the Wavefront connection is not currently supported within Sedai. However, if the API key is valid, the integration will connect.
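Since the connection can't be tested from within Sedai, you can verify the key yourself against the Wavefront REST API. The instance URL and key below are placeholders; a 200 response means the key is valid.

import requests

API_SERVER = "https://longboard.wavefront.com"  # your <wavefront_instance> URL
API_KEY = "<your_api_key>"

resp = requests.get(
    f"{API_SERVER}/api/v2/source",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"limit": 1},
    timeout=30,
)
print(resp.status_code)  # 200 -> valid key; 401 -> invalid or expired key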

Defining dimensions

Wavefront dimensions can vary by product version. To find your unique Wavefront dimensions, go to the Browse tab from your Wavefront platform and search for kubernetes.pod_container.cpu.usage_rate metrics in the table view.
This will give an output similar to the example below. Note the following code lines for examples of different dimension values:
  • Line 5: Container dimension: container_name
  • Line 10: Application (Kubernetes Workload) dimension: label.app.kubernetes.io/name, label.app, label.name
  • Line 17: Load balancer (typically service) dimension: label.io.kompose.service
  • Line 28: Pod dimension: pod_name
Example
 1  kubernetes.pod_container.cpu.usage_rate: -
 2  Source: ip-192-168-185-37.us-east-2.compute.internal
 3  cluster: sedaivector
 4  container_base_image: nvanderhoeven/pokeapi_app
 5  container_name: pokeapi-app
 6  label.app: -
 7  label.app.kubernetes.io/component: -
 8  label.app.kubernetes.io/instance: -
 9  label.app.kubernetes.io/managed-by: -
10  label.app.kubernetes.io/name: -
11  label.app.kubernetes.io/version: -
12  label.chart: -
13  label.component: -
14  label.eks.amazonaws.com/component: -
15  label.helm.sh/chart: -
16  label.heritage: -
17  label.io.kompose.service: app
18  label.k8s-app: -
19  label.mode: -
20  label.name: -
21  label.olm.catalogSource: -
22  label.plane: -
23  label.release: -
24  label.statefulset.kubernetes.io/pod-name: -
25  label.vizier-name: -
26  namespace_name: poki-test-1
27  nodename: ip-192-168-185-37.us-east-2.compute.internal
28  pod_name: app-5b87b74bc6-xd9sj
29  type: pod_container
30  Value: 1.200k

Custom metrics

To add custom metrics such as latency or traffic, you will also need to define their dimensions. For example, if NGINX or Envoy is used as the ingress controller, Sedai supports NGINX/Envoy ingress metrics.
In the example below, the load balancer dimension for these metrics is service in the first code block (line 16) and envoy_cluster_name in the second code block (line 4).
Example
 1  nginx.ingress.controller.requests.counter: -
 2  Source: ip-192-168-185-37.us-east-2.compute.internal
 3  _host: hello-test.info
 4  cluster: sedaivector
 5  controller_class: k8s.io/ingress-nginx
 6  controller_namespace: ingress-nginx
 7  controller_pod: ingress-nginx-controller-7dfdd55674-zjnrr
 8  ingress: opt-ingress
 9  label.app.kubernetes.io/component: controller
10  label.app.kubernetes.io/instance: ingress-nginx
11  label.app.kubernetes.io/name: ingress-nginx
12  method: GET
13  namespace: optimization-test
14  path: /pyroglyph
15  pod: ingress-nginx-controller-7dfdd55674-zjnrr
16  service: pyroglyph-service
17  status: 200
18  Value: 10.004M
Example
 1  envoy.cluster.upstream.rq.completed.counter: -
 2  Source: ip-192-168-48-229.us-east-2.compute.internal
 3  cluster: sedaivector
 4  envoy_cluster_name: default_httpbin_80
 5  label.app.kubernetes.io/component: envoy
 6  label.app.kubernetes.io/instance: my-release
 7  label.app.kubernetes.io/managed-by: Helm
 8  label.app.kubernetes.io/name: contour
 9  label.helm.sh/chart: contour-7.3.5
10  namespace: projectcontour
11  pod: my-release-contour-envoy-h5jvk
12  Value: 2.574M

Tag Mapping

Kubernetes Cluster
  • Namespace: namespace_name
  • Load Balancer: load_balancer_name, envoy_cluster_name, label.io.kompose.service, service
  • Application: application_id, label.app.kubernetes.io/name, label.app, label.name
  • Pod: pod, pod_name
  • Container: container, container_name

Cloud Account
  • Load Balancer: load_balancer_name
  • Application: application_id
  • Instance: instance_id

Imported Metrics

Sedai automatically prioritizes and imports relevant metrics from connected monitoring sources. You can view a list of these default metrics from the Settings > Metrics page. You can filter the list based on the parent account/Kubernetes cluster as well as by the monitoring provider.
Sedai automatically imports default metrics based on the monitoring source. If your organization has configured custom metrics, contact [email protected] for help importing.
Sedai automatically pulls monitoring data for prioritized metrics for all connected resources. If you don't want to pull specific metrics Sedai imported, you can toggle them off from the Metrics page. If you don't want to pull metrics for all resources, you can optionally disable feature settings for specific resources. Learn more about configuring settings.

API Meter

Sedai makes regular API calls to inform its ML models. To view the charges these calls incur, go to the System > API Meter page and select a timeframe. By default, charges are broken down by account/Kubernetes cluster. Select one to view total calls and cost for the account/cluster's connected monitoring source.