πŸ“ˆ Connect Monitoring Data

Sedai automatically discovers your cloud topology, but it also needs access to your APM/observability providers to learn resource behavior.

Each cloud account or Kubernetes cluster integration needs at least one monitoring data source so Sedai can learn and analyze performance behavior. You will be prompted to add a monitoring source when integrating a new account or Kubernetes cluster, but you can add or modify a monitoring source at any time by navigating to the Settings > Integrations page and then selecting the corresponding account or cluster.

Once you integrate a monitoring source, Sedai automatically retrieves your Golden Signals (latency, traffic, errors, and saturation) so it can start analyzing resource behavior. It typically takes 7 to 14 days for Sedai to examine monitoring data and generate its preliminary analysis.

If your organization has configured custom metrics, contact support@sedai.io for help importing.

Select a provider to view integration details:


Advanced Setup

Tag Mapping

Sedai uses tags to accurately infer and map metrics to your infrastructure, so it understands which metrics relate to the resources discovered within a connected account/cluster.

By default, Sedai pre-populates standard tag values based on industry best practices for naming conventions. However, if you configured custom tags, you will need to provide the exact tag value used.

You can view default tags under each monitoring provider on this page.

Instance ID Pattern

If your instance IDs do not follow standard Kubernetes patterns, you will need to define the unique identifier or prefix used (see the sketch after this list). Supported patterns include:

  • ${instanceID}

  • ${instanceName}

  • ${appID}

  • ${appName}

  • ${regionID}

  • ${loadBalancerId}

  • ${kubeNamespace}

  • ${kubeClusterName}
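These placeholders compose like ordinary template variables. The sketch below is purely illustrative: it shows how a hypothetical custom pattern could resolve against discovered metadata; the pattern string and metadata values are made-up examples, not Sedai output.

Example (Python)
from string import Template

# Hypothetical custom pattern built from the supported placeholders above.
pattern = "${kubeClusterName}-${kubeNamespace}-${instanceName}"

# Illustrative metadata for a single discovered resource (placeholder values).
resource = {
    "kubeClusterName": "prod-east",
    "kubeNamespace": "payments",
    "instanceName": "api-7f9c",
}

# string.Template uses the same ${var} syntax as the supported patterns.
print(Template(pattern).substitute(resource))  # prod-east-payments-api-7f9c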


Supported Providers

AppDynamics

  • Controller Name

  • Controller Endpoint

  • Client ID (learn more here)

  • Client Secret Key (learn more here)

Once the controller name and endpoint URL are added, an API client is created to provide secure access to the Controller; the Client ID and client secret are issued at that point.
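To see how the Client ID and client secret are exercised, the sketch below requests an OAuth access token from the Controller's API-client endpoint; the controller URL, account, client name, and secret are placeholders, and the exact path may vary with your Controller version.

Example (Python)
import requests

# Placeholder Controller details; the Client ID takes the form <apiClientName>@<accountName>.
controller = "https://example.saas.appdynamics.com"
client_id = "sedai-api-client@example-account"
client_secret = "REPLACE_WITH_CLIENT_SECRET"

# Exchange the API client credentials for a short-lived access token.
resp = requests.post(
    f"{controller}/controller/api/oauth/access_token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("access_token", "")[:12] + "...")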

Tag Mapping

  • Load Balancer: load_balancer_name

  • Application: application_id

  • Instance ID: instance_id


Azure Monitor

Contact support@sedai.io for help integrating.


CloudWatch

By default, Sedai connects to CloudWatch using the same credentials as the corresponding AWS account. You can alternatively connect CloudWatch independently of the account credentials by configuring IAM access.
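If you go the IAM route, a quick read-only check like the sketch below confirms the credentials can reach CloudWatch before you add them to Sedai; the access key, secret, and region are placeholders.

Example (Python)
import boto3

# Placeholder IAM credentials with CloudWatch read access.
cloudwatch = boto3.client(
    "cloudwatch",
    region_name="us-east-1",
    aws_access_key_id="AKIA_PLACEHOLDER",
    aws_secret_access_key="REPLACE_ME",
)

# Listing EC2 CPU metrics is a lightweight way to confirm read permissions.
response = cloudwatch.list_metrics(Namespace="AWS/EC2", MetricName="CPUUtilization")
print(f"Visible metrics: {len(response['Metrics'])}")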

Tag Mapping

  • Load Balancer: load_balancer_name

  • Application: application_id

  • Instance ID: instance_id

By default, memory metrics are not available for EC2. We recommend enabling these metrics to access optimal EC2 configurations. You can enable them by installing the CloudWatch agent from the command line or through AWS Systems Manager, or by installing the agent on new instances using a CloudFormation template. To learn more about using the CloudWatch agent for memory metrics, visit the CloudWatch Agent docs.
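For the Systems Manager route, a sketch like the one below pushes the CloudWatch agent package to existing instances; the region and instance IDs are placeholders, the instances need the SSM agent and an instance profile, and the agent still needs a configuration that collects memory metrics.

Example (Python)
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# AWS-ConfigureAWSPackage installs the AmazonCloudWatchAgent package via SSM.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # placeholder instance IDs
    DocumentName="AWS-ConfigureAWSPackage",
    Parameters={
        "action": ["Install"],
        "name": ["AmazonCloudWatchAgent"],
    },
)
print(response["Command"]["CommandId"])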


Chronosphere

You will need to authenticate your Chronosphere account using one of the following methods:

  • Basic Auth (requires username and password)

  • JWT Credentials (requires token)

  • OIDC Client Provider (requires Token URL, Client ID, and Client Secret)

  • No authentication

  • Endpoint

  • Certificate Authority (optional; PEM format accepted, certificate chains supported)
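For the OIDC Client Provider option, the flow is a standard client-credentials token exchange followed by authenticated requests to the endpoint. The sketch below is illustrative only; the token URL, client ID and secret, endpoint, and CA bundle path are all placeholders.

Example (Python)
import requests

# Placeholder OIDC and Chronosphere connection details.
token_url = "https://auth.example.com/oauth/token"
client_id = "sedai-reader"
client_secret = "REPLACE_ME"
endpoint = "https://example.chronosphere.io"
ca_bundle = "chronosphere-ca.pem"  # optional custom Certificate Authority (PEM)

# Standard OAuth2 client-credentials exchange for a bearer token.
token = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    },
    timeout=30,
).json()["access_token"]

# Use the token (and the custom CA, if any) when calling the endpoint.
resp = requests.get(
    endpoint,
    headers={"Authorization": f"Bearer {token}"},
    verify=ca_bundle,
    timeout=30,
)
print(resp.status_code)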


Datadog

If you connect multiple AWS accounts or Kubernetes clusters to Sedai, you will need to add Datadog to each from its respective integration page.

  • Endpoint (select the option that matches your Datadog site's location)

  • API Key

  • Application Key

Tag Mapping

  • Cluster: cluster

  • Namespace: destination_service_namespace, kube_namespace, namespace

  • Load Balancer: load_balancer_name

  • Application: destination_workload, service, kube_app_name

  • Container: container_name

  • Pod: pod_name

If you're not sure how tag values have been configured within your account, you can view your unique metrics with the following APIs:

  • APM Metrics: https://api.datadoghq.com/api/v2/metrics/trace.http.request.duration/all-tags

  • CPU Metrics (ECS only): https://api.datadoghq.com/api/v2/metrics/ecs.fargate.cpu.usage/all-tags
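You can call these endpoints with the standard Datadog API headers, as in the sketch below; the API and application keys are placeholders, and the response shape noted in the comment may vary slightly.

Example (Python)
import requests

# Placeholder keys; use a key pair with metrics read access.
headers = {
    "DD-API-KEY": "REPLACE_WITH_API_KEY",
    "DD-APPLICATION-KEY": "REPLACE_WITH_APPLICATION_KEY",
}

# List every tag attached to the APM request-duration metric.
url = "https://api.datadoghq.com/api/v2/metrics/trace.http.request.duration/all-tags"
resp = requests.get(url, headers=headers, timeout=30)
resp.raise_for_status()

# Tags are typically returned under data.attributes.tags.
for tag in resp.json().get("data", {}).get("attributes", {}).get("tags", []):
    print(tag)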

To learn more about configuring custom tags, please visit Datadog Docs.


Google Monitoring

  • Project ID: Go to your API Console. From the projects list, select Manage all projects to see the names and IDs of all the projects you're a member of.

  • Service Account JSON: You will need to create a new service account (learn more).
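To confirm the service account JSON and project ID work before adding them to Sedai, a small check with the Cloud Monitoring client library can list a few metric descriptors; this is a sketch that assumes the google-cloud-monitoring package, and the project ID and key path are placeholders.

Example (Python)
from google.cloud import monitoring_v3

project_id = "my-gcp-project"            # placeholder project ID
key_path = "sedai-service-account.json"  # placeholder service account key file

# Authenticate with the downloaded service account JSON.
client = monitoring_v3.MetricServiceClient.from_service_account_file(key_path)

# Listing a few metric descriptors confirms monitoring read access.
for i, descriptor in enumerate(client.list_metric_descriptors(name=f"projects/{project_id}")):
    print(descriptor.type)
    if i >= 4:
        break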

Tag Mapping

  • Region: location

  • Load Balancer: load_balancer_name

  • Application: application_id

  • Instance: instance_id


Netdata

Requires an endpoint.

Tag Mapping

  • Namespace: namespace

  • Load Balancer: load_balancer_name

  • Application: application_id

  • Pod: pod

  • Container: container


New Relic

  • API Server: the API server URL for your New Relic account
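As a quick sanity check of the API server URL and key, you can send a minimal NerdGraph query, as in the sketch below; it assumes the US API server (EU accounts use a different URL), and the key is a placeholder.

Example (Python)
import requests

api_server = "https://api.newrelic.com/graphql"  # placeholder; EU accounts differ
api_key = "NRAK-REPLACE_ME"                      # placeholder user API key

# A minimal NerdGraph query that returns the calling user's details.
resp = requests.post(
    api_server,
    headers={"API-Key": api_key},
    json={"query": "{ actor { user { name email } } }"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("data"))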

Tag Mapping

  • Namespace: exporter_namespace, namespace

  • Load Balancer: load_balancer_name, entity.name

  • Application: application_id, appName, entity.name

  • Pod: instance_id, k8s.podName, k8s.pod.name

  • Container: container, k8s.containerName, k8s.container.name


Prometheus

If Prometheus is your primary monitoring source, you will need to connect it to each Kubernetes cluster added to Sedai.

Sedai can receive monitoring data from multiple Prometheus instances running on-premises or in the public cloud.

To connect to Prometheus, you will need to provide its Endpoint. By default, Sedai does not require authentication since it assumes it can connect to Prometheus within the Kubernetes control plane via the Sedai Smart Agent.

You can authenticate Prometheus using the following methods:

  • Basic Auth (requires username and password)

  • JWT Credentials (requires token)

  • OIDC Client Credentials (requires token endpoint, client ID, and client secret)

  • OIDC Resource Owner Password (requires token, client ID, client secret, username, and password)

Certificate Authority

You can optionally provide a custom Certificate Authority (CA) if you use an HTTPS connection to communicate with your Prometheus endpoint.
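To verify the endpoint, credentials, and CA outside Sedai, a small query against the standard Prometheus HTTP API is usually enough; the sketch below uses Basic Auth and a custom CA bundle, and every value shown is a placeholder.

Example (Python)
import requests

endpoint = "https://prometheus.example.com"  # placeholder Prometheus endpoint
auth = ("sedai", "REPLACE_ME")               # Basic Auth username/password
ca_bundle = "prometheus-ca.pem"              # custom CA for HTTPS endpoints

# 'up' is a built-in series; a successful, non-empty result confirms query access.
resp = requests.get(
    f"{endpoint}/api/v1/query",
    params={"query": "up"},
    auth=auth,
    verify=ca_bundle,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["result"][:2])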

Tag Mapping

The default label values for each topology level are:

  • Namespace: exporter_namespace, envoy, namespace, destination_service_namespace

  • Load Balancer: service

  • Application: application_id

  • Pod: pod

  • Container: container

  • Region: region

  • Availability Zones: availability_zone

  • Operating System: os

  • Architecture: architecture

  • Instance Type: instance_type


Splunk (previously SignalFx)

  • API Server URL: example format https://api.YOUR_SIGNALFX_REALM.signalfx.com

  • API Key: to generate an API access token, please visit the Splunk Docs.
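To confirm the realm and token before connecting, you can query the SignalFx v2 API directly, as in the sketch below; the realm and access token are placeholders, and the response fields may differ slightly by account.

Example (Python)
import requests

realm = "us1"                        # placeholder SignalFx realm
token = "REPLACE_WITH_ACCESS_TOKEN"  # access token with API permissions

# Listing a few metrics confirms the API server URL and token work together.
resp = requests.get(
    f"https://api.{realm}.signalfx.com/v2/metric",
    headers={"X-SF-TOKEN": token},
    params={"limit": 5},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("count"))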

Tag Mapping

  • Namespace: namespace

  • Load Balancer: load_balancer_name

  • Application: application_id

  • Pod: kubernetes_pod_name

  • Container: container_spec_name

You can additionally map tags for region, availability zone, SignalFlow Programs, and metric name wildcards.


Wavefront

  • API Key

  • API Server: your <wavefront_instance> URL, such as https://longboard.wavefront.com

Testing the Wavefront connection is not currently supported within Sedai. However, if the API key is valid, the integration will connect.
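Because the in-product connection test isn't available, you can exercise the API key directly against the Wavefront REST API, as in the sketch below; the instance URL and token are placeholders, and the chart API parameters may vary by product version.

Example (Python)
import time

import requests

api_server = "https://longboard.wavefront.com"  # placeholder <wavefront_instance> URL
api_key = "REPLACE_WITH_WAVEFRONT_TOKEN"

# Query a recent window of the pod CPU metric used below to find dimensions.
resp = requests.get(
    f"{api_server}/api/v2/chart/api",
    headers={"Authorization": f"Bearer {api_key}"},
    params={
        "q": 'ts("kubernetes.pod_container.cpu.usage_rate")',
        "s": int(time.time()) - 600,  # start time: 10 minutes ago (epoch seconds)
        "g": "m",                     # minute granularity
    },
    timeout=30,
)
print(resp.status_code)  # 200 indicates the URL and API key are valid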

Defining dimensions

Wavefront dimensions can vary by product version. To find your unique Wavefront dimensions, go to the Browse tab from your Wavefront platform and search for kubernetes.pod_container.cpu.usage_rate metrics in the table view.

This will give an output similar to the example below. Note the following dimensions in the output:

  • Container dimension: container_name

  • Application (Kubernetes workload) dimension: label.app.kubernetes.io/name, label.app, label.name

  • Load balancer (typically service) dimension: label.io.kompose.service

  • Pod dimension: pod_name

Example
kubernetes.pod_container.cpu.usage_rate: -
Source: ip-192-168-185-37.us-east-2.compute.internal
cluster: sedaivector
container_base_image: nvanderhoeven/pokeapi_app
container_name: pokeapi-app
label.app: -
label.app.kubernetes.io/component: -
label.app.kubernetes.io/instance: -
label.app.kubernetes.io/managed-by: -
label.app.kubernetes.io/name: -
label.app.kubernetes.io/version: -
label.chart: -
label.component: -
label.eks.amazonaws.com/component: -
label.helm.sh/chart: -
label.heritage: -
label.io.kompose.service: app
label.k8s-app: -
label.mode: -
label.name: -
label.olm.catalogSource: -
label.plane: -
label.release: -
label.statefulset.kubernetes.io/pod-name: -
label.vizier-name: -
namespace_name: poki-test-1
nodename: ip-192-168-185-37.us-east-2.compute.internal
pod_name: app-5b87b74bc6-xd9sj
type: pod_container
Value: 1.200k

Custom metrics

To add custom metrics such as latency or traffic, you will also need to define their dimensions. For example, Sedai supports nginx and Envoy ingress metrics when nginx or Envoy is used as the ingress controller.

In the examples below, the load balancer dimension for these metrics is service in the first code block and envoy_cluster_name in the second.

Example
nginx.ingress.controller.requests.counter: -
Source: ip-192-168-185-37.us-east-2.compute.internal
_host: hello-test.info
cluster: sedaivector
controller_class: k8s.io/ingress-nginx
controller_namespace: ingress-nginx
controller_pod: ingress-nginx-controller-7dfdd55674-zjnrr
ingress: opt-ingress
label.app.kubernetes.io/component: controller
label.app.kubernetes.io/instance: ingress-nginx
label.app.kubernetes.io/name: ingress-nginx
method: GET
namespace: optimization-test
path: /pyroglyph
pod: ingress-nginx-controller-7dfdd55674-zjnrr
service: pyroglyph-service
status: 200
Value: 10.004M
Example
envoy.cluster.upstream.rq.completed.counter: -
Source: ip-192-168-48-229.us-east-2.compute.internal
cluster: sedaivector
envoy_cluster_name: default_httpbin_80
label.app.kubernetes.io/component: envoy
label.app.kubernetes.io/instance: my-release
label.app.kubernetes.io/managed-by: Helm
label.app.kubernetes.io/name: contour
label.helm.sh/chart: contour-7.3.5
namespace: projectcontour
pod: my-release-contour-envoy-h5jvk
Value: 2.574M

Tag Mapping

  • Namespace: namespace_name

  • Load Balancer: load_balancer_name, envoy_cluster_name, label.io.kompose.service, service

  • Application: application_id, label.app.kubernetes.io/name, label.app, label.name

  • Pod: pod, pod_name

  • Container: container, container_name


Imported Metrics

Sedai automatically prioritizes and imports relevant metrics from connected monitoring sources. You can view a list of these default metrics from the Settings > Metrics page. You can filter the list based on the parent account/Kubernetes cluster as well as by the monitoring provider.

Sedai automatically imports default metrics based on the monitoring source. If your organization has configured custom metrics, contact support@sedai.io for help importing.

Sedai automatically pulls monitoring data for prioritized metrics for all connected resources. If you don't want Sedai to pull specific imported metrics, you can toggle them off from the Metrics page. If you don't want metrics pulled for every resource, you can optionally disable feature settings for specific resources. Learn more about configuring settings.

API Meter

Sedai makes regular API calls to inform its ML models. To view the charges these calls incur, go to the System > API Meter page and select a timeframe. By default, charges are broken down by account/Kubernetes cluster. Select one to view the total calls and cost for that account/cluster's connected monitoring source.
