Connect Monitoring Data

Learn how to connect monitoring data to Sedai's autonomous cloud management platform. Monitor, manage, and optimize your cloud infrastructure effortlessly.


Sedai requires access to your APM/observability providers in order to learn resource behavior and train its machine learning models. Each cloud integration needs a monitoring source for Sedai to generate recommendations or, optionally, operate on your resources. You will be prompted to add a monitoring source when integrating a new account or Kubernetes cluster, but you can add or modify a monitoring source at any time by navigating to the Settings > Integrations page and selecting the corresponding account or cluster.

By default, Sedai automatically detects and imports standard metrics for CPU, memory, traffic, performance, and errors (view a sample list of default metrics). The system primarily relies on performance metrics; however, each category is required for the system to run effectively and produce the best recommendations. You can view the list of imported metrics on the Settings > Metrics page, and filter the list by parent account/Kubernetes cluster as well as by monitoring provider.

It typically takes 7-14 days for Sedai to examine monitoring data and generate its preliminary analysis; however, you can optionally backfill metric data to see insights sooner. You can backfill data via a checkbox when adding a new monitoring integration or from its edit screen.

If you don't want to pull specific metrics that Sedai imported, you can toggle them off from the Metrics page. If you don't want to pull metrics for certain resources at all, you can disable feature settings for those resources (see Settings > Features).

If your organization uses metrics other than the defined defaults, or has configured custom metrics, contact support@sedai.io for help with advanced setup to map your metrics into Sedai.

In addition to learning resource behavior, Sedai also uses APM data to decide whether it can safely execute operations on resources. If Sedai is unable to detect relevant performance metrics from a monitoring integration, you can optionally configure settings to use an expedited operations safety workflow that safely relaxes Sedai's safety checks.

Select a provider to view integration details:

  • AppDynamics
  • Azure Monitor
  • CloudWatch
  • Chronosphere
  • Datadog
  • Google Monitoring
  • Netdata
  • New Relic
  • Prometheus
  • Splunk (previously SignalFX)
  • Wavefront


Advanced Setup

Tag Mapping

Sedai uses tags to map metrics to your infrastructure so it can understand which metrics relate to the resources discovered within a connected account or cluster.

By default, Sedai pre-populates standard tag values based on industry best practices for naming conventions. If you use custom tags, you will need to provide the exact tag values used. To learn more about configuring custom tags, see the Tag Configuration settings page.

The default tags for each monitoring provider are listed in the sections below.

Instance ID Pattern (Kubernetes)

If your instance IDs do not follow standard Kubernetes patterns, you will need to define the unique identifier or prefix used (see the sketch after this list for how a pattern is expanded). Supported patterns include:

  • ${instanceID}

  • ${instanceName}

  • ${appID}

  • ${appName}

  • ${regionID}

  • ${loadBalancerId}

  • ${kubeNamespace}

  • ${kubeClusterName}
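For illustration, here is a minimal sketch of how such a pattern could be expanded. The pattern string and sample values are hypothetical and only demonstrate the placeholder syntax listed above; they are not Sedai's internal implementation.

```python
# Minimal sketch: expanding a custom instance ID pattern built from the
# placeholders listed above. The pattern and values are hypothetical.
from string import Template


def expand_instance_id(pattern: str, values: dict) -> str:
    """Substitute ${...} placeholders with discovered values."""
    return Template(pattern).substitute(values)


# Example: instance IDs that embed cluster, namespace, and app name.
pattern = "${kubeClusterName}-${kubeNamespace}-${appName}-${instanceID}"
values = {
    "kubeClusterName": "prod-cluster",
    "kubeNamespace": "payments",
    "appName": "checkout",
    "instanceID": "i-0abc123",
}
print(expand_instance_id(pattern, values))  # prod-cluster-payments-checkout-i-0abc123
```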


AppDynamics

You will need to provide:

  • Controller Name

  • Controller Endpoint

  • Client ID

  • Client Secret Key

Once the controller name and endpoint URL are added, create an API client in AppDynamics to provide Sedai secure access to the Controller; the API client supplies the Client ID and Client Secret Key. See the AppDynamics API Clients documentation: https://docs.appdynamics.com/appd/4.5.x/en/extend-appdynamics/appdynamics-apis/api-clients
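If you want to confirm the API client works before adding it to Sedai, the sketch below exchanges the client credentials for an OAuth token following the AppDynamics API Clients documentation. The controller host, client name, account name, and secret are placeholders, and the exact headers required can vary by controller version.

```python
# Sketch: exchange an AppDynamics API client's credentials for an OAuth token.
# All values are placeholders; the endpoint follows the API Clients docs.
import requests

CONTROLLER = "https://mycompany.saas.appdynamics.com"
CLIENT_ID = "sedai-api-client@mycompany"   # <client name>@<account name>
CLIENT_SECRET = "<client secret key>"

resp = requests.post(
    f"{CONTROLLER}/controller/api/oauth/access_token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=30,
)
resp.raise_for_status()
token = resp.json()["access_token"]
print("Received access token; expires in", resp.json().get("expires_in"), "seconds")
```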

Tag Mapping

| Topology | Default Tag Key |
| --- | --- |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Instance ID | instance_id |
| Namespace | namespace |
| Container | container_name |
| Pod | pod_name |


Azure Monitor

Contact support@sedai.io for help integrating.


CloudWatch

By default, Sedai connects to CloudWatch using the same credentials as the corresponding AWS account. You can alternatively connect CloudWatch independently of the account credentials by configuring IAM access.

Tag Mapping

| Topology | Default Tag Key |
| --- | --- |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Instance ID | instance_id |

Sedai uses these metrics to generate optimal configurations for cost savings or performance improvements.

By default, memory metrics are not available for EC2. We recommend enabling these metrics to access optimal EC2 configurations. You can enable them by installing the CloudWatch agent via the command line or AWS Systems Manager, and on new instances via a CloudFormation template. To learn more about using the CloudWatch agent for memory metrics, see the CloudWatch Agent docs.
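To confirm that memory metrics are actually flowing before relying on them, you can list the metrics the CloudWatch agent publishes. The sketch below uses boto3 and assumes the agent's default CWAgent namespace and mem_used_percent metric name, both of which can differ if you customized the agent configuration.

```python
# Sketch: verify CloudWatch agent memory metrics exist for your EC2 instances.
# Assumes the default "CWAgent" namespace and "mem_used_percent" metric name;
# adjust both if your agent configuration uses custom values.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

paginator = cloudwatch.get_paginator("list_metrics")
pages = paginator.paginate(Namespace="CWAgent", MetricName="mem_used_percent")

instances = set()
for page in pages:
    for metric in page["Metrics"]:
        for dimension in metric["Dimensions"]:
            if dimension["Name"] == "InstanceId":
                instances.add(dimension["Value"])

print(f"Instances reporting memory metrics: {len(instances)}")
```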


Chronosphere

You will need to authenticate your Chronosphere account using one of the following methods:

  • Basic Auth (requires username and password)

  • JWT Credentials (requires token)

  • OIDC Client Provider (requires Token URL, Client ID, and Client Secret)

  • No authentication

You will also need to provide:

  • Endpoint

  • Certificate Authority (optional; PEM format accepted, certificate chains supported)
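As an illustration of the OIDC Client Provider option, the sketch below performs a standard OAuth2 client-credentials token request against the Token URL and then calls the Chronosphere endpoint with the resulting bearer token. All URLs and credentials are placeholders, not values from Sedai or Chronosphere.

```python
# Sketch: OIDC client-credentials flow, then a test call with the bearer token.
# All values below are placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"
CLIENT_ID = "<client id>"
CLIENT_SECRET = "<client secret>"
CHRONOSPHERE_ENDPOINT = "https://myorg.chronosphere.io"

# Standard OAuth2 client-credentials grant.
token_resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Any authenticated request to the endpoint confirms the credentials work.
check = requests.get(
    CHRONOSPHERE_ENDPOINT,
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
print(check.status_code)
```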


Datadog

If you connect multiple AWS accounts or Kubernetes clusters to Sedai, you will need to add Datadog to each from its respective integration page.

You will need to provide:

  • Endpoint: based on your site's location, select one of the following:

    • US1: https://api.datadoghq.com

    • EU1: https://api.datadoghq.eu

  • API Key

  • Application Key

To learn how to generate these keys, see the Datadog documentation on API and Application Keys.

Tag Mapping

Kubernetes:

| Topology | Default Tag Key |
| --- | --- |
| Cluster | cluster |
| Namespace | destination_service_namespace, kube_namespace, namespace |
| Load Balancer | load_balancer_name |
| Application | destination_workload, service, kube_app_name |
| Container | container_name |
| Pod | pod_name |

Virtual machines / instances:

| Topology | Default Tag Key |
| --- | --- |
| Load Balancer | load_balancer_name, targetgroup |
| Application | application_id |
| Instance | instance_id |

If you're not sure how tag values have been configured within your account, you can view your unique metrics with the following APIs. You can see the tags for any metric using Datadog's all-tags API:

  • APM metrics: https://api.datadoghq.com/api/v2/metrics/trace.http.request.duration/all-tags

  • Istio metrics: https://api.datadoghq.com/api/v2/metrics/istio.mesh.request.count/all-tags

  • CPU metrics (ECS only): https://api.datadoghq.com/api/v2/metrics/ecs.fargate.cpu.usage/all-tags

If an application's APM metrics are not standard, you can list all available metrics with: https://api.datadoghq.com/api/v1/metrics?from=0
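To inspect those tags programmatically, the sketch below calls the all-tags endpoint above with Datadog's standard authentication headers. The metric name and site match the examples on this page; the API and application keys are placeholders.

```python
# Sketch: list the tags Datadog has recorded for a metric via the all-tags
# endpoint referenced above. The keys are placeholders.
import requests

DD_SITE = "https://api.datadoghq.com"  # use https://api.datadoghq.eu for EU1
METRIC = "trace.http.request.duration"

resp = requests.get(
    f"{DD_SITE}/api/v2/metrics/{METRIC}/all-tags",
    headers={
        "DD-API-KEY": "<your API key>",
        "DD-APPLICATION-KEY": "<your application key>",
    },
    timeout=30,
)
resp.raise_for_status()

# Print the raw response; the tag list shows exactly how load balancer,
# application, and pod tags are spelled in your account.
print(resp.json())
```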

Google Monitoring

You will need to provide:

  • Project ID: go to your API Console and, from the projects list, select Manage all projects. The names and IDs of all the projects you're a member of are displayed.

  • Service Account JSON: you will need to create a new service account for Sedai.
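To sanity-check that the service account JSON grants read access to monitoring data, you can list a few metric descriptors with the google-cloud-monitoring client, as sketched below. The key file name and project ID are placeholders.

```python
# Sketch: confirm a service account can read monitoring data for a project.
# "sedai-sa.json" and the project ID are placeholders.
from google.cloud import monitoring_v3
from google.oauth2 import service_account

PROJECT_ID = "my-gcp-project"
credentials = service_account.Credentials.from_service_account_file("sedai-sa.json")

client = monitoring_v3.MetricServiceClient(credentials=credentials)

# List the first few metric descriptors visible to the service account.
descriptors = client.list_metric_descriptors(name=f"projects/{PROJECT_ID}")
for i, descriptor in enumerate(descriptors):
    print(descriptor.type)
    if i >= 9:
        break
```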

Tag Mapping

Virtual machines / instances:

| Topology | Default Tag Key |
| --- | --- |
| Region | location |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Instance | instance_id |

Kubernetes:

| Topology | Default Tag Key |
| --- | --- |
| Cluster | cluster_name |
| Region | location |
| Namespace | namespace_name |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Pod | pod_name |
| Container | container_name |


Netdata

Requires an endpoint.

Tag Mapping

Kubernetes:

| Topology | Default Tag Key |
| --- | --- |
| Namespace | namespace |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Pod | pod |
| Container | container |

Virtual machines / instances:

| Topology | Default Tag Key |
| --- | --- |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Instance | instance_id |


New Relic

You will need to provide the API Server URL.

Tag Mapping

Kubernetes:

| Topology | Default Tag Key |
| --- | --- |
| Namespace | exporter_namespace, namespace |
| Load Balancer | load_balancer_name, entity.name |
| Application | application_id, appName, entity.name |
| Pod | instance_id, k8s.podName, k8s.pod.name |
| Container | container, k8s.containerName, k8s.container.name |

Virtual machines / instances:

| Topology | Default Tag Key |
| --- | --- |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Instance | instance_id |


Prometheus

If Prometheus is your primary monitoring source, you will need to connect it to each Kubernetes cluster added to Sedai. Sedai can receive monitoring data from multiple Prometheus instances running on premises or in the public cloud.

To connect to Prometheus, provide its Endpoint. By default, Sedai does not require authentication, since it assumes it can reach Prometheus within the Kubernetes control plane via the Sedai Smart Agent. If authentication is required, Sedai supports the following methods:

  • Basic Auth (requires username and password)

  • JWT Credentials (requires token)

  • OIDC Client Credentials (requires token endpoint, client ID, and client secret)

  • OIDC Resource Owner Password (requires token, client ID, client secret, username, and password)

Certificate Authority

You can optionally provide a custom Certificate Authority (CA) if you use an HTTPS connection when communicating with your Prometheus endpoint.
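To verify an endpoint, its credentials, and a custom CA before adding them to Sedai, you can run a trivial query against the standard Prometheus HTTP API, as sketched below. The endpoint URL, basic-auth credentials, and CA bundle path are placeholders.

```python
# Sketch: check a Prometheus endpoint with basic auth and a custom CA bundle.
# Endpoint, credentials, and CA path are placeholders.
import requests

PROMETHEUS_ENDPOINT = "https://prometheus.example.com"
CA_BUNDLE = "/path/to/custom-ca.pem"  # use verify=True if no custom CA is needed

resp = requests.get(
    f"{PROMETHEUS_ENDPOINT}/api/v1/query",
    params={"query": "up"},
    auth=("monitoring-user", "monitoring-password"),  # Basic Auth
    verify=CA_BUNDLE,
    timeout=30,
)
resp.raise_for_status()

body = resp.json()
print(body["status"])                           # "success" when the query ran
print(len(body["data"]["result"]), "series returned")
```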

Tag Mapping

| Topology | Default Label Value |
| --- | --- |
| Namespace | exporter_namespace, envoy, namespace, destination_service_namespace |
| Load Balancer | service |
| Application | application_id |
| Pod | pod |
| Container | container |
| Region | region |
| Availability Zones | availability_zone |
| Operating System | os |
| Architecture | architecture |
| Instance Type | instance_type |


Splunk (Previously SignalFX)

You will need to provide:

  • API Server URL (example format: https://api.YOUR_SIGNALFX_REALM.signalfx.com)

  • API Key

To generate an API access token, please visit the Splunk documentation.
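To confirm the API server URL and token before adding them to Sedai, a minimal check is to call the API with the SignalFx token header, as sketched below. The realm and token are placeholders, and the /v2/metric metadata endpoint is an assumption based on the standard SignalFx REST API rather than anything Sedai-specific.

```python
# Sketch: verify a Splunk Observability (SignalFx) API server URL and token.
# The realm and token are placeholders.
import requests

REALM = "us1"  # matches the realm in your API server URL
API_TOKEN = "<your API token>"

resp = requests.get(
    f"https://api.{REALM}.signalfx.com/v2/metric",
    headers={"X-SF-TOKEN": API_TOKEN},
    params={"limit": 5},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # a small page of metric metadata confirms the token works
```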

Tag Mapping

Kubernetes:

| Topology | Default Tag Key |
| --- | --- |
| Namespace | namespace |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Pod | kubernetes_pod_name |
| Container | container_spec_name |

You can additionally map tags for region, availability zone, SignalFlow programs, and metric name wildcards.

Virtual machines / instances:

| Topology | Default Tag Key |
| --- | --- |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Instance | instance_id |


Wavefront

You will need to provide:

  • API Key

  • API Server: this is your <wavefront_instance> URL, such as https://longboard.wavefront.com

Testing the Wavefront connection is not currently supported within Sedai. However, if the API key is valid, the integration will connect.

Defining dimensions

Wavefront dimensions can vary by product version. To find your unique Wavefront dimensions, go to the Browse tab in your Wavefront platform and search for kubernetes.pod_container.cpu.usage_rate metrics in the table view. This produces output similar to the example below. Note the following dimension values in the output:

  • Container dimension: container_name

  • Application (Kubernetes workload) dimension: label.app.kubernetes.io/name, label.app, label.name

  • Load balancer (typically service) dimension: label.io.kompose.service

  • Pod dimension: pod_name

Example
kubernetes.pod_container.cpu.usage_rate: -
Source: ip-192-168-185-37.us-east-2.compute.internal
cluster: sedaivector
container_base_image: nvanderhoeven/pokeapi_app
container_name: pokeapi-app
label.app: -
label.app.kubernetes.io/component: -
label.app.kubernetes.io/instance: -
label.app.kubernetes.io/managed-by: -
label.app.kubernetes.io/name: -
label.app.kubernetes.io/version: -
label.chart: -
label.component: -
label.eks.amazonaws.com/component: -
label.helm.sh/chart: -
label.heritage: -
label.io.kompose.service: app
label.k8s-app: -
label.mode: -
label.name: -
label.olm.catalogSource: -
label.plane: -
label.release: -
label.statefulset.kubernetes.io/pod-name: -
label.vizier-name: -
namespace_name: poki-test-1
nodename: ip-192-168-185-37.us-east-2.compute.internal
pod_name: app-5b87b74bc6-xd9sj
type: pod_container
Value: 1.200k

Custom metrics

To add custom metrics such as latency or traffic, you will also need to define their dimensions. For example, if NGINX or Envoy is the ingress controller, Sedai supports NGINX/Envoy ingress metrics.

In the examples below, the load balancer dimension for these metrics is service in the first code block (line 16) and envoy_cluster_name in the second code block (line 4).

Example
nginx.ingress.controller.requests.counter: -
Source: ip-192-168-185-37.us-east-2.compute.internal
_host: hello-test.info
cluster: sedaivector
controller_class: k8s.io/ingress-nginx
controller_namespace: ingress-nginx
controller_pod: ingress-nginx-controller-7dfdd55674-zjnrr
ingress: opt-ingress
label.app.kubernetes.io/component: controller
label.app.kubernetes.io/instance: ingress-nginx
label.app.kubernetes.io/name: ingress-nginx
method: GET
namespace: optimization-test
path: /pyroglyph
pod: ingress-nginx-controller-7dfdd55674-zjnrr
service: pyroglyph-service
status: 200
Value: 10.004M
Example
envoy.cluster.upstream.rq.completed.counter: -
Source: ip-192-168-48-229.us-east-2.compute.internal
cluster: sedaivector
envoy_cluster_name: default_httpbin_80
label.app.kubernetes.io/component: envoy
label.app.kubernetes.io/instance: my-release
label.app.kubernetes.io/managed-by: Helm
label.app.kubernetes.io/name: contour
label.helm.sh/chart: contour-7.3.5
namespace: projectcontour
pod: my-release-contour-envoy-h5jvk
Value: 2.574M

Tag Mapping

Kubernetes:

| Topology | Default Tag Key |
| --- | --- |
| Namespace | namespace_name |
| Load Balancer | load_balancer_name, envoy_cluster_name, label.io.kompose.service, service |
| Application | application_id, label.app.kubernetes.io/name, label.app, label.name |
| Pod | pod, pod_name |
| Container | container, container_name |

Virtual machines / instances:

| Topology | Default Tag Key |
| --- | --- |
| Load Balancer | load_balancer_name |
| Application | application_id |
| Instance | instance_id |


API Meter

Sedai makes regular API calls to your monitoring providers to inform its ML models. To view the charges these calls incur, go to the System > API Meter page and select a timeframe. By default, charges are broken down by account/Kubernetes cluster; select one to view total calls and cost for that account or cluster's connected monitoring source.


