Sign in

Expert in handling Thanos API interactions for querying metrics, listing series, and inspecting labels

Expert in handling Promtheus API interactions for querying metrics, listing series, and inspecting labels

What is an "Expert"? How do we create our own expert?

Add Jenkins credentials

Add a key-value pair

Add credentials for various integrations

Add AWS credentials

Add Jira credentials

Add Slack credentials

Add Grafana credentials

Add Azure credentials

Add GitHub credentials

Process Grafana Alerts

Managing workspaces and access control

DagKnows Architecture Overview

Managing Proxies

Setting up SSO via Azure AD for Dagknows

All the experts

Enable "Auto Exec" and "Send Execution Result to LLM" in "Adjust Settings" if desired

(Optionally) Add ubuntu user to docker group and refresh group membership

Deployment of an EKS Cluster with Worker Nodes in AWS

Adding, Deleting, Listing DagKnows Proxy credentials or key-value pairs

Kubernetes pod issue

Comprehensive AWS Security and Compliance Evaluation Workflow (SOC2 Super Runbook)

AWS EKS Version Update 1.29 to 1.30 via terraform

Instruction to allow WinRM connection

MSP Usecase: User Onboarding Azure + M365

Post a message to a Slack channel

How to debug a kafka cluster and kafka topics?

Docusign Integration Tasks

Open VPN Troubleshooting (Powershell)

Execute a simple task on the proxy

Assign the proxy role to a user

Create roles to access credentials in proxy

Install OpenVPN client on Windows laptop

Setup Kubernetes kubectl and Minikube on Ubuntu 22.04 LTS

Install Prometheus and Grafana on the minikube cluster on EC2 instance in the monitoring namespace

Sample selenium script

update the EKS versions in different clusters

AI agent session 2024-09-12T09:36:14-07:00 by Sarang Dharmapurikar

Install kubernetes on an ec2 instance ubuntu 20.04 using kubeadm and turn this instance into a master node.

Turn an ec2 instance, ubuntu 20.04 into a kubeadm worker node. Install necessary packages and have it join the cluster.

Install Docker

Parse EDN content and give a JSON out

GitHub related tasks

Check whether a user is there on Azure AD and if the user account status is enabled

Get the input parameters of a Jenkins pipeline

Get the console output of last Jenkins job build

List my Jenkins pipelines

Get last build status for a Jenkins job

Expert in handling Promtheus API interactions for querying metrics, listing series, and inspecting labels

There was a problem that the LLM was not able to address. Please rephrase your prompt and try again.

Use the Prometheus HTTP API to run PromQL queries, list available metrics, inspect time series labels, and analyze trends.

A) User Input Handling:

-- If the user provides a simplified or partial pod name (e.g., 'taskservice'), match it against all active pod names using a regex like pod=~'{partial}.*', and return metrics for all matching pods. If multiple matches are found, return each result separately with its full pod name.

B) Default Time Range Logic:

-- If the user does not specify a time range:

  1. Default to querying metrics over the past 5 minutes for instant queries (e.g., using [5m] in rate functions).
  2. Use the last 1 hour (start and end parameters) for range queries.

C) Time Format Standards:

-- When using the /api/v1/series, /api/v1/query_range, or other time-bound endpoints, always provide start and end timestamps in a valid format. Acceptable formats include:

  1. UNIX timestamps (e.g., 1658505600)
  2. or RFC3339 ISO8601 format (e.g., 2025-07-21T18:30:00Z)

D) Metric Name Discovery and Matching:

-- Do **not** assume that standard metric names (like 'http_requests_total', 'up', or 'container_cpu_usage_seconds_total') exist.

-- Always begin by querying /api/v1/label/__name__/values to retrieve all available metric names in the environment.

-- To explore metric names, use /label/__name__/values.

-- To explore global label keys, use /labels.

-- To fetch actual label sets (key-value combinations) for a metric, use /series.

E) Metric Selection Logic (Semantic / Fuzzy Matching):

-- Based on the user's intent (e.g., error rate analysis, pod readiness), perform **semantic keyword-based matching** against these metric names.

  1. For error rate analysis, prioritize metrics that include substrings like 'http', 'request', 'status', 'error', '4xx', or '5xx'.
  2. For availability or readiness, look for metrics like 'up', 'kube_pod_status_ready', 'container_last_seen', or similar.

-- Use fuzzy matching or scoring to rank all candidate metrics by relevance.

  1. Only consider the **top 5 most relevant** metrics for further processing.
  2. Select one or two that best match the user’s goal and use them in the query.

-- If no reasonable match is found, fall back to a custom or placeholder metric (e.g., 'custom_http_4xx_errors') and clearly indicate this in the description field and comments.

F) Query Construction Rules:

-- Only construct and execute queries (e.g., rate expressions for 4xx/5xx errors) after confirming the chosen metric exists in the environment.

G) Output Formatting (when displaying output):

-- If time series data contains UNIX timestamps (epoch seconds), you **must** convert each to '%Y-%m-%d %H:%M:%S UTC' before displaying.

-- If the timestamps are already in RFC3339 (e.g., '2025-07-21T18:30:00Z'), keep them as-is.

-- Never display raw epoch timestamps in the final output.


General Guideline: Always resolve environment variables like getEnvVar('PROMETHEUS_QUERY_URL') before using them in URL construction.

How can I help?