Search and retrieve recent logs from Elasticsearch for specific services containing target keywords.

Trace-based log analysis across microservices in Elasticsearch for distributed request tracking, using problematic trace_ids from Jaeger

List my Elasticsearch indices to give me an index pattern name I can use to search the logs

Send comprehensive troubleshooting report with root cause and relevant details to Slack channel 'demo'

Perform preliminary infrastructure check by deriving EC2 instance ID from demo app URL, checking instance state, and verifying security group access for port 81

Summarize all recent exceptions and errors for a given service or set of services in Jaeger.

Show traces in the last n minutes where service.name is in a list of target services and http.target, http.route, or url.path contains a path_filter (e.g. /api/checkout).

Identify slow or high-latency traces for a given service or list of services in Jaeger

List all my services in Jaeger, excluding system-related services

Fetch the 5 most recent logs from the Elasticsearch index <index_name> in the last n minutes <lookback_minutes>

Queries Elasticsearch to fetch the latest logs from a list of specified services with required fields

Fetches the latest 10 logs from Elasticsearch for a specific service, sorted by timestamp in descending order

Add a key-value pair

Add credentials for various integrations

What is an "Expert"? How do we create our own expert?

Process Grafana Alerts

Managing workspaces and access control

DagKnows Architecture Overview

Managing Proxies

Setting up SSO via Azure AD for DagKnows

All the experts

Enable "Auto Exec" and "Send Execution Result to LLM" in "Adjust Settings" if desired

(Optionally) Add ubuntu user to docker group and refresh group membership

Deployment of an EKS Cluster with Worker Nodes in AWS

Adding, Deleting, Listing DagKnows Proxy credentials or key-value pairs

Comprehensive AWS Security and Compliance Evaluation Workflow (SOC2 Super Runbook)

AWS EKS Version Update 1.29 to 1.30 via Terraform

Instructions to allow WinRM connections

MSP Usecase: User Onboarding Azure + M365

Post a message to a Slack channel

How to debug a Kafka cluster and Kafka topics?

Docusign Integration Tasks

OpenVPN Troubleshooting (PowerShell)

Execute a simple task on the proxy

Assign the proxy role to a user

Create roles to access credentials in proxy

Install OpenVPN client on Windows laptop

Setup Kubernetes kubectl and Minikube on Ubuntu 22.04 LTS

Install Prometheus and Grafana on the Minikube cluster on an EC2 instance in the monitoring namespace

Sample Selenium script

Update the EKS versions in different clusters

AI agent session 2024-09-12T09:36:14-07:00 by Sarang Dharmapurikar

Install Kubernetes on an EC2 instance (Ubuntu 20.04) using kubeadm and turn this instance into a master node.

Turn an EC2 instance (Ubuntu 20.04) into a kubeadm worker node. Install the necessary packages and have it join the cluster.

Install Docker

Parse EDN content and produce JSON output

GitHub related tasks

Check whether a user is there on Azure AD and if the user account status is enabled

Search and retrieve recent logs from Elasticsearch for specific services containing target keywords.

This workflow queries Elasticsearch for logs from designated target_services (e.g., frontend-proxy, frontend, product-catalog) within a specified time window. It searches for logs containing a specific keyword in either the URL path or the message body, such as '/api/checkout'. The query returns up to limit (say, 30) of the most recent matching logs, sorted by timestamp in descending order. Key fields extracted include the timestamp, service name, pod name, URL details, message body, and the tracing identifiers used for debugging.

import requests
import json
from datetime import datetime, timedelta
from urllib.parse import urljoin

# Inputs provided by the task at runtime:
#   target_services - list of service names to filter on
#   keyword         - substring to match in the URL path or log body
#   n_minutes       - lookback window in minutes
#   limit           - maximum number of logs to return
#   index_pattern   - Elasticsearch index pattern to search
checkout_related_logs = {}

# Get Elasticsearch URL
elastic_url = getEnvVar('ELASTIC_URL_OTEL')
if not elastic_url.endswith('/'):
    elastic_url += '/'

# Time range
end_time = datetime.utcnow()
start_time = end_time - timedelta(minutes=n_minutes)

# Build query with OR condition for URL path and body
query = {
    "size": limit,
    "query": {
        "bool": {
            "must": [
                {
                    "range": {
                        "@timestamp": {
                            "gte": start_time.isoformat() + "Z",
                            "lte": end_time.isoformat() + "Z"
                        }
                    }
                },
                {
                    "terms": {
                        "resource.service.name.keyword": target_services
                    }
                },
                {
                    "bool": {
                        "should": [
                            {
                                "wildcard": {
                                    "attributes.url.path.keyword": {
                                        "value": f"*{keyword}*",
                                        "case_insensitive": True
                                    }
                                }
                            },
                            {
                                "wildcard": {
                                    "body": {
                                        "value": f"*{keyword}*",
                                        "case_insensitive": True
                                    }
                                }
                            }
                        ],
                        "minimum_should_match": 1
                    }
                }
            ]
        }
    },
    "sort": [
        {"@timestamp": {"order": "desc"}}
    ],
    "_source": [
        "@timestamp",
        "resource.service.name",
        "resource.k8s.pod.name",
        "attributes.url.path",
        "attributes.url.full",
        "body",
        "traceId",
        "spanId",
        "severity.text"
    ]
}

# Execute the search
search_url = urljoin(elastic_url, f"{index_pattern}/_search")
try:
    resp = requests.post(
        search_url,
        json=query,
        headers={'Content-Type': 'application/json'},
        timeout=6,
    )
    resp.raise_for_status()
    data = resp.json()
except requests.exceptions.RequestException as e:
    print(f"Error querying Elasticsearch: {e}")
    checkout_related_logs = {"error": f"Failed to query Elasticsearch: {e}"}
else:
    # Safe parsing of the response
    hits_root = (data.get("hits") or {})
    total_hits = (hits_root.get("total") or {}).get("value", 0)
    hits = hits_root.get("hits", [])
    result = {
        "total_hits": total_hits,
        "time_range": {
            "start": start_time.isoformat() + "Z",
            "end": end_time.isoformat() + "Z",
            "duration_minutes": n_minutes
        },
        "keyword_searched": keyword,
        "logs": []
    }
    for hit in hits:
        src = hit.get("_source", {}) if isinstance(hit, dict) else {}
        body_text = src.get("body", "") or ""
        res = src.get("resource", {}) or {}
        attrs = src.get("attributes", {}) or {}
        log_entry = {
            "timestamp": src.get("@timestamp", ""),
            "service_name": res.get("service.name", ""),
            "pod_name": res.get("k8s.pod.name", ""),
            "url_path": attrs.get("url.path", ""),
            "url_full": attrs.get("url.full", ""),
            # Truncate long bodies to keep the report readable
            "body": body_text[:500] + ("..." if len(body_text) > 500 else ""),
            "trace_id": src.get("traceId", ""),
            "span_id": src.get("spanId", ""),
            "severity": (src.get("severity", {}) or {}).get("text", "")
        }
        result["logs"].append(log_entry)
    checkout_related_logs = result

print(json.dumps(checkout_related_logs, indent=4, default=str))
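For ad-hoc testing outside the DagKnows runtime, the task inputs and the getEnvVar helper can be stubbed out before running the script. The bindings below are illustrative assumptions only: the service names echo the example in the description, and the index pattern is a guess that should be confirmed with the "List my Elasticsearch indices" task above.

import os

# Hypothetical stand-in for the runtime-provided getEnvVar helper
def getEnvVar(name):
    # e.g. export ELASTIC_URL_OTEL=http://localhost:9200
    return os.environ[name]

# Assumed example values for the task parameters
target_services = ["frontend-proxy", "frontend", "product-catalog"]
keyword = "/api/checkout"      # substring matched against url.path and the log body
n_minutes = 30                 # lookback window in minutes
limit = 30                     # maximum number of logs to return
index_pattern = "otel-logs-*"  # assumption; confirm with the index-listing task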