
All the Experts (LGTM)

Fetch the most recent 5 logs from the Elasticsearch index <index_name> in the last <lookback_minutes> minutes

Queries Elasticsearch to fetch the latest logs from a list of specified services with required fields

Fetches the latest 10 logs from Elasticsearch for a specific service, sorted by timestamp in descending order

List my elasticsearch indices to give me an index pattern name I can search the logs for

Add a key-value pair

Add credentials for various integrations

What is an "Expert"? How do we create our own expert?

Process Grafana Alerts

Managing workspaces and access control

DagKnows Architecture Overview

Managing Proxies

Setting up SSO via Azure AD for DagKnows

All the experts

Enable "Auto Exec" and "Send Execution Result to LLM" in "Adjust Settings" if desired

(Optionally) Add ubuntu user to docker group and refresh group membership

Deployment of an EKS Cluster with Worker Nodes in AWS

Adding, Deleting, Listing DagKnows Proxy credentials or key-value pairs

Comprehensive AWS Security and Compliance Evaluation Workflow (SOC2 Super Runbook)

AWS EKS Version Update 1.29 to 1.30 via Terraform

Instruction to allow WinRM connection

MSP Usecase: User Onboarding Azure + M365

Post a message to a Slack channel

How to debug a Kafka cluster and Kafka topics?

Docusign Integration Tasks

OpenVPN Troubleshooting (PowerShell)

Execute a simple task on the proxy

Assign the proxy role to a user

Create roles to access credentials in proxy

Install OpenVPN client on Windows laptop

Setup Kubernetes kubectl and Minikube on Ubuntu 22.04 LTS

Install Prometheus and Grafana on the minikube cluster on EC2 instance in the monitoring namespace

Sample selenium script

Update the EKS versions in different clusters

AI agent session 2024-09-12T09:36:14-07:00 by Sarang Dharmapurikar

Install Kubernetes on an EC2 instance (Ubuntu 20.04) using kubeadm and turn this instance into a master node

Turn an EC2 instance (Ubuntu 20.04) into a kubeadm worker node: install the necessary packages and have it join the cluster

Install Docker

Parse EDN content and give a JSON out

GitHub related tasks

Check whether a user is there on Azure AD and if the user account status is enabled

Get the input parameters of a Jenkins pipeline

Get the console output of last Jenkins job build

List my Jenkins pipelines

Get last build status for a Jenkins job

Trigger a Jenkins job with param values

Give me steps to do health checks on a Linux Server

Trigger for tickets which have status new/open, group DevOps, assignee None, and public comment includes a keyword

Process Zendesk Ticket for updating comments (auto reply)

Add a public comment to a Zendesk Ticket

List my elasticsearch indices to give me an index pattern name I can search the logs for


Lists all Elasticsearch indices and analyzes them to recommend the best index pattern for log searching.

This task outputs a recommended index pattern (INDEX_PATTERN) which can be used for searching logs in Elasticsearch.

import requests
import json
from urllib.parse import urlparse

# Get Elasticsearch URL from environment
# (getEnvVar is provided by the DagKnows task runtime)
elastic_url = getEnvVar('ELASTIC_URL_OTEL')

# Parse URL to determine if SSL should be used
parsed_url = urlparse(elastic_url)
use_ssl = parsed_url.scheme == 'https'

try:
    # Make request to get all indices
    response = requests.get(f"{elastic_url}/_cat/indices?format=json", verify=use_ssl, timeout=30)
    if response.status_code == 200:
        indices_data = response.json()
        # Extract index names
        indices = [index_info['index'] for index_info in indices_data]
        index_count = len(indices)

        # Analyze indices to recommend a pattern
        log_indices = [idx for idx in indices if 'log' in idx.lower()]
        otel_indices = [idx for idx in indices if 'otel' in idx.lower()]

        # Determine the best index pattern
        if otel_indices:
            # Check if there are date-based otel-logs indices
            otel_log_indices = [idx for idx in otel_indices if 'log' in idx.lower()]
            if otel_log_indices:
                INDEX_PATTERN = "otel-logs-*"
                pattern_explanation = f"Found {len(otel_log_indices)} OpenTelemetry log indices. The pattern 'otel-logs-*' will match all date-based log indices like {', '.join(otel_log_indices[:3])}{'...' if len(otel_log_indices) > 3 else ''}."
            else:
                INDEX_PATTERN = "otel-*"
                pattern_explanation = f"Found {len(otel_indices)} OpenTelemetry indices. The pattern 'otel-*' will match all OpenTelemetry indices."
        elif log_indices:
            INDEX_PATTERN = "*log*"
            pattern_explanation = f"Found {len(log_indices)} log-related indices. The pattern '*log*' will match all indices containing 'log' in their name."
        else:
            INDEX_PATTERN = "*"
            pattern_explanation = "No specific log indices detected. Using '*' to match all indices, but this may include non-log data."

        print(f"Found {index_count} indices:")
        for i, index_name in enumerate(indices, 1):
            print(f"{i}. {index_name}")
        print(f"\nRecommended index pattern: {INDEX_PATTERN}")
        print(f"Explanation: {pattern_explanation}")
    else:
        print(f"Error retrieving indices: HTTP {response.status_code}")
        print(f"Response: {response.text}")
        indices = []
        index_count = 0
        INDEX_PATTERN = "*"
        pattern_explanation = "Failed to retrieve indices, defaulting to '*' pattern"
except Exception as e:
    print(f"Exception occurred while retrieving indices: {str(e)}")
    indices = []
    index_count = 0
    INDEX_PATTERN = "*"
    pattern_explanation = "Exception occurred, defaulting to '*' pattern"

print(f"\nOutput parameters:")
print(f"indices: {json.dumps(indices, indent=2)}")
print(f"index_count: {index_count}")
print(f"INDEX_PATTERN: {INDEX_PATTERN}")
print(f"pattern_explanation: {pattern_explanation}")