AWS EKS Version Update 1.29 to 1.30 via terraform

1. Make changes for AWS EKS version 1.30

   Check the Terraform configuration for the changes below:

   • All managed node groups will default to Amazon Linux 2023 as the default node OS.
   • The default EBS volume class changed to gp3, so use that by default to avoid issues.
   • The minimum required IAM policy for the Amazon EKS cluster IAM role now also requires "ec2:DescribeAvailabilityZones".
   • Check for deprecated Kubernetes API versions and replace them if used anywhere.
   • Check whether the versions of the Terraform AWS EKS module and other modules are compatible with the new EKS version.
   • Check the versions of, and upgrade, managed add-ons for the EKS cluster (not applicable in our case; we use a Helm chart based deployment).

   Kubernetes API deprecation guide: https://kubernetes.io/docs/reference/using-api/deprecation-guide/
2. Minimum required IAM policy for the Amazon EKS cluster IAM role also requires "ec2:DescribeAvailabilityZones"

   Edit aws_iam_policy_eks in eks/user.tf to include "ec2:DescribeAvailabilityZones" as well. The baseline policy requirement was expanded to include this action.
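A hedged sketch of the edit, assuming aws_iam_policy_eks is an aws_iam_policy resource with an inline jsonencode policy (the actual resource layout and existing actions in eks/user.tf may differ):

```hcl
resource "aws_iam_policy" "aws_iam_policy_eks" {
  name = "eks-cluster-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          # ...existing EC2 actions already granted to the cluster role...
          # New in the EKS 1.30 minimum cluster-role requirements:
          "ec2:DescribeAvailabilityZones",
        ]
        Resource = "*"
      },
    ]
  })
}
```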
3. Edit eks/provider.tf to replace a deprecated API

   Change the api_version from v1beta1 to v1.
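A minimal sketch of where this change typically lives, assuming eks/provider.tf configures the kubernetes provider with exec-based authentication (the provider block names and args here are assumptions, not the file's actual contents):

```hcl
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    # Was "client.authentication.k8s.io/v1beta1" (deprecated); replaced with v1
    api_version = "client.authentication.k8s.io/v1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
```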
4. Back up state files for the EKS cluster before the upgrade
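One way to take the backup, assuming the eks module has its own working directory with a remote backend (directory name is an assumption):

```shell
# Pull the current remote state into a timestamped local backup file
cd eks
terraform state pull > "eks-backup-$(date +%Y%m%d-%H%M%S).tfstate"
```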
5. Ensure the OIDC provider URL and Service Account issuer URL are different before upgrading to EKS v1.30

   Before upgrading an EKS cluster to v1.30, verify that the OIDC provider URL used for IAM authentication is different from the Service Account issuer URL. If they are the same, disassociate the identity provider to avoid API server startup failures due to new validation in Kubernetes v1.30.

   By default both have the same value: an AWS-managed OIDC provider, which leads to version-update issues (the Kube API server failing).
   5.1. Get the current Service Account issuer URL for the EKS cluster

        This is generated by default when the EKS cluster is created.
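A sketch of this lookup with the AWS CLI; the cluster name here is a placeholder inferred from the role names later in this runbook:

```shell
# Print the issuer URL generated for the cluster at creation time
aws eks describe-cluster \
  --name eks-prod-341 \
  --query "cluster.identity.oidc.issuer" \
  --output text
```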
   5.2. List the IAM OIDC provider ARN for the cluster
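This can be sketched as:

```shell
# List all IAM OIDC providers in the account; match the one whose ARN suffix
# corresponds to the issuer URL host/path retrieved in the previous step
aws iam list-open-id-connect-providers
```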
   5.3. Back up the IAM OIDC provider ARN (optional)
   5.4. List the IAM roles using the OIDC provider

        For the prod cluster, the roles with OIDC usage are:

        eks-prod-341-alb-ingress
        eks-prod-341-efs-csi-driver
   5.5. List the identity provider config

        This should come up empty for now.
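A sketch of the check (cluster name is a placeholder):

```shell
# No identity provider config has been associated yet, so this
# should return an empty identityProviderConfigs list at this point
aws eks list-identity-provider-configs --cluster-name eks-prod-341
```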
   5.6. Delete the old IAM OIDC identity provider
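This can be sketched as below; the provider ARN is a placeholder and should be the one found in step 5.2:

```shell
# Delete the old AWS-managed OIDC provider from IAM
# (account id and issuer id below are placeholders)
aws iam delete-open-id-connect-provider \
  --open-id-connect-provider-arn arn:aws:iam::111111111111:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE
```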
   5.7. Create a new OIDC provider using AWS Cognito
        5.7.1. Create a user pool in AWS Cognito
        5.7.2. Create an app client for AWS Cognito using the pool ID from the previously created user pool
        5.7.3. Create an IAM OIDC provider using AWS Cognito
        5.7.4. Associate the Cognito OIDC provider with the EKS cluster
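The four sub-steps above can be sketched with the AWS CLI. The pool name, client name, region, cluster name, and the bracketed values are placeholders, not values from the original runbook:

```shell
# 5.7.1: create a Cognito user pool and note its id
aws cognito-idp create-user-pool --pool-name eks-oidc-pool \
  --query "UserPool.Id" --output text

# 5.7.2: create an app client in that pool, using the pool id from 5.7.1
aws cognito-idp create-user-pool-client \
  --user-pool-id us-east-1_AbCdEfGhI \
  --client-name eks-oidc-client \
  --query "UserPoolClient.ClientId" --output text

# 5.7.3: register the pool as an IAM OIDC provider
# (Cognito issuer URL format: https://cognito-idp.<region>.amazonaws.com/<user-pool-id>)
aws iam create-open-id-connect-provider \
  --url https://cognito-idp.us-east-1.amazonaws.com/us-east-1_AbCdEfGhI \
  --client-id-list <app-client-id> \
  --thumbprint-list <root-ca-thumbprint>

# 5.7.4: associate the Cognito provider with the EKS cluster
aws eks associate-identity-provider-config \
  --cluster-name eks-prod-341 \
  --oidc identityProviderConfigName=cognito-oidc,issuerUrl=https://cognito-idp.us-east-1.amazonaws.com/us-east-1_AbCdEfGhI,clientId=<app-client-id>
```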
   5.8. Remove the old OIDC provider from the relevant EKS state file

        Use terraform state list | grep "oidc_provider" to find the relevant state entries.
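This can be sketched as below; the state address shown is what the terraform-aws-modules/eks module typically uses, but confirm it against your own grep output:

```shell
cd eks
# Find the state entries for the old AWS-managed OIDC provider...
terraform state list | grep "oidc_provider"
# ...then remove each matching address from state (example address)
terraform state rm 'module.eks.aws_iam_openid_connect_provider.oidc_provider[0]'
```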
   5.9. Run terraform import to sync the manually created Cognito user pool into Terraform state

        Use the ARN to import if you face issues; correlate the resource name from the main.tf file.
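A sketch of the import; the resource address must match the block added in eks/main.tf, and the pool id is a placeholder from step 5.7.1:

```shell
cd eks
# Import the manually created user pool into state by its pool id
terraform import aws_cognito_user_pool.eks_oidc_pool us-east-1_AbCdEfGhI
```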
   5.10. Add the code blocks for the Cognito resources in eks/main.tf, outside the eks module
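The code blocks this step refers to are not preserved in this copy. A minimal sketch of what they would contain, with assumed resource and variable names:

```hcl
# Outside the eks module in eks/main.tf: manage the Cognito pool, its app
# client, and the identity provider association that now backs the cluster
resource "aws_cognito_user_pool" "eks_oidc_pool" {
  name = "eks-oidc-pool"
}

resource "aws_cognito_user_pool_client" "eks_oidc_client" {
  name         = "eks-oidc-client"
  user_pool_id = aws_cognito_user_pool.eks_oidc_pool.id
}

resource "aws_eks_identity_provider_config" "cognito" {
  cluster_name = module.eks.cluster_name

  oidc {
    identity_provider_config_name = "cognito-oidc"
    issuer_url                    = "https://cognito-idp.us-east-1.amazonaws.com/${aws_cognito_user_pool.eks_oidc_pool.id}"
    client_id                     = aws_cognito_user_pool_client.eks_oidc_client.id
  }
}
```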
   5.11. Import the existing Cognito-EKS association into Terraform
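A sketch of the import, assuming the association is modeled as an aws_eks_identity_provider_config resource; the import id format is "cluster name:identity provider config name" (names here are placeholders):

```shell
cd eks
# Import the association created manually in step 5.7.4
terraform import aws_eks_identity_provider_config.cognito eks-prod-341:cognito-oidc
```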
   5.12. Add configuration in eks/main.tf, inside the eks module, so that Terraform does not create IRSA roles and the provider by default
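Assuming the cluster uses the terraform-aws-modules/eks module, the relevant input is likely enable_irsa; this is a hedged sketch, not the runbook's original lines:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...existing inputs unchanged...

  # Do not let the module create the IAM OIDC provider (IRSA) itself;
  # it is now managed outside the module (Cognito-based, steps 5.7-5.11)
  enable_irsa = false
}
```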
   5.13. Do a terraform init, plan, and apply cycle for the eks module so the new outputs of the eks module are propagated

         cluster_oidc_issuer_url: AWS-based OIDC
         oidc_provider_arn: Cognito-based

         These should be different now.
   5.14. Do a terraform init, plan, and apply cycle for the eks-services module so the new outputs of the eks module are used for IAM role creation
6. Edit eks/variable.tf to change cluster_version for the EKS update from 1.29 to 1.30

   Ensure the version is in double quotes.
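A sketch of the variable edit (the variable's exact shape in eks/variable.tf may differ). The quotes matter: an unquoted 1.30 is parsed as the number 1.3:

```hcl
variable "cluster_version" {
  type    = string
  default = "1.30"
}
```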
7. Upgrade the AWS EKS cluster to 1.30

   The control plane upgrade takes roughly 8 minutes, plus another 10-20 minutes for the worker node update to EKS 1.30.
   7.1. Optionally drain nodes manually for an immediate version-update effect

        Easy way: double the desired node count in the node group, then bring it back to the original value.
   7.2. Check current updates to the AWS EKS cluster
   7.3. Check the cluster update status for each update ID
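The two checks above can be sketched with the AWS CLI (the cluster name and update id are placeholders):

```shell
# 7.2: list in-progress and past updates for the cluster
aws eks list-updates --name eks-prod-341

# 7.3: check the status of a specific update id returned above
aws eks describe-update \
  --name eks-prod-341 \
  --update-id <update-id> \
  --query "update.status"
```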
8. Remove the Cognito OIDC changes made to the eks/main.tf file, re-run terraform init, plan, and apply to recreate the old AWS-managed OIDC provider in the eks module, and then run the same cycle for the eks-services module so the new eks module outputs are propagated

   This basically reverts to the old AWS-managed OIDC provider; otherwise we face authentication issues for OIDC-dependent roles such as efs-csi, alb-ingress, and cluster-autoscaler.
9. After the above changes, verify that the old AWS-managed OIDC has been added as an OpenID Connect provider

   The old AWS-managed OIDC provider should now show up for the relevant cluster.