Deployment of an EKS Cluster with Worker Nodes in AWS

The workflow plans and configures the components needed to stand up an Amazon EKS cluster in the us-west-2 region. It begins with the IAM roles for the cluster and its worker nodes, then creates the Virtual Private Cloud (VPC), subnets, and security group the cluster requires. An EKS cluster named 'my-eks-cluster' is then created, and the deployment concludes with a managed node group that provides the worker nodes.

role_name = "MYEKSCLUSTERROLE2"
worker_role_name = "EKSWORKERNODEROLE2"
#instance_type = "t3.medium"
disk_size = 20  # in GB
min_size = number_of_worker_nodes
max_size = number_of_worker_nodes
desired_size = number_of_worker_nodes
ami_type = "AL2023_x86_64_STANDARD"
kubernetes_version = "1.32"
vpc_cidr = "10.0.0.0/16"
nodegroup_name = "mynodegroup"
  1. Plan and configuration for creating an EKS cluster and adding a worker node

    To create an EKS cluster and add a worker node, we need to follow these steps:

    1. **VPC and Subnets**: Create a VPC with public and private subnets across multiple availability zones in the us-west-2 region.
    2. **Security Groups**: Set up security groups to allow the necessary traffic for the EKS cluster and worker nodes.
    3. **IAM Roles**: Create IAM roles for the EKS cluster and worker nodes with the necessary permissions.
    4. **EKS Cluster**: Use the AWS Management Console, AWS CLI, or eksctl to create the EKS cluster.
    5. **Worker Nodes**: Configure and launch worker nodes using an Amazon EKS-optimized AMI.
    6. **Node Group**: Create a node group to manage the worker nodes.

    Once the plan and configuration are agreed upon, we can proceed with the implementation using tools like eksctl or the AWS CLI; a pre-flight sketch follows this list. Let me know if you need further details or if we should proceed with the implementation.
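    Before the implementation steps, it can help to confirm that the credentials and target region resolve correctly. The sketch below is not part of the generated runbook; it assumes the same AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY variables the later scripts read through the runbook's getEnvVar helper, using os.environ here so it can run standalone.

    import os
    import boto3

    # Build a session from the same credentials the later steps use
    session = boto3.Session(
        aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
        aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
        region_name='us-west-2'
    )

    # Confirm the credentials are valid and show which identity will create the resources
    identity = session.client('sts').get_caller_identity()
    print('Account:', identity['Account'], '| Caller ARN:', identity['Arn'])

    # Confirm the region has at least two Availability Zones for the planned subnets
    azs = session.client('ec2').describe_availability_zones(
        Filters=[{'Name': 'state', 'Values': ['available']}]
    )
    print('Available AZs:', sorted(az['ZoneName'] for az in azs['AvailabilityZones']))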

  2. Create a functional EKS cluster with a worker node in us-west-2

    2.1 Set up AWS CLI and configure it with the necessary credentials and region us-west-2

      This script sets up AWS CLI with the necessary credentials and configures the default region to us-west-2.

      import os

      # Set AWS credentials and default region
      aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
      aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')
      region = region  # 'region' is supplied as a runbook input parameter (us-west-2)

      # Configure AWS CLI
      os.system(f'aws configure set aws_access_key_id {aws_access_key_id}')
      os.system(f'aws configure set aws_secret_access_key {aws_secret_access_key}')
      os.system(f'aws configure set default.region {region}')

      print('AWS CLI configured with region:', region)
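      As an optional verification (assuming the AWS CLI is on PATH), the sketch below calls `aws sts get-caller-identity` to confirm the configuration written above can authenticate.

      import subprocess

      # Verify the CLI configuration by asking STS which identity it resolves to
      result = subprocess.run(
          ['aws', 'sts', 'get-caller-identity', '--output', 'json'],
          capture_output=True, text=True
      )
      if result.returncode == 0:
          print('AWS CLI is configured correctly:', result.stdout.strip())
      else:
          print('AWS CLI verification failed:', result.stderr.strip())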
    2.2 Create an IAM role with the necessary permissions for EKS and attach it to the EC2 instances

      This script creates an IAM role with the necessary permissions for EKS, attaches the AmazonEKSClusterPolicy and a custom inline policy, and adds the role to an instance profile so it can be attached to EC2 instances.

      import boto3
      import json

      # Define role and instance profile names
      # role_name = "MyEKSClusterRole"
      instance_profile_name = f"{role_name}-InstanceProfile"

      # Policy ARNs
      eks_cluster_policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"

      # Custom inline policy
      custom_policy_document = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor0",
                  "Effect": "Allow",
                  "Action": [
                      "iam:ListRoles",
                      "eks:*"
                  ],
                  "Resource": "*"
              }
          ]
      }

      # Updated trust policy for both EC2 and EKS
      trust_policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": [
                          "ec2.amazonaws.com",
                          "eks.amazonaws.com"
                      ]
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }

      # Initialize session and client
      session = boto3.Session(
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
      )
      iam_client = session.client('iam')

      # Create the IAM Role
      role_response = iam_client.create_role(
          RoleName=role_name,
          AssumeRolePolicyDocument=json.dumps(trust_policy),
          Description="EKS Cluster Role with EC2 and EKS trust"
      )
      role_arn = role_response['Role']['Arn']

      # Attach AWS managed policy
      iam_client.attach_role_policy(
          RoleName=role_name,
          PolicyArn=eks_cluster_policy_arn
      )

      # Attach custom inline policy
      iam_client.put_role_policy(
          RoleName=role_name,
          PolicyName="EKSCustomPolicy",
          PolicyDocument=json.dumps(custom_policy_document)
      )

      # Create instance profile if it doesn't exist
      try:
          iam_client.create_instance_profile(
              InstanceProfileName=instance_profile_name
          )
          print(f"Created instance profile: {instance_profile_name}")
      except iam_client.exceptions.EntityAlreadyExistsException:
          print(f"Instance profile {instance_profile_name} already exists.")

      # Add role to instance profile
      iam_client.add_role_to_instance_profile(
          InstanceProfileName=instance_profile_name,
          RoleName=role_name
      )

      print('Role ARN:', role_arn)
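      Because IAM changes are eventually consistent, an optional follow-up check like the sketch below (reusing iam_client and role_name from the script above) can confirm the role exists and show what is attached to it before the cluster step runs.

      # Wait until the role is visible, then list its attached and inline policies
      iam_client.get_waiter('role_exists').wait(RoleName=role_name)

      attached = iam_client.list_attached_role_policies(RoleName=role_name)
      print('Attached managed policies:',
            [p['PolicyName'] for p in attached['AttachedPolicies']])

      inline = iam_client.list_role_policies(RoleName=role_name)
      print('Inline policies:', inline['PolicyNames'])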
    2.3 Create AWS IAM Role for EKS Worker nodes

      This script creates an IAM role for EKS worker nodes, attaches the required managed policies, and adds the role to an instance profile.
      import boto3
      import json
      import time

      # Initialize session and IAM client
      session = boto3.Session(
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
      )
      iam_client = session.client('iam')

      # Set role and instance profile names
      #worker_role_name = "EKSWORKERNODEROLE"
      instance_profile_name = worker_role_name + "-InstanceProfile"

      # 1. Trust policy so EC2 instances can assume the role
      trust_policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "Service": [
                          "ec2.amazonaws.com",
                          "eks.amazonaws.com"
                      ]
                  },
                  "Action": "sts:AssumeRole"
              }
          ]
      }

      # 2. Create the IAM role
      try:
          role_response = iam_client.create_role(
              RoleName=worker_role_name,
              AssumeRolePolicyDocument=json.dumps(trust_policy),
              Description="EKS Worker Node Role"
          )
          print(f"Created role: {worker_role_name}")
      except iam_client.exceptions.EntityAlreadyExistsException:
          print(f"Role {worker_role_name} already exists.")
          role_response = iam_client.get_role(RoleName=worker_role_name)
      role_arn = role_response['Role']['Arn']

      # 3. Attach required managed policies
      managed_policies = [
          "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
          "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
          "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      ]
      for policy_arn in managed_policies:
          iam_client.attach_role_policy(RoleName=worker_role_name, PolicyArn=policy_arn)
          print(f"Attached policy: {policy_arn}")

      # 4. Create instance profile and attach role
      try:
          iam_client.create_instance_profile(InstanceProfileName=instance_profile_name)
          print(f"Created instance profile: {instance_profile_name}")
      except iam_client.exceptions.EntityAlreadyExistsException:
          print(f"Instance profile {instance_profile_name} already exists.")

      # Add role to instance profile (wait to ensure profile is ready)
      time.sleep(5)
      try:
          iam_client.add_role_to_instance_profile(
              InstanceProfileName=instance_profile_name,
              RoleName=worker_role_name
          )
          print("Added role to instance profile.")
      except iam_client.exceptions.LimitExceededException:
          print("Role already associated with instance profile.")

      # Final output
      print(f"Role ARN: {role_arn}")
      worker_role_arn = role_arn
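      A small optional check (reusing iam_client, worker_role_name, managed_policies, and instance_profile_name from above) can confirm that all three worker-node policies are attached and that the instance profile contains the role.

      # Confirm the managed policies required by EKS worker nodes are attached
      attached = iam_client.list_attached_role_policies(RoleName=worker_role_name)
      attached_arns = {p['PolicyArn'] for p in attached['AttachedPolicies']}
      for policy_arn in managed_policies:
          print(policy_arn, 'attached' if policy_arn in attached_arns else 'MISSING')

      # Confirm the instance profile contains the worker role
      profile = iam_client.get_instance_profile(InstanceProfileName=instance_profile_name)
      print('Roles in instance profile:',
            [r['RoleName'] for r in profile['InstanceProfile']['Roles']])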
  3. Create a VPC, subnets, and security groups required for the EKS cluster in the us-west-2 region

    The following sub-steps create a VPC, subnets, and a security group required for the EKS cluster in the us-west-2 region, ensuring no conflicts with existing resources.

    3.1 Create a VPC in the us-west-2 region for the EKS cluster

      This script checks for existing VPCs to avoid conflicts and creates a new VPC in the us-west-2 region if it doesn't already exist.

      import boto3

      # Initialize a session using Amazon EC2
      session = boto3.Session(
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
          region_name=region
      )

      # Create EC2 client
      ec2_client = session.client('ec2')

      # Check existing VPCs
      vpcs = ec2_client.describe_vpcs()
      vpc_id = None

      # Reuse if VPC already exists
      for vpc in vpcs['Vpcs']:
          if vpc['CidrBlock'] == vpc_cidr:
              vpc_id = vpc['VpcId']
              print(f"Using existing VPC: {vpc_id}")
              break

      # Otherwise, create new VPC
      if not vpc_id:
          if len(vpcs['Vpcs']) >= 5:
              raise Exception('VPC limit exceeded. Please delete unused VPCs or increase the limit.')
          vpc_response = ec2_client.create_vpc(
              CidrBlock=vpc_cidr,
              TagSpecifications=[{
                  'ResourceType': 'vpc',
                  'Tags': [{'Key': 'Name', 'Value': 'MyEKS-VPC'}]
              }]
          )
          vpc_id = vpc_response['Vpc']['VpcId']
          ec2_client.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={'Value': True})
          ec2_client.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={'Value': True})
          print(f"Created VPC: {vpc_id}")

      # Create Internet Gateway
      igw_response = ec2_client.create_internet_gateway()
      igw_id = igw_response['InternetGateway']['InternetGatewayId']
      ec2_client.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
      print(f"Attached Internet Gateway: {igw_id}")

      # Create route table and default route to IGW
      route_table = ec2_client.create_route_table(VpcId=vpc_id)
      rtb_id = route_table['RouteTable']['RouteTableId']
      ec2_client.create_route(
          RouteTableId=rtb_id,
          DestinationCidrBlock='0.0.0.0/0',
          GatewayId=igw_id
      )
      print(f"Created route table with IGW route: {rtb_id}")

      print('Final VPC ID:', vpc_id)
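      As an optional sanity check (reusing ec2_client and vpc_id from above), the sketch below confirms the DNS attributes that EKS relies on are enabled on the VPC.

      # EKS needs DNS resolution and DNS hostnames enabled on the cluster VPC
      dns_support = ec2_client.describe_vpc_attribute(VpcId=vpc_id, Attribute='enableDnsSupport')
      dns_hostnames = ec2_client.describe_vpc_attribute(VpcId=vpc_id, Attribute='enableDnsHostnames')
      print('enableDnsSupport:', dns_support['EnableDnsSupport']['Value'])
      print('enableDnsHostnames:', dns_hostnames['EnableDnsHostnames']['Value'])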
    3.2 Create subnets within the VPC in the us-west-2 region for the EKS cluster

      This script creates two public and two private subnets within the specified VPC in the us-west-2 region, picking non-conflicting /24 CIDR blocks dynamically and spreading the subnets across two availability zones.

      import boto3
      import json

      # Initialize a session using Amazon EC2
      session = boto3.Session(
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
          region_name=region
      )

      # Create EC2 client
      ec2_client = session.client('ec2')

      # Describe existing subnets to avoid conflicts
      existing_subnets = ec2_client.describe_subnets(Filters=[{'Name': 'vpc-id', 'Values': [vpc_id]}])
      existing_cidrs = [subnet['CidrBlock'] for subnet in existing_subnets['Subnets']]

      # Function to find a non-conflicting CIDR
      def find_non_conflicting_cidr(existing_cidrs, base_cidr, start_octet):
          for i in range(start_octet, 256):
              new_cidr = f"{base_cidr}.{i}.0/24"
              if new_cidr not in existing_cidrs:
                  return new_cidr
          raise Exception("Unable to find a non-conflicting CIDR")

      # Base CIDR for subnets
      base_cidr = "10.0"

      # Find CIDRs for 2 public and 2 private subnets
      public_subnet_cidrs, private_subnet_cidrs = [], []
      for i in range(50, 256):
          new_cidr = f"{base_cidr}.{i}.0/24"
          if new_cidr not in existing_cidrs:
              public_subnet_cidrs.append(new_cidr)
              if len(public_subnet_cidrs) == 2:
                  break
      for i in range(60, 256):
          new_cidr = f"{base_cidr}.{i}.0/24"
          if new_cidr not in existing_cidrs:
              private_subnet_cidrs.append(new_cidr)
              if len(private_subnet_cidrs) == 2:
                  break

      # Get available AZs in the region (first 2)
      az_response = ec2_client.describe_availability_zones(
          Filters=[{'Name': 'region-name', 'Values': [region]}, {'Name': 'state', 'Values': ['available']}]
      )
      available_azs = sorted([az['ZoneName'] for az in az_response['AvailabilityZones']])[:2]
      if len(available_azs) < 2:
          raise Exception("At least 2 Availability Zones are required.")

      # Create public subnets in distinct AZs
      public_subnet_ids = []
      for i, cidr in enumerate(public_subnet_cidrs):
          az = available_azs[i]
          subnet_response = ec2_client.create_subnet(
              VpcId=vpc_id,
              CidrBlock=cidr,
              AvailabilityZone=az
          )
          subnet_id = subnet_response['Subnet']['SubnetId']
          public_subnet_ids.append(subnet_id)
          # Enable auto-assign public IP on launch
          ec2_client.modify_subnet_attribute(
              SubnetId=subnet_id,
              MapPublicIpOnLaunch={'Value': True}
          )
          # Associate subnet with the route table connected to the Internet Gateway
          ec2_client.associate_route_table(
              SubnetId=subnet_id,
              RouteTableId=rtb_id
          )

      # Create private subnets in distinct AZs
      private_subnet_ids = []
      for i, cidr in enumerate(private_subnet_cidrs):
          az = available_azs[i]
          subnet_response = ec2_client.create_subnet(
              VpcId=vpc_id,
              CidrBlock=cidr,
              AvailabilityZone=az
          )
          private_subnet_ids.append(subnet_response['Subnet']['SubnetId'])

      # Output
      print('Public Subnet IDs:', json.dumps(public_subnet_ids, indent=4))
      print('Private Subnet IDs:', json.dumps(private_subnet_ids, indent=4))
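      EKS and its load-balancer integrations look for specific subnet tags. The optional sketch below (reusing ec2_client and the subnet ID lists from above, and assuming the cluster will be named 'my-eks-cluster') applies the commonly used tags.

      cluster_name = 'my-eks-cluster'  # assumed cluster name used later in this runbook

      # Tag public subnets for internet-facing load balancers
      ec2_client.create_tags(
          Resources=public_subnet_ids,
          Tags=[
              {'Key': 'kubernetes.io/role/elb', 'Value': '1'},
              {'Key': f'kubernetes.io/cluster/{cluster_name}', 'Value': 'shared'}
          ]
      )

      # Tag private subnets for internal load balancers
      ec2_client.create_tags(
          Resources=private_subnet_ids,
          Tags=[
              {'Key': 'kubernetes.io/role/internal-elb', 'Value': '1'},
              {'Key': f'kubernetes.io/cluster/{cluster_name}', 'Value': 'shared'}
          ]
      )
      print('Tagged subnets for EKS load balancer discovery')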
    3.3 Create security groups within the VPC in the us-west-2 region for the EKS cluster

      This script checks if the security group already exists within the specified VPC in the us-west-2 region for the EKS cluster. If it doesn't exist, it creates the security group and authorizes inbound traffic.

      import boto3

      # Initialize a session using Amazon EC2
      session = boto3.Session(
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
          region_name=region
      )

      # Create EC2 client
      ec2_client = session.client('ec2')

      # Check if the security group already exists
      existing_sgs = ec2_client.describe_security_groups(Filters=[{'Name': 'vpc-id', 'Values': [vpc_id]}])
      security_group_id = None
      for sg in existing_sgs['SecurityGroups']:
          if sg['GroupName'] == 'EKS-Security-Group':
              security_group_id = sg['GroupId']
              break

      # Create security group if it doesn't exist
      if not security_group_id:
          security_group_response = ec2_client.create_security_group(
              GroupName='EKS-Security-Group',
              Description='Security group for EKS cluster',
              VpcId=vpc_id
          )
          security_group_id = security_group_response['GroupId']

          # Authorize inbound traffic for security group
          ec2_client.authorize_security_group_ingress(
              GroupId=security_group_id,
              IpPermissions=[
                  {
                      'IpProtocol': '-1',
                      'IpRanges': [{'CidrIp': '0.0.0.0/0'}]
                  }
              ]
          )

      print('Security Group ID:', security_group_id)
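      The rule above opens all traffic from 0.0.0.0/0, which is convenient for a demo but very broad. An optional tightening sketch (reusing ec2_client, security_group_id, and vpc_cidr from above) replaces it with an all-traffic rule scoped to the VPC CIDR, so only intra-VPC traffic is allowed; adjust to your own requirements.

      # Remove the wide-open rule and allow all traffic only from within the VPC
      ec2_client.revoke_security_group_ingress(
          GroupId=security_group_id,
          IpPermissions=[{'IpProtocol': '-1', 'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}]
      )
      ec2_client.authorize_security_group_ingress(
          GroupId=security_group_id,
          IpPermissions=[{'IpProtocol': '-1', 'IpRanges': [{'CidrIp': vpc_cidr}]}]
      )
      print('Restricted EKS-Security-Group ingress to', vpc_cidr)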
  4. Create the EKS cluster 'my-eks-cluster' in us-west-2 using the pre-configured VPC, security group, and IAM role, with Kubernetes version 1.32

    This script creates the EKS cluster 'my-eks-cluster' in the us-west-2 region with both public and private endpoint access enabled, waits for the cluster to become ACTIVE, and installs the core add-ons (vpc-cni, kube-proxy, and coredns).

    import boto3
    import json
    import time

    def create_eks_cluster_with_addons():
        eks_client = boto3.client(
            'eks',
            region_name=region,
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        # Step 1: Create the EKS cluster
        print("Creating EKS cluster...")
        # This EKS cluster allows both public and private endpoint access, as shown in the configuration below
        response = eks_client.create_cluster(
            name=eks_cluster_name,
            version=kubernetes_version,
            roleArn=role_arn,
            resourcesVpcConfig={
                'subnetIds': public_subnet_ids,
                'securityGroupIds': [security_group_id],
                'endpointPublicAccess': True,
                'endpointPrivateAccess': True
            },
            kubernetesNetworkConfig={
                'serviceIpv4Cidr': '10.102.0.0/16'  # <-- Specify the service CIDR here
            }
        )
        cluster_arn = response['cluster']['arn']
        print(f"Cluster ARN: {cluster_arn}")

        # Step 2: Wait for the cluster to become active
        print("Waiting for EKS cluster to become ACTIVE...")
        waiter = eks_client.get_waiter('cluster_active')
        waiter.wait(name=eks_cluster_name)
        print("EKS cluster is ACTIVE.")

        # Step 3: Install essential add-ons
        addons = ['vpc-cni', 'kube-proxy', 'coredns']
        addons_status = {}
        for addon in addons:
            try:
                print(f"Installing add-on: {addon}")
                response = eks_client.create_addon(
                    clusterName=eks_cluster_name,
                    addonName=addon
                )
                addons_status[addon] = 'Created'
            except Exception as e:
                addons_status[addon] = f'Error: {str(e)}'

        print("Add-on status:")
        print(json.dumps(addons_status, indent=4, default=str))

        return cluster_arn, addons_status

    # Run the combined setup
    cluster_arn, addons_status = create_eks_cluster_with_addons()
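    Once the cluster is ACTIVE, kubectl access is typically configured with `aws eks update-kubeconfig`. A minimal sketch, assuming the AWS CLI and kubectl are installed and reusing eks_cluster_name and region from above:

    import subprocess

    # Write/merge a kubeconfig entry for the new cluster
    subprocess.run(
        ['aws', 'eks', 'update-kubeconfig', '--region', region, '--name', eks_cluster_name],
        check=True
    )

    # Basic connectivity check; nodes only appear after the node group in the next step is created
    subprocess.run(['kubectl', 'get', 'nodes'], check=False)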
  5. Create a managed node group of worker nodes for EKS cluster 'my-eks-cluster' in us-west-2

    This script creates a managed node group for the EKS cluster using the worker node IAM role and the configured instance type, AMI type, disk size, and scaling settings, and prints the resulting node group status.

    import boto3
    import json

    def create_managed_node_group(cluster_name, nodegroup_name, instance_type, ami_type, disk_size,
                                  min_size, max_size, desired_size, subnet_ids, worker_role_arn, region):
        eks_client = boto3.client(
            'eks',
            region_name=region,
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )
        try:
            response = eks_client.create_nodegroup(
                clusterName=cluster_name,
                nodegroupName=nodegroup_name,
                scalingConfig={
                    'minSize': min_size,
                    'maxSize': max_size,
                    'desiredSize': desired_size
                },
                diskSize=disk_size,
                subnets=subnet_ids,
                instanceTypes=[instance_type],
                amiType=ami_type,
                nodeRole=worker_role_arn
            )
            nodegroup_status = response['nodegroup']['status']
            print(f"Nodegroup Status: {nodegroup_status}")
            return nodegroup_status
        except Exception as e:
            print(f"Error: {str(e)}")
            return f"Error: {str(e)}"

    # subnet_ids is not set earlier in this runbook; the public subnets from step 3.2 are assumed here
    subnet_ids = public_subnet_ids

    nodegroup_status = create_managed_node_group(
        eks_cluster_name, nodegroup_name, instance_type, ami_type, disk_size,
        min_size, max_size, desired_size, subnet_ids, worker_role_arn, region
    )
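    create_nodegroup returns while the node group is still CREATING. An optional follow-up sketch (reusing eks_cluster_name, nodegroup_name, region, and the runbook's getEnvVar helper) waits for it to become ACTIVE and prints the final status.

    import boto3

    eks_client = boto3.client(
        'eks',
        region_name=region,
        aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
        aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
    )

    # Block until the managed node group is ACTIVE (or the waiter times out)
    eks_client.get_waiter('nodegroup_active').wait(
        clusterName=eks_cluster_name,
        nodegroupName=nodegroup_name
    )

    nodegroup = eks_client.describe_nodegroup(
        clusterName=eks_cluster_name,
        nodegroupName=nodegroup_name
    )['nodegroup']
    print('Node group status:', nodegroup['status'])
    print('Desired size:', nodegroup['scalingConfig']['desiredSize'])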