
Comprehensive AWS Security and Compliance Evaluation Workflow (SOC2 Super Runbook)


This workflow runs a broad evaluation of AWS services and configurations against security best practices. It checks Amazon S3 buckets for public read access and encryption compliance, audits AWS IAM user credentials and access keys, flags overly permissive IAM policy statements, and evaluates the account password policy. It also verifies AWS CloudTrail configurations and VPC flow logs, and audits security groups for open SSH ports and unrestricted inbound traffic. The goal is a repeatable, SOC2-oriented compliance sweep of the AWS environment.

Parameters:

  days_threshold = 90
  maxAccessKeyAge = 90
  required_minimum_password_length = 8
  require_symbols = True
  require_numbers = True
  require_uppercase = True
  require_lowercase = True
  allow_users_to_change_password = True
  regions = ["us-east-2"]
  region_name = "us-east-2"
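The password-policy parameters above are not exercised by any step shown later in this runbook. As a hedged sketch (the `REQUIRED_PASSWORD_POLICY` map and function name are illustrative, not part of the original workflow), they could be compared against the account's actual policy, retrieved with IAM's `get_account_password_policy`:

```python
# Hypothetical helper: compare a retrieved IAM password policy against the
# required settings above. Key names match the PasswordPolicy structure
# returned by boto3's iam.get_account_password_policy().
REQUIRED_PASSWORD_POLICY = {
    'MinimumPasswordLength': 8,
    'RequireSymbols': True,
    'RequireNumbers': True,
    'RequireUppercaseCharacters': True,
    'RequireLowercaseCharacters': True,
    'AllowUsersToChangePassword': True,
}

def evaluate_password_policy(policy):
    """Return (setting, expected, actual) tuples for every failing setting."""
    failures = []
    for key, expected in REQUIRED_PASSWORD_POLICY.items():
        actual = policy.get(key)
        if key == 'MinimumPasswordLength':
            # The length requirement is a minimum, not an exact match
            ok = isinstance(actual, int) and actual >= expected
        else:
            ok = actual == expected
        if not ok:
            failures.append((key, expected, actual))
    return failures
```

In the runbook this would be fed with `iam_client.get_account_password_policy()['PasswordPolicy']`; an empty result means every required setting is satisfied.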
  1. Evaluation of Amazon S3 Buckets for Public Read Access Compliance

    The workflow involves identifying Amazon S3 buckets that permit public read access. This is achieved by assessing the Block Public Access settings, bucket policies, and Access Control Lists (ACLs). Each bucket is then flagged as either NON_COMPLIANT or COMPLIANT based on the evaluation. The process ensures that only authorized access is allowed, enhancing the security of the stored data. This compliance check is crucial for maintaining data privacy and adhering to security best practices.

    1.1 List all Amazon S3 buckets in the region us-east-2.

      This script lists the S3 buckets in the account. Note that the ListBuckets call is account-wide; the region only selects the client endpoint.

      import boto3
      import json

      def list_s3_buckets(region_name):
          # Note: ListBuckets is account-wide; region_name only sets the client endpoint
          s3_client = boto3.client('s3', region_name=region_name,
                                   aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                                   aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'))
          response = s3_client.list_buckets()
          bucket_names = [bucket['Name'] for bucket in response['Buckets']]
          print(json.dumps(bucket_names, indent=4, default=str))
          return bucket_names

      bucket_names = list_s3_buckets(region_name)
    1.2 Evaluate Block Public Access settings for each S3 bucket in the region us-east-2.

      This script evaluates Block Public Access settings for each S3 bucket in the specified region and flags them as NON_COMPLIANT or COMPLIANT.

      import boto3
      import json

      def evaluate_bucket_public_access(bucket_names, region_name):
          s3_client = boto3.client('s3', region_name=region_name,
                                   aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                                   aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'))
          compliance_status = {}
          for bucket_name in bucket_names:
              try:
                  # Check the bucket's Block Public Access configuration
                  # (the original draft called get_bucket_policy_status, which
                  # inspects the bucket policy rather than these settings)
                  config = s3_client.get_public_access_block(Bucket=bucket_name)['PublicAccessBlockConfiguration']
                  all_blocked = all([
                      config.get('BlockPublicAcls', False),
                      config.get('IgnorePublicAcls', False),
                      config.get('BlockPublicPolicy', False),
                      config.get('RestrictPublicBuckets', False),
                  ])
                  compliance_status[bucket_name] = 'COMPLIANT' if all_blocked else 'NON_COMPLIANT'
              except s3_client.exceptions.ClientError as e:
                  error_code = e.response['Error']['Code']
                  if error_code == 'NoSuchPublicAccessBlockConfiguration':
                      # No Block Public Access configuration exists at all
                      compliance_status[bucket_name] = 'NON_COMPLIANT'
                  else:
                      compliance_status[bucket_name] = f'ERROR: {str(e)}'
          print(json.dumps(compliance_status, indent=4, default=str))
          return compliance_status

      bucket_compliance_status = evaluate_bucket_public_access(bucket_names, region_name)
    1.3 Check bucket policies for public read access for each S3 bucket in the region us-east-2.

      This script checks bucket policies for public read access for each S3 bucket in the specified region and flags them as NON_COMPLIANT or COMPLIANT.

      import boto3
      import json

      def check_bucket_policies(bucket_names, region_name):
          s3_client = boto3.client('s3', region_name=region_name,
                                   aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                                   aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'))
          policy_compliance_status = {}
          for bucket_name in bucket_names:
              try:
                  policy = s3_client.get_bucket_policy(Bucket=bucket_name)
                  policy_document = json.loads(policy['Policy'])
                  # Look for statements that allow public reads
                  is_public = False
                  for statement in policy_document.get('Statement', []):
                      if statement.get('Effect') == 'Allow':
                          principal = statement.get('Principal')
                          if principal == '*' or principal == {'AWS': '*'}:
                              actions = statement.get('Action', [])
                              if isinstance(actions, str):
                                  actions = [actions]
                              if 's3:GetObject' in actions or 's3:*' in actions:
                                  is_public = True
                                  break
                  policy_compliance_status[bucket_name] = 'NON_COMPLIANT' if is_public else 'COMPLIANT'
              except s3_client.exceptions.ClientError as e:
                  error_code = e.response['Error']['Code']
                  if error_code == 'NoSuchBucketPolicy':
                      # No bucket policy means no policy-based public access
                      policy_compliance_status[bucket_name] = 'COMPLIANT'
                  else:
                      policy_compliance_status[bucket_name] = f'ERROR: {str(e)}'
          print(json.dumps(policy_compliance_status, indent=4, default=str))
          return policy_compliance_status

      bucket_policy_compliance_status = check_bucket_policies(bucket_names, region_name)
    1.4 Check ACLs for public read access for each S3 bucket in the region us-east-2.

      This script checks ACLs for public read access for each S3 bucket in the specified region and flags them as NON_COMPLIANT or COMPLIANT.

      import boto3
      import json

      def check_bucket_acls(bucket_names, region_name):
          s3_client = boto3.client('s3', region_name=region_name,
                                   aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                                   aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'))
          acl_compliance_status = {}
          for bucket_name in bucket_names:
              try:
                  acl = s3_client.get_bucket_acl(Bucket=bucket_name)
                  # Look for READ grants to the AllUsers group
                  is_public = False
                  for grant in acl['Grants']:
                      grantee = grant.get('Grantee', {})
                      if grantee.get('Type') == 'Group' and 'AllUsers' in grantee.get('URI', ''):
                          if 'READ' in grant.get('Permission', ''):
                              is_public = True
                              break
                  acl_compliance_status[bucket_name] = 'NON_COMPLIANT' if is_public else 'COMPLIANT'
              except s3_client.exceptions.ClientError as e:
                  acl_compliance_status[bucket_name] = f'ERROR: {str(e)}'
          print(json.dumps(acl_compliance_status, indent=4, default=str))
          return acl_compliance_status

      bucket_acl_compliance_status = check_bucket_acls(bucket_names, region_name)
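Steps 1.2 through 1.4 each produce their own per-bucket verdict, while the section intro says each bucket is flagged once overall. A hedged sketch of that final aggregation (the function name is illustrative and not part of the original workflow):

```python
# Hypothetical aggregation: a bucket is NON_COMPLIANT overall if any individual
# check (Block Public Access, bucket policy, ACL) found it non-compliant or
# errored; otherwise it is COMPLIANT.
def combine_compliance(*status_dicts):
    combined = {}
    for status_dict in status_dicts:
        for bucket, status in status_dict.items():
            if status != 'COMPLIANT':
                # Treat both NON_COMPLIANT and ERROR results as failures
                combined[bucket] = 'NON_COMPLIANT'
            else:
                combined.setdefault(bucket, 'COMPLIANT')
    return combined
```

In this section it would be called as `combine_compliance(bucket_compliance_status, bucket_policy_compliance_status, bucket_acl_compliance_status)`.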
  2. Compliance Check for S3 Bucket Encryption

    The workflow involves identifying Amazon S3 buckets that either do not have default encryption enabled or lack a policy explicitly denying unencrypted put-object requests. These buckets are then flagged as NON_COMPLIANT. This process ensures that all S3 buckets adhere to security best practices by enforcing encryption standards. By flagging non-compliant buckets, the workflow helps maintain data security and compliance within the cloud environment. This proactive approach aids in mitigating potential data breaches and unauthorized access.

    2.1 Identify S3 buckets without default encryption or without a policy explicitly denying unencrypted put-object requests.

      2.1.1 List all Amazon S3 buckets in the AWS account.

        This script lists all S3 buckets in the AWS account.

        import boto3

        # Initialize boto3 client for S3
        s3_client = boto3.client('s3',
                                 aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                                 aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
                                 region_name='us-east-2')

        # List all S3 buckets
        buckets = s3_client.list_buckets()['Buckets']

        # Extract bucket names
        bucket_names = [bucket['Name'] for bucket in buckets]
        print("Bucket names:", bucket_names)
      2.1.2 Check each S3 bucket for default encryption settings and identify buckets without default encryption enabled.

        This script checks each S3 bucket for default encryption settings and identifies buckets without default encryption enabled.

        import boto3

        # Initialize boto3 client for S3
        s3_client = boto3.client('s3',
                                 aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                                 aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
                                 region_name='us-east-2')

        non_compliant_buckets = []

        for bucket_name in bucket_names:
            try:
                # Check if default encryption is enabled
                encryption = s3_client.get_bucket_encryption(Bucket=bucket_name)
                rules = encryption['ServerSideEncryptionConfiguration']['Rules']
                if not rules:
                    non_compliant_buckets.append(bucket_name)
            except s3_client.exceptions.ClientError as e:
                # The bucket has no server-side encryption configuration
                if e.response['Error']['Code'] == 'ServerSideEncryptionConfigurationNotFoundError':
                    non_compliant_buckets.append(bucket_name)

        print("Non-compliant buckets:", non_compliant_buckets)
      2.1.3 Check each S3 bucket for a policy explicitly denying unencrypted put-object requests and identify buckets lacking such a policy.

        This script checks each S3 bucket for a policy explicitly denying unencrypted put-object requests and identifies buckets lacking such a policy.

        import boto3
        import json

        # Initialize boto3 client for S3
        s3_client = boto3.client('s3',
                                 aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                                 aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
                                 region_name='us-east-2')

        buckets_lacking_policy = []

        for bucket_name in bucket_names:
            try:
                policy = s3_client.get_bucket_policy(Bucket=bucket_name)
                policy_statements = json.loads(policy['Policy'])['Statement']
                # Look for a Deny statement on s3:PutObject conditioned on the
                # s3:x-amz-server-side-encryption key, i.e. one that denies
                # unencrypted puts. (The original draft tested aws:SecureTransport,
                # which denies non-TLS transport, not unencrypted objects.)
                policy_found = False
                for statement in policy_statements:
                    if statement.get('Effect') != 'Deny':
                        continue
                    actions = statement.get('Action', [])
                    if isinstance(actions, str):
                        actions = [actions]
                    if 's3:PutObject' not in actions and 's3:*' not in actions:
                        continue
                    conditions = json.dumps(statement.get('Condition', {}))
                    if 's3:x-amz-server-side-encryption' in conditions:
                        policy_found = True
                        break
                if not policy_found:
                    buckets_lacking_policy.append(bucket_name)
            except s3_client.exceptions.ClientError as e:
                # A bucket without any policy cannot deny unencrypted puts
                if e.response['Error']['Code'] == 'NoSuchBucketPolicy':
                    buckets_lacking_policy.append(bucket_name)

        print("Buckets lacking a policy denying unencrypted put-object requests:", buckets_lacking_policy)
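The section intro flags a bucket NON_COMPLIANT when it appears in either list produced by steps 2.1.2 and 2.1.3. A hedged sketch of that final flagging step (the function name is illustrative):

```python
# Hypothetical flagging step: a bucket is NON_COMPLIANT if it lacks default
# encryption OR lacks a policy denying unencrypted put-object requests.
def flag_encryption_compliance(all_buckets, missing_default_encryption, lacking_deny_policy):
    flagged = {}
    for bucket in all_buckets:
        if bucket in missing_default_encryption or bucket in lacking_deny_policy:
            flagged[bucket] = 'NON_COMPLIANT'
        else:
            flagged[bucket] = 'COMPLIANT'
    return flagged
```

Here it would be called as `flag_encryption_compliance(bucket_names, non_compliant_buckets, buckets_lacking_policy)`.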
  3. Audit of AWS S3 Buckets for Server Access Logging

    The workflow involves checking AWS S3 buckets to determine if Server Access Logging is enabled. The results are organized by region, highlighting the number of buckets lacking this feature. This process helps in identifying potential security and compliance gaps. By tabulating the data, it provides a clear overview of the current logging status across different regions. The outcome aids in prioritizing actions to enable logging where necessary.

    3.1 List all AWS S3 buckets across all regions.

      The script lists all AWS S3 buckets using the provided AWS credentials.

      import boto3

      # Retrieve AWS credentials
      aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
      aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')

      # Initialize a session using Boto3
      session = boto3.Session(
          aws_access_key_id=aws_access_key_id,
          aws_secret_access_key=aws_secret_access_key
      )

      # Create an S3 client
      s3_client = session.client('s3')

      # List all buckets
      try:
          response = s3_client.list_buckets()
          buckets_list = [bucket['Name'] for bucket in response['Buckets']]
          print("Buckets List:", buckets_list)
      except Exception as e:
          print(f"Error listing buckets: {e}")
    3.2 Check each S3 bucket to determine if Server Access Logging is enabled.

      The script checks each S3 bucket to determine if Server Access Logging is enabled and outputs the status.

      import boto3
      import json

      # Retrieve AWS credentials
      aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
      aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')

      # Initialize a session using Boto3
      session = boto3.Session(
          aws_access_key_id=aws_access_key_id,
          aws_secret_access_key=aws_secret_access_key
      )

      # Create an S3 client
      s3_client = session.client('s3')

      logging_status = {}

      # Check each bucket for server access logging
      for bucket in buckets_list:
          try:
              response = s3_client.get_bucket_logging(Bucket=bucket)
              logging_status[bucket] = 'Enabled' if 'LoggingEnabled' in response else 'Not Enabled'
          except Exception as e:
              logging_status[bucket] = f'Error: {str(e)}'

      # Print the logging status
      print("Logging Status:", json.dumps(logging_status, indent=4, default=str))
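Step 3.2 prints a flat status map, while the section intro promises results organized by region with per-region counts. A hedged sketch of that grouping (the helper and its injected location lookup are illustrative):

```python
from collections import defaultdict

# Hypothetical grouping step: count buckets without server access logging per
# region. get_location is injected as a callable (bucket name -> region) so the
# AWS lookup can be stubbed; in the runbook it would wrap get_bucket_location.
def group_disabled_by_region(logging_status, get_location):
    by_region = defaultdict(list)
    for bucket, status in logging_status.items():
        if status == 'Not Enabled':
            by_region[get_location(bucket)].append(bucket)
    return {region: {'count': len(names), 'buckets': names}
            for region, names in by_region.items()}
```

Here it would be called as `group_disabled_by_region(logging_status, lambda b: s3_client.get_bucket_location(Bucket=b)['LocationConstraint'] or 'us-east-1')`, since `get_bucket_location` returns `None` for us-east-1.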
  4. Audit of AWS S3 Buckets for Public Write Access

    The workflow involves identifying AWS S3 buckets that do not have public write access restrictions in place. This process includes listing each bucket along with its respective region. The goal is to ensure that all S3 buckets are secure and not vulnerable to unauthorized public write access. By auditing these settings, the workflow helps maintain data integrity and security within the AWS environment.

    4.1 List the S3 buckets that do not prohibit public write access, including their region.

      This script lists AWS S3 buckets with public write access, grouped by region.

      import boto3
      import json
      from botocore.exceptions import ClientError

      aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
      aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')

      # Initialize S3 client
      s3_client = boto3.client('s3',
                               aws_access_key_id=aws_access_key_id,
                               aws_secret_access_key=aws_secret_access_key)

      # Get the list of all buckets
      buckets = s3_client.list_buckets()['Buckets']

      buckets_with_public_write_access = {}

      # Check each bucket's ACL
      for bucket in buckets:
          bucket_name = bucket['Name']
          try:
              # Get bucket location (None means us-east-1)
              location = s3_client.get_bucket_location(Bucket=bucket_name)['LocationConstraint']
              if location is None:
                  location = 'us-east-1'
              # Get bucket ACL and look for WRITE grants to the AllUsers group
              acl = s3_client.get_bucket_acl(Bucket=bucket_name)
              for grant in acl['Grants']:
                  grantee = grant['Grantee']
                  permission = grant['Permission']
                  if grantee.get('URI') == 'http://acs.amazonaws.com/groups/global/AllUsers' and permission == 'WRITE':
                      buckets_with_public_write_access.setdefault(location, []).append(bucket_name)
                      break
          except ClientError as e:
              print(f"Error checking bucket {bucket_name}: {e}")

      print("Buckets with public write access:")
      print(json.dumps(buckets_with_public_write_access, indent=4, default=str))
  5. Audit of AWS IAM Users for MFA Compliance

    The workflow involves listing AWS IAM users who have console passwords and checking if they have Multi-Factor Authentication (MFA) enabled. Users are then categorized based on whether MFA is enabled or not. The categorization helps in identifying users who are compliant with the security rule of having MFA enabled. This process ensures that all users with console access are adhering to security best practices. The outcome is a clear understanding of the current compliance status regarding MFA among IAM users.

    5.1 This script lists AWS IAM users with console passwords and checks if they have MFA enabled, categorizing them by compliance.

      import boto3
      import json

      # Initialize boto3 client for IAM (IAM is a global service, so no region
      # is needed here)
      client = boto3.client(
          'iam',
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
      )

      # Get all IAM users
      users = client.list_users()['Users']

      users_with_mfa_status = {}

      for user in users:
          username = user['UserName']
          # Skip users without a console password
          try:
              client.get_login_profile(UserName=username)
          except client.exceptions.NoSuchEntityException:
              continue
          # Get MFA devices for the user
          mfa_devices = client.list_mfa_devices(UserName=username)['MFADevices']
          mfa_enabled = len(mfa_devices) > 0
          users_with_mfa_status[username] = {
              'MFAEnabled': mfa_enabled,
              'ComplianceStatus': 'Compliant' if mfa_enabled else 'Non-Compliant'
          }

      # Print the categorized users
      print(json.dumps(users_with_mfa_status, indent=4, default=str))
  6. AWS Account Compliance Check for Root User Access Key

    This workflow involves verifying the compliance of an AWS account by checking for the existence of access keys associated with the root user. The process ensures that security best practices are followed by identifying any potential security risks related to root user access keys. By conducting this check, the workflow aims to enhance the overall security posture of the AWS account. It helps in maintaining compliance with organizational policies and industry standards. The outcome of this workflow is a report or alert indicating whether the AWS account is compliant or requires further action.

    6.1 Check AWS account compliance based on root user access key existence.

      Checks if the AWS account is compliant based on the existence of root user access keys.

      6.1.1 Check if the root user access key exists in the AWS account.

        Checks if the AWS account is compliant based on the existence of root user access keys using account summary.

        import boto3

        # Create a session using the AWS credentials
        session = boto3.Session(
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        # Create an IAM client
        iam_client = session.client('iam')

        # Get the account summary to check if root access keys exist
        response = iam_client.get_account_summary()

        # Check the number of root access keys
        root_access_keys_count = response['SummaryMap'].get('AccountAccessKeysPresent', 0)

        # Determine compliance status
        if root_access_keys_count == 0:
            compliance_status = 'COMPLIANT'
        else:
            compliance_status = 'NON_COMPLIANT'

        # Print the compliance status
        print(f"compliance_status: {compliance_status}")
  7. Audit of AWS IAM User Credential Activity

    The workflow involves evaluating all AWS IAM users to identify any with passwords or active access keys that have not been used within a specified number of days, defaulting to 90 days. If any user credentials are found to be inactive beyond this threshold, they are marked as NON_COMPLIANT. The results of this evaluation are then tabulated for further analysis. This process ensures that only active and necessary credentials are maintained, enhancing security by identifying and addressing potential vulnerabilities.

    7.1 Evaluate AWS IAM users for inactive credentials and tabulate the results.

      7.1.1 List all AWS IAM users and retrieve the last used date for their passwords and access keys.

        Lists all AWS IAM users and retrieves their last used date for passwords and access keys, handling timezone differences.

        import boto3
        from datetime import datetime, timezone
        import json

        # Initialize boto3 client for IAM
        client = boto3.client(
            'iam',
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
            region_name='us-east-2'
        )

        # Get all IAM users
        users = client.list_users()['Users']

        users_last_used_info = []

        for user in users:
            user_name = user['UserName']
            password_last_used = user.get('PasswordLastUsed')
            if password_last_used:
                password_last_used = password_last_used.replace(tzinfo=timezone.utc)

            # Track the most recent use across ALL of the user's access keys
            # (the original draft kept only the last key inspected)
            last_used_date = None
            access_keys = client.list_access_keys(UserName=user_name)['AccessKeyMetadata']
            for access_key in access_keys:
                last_used_info = client.get_access_key_last_used(AccessKeyId=access_key['AccessKeyId'])
                key_last_used = last_used_info['AccessKeyLastUsed'].get('LastUsedDate')
                if key_last_used:
                    key_last_used = key_last_used.replace(tzinfo=timezone.utc)
                    if last_used_date is None or key_last_used > last_used_date:
                        last_used_date = key_last_used

            users_last_used_info.append({
                'UserName': user_name,
                'PasswordLastUsed': str(password_last_used) if password_last_used else "Never",
                'AccessKeyLastUsed': str(last_used_date) if last_used_date else "Never"
            })

        print(json.dumps(users_last_used_info, indent=4, default=str))
      7.1.2 Identify AWS IAM users with passwords or access keys that have not been used in the last 90 days.

        Identifies AWS IAM users with passwords or access keys not used in the last 90 days and lists them as non-compliant.

        from datetime import datetime, timedelta, timezone
        import json

        # Calculate the threshold date
        threshold_date = datetime.now(timezone.utc) - timedelta(days=days_threshold)

        non_compliant_users = []

        for user in users_last_used_info:
            password_last_used = user['PasswordLastUsed']
            access_key_last_used = user['AccessKeyLastUsed']

            # Check password last used
            if password_last_used != "Never":
                password_last_used_date = datetime.fromisoformat(password_last_used)
                if password_last_used_date < threshold_date:
                    non_compliant_users.append(user['UserName'])
                    continue

            # Check access key last used
            if access_key_last_used != "Never":
                access_key_last_used_date = datetime.fromisoformat(access_key_last_used)
                if access_key_last_used_date < threshold_date:
                    non_compliant_users.append(user['UserName'])

        print(json.dumps(non_compliant_users, indent=4))
      7.1.3 Determine compliance status based on AWS IAM credential usage, marking NON_COMPLIANT if any credentials are inactive beyond 90 days.

        Determines compliance status based on AWS IAM user credentials usage, marking as NON_COMPLIANT if any credentials are inactive beyond 90 days.

        compliance_status = "COMPLIANT" if not non_compliant_users else "NON_COMPLIANT"
        print(f"Compliance Status: {compliance_status}")
      7.1.4 Tabulate the results of the compliance evaluation for AWS IAM users.

        Tabulates the compliance evaluation results for AWS IAM users, marking non-compliant users.

        table = context.newtable()
        table.num_rows = len(non_compliant_users) + 1  # Including header row
        table.num_cols = 2
        table.title = "AWS IAM Users Compliance Evaluation"
        table.has_header_row = True

        # Header row
        table.setval(0, 0, "UserName")
        table.setval(0, 1, "Compliance Status")

        # One row per non-compliant user
        for idx, user in enumerate(non_compliant_users, start=1):
            table.setval(idx, 0, user)
            table.setval(idx, 1, "NON_COMPLIANT")

        print("Compliance evaluation results have been tabulated successfully.")
  8. AWS IAM Access Key Compliance Evaluation

    This workflow involves assessing all active AWS IAM access keys to ensure they have been rotated within a specified period, typically 90 days. The process identifies any keys that have not been rotated within this timeframe and flags them as NON_COMPLIANT. The results of this evaluation are then tabulated for further analysis. This helps maintain security by ensuring that access keys are regularly updated to prevent unauthorized access.

    8.1 Evaluate IAM access keys for compliance with the rotation policy and tabulate the results.

      8.1.1 Retrieve a list of all active AWS IAM access keys.

        Retrieves and prints a list of all active AWS IAM access keys.

        import boto3
        import json

        # Initialize boto3 client for IAM
        client = boto3.client(
            'iam',
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        # Get all users
        users = client.list_users()['Users']

        # List to store active access keys
        active_access_keys = []

        # Check each user's access keys
        for user in users:
            user_name = user['UserName']
            access_keys = client.list_access_keys(UserName=user_name)['AccessKeyMetadata']
            for access_key in access_keys:
                if access_key['Status'] == 'Active':
                    active_access_keys.append({
                        'UserName': user_name,
                        'AccessKeyId': access_key['AccessKeyId'],
                        'CreateDate': access_key['CreateDate']
                    })

        # Print the list of active access keys
        print(json.dumps(active_access_keys, indent=4, default=str))
      8.1.2 For each active AWS IAM access key, determine the last rotation date.

        Determines the last rotation date for each active AWS IAM access key using the creation date.

        import json

        # List to store access key rotation dates
        access_key_rotation_dates = []

        # The creation date of the current key serves as its last rotation date
        for key in active_access_keys:
            access_key_rotation_dates.append({
                'UserName': key['UserName'],
                'AccessKeyId': key['AccessKeyId'],
                'LastRotationDate': key['CreateDate']
            })

        # Print the access key rotation dates
        print(json.dumps(access_key_rotation_dates, indent=4, default=str))
      8.1.3 Identify AWS IAM access keys that have not been rotated within the specified maxAccessKeyAge days.

        from datetime import datetime, timedelta
        import json

        # Maximum key age in days (also provided as a workflow parameter)
        maxAccessKeyAge = 90

        # Calculate the threshold date
        threshold_date = datetime.now().astimezone() - timedelta(days=maxAccessKeyAge)

        # List to store non-compliant keys
        non_compliant_keys = []

        # Iterate through the access keys
        for key in access_key_rotation_dates:
            last_rotation_date_str = str(key['LastRotationDate'])  # Ensure it's a string
            try:
                last_rotation_date = datetime.fromisoformat(last_rotation_date_str)
                if last_rotation_date < threshold_date:
                    non_compliant_keys.append({
                        'UserName': key['UserName'],
                        'AccessKeyId': key['AccessKeyId'],
                        'LastRotationDate': key['LastRotationDate'],
                        'Status': 'NON_COMPLIANT'
                    })
            except ValueError:
                print(f"Skipping invalid date format for user {key['UserName']}: {last_rotation_date_str}")

        # Print the non-compliant keys
        print(json.dumps(non_compliant_keys, indent=4, default=str))
      8.1.4 Return NON_COMPLIANT for any access key that exceeds the maxAccessKeyAge threshold.

        Returns and prints NON_COMPLIANT status for access keys exceeding the maxAccessKeyAge threshold.

        import json

        # Print the non-compliant keys
        print(json.dumps(non_compliant_keys, indent=4, default=str))
      8.1.5 Tabulate the results of the compliance check, indicating which keys are compliant and which are non-compliant.

        Tabulates the compliance check results for IAM access keys, indicating non-compliant keys.

        table = context.newtable()
        table.num_rows = len(non_compliant_keys) + 1  # Including header row
        table.num_cols = 4
        table.title = "IAM Access Key Compliance Check"
        table.has_header_row = True

        # Set header row
        headers = ["UserName", "AccessKeyId", "LastRotationDate", "Status"]
        for col_index, header in enumerate(headers):
            table.setval(0, col_index, header)

        # Populate table with non-compliant keys
        for row_index, key in enumerate(non_compliant_keys, start=1):
            table.setval(row_index, 0, key['UserName'])
            table.setval(row_index, 1, key['AccessKeyId'])
            table.setval(row_index, 2, key['LastRotationDate'])
            table.setval(row_index, 3, key['Status'])

        print("Compliance check results have been tabulated successfully.")
  9. Assessment of AWS IAM Users for Directly Attached Policies

    The workflow involves a comprehensive evaluation of all AWS Identity and Access Management (IAM) users. The primary objective is to identify any users who have policies directly attached to them. This process helps in ensuring that access management is streamlined and adheres to best practices by potentially moving towards role-based access control. Identifying directly attached policies is crucial for maintaining security and compliance within the AWS environment. The outcome of this assessment can guide further actions to optimize policy management.

    9.1 Evaluate all AWS IAM users and identify any users with directly attached policies.

      Evaluates IAM users for directly attached policies and tabulates the results.

      import boto3
      import json

      # Initialize boto3 client for IAM
      client = boto3.client('iam',
                            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'))

      # Get all IAM users
      users = client.list_users()['Users']

      # Prepare table for results
      compliance_status = 'COMPLIANT'
      table = context.newtable()
      table.num_rows = len(users) + 1
      table.num_cols = 2  # UserName and AttachedPolicies
      table.title = "IAM Users with Directly Attached Policies"
      table.has_header_row = True
      table.setval(0, 0, "UserName")
      table.setval(0, 1, "AttachedPolicies")

      row = 1
      for user in users:
          user_name = user['UserName']
          # List policies attached directly to the user
          attached_policies = client.list_attached_user_policies(UserName=user_name)['AttachedPolicies']
          if attached_policies:
              compliance_status = 'NON_COMPLIANT'
              policy_names = ', '.join([policy['PolicyName'] for policy in attached_policies])
          else:
              policy_names = 'None'
          table.setval(row, 0, user_name)
          table.setval(row, 1, policy_names)
          row += 1

      print("Compliance Status:", compliance_status)
      print("Table created successfully.")
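      One caveat with the listing above: `list_users` (like most IAM list calls) returns results in pages, so accounts with many users can be silently truncated. The page-flattening logic is sketched below against stubbed pages rather than a live client; with boto3 you would feed it `client.get_paginator('list_users').paginate()`:

      ```python
      def collect_users(pages):
          """Flatten paginated list_users-style responses into one user list."""
          users = []
          for page in pages:
              users.extend(page.get('Users', []))
          return users

      # Stubbed pages standing in for paginator output
      fake_pages = [
          {'Users': [{'UserName': 'alice'}, {'UserName': 'bob'}]},
          {'Users': [{'UserName': 'carol'}]},
      ]
      print([u['UserName'] for u in collect_users(fake_pages)])  # → ['alice', 'bob', 'carol']
      ```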
  10. 10

    IAM Policy Compliance Check for Overly Permissive Statements


    The workflow involves identifying and flagging any customer-managed IAM policy statements that include 'Effect': 'Allow' with 'Action': '*' over 'Resource': '*'. Such statements are considered overly permissive and are marked as NON_COMPLIANT. If the policy statement does not meet these criteria, it is marked as COMPLIANT. This process ensures that IAM policies adhere to security best practices by avoiding unrestricted access permissions.

    1. 10.1

      The script checks IAM policies for non-compliant statements and tabulates the results.
      1. 10.1.1

        List all customer managed IAM policies in the AWS region us-east-2.

        The script lists all customer managed IAM policies in the specified AWS region.
        import boto3
        import json

        # Initialize IAM client with credentials (IAM is a global service;
        # the region only selects the endpoint)
        iam_client = boto3.client(
            'iam',
            region_name='us-east-2',
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        # List all customer managed policies
        response = iam_client.list_policies(Scope='Local')

        # Extract policy names
        policies = [policy['PolicyName'] for policy in response['Policies']]

        # Print the list of policies
        print(json.dumps(policies, indent=4))
      2. 10.1.2

        For each IAM policy, retrieve and analyze the policy statements to identify any statement with 'Effect': 'Allow', 'Action': '*', and 'Resource': '*'.

        The script retrieves and analyzes IAM policy statements to identify non-compliant policies with 'Effect': 'Allow', 'Action': '*', and 'Resource': '*'.
        import boto3
        import json

        # Initialize IAM client with credentials
        iam_client = boto3.client(
            'iam',
            region_name='us-east-2',
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        # List all customer managed policies
        response = iam_client.list_policies(Scope='Local')

        # Initialize compliance results dictionary
        compliance_results = {}

        # Iterate over each policy
        for policy in response['Policies']:
            policy_arn = policy['Arn']
            policy_name = policy['PolicyName']

            # Get the default policy version
            policy_version = iam_client.get_policy(PolicyArn=policy_arn)['Policy']['DefaultVersionId']

            # Get the policy document
            policy_document = iam_client.get_policy_version(
                PolicyArn=policy_arn, VersionId=policy_version
            )['PolicyVersion']['Document']

            # Check each statement; Action and Resource may be a single
            # string or a list in policy JSON, so normalize both shapes
            is_compliant = True
            for statement in policy_document.get('Statement', []):
                actions = statement.get('Action', [])
                actions = [actions] if isinstance(actions, str) else actions
                resources = statement.get('Resource', [])
                resources = [resources] if isinstance(resources, str) else resources
                if statement.get('Effect') == 'Allow' and '*' in actions and '*' in resources:
                    is_compliant = False
                    break

            # Record compliance status
            compliance_results[policy_name] = 'COMPLIANT' if is_compliant else 'NON_COMPLIANT'

        # Print the compliance results
        print(json.dumps(compliance_results, indent=4))
      3. 10.1.3

        Flag policies with such statements as NON_COMPLIANT and others as COMPLIANT.

        The script flags IAM policies with overly permissive statements as NON_COMPLIANT and others as COMPLIANT.
        import boto3
        import json

        # Initialize IAM client with credentials
        iam_client = boto3.client(
            'iam',
            region_name='us-east-2',
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        # List all customer managed policies
        response = iam_client.list_policies(Scope='Local')

        # Initialize compliance results dictionary
        compliance_results = {}

        # Iterate over each policy
        for policy in response['Policies']:
            policy_arn = policy['Arn']
            policy_name = policy['PolicyName']

            # Get the default policy version
            policy_version = iam_client.get_policy(PolicyArn=policy_arn)['Policy']['DefaultVersionId']

            # Get the policy document
            policy_document = iam_client.get_policy_version(
                PolicyArn=policy_arn, VersionId=policy_version
            )['PolicyVersion']['Document']

            # Flag overly permissive statements; Action and Resource may be
            # a single string or a list, so normalize both shapes
            is_compliant = True
            for statement in policy_document.get('Statement', []):
                actions = statement.get('Action', [])
                actions = [actions] if isinstance(actions, str) else actions
                resources = statement.get('Resource', [])
                resources = [resources] if isinstance(resources, str) else resources
                if statement.get('Effect') == 'Allow' and '*' in actions and '*' in resources:
                    is_compliant = False
                    break

            # Record compliance status
            compliance_results[policy_name] = 'COMPLIANT' if is_compliant else 'NON_COMPLIANT'

        # Print the compliance results
        print(json.dumps(compliance_results, indent=4))
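        The wildcard test used in the two scripts above can be isolated into a pure helper — a sketch, not part of the original runbook — which also handles the fact that `Action` and `Resource` may each be a single string or a list in IAM policy JSON:

        ```python
        def is_overly_permissive(statement):
            """True when a statement is Allow with Action '*' over Resource '*'."""
            if statement.get('Effect') != 'Allow':
                return False

            def as_list(value):
                # Policy JSON allows both "Action": "*" and "Action": ["*"]
                return [value] if isinstance(value, str) else list(value or [])

            return ('*' in as_list(statement.get('Action'))
                    and '*' in as_list(statement.get('Resource')))

        print(is_overly_permissive({'Effect': 'Allow', 'Action': ['*'], 'Resource': '*'}))   # True
        print(is_overly_permissive({'Effect': 'Allow', 'Action': 's3:*', 'Resource': '*'}))  # False
        ```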
      4. 10.1.4

        Tabulate the compliance results of the IAM policies.

        The script tabulates the compliance results of IAM policies.
        import json

        # Compliance results from the previous task
        compliance_results = {
            "AmazonEKS_EBS_CSI_Driver_Policy": "COMPLIANT",
            "dev-ecs-execution-policy-4698": "COMPLIANT",
            "AmazonSageMakerExecutionRoleForBedrockMarketplace_A5PKCFPHJ3Spolicy": "COMPLIANT",
            "eks-dev-396-alb-ingress": "COMPLIANT",
            "ci-ecs-execution-policy": "COMPLIANT",
            "eks-policy-prod-1525": "COMPLIANT",
            "akitra-reqd-permissions-part1": "COMPLIANT",
            "AllowAssumeRole-AWSServiceRoleForECS": "COMPLIANT",
            "khai_test_ssm_exec": "COMPLIANT",
            "eks-dev-396-cluster-ClusterEncryption20230825100453184500000014": "COMPLIANT",
            "AWSLambdaBasicExecutionRole-6fb2b237-cebe-4b0c-907a-18689d2a8c21": "COMPLIANT",
            "cluster-autoscaler-irsa-cluster-autoscaler": "COMPLIANT",
            "ecr-full-access": "COMPLIANT",
            "dev-controller-task-policy-4698": "COMPLIANT",
            "AmazonSageMakerExecutionRoleForBedrockMarketplace_5QBGRVH1WPYpolicy": "COMPLIANT",
            "AWSLambdaBasicExecutionRole-897ccca8-f1f7-4d45-bcae-509e5e0df4bf": "COMPLIANT",
            "ES-Policy": "COMPLIANT",
            "EC2StopInstancePolicy": "COMPLIANT",
            "CodeBuildBasePolicy-ci-codebuild-jenkins-codebuild-us-east-2": "COMPLIANT",
            "ecr-readonly": "COMPLIANT",
            "eks-prod-341-alb-ingress": "COMPLIANT",
            "BedrockInvokeModel": "COMPLIANT",
            "eks-dev-396-efs-csi-driver": "COMPLIANT",
            "eks-policy-dev-4698": "COMPLIANT",
            "prod-ecs-execution-policy-1525": "COMPLIANT",
            "TestAWSFullPolicy": "COMPLIANT",
            "aws-dag-sandbox-policy": "COMPLIANT",
            "AWSLambdaBasicExecutionRole-eb8ab677-e621-4773-9897-5bcc7e016166": "COMPLIANT",
            "eks-prod-341-efs-csi-driver": "COMPLIANT",
            "AmazonEKSReadOnlyAccess": "COMPLIANT",
            "eks-prod-341-cluster-ClusterEncryption2023091822371825660000001e": "COMPLIANT",
            "prod-controller-task-policy-1525": "COMPLIANT",
            "ci-controller-task-policy": "COMPLIANT",
            "ci-codebuild-jenkins-codebuild": "COMPLIANT",
            "all_eks_policy": "COMPLIANT",
            "AWSLambdaBasicExecutionRole-386a38d6-24d9-4bb3-9005-c48f010caa8f": "COMPLIANT",
            "InvokeModelPolicy": "COMPLIANT"
        }

        # Create a table to tabulate the compliance results
        table = context.newtable()
        table.num_rows = len(compliance_results) + 1  # +1 for header
        table.num_cols = 2

        # Set table title and header
        table.title = "IAM Policy Compliance Results"
        table.has_header_row = True
        table.setval(0, 0, "Policy Name")
        table.setval(0, 1, "Compliance Status")

        # Populate the table with compliance results
        row = 1
        for policy_name, status in compliance_results.items():
            table.setval(row, 0, policy_name)
            table.setval(row, 1, status)
            row += 1

        print("Compliance results have been tabulated successfully.")
  11. 11

    AWS IAM Password Policy Compliance Evaluation


    The workflow involves evaluating the AWS account password policy for IAM users to ensure it meets specified requirements. If the policy fails to meet all defined criteria, it is marked as NON_COMPLIANT. The results of the evaluation are tabulated for clarity. Additionally, the workflow identifies IAM users who are non-compliant and provides reasons for their non-compliance. This process helps maintain security standards by ensuring all IAM users adhere to the required password policies.

    1. 11.1

      Evaluates AWS IAM password policy against specified criteria and tabulates the results.

      import boto3

      # Create an IAM client
      client = boto3.client(
          'iam',
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
          region_name='us-east-2'
      )

      # Get the account password policy
      password_policy = client.get_account_password_policy()['PasswordPolicy']

      # Define the required criteria (values come from the runbook parameters)
      required_criteria = {
          'MinimumPasswordLength': required_minimum_password_length,
          'RequireSymbols': require_symbols,
          'RequireNumbers': require_numbers,
          'RequireUppercaseCharacters': require_uppercase,
          'RequireLowercaseCharacters': require_lowercase,
          'AllowUsersToChangePassword': allow_users_to_change_password
      }

      # Check compliance; MinimumPasswordLength is a floor, so a stricter
      # (longer) account minimum still counts as compliant
      compliance_status = 'COMPLIANT'
      for key, value in required_criteria.items():
          current = password_policy.get(key)
          if key == 'MinimumPasswordLength':
              if current is None or current < value:
                  compliance_status = 'NON_COMPLIANT'
                  break
          elif current != value:
              compliance_status = 'NON_COMPLIANT'
              break

      # Tabulate the results
      compliance_table = context.newtable()
      compliance_table.num_rows = len(required_criteria) + 1
      compliance_table.num_cols = 3
      compliance_table.title = "AWS IAM Password Policy Compliance"
      compliance_table.has_header_row = True

      # Set header
      compliance_table.setval(0, 0, "Policy Criteria")
      compliance_table.setval(0, 1, "Required")
      compliance_table.setval(0, 2, "Current")

      # Fill table with data
      row = 1
      for key, required_value in required_criteria.items():
          current_value = password_policy.get(key, 'Not Set')
          compliance_table.setval(row, 0, key)
          compliance_table.setval(row, 1, str(required_value))
          compliance_table.setval(row, 2, str(current_value))
          row += 1

      print("Compliance table created successfully.")
      print("Compliance Status:", compliance_status)
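      The criteria check above reduces to a small dictionary diff, sketched below with invented sample values rather than a live account. Treating `MinimumPasswordLength` as a floor rather than an exact match avoids flagging a stricter policy as non-compliant:

      ```python
      def failed_criteria(policy, required):
          """Return the names of required criteria the account policy fails."""
          failures = []
          for key, want in required.items():
              have = policy.get(key)
              if key == 'MinimumPasswordLength':
                  # A longer minimum than required is still compliant
                  if have is None or have < want:
                      failures.append(key)
              elif have != want:
                  failures.append(key)
          return failures

      required = {'MinimumPasswordLength': 8, 'RequireSymbols': True}
      print(failed_criteria({'MinimumPasswordLength': 14, 'RequireSymbols': True}, required))  # []
      print(failed_criteria({'MinimumPasswordLength': 6, 'RequireSymbols': False}, required))
      # ['MinimumPasswordLength', 'RequireSymbols']
      ```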
    2. 11.2

      Identify non-compliant IAM users and reasons for non-compliance


      Identifies IAM users affected by a non-compliant password policy. IAM password policies are account-wide, so every user is flagged when the account policy fails the required criteria.

      import boto3
      import json

      # Create an IAM client
      client = boto3.client(
          'iam',
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
          region_name='us-east-2'
      )

      # Get all IAM users
      users = client.list_users()['Users']

      # Get the account password policy. IAM password policies are
      # account-wide; AWS does not expose a per-user password policy.
      password_policy = client.get_account_password_policy()['PasswordPolicy']

      # Define the required criteria
      required_criteria = {
          'MinimumPasswordLength': 8,
          'RequireSymbols': True,
          'RequireNumbers': True,
          'RequireUppercaseCharacters': True,
          'RequireLowercaseCharacters': True,
          'AllowUsersToChangePassword': True
      }

      # Determine which criteria the account policy fails; every user is
      # then flagged against those account-level gaps
      failed = [key for key, value in required_criteria.items()
                if key in password_policy and password_policy[key] != value]

      non_compliant_users = [user['UserName'] for user in users] if failed else []

      if failed:
          print("Failed criteria:", ', '.join(failed))
      print("Non-compliant Users:", json.dumps(non_compliant_users, indent=4))
  12. 12

    AWS Account Compliance Status Evaluation


    This workflow involves assessing the compliance status of an AWS account by examining the configuration of CloudTrail. It specifically checks for the presence of multi-region CloudTrail and ensures that management events, such as those related to AWS KMS and Amazon RDS Data API, are not excluded. Any accounts that do not meet these criteria are flagged as NON_COMPLIANT. This process helps maintain security and operational standards by ensuring comprehensive logging and monitoring across AWS services.

    1. 12.1

      Checks AWS CloudTrail compliance for multi-region and management events inclusion, flags non-compliance.

      import boto3
      import json

      # Initialize boto3 client for CloudTrail
      client = boto3.client(
          'cloudtrail',
          aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
          aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
          region_name='us-east-2'
      )

      # Fetch all CloudTrails
      trails = client.describe_trails().get('trailList', [])

      compliance_status = {}
      for trail in trails:
          trail_name = trail.get('Name')
          is_multi_region = trail.get('IsMultiRegionTrail', False)

          # describe_trails does not report management-event settings, so
          # fetch the trail's basic event selectors (trails configured only
          # with advanced event selectors return none and are flagged here)
          selectors = client.get_event_selectors(TrailName=trail_name).get('EventSelectors') or []
          management_events = any(sel.get('IncludeManagementEvents', False) for sel in selectors)
          # Excluding sources such as kms.amazonaws.com or
          # rdsdata.amazonaws.com also counts as non-compliant
          excludes_sources = any(sel.get('ExcludeManagementEventSources') for sel in selectors)

          if not is_multi_region or not management_events or excludes_sources:
              compliance_status[trail_name] = 'NON_COMPLIANT'
          else:
              compliance_status[trail_name] = 'COMPLIANT'

      # Print the compliance status
      print(json.dumps(compliance_status, indent=4, default=str))
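      The compliance decision itself can be separated from the API calls. A sketch of the rule, fed with stubbed trail data (in a live run, `is_multi_region` would come from `describe_trails` and `selectors` from `get_event_selectors`):

      ```python
      def trail_status(is_multi_region, selectors):
          """NON_COMPLIANT unless the trail is multi-region and at least one
          basic event selector includes management events."""
          logs_management = any(s.get('IncludeManagementEvents', False)
                                for s in selectors or [])
          return 'COMPLIANT' if is_multi_region and logs_management else 'NON_COMPLIANT'

      print(trail_status(True, [{'IncludeManagementEvents': True}]))   # COMPLIANT
      print(trail_status(True, []))                                    # NON_COMPLIANT
      print(trail_status(False, [{'IncludeManagementEvents': True}]))  # NON_COMPLIANT
      ```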
  13. 13

    AWS CloudTrail Log File Validation Compliance Check


    The workflow involves evaluating all AWS CloudTrail configurations to ensure that log file validation is enabled. Each trail is assessed, and if any trail lacks log file validation, it is marked as NON_COMPLIANT. The results of this compliance check are then tabulated for further analysis and reporting. This process helps maintain the integrity and security of log files by ensuring that any unauthorized changes are detected.

    1. 13.1

      This script evaluates AWS CloudTrail configurations to verify log file validation and tabulates the compliance results.

      1. 13.1.1

        This script evaluates AWS CloudTrail configurations to verify log file validation and prints the compliance results.

        import boto3
        import json

        # Initialize AWS CloudTrail client
        client = boto3.client('cloudtrail',
                              aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                              aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'),
                              region_name='us-east-2')

        # Fetch all trails
        trails = client.describe_trails().get('trailList', [])

        # Check each trail for log file validation
        compliance_results = []
        for trail in trails:
            trail_name = trail.get('Name')
            log_file_validation_enabled = trail.get('LogFileValidationEnabled', False)
            compliance_status = 'COMPLIANT' if log_file_validation_enabled else 'NON_COMPLIANT'
            compliance_results.append((trail_name, compliance_status))

        # Print compliance results
        print(json.dumps(compliance_results, indent=4))
      2. 13.1.2

        Tabulate the results of the AWS CloudTrail log file validation evaluation.


        This script tabulates the results of AWS CloudTrail log file validation compliance evaluation.

        # Tabulate the compliance_results produced by the previous step
        table = context.newtable()
        table.num_rows = len(compliance_results) + 1  # +1 for the header row
        table.num_cols = 2
        table.title = "AWS CloudTrail Log File Validation Compliance"
        table.has_header_row = True
        table.setval(0, 0, "Trail Name")
        table.setval(0, 1, "Compliance Status")

        for i, result in enumerate(compliance_results, start=1):
            table.setval(i, 0, result[0])
            table.setval(i, 1, result[1])

        print("Tabulation of AWS CloudTrail log file validation compliance results completed successfully.")
  14. 14

    AWS CloudTrail Configuration and Encryption Verification


    The workflow involves evaluating all AWS CloudTrail configurations to ensure they are set up correctly. A key focus is on verifying that server-side encryption with AWS Key Management Service (SSE-KMS) is enabled. This ensures that all logs are securely encrypted, enhancing the security and compliance of the AWS environment. The process helps in maintaining the integrity and confidentiality of the log data. By confirming these settings, the workflow supports robust security practices within the AWS infrastructure.

    1. 14.1

      Evaluate all AWS CloudTrail configurations and verify SSE-KMS encryption


      This script evaluates AWS CloudTrail configurations to verify if SSE-KMS encryption is enabled and tabulates the compliance results.

      import boto3

      # Initialize AWS CloudTrail client
      aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
      aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')
      client = boto3.client('cloudtrail',
                            region_name=region_name,
                            aws_access_key_id=aws_access_key_id,
                            aws_secret_access_key=aws_secret_access_key)

      # Fetch all trails
      trails = client.describe_trails().get('trailList', [])

      # Evaluate each trail for SSE-KMS encryption: a trail is compliant
      # only when a KMS key is configured for its log files
      compliance_results = []
      for trail in trails:
          trail_name = trail.get('Name', 'Unknown')
          kms_key_id = trail.get('KmsKeyId')
          status = 'COMPLIANT' if kms_key_id else 'NON_COMPLIANT'
          compliance_results.append({'TrailName': trail_name, 'Compliance': status})

      # Tabulate the results
      table = context.newtable()
      table.num_rows = len(compliance_results) + 1  # +1 for header
      table.num_cols = 2
      table.title = "CloudTrail SSE-KMS Compliance"
      table.has_header_row = True
      table.setval(0, 0, "Trail Name")
      table.setval(0, 1, "Compliance")

      for i, result in enumerate(compliance_results, start=1):
          table.setval(i, 0, result['TrailName'])
          table.setval(i, 1, result['Compliance'])

      print("Compliance results tabulated successfully.")
  15. 15

    Compliance Check for VPC Flow Logs in AWS Region


    The workflow involves evaluating all Amazon VPCs within the AWS region us-east-2 to ensure that VPC Flow Logs are enabled. Each VPC is checked for compliance, and if any VPC lacks Flow Logs, it is marked as NON_COMPLIANT. The results of this compliance check are then tabulated for further analysis. This process helps in maintaining security and monitoring standards across the network infrastructure.

    1. 15.1

      The script evaluates all VPCs in the us-east-2 region to check if VPC Flow Logs are enabled and tabulates the compliance status.

      1. 15.1.1

        List all Amazon VPCs.


        The script lists all VPCs in the us-east-2 region using boto3 with credentials.

        import boto3
        import json

        # Initialize boto3 client for EC2 in the us-east-2 region
        client = boto3.client(
            'ec2',
            region_name='us-east-2',
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        # Retrieve all VPCs
        vpcs = client.describe_vpcs()
        vpc_list = [vpc['VpcId'] for vpc in vpcs.get('Vpcs', [])]

        # Print the list of VPCs
        print(json.dumps(vpc_list, indent=4))
      2. 15.1.2

        Check each VPC in the list to verify if VPC Flow Logs are enabled.


        The script checks each VPC in the list to verify if VPC Flow Logs are enabled and returns their compliance status.

        import boto3
        import json

        # Initialize boto3 client for EC2 in the us-east-2 region
        client = boto3.client(
            'ec2',
            region_name='us-east-2',
            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY')
        )

        vpc_flow_log_status = {}

        # Check each VPC (vpc_list comes from the previous step) for Flow Logs
        for vpc_id in vpc_list:
            flow_logs = client.describe_flow_logs(
                Filters=[{'Name': 'resource-id', 'Values': [vpc_id]}]
            )
            # A VPC is compliant when at least one Flow Log targets it
            if flow_logs.get('FlowLogs'):
                vpc_flow_log_status[vpc_id] = 'COMPLIANT'
            else:
                vpc_flow_log_status[vpc_id] = 'NON_COMPLIANT'

        # Print the compliance status of each VPC
        print(json.dumps(vpc_flow_log_status, indent=4))
      3. 15.1.3

        Determine compliance status for each VPC based on whether Flow Logs are enabled. Mark as NON_COMPLIANT if Flow Logs are not enabled.


        The script determines the overall compliance status for each VPC based on whether Flow Logs are enabled and marks as NON_COMPLIANT if any VPC does not have Flow Logs enabled.

        import json

        # Determine overall compliance status from the per-VPC results
        non_compliant_vpcs = [vpc_id for vpc_id, status in vpc_flow_log_status.items()
                              if status == 'NON_COMPLIANT']
        compliance_summary = 'NON_COMPLIANT' if non_compliant_vpcs else 'COMPLIANT'

        # Print the compliance summary
        print(compliance_summary)

        # Print detailed compliance status for each VPC
        print(json.dumps(vpc_flow_log_status, indent=4))
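        The roll-up rule in the step above is small enough to test in isolation (a sketch with made-up VPC IDs):

        ```python
        def overall_status(per_vpc):
            """One NON_COMPLIANT VPC makes the whole check NON_COMPLIANT."""
            return ('NON_COMPLIANT' if 'NON_COMPLIANT' in per_vpc.values()
                    else 'COMPLIANT')

        print(overall_status({'vpc-aaa': 'COMPLIANT', 'vpc-bbb': 'NON_COMPLIANT'}))  # NON_COMPLIANT
        print(overall_status({'vpc-aaa': 'COMPLIANT'}))                              # COMPLIANT
        ```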
      4. 15.1.4

        Tabulate the compliance results for all VPCs.


        The script tabulates the compliance results for all VPCs based on their Flow Logs status.

        # Tabulate the per-VPC Flow Logs compliance results
        table = context.newtable()
        table.num_rows = len(vpc_flow_log_status) + 1  # +1 for header
        table.num_cols = 2
        table.title = "VPC Flow Logs Compliance Status"
        table.has_header_row = True
        table.setval(0, 0, "VPC ID")
        table.setval(0, 1, "Compliance Status")

        row = 1
        for vpc_id, status in vpc_flow_log_status.items():
            table.setval(row, 0, vpc_id)
            table.setval(row, 1, status)
            row += 1

        print("Compliance results have been tabulated successfully.")
  16. 16

    Security Compliance Evaluation of Amazon VPC Default Security Groups


    The workflow involves assessing all default security groups within each Amazon VPC to ensure they do not permit any inbound or outbound traffic. If any default security group is found to have one or more inbound or outbound rules, it is marked as NON_COMPLIANT. The results of this evaluation are then organized into a tabulated format for easy review and analysis. This process helps maintain the security integrity of the network by ensuring that default security groups adhere to strict traffic control policies.

    1. 16.1

      Evaluates default security groups in all VPCs across all regions for compliance and tabulates the results.

      1. 16.1.1

        List all VPCs in the AWS account.


        Lists all VPCs in the AWS account across all regions.

        import boto3
        import json

        # Retrieve AWS credentials from environment variables
        aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
        aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')

        # Initialize a session using Amazon EC2
        session = boto3.Session(
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key,
            region_name='us-east-2'
        )
        ec2_client = session.client('ec2')

        # Retrieve all regions
        regions = [region['RegionName'] for region in ec2_client.describe_regions()['Regions']]

        # List to store all VPCs
        vpcs = []

        # Iterate over each region and collect its VPCs
        for region in regions:
            ec2_client = session.client('ec2', region_name=region)
            vpcs.extend(ec2_client.describe_vpcs()['Vpcs'])

        # Print all VPCs
        print(json.dumps(vpcs, indent=4, default=str))
      2. 16.1.2

        For each VPC, list all default security groups.


        Lists all default security groups for each VPC across all regions.

        import boto3
        import json

        # Retrieve AWS credentials from environment variables
        aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
        aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')

        # Initialize a session using Amazon EC2
        session = boto3.Session(
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key,
            region_name='us-east-2'
        )
        ec2_client = session.client('ec2')

        # Retrieve all regions
        regions = [region['RegionName'] for region in ec2_client.describe_regions()['Regions']]

        # List to store all default security groups
        default_security_groups = []

        # Iterate over each region
        for region in regions:
            ec2_client = session.client('ec2', region_name=region)
            # Describe all VPCs in the region
            vpcs = ec2_client.describe_vpcs()['Vpcs']
            for vpc in vpcs:
                # Describe security groups for the VPC
                security_groups = ec2_client.describe_security_groups(
                    Filters=[{'Name': 'vpc-id', 'Values': [vpc['VpcId']]}]
                )['SecurityGroups']
                # Keep only the default security groups
                for sg in security_groups:
                    if sg['GroupName'] == 'default':
                        default_security_groups.append(sg)

        # Print all default security groups
        print(json.dumps(default_security_groups, indent=4, default=str))
      3. 16.1.3

        Evaluate each default security group to verify that they do not allow any inbound or outbound traffic.


        Evaluates each default security group to verify that they do not allow any inbound or outbound traffic and tabulates the compliance results.

        import boto3
        import json

        # Retrieve AWS credentials from environment variables
        aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
        aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')

        # Initialize a session using Amazon EC2
        session = boto3.Session(
            aws_access_key_id=aws_access_key_id,
            aws_secret_access_key=aws_secret_access_key,
            region_name='us-east-2'
        )
        ec2_client = session.client('ec2')

        # Retrieve all regions
        regions = [region['RegionName'] for region in ec2_client.describe_regions()['Regions']]

        # List to store compliance results
        compliance_results = []

        # Iterate over each region
        for region in regions:
            ec2_client = session.client('ec2', region_name=region)
            vpcs = ec2_client.describe_vpcs()['Vpcs']
            for vpc in vpcs:
                security_groups = ec2_client.describe_security_groups(
                    Filters=[{'Name': 'vpc-id', 'Values': [vpc['VpcId']]}]
                )['SecurityGroups']
                for sg in security_groups:
                    if sg['GroupName'] != 'default':
                        continue
                    # A default group is compliant only when it has no
                    # inbound and no outbound rules at all
                    has_rules = bool(sg['IpPermissions'] or sg['IpPermissionsEgress'])
                    compliance_results.append({
                        'VpcId': vpc['VpcId'],
                        'SecurityGroupId': sg['GroupId'],
                        'Compliance': 'NON_COMPLIANT' if has_rules else 'COMPLIANT'
                    })

        # Print compliance results
        print(json.dumps(compliance_results, indent=4, default=str))
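        The per-group rule above is simply "no rules at all"; it can be sketched as a pure function over the security-group dict shape returned by `describe_security_groups` (the sample data is invented):

        ```python
        def default_sg_status(sg):
            """A default security group is compliant only when it has neither
            inbound nor outbound rules."""
            has_rules = bool(sg.get('IpPermissions') or sg.get('IpPermissionsEgress'))
            return 'NON_COMPLIANT' if has_rules else 'COMPLIANT'

        print(default_sg_status({'IpPermissions': [], 'IpPermissionsEgress': []}))  # COMPLIANT
        print(default_sg_status({'IpPermissions': [{'IpProtocol': '-1'}],
                                 'IpPermissionsEgress': []}))                       # NON_COMPLIANT
        ```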
  17. 17

    Audit and Compliance Check for AWS Security Groups with Open SSH Ports


    The workflow involves identifying AWS security groups that have incoming SSH traffic (port 22) open to the public, specifically to IP addresses 0.0.0.0/0 or ::/0. These security groups are flagged as NON_COMPLIANT due to the potential security risk of unrestricted access. Conversely, security groups that do not have such open access are marked as COMPLIANT. This process ensures that security groups adhere to best practices for network security by restricting unnecessary public access. The outcome is a clear distinction between compliant and non-compliant security configurations, aiding in maintaining a secure AWS environment.

    1. 17.1

      The script lists AWS security groups and checks if SSH access is open to the world, marking them as NON_COMPLIANT or COMPLIANT.

      import boto3

      # Initialize boto3 client for EC2
      client = boto3.client('ec2', region_name='us-east-2',
                            aws_access_key_id=getEnvVar('AWS_ACCESS_KEY_ID'),
                            aws_secret_access_key=getEnvVar('AWS_SECRET_ACCESS_KEY'))

      # Retrieve all security groups
      security_groups = client.describe_security_groups().get('SecurityGroups', [])

      # Prepare table
      compliance_table = context.newtable()
      compliance_table.num_rows = len(security_groups) + 1
      compliance_table.num_cols = 3
      compliance_table.title = "Security Group Compliance"
      compliance_table.has_header_row = True

      # Set header row
      compliance_table.setval(0, 0, "Security Group ID")
      compliance_table.setval(0, 1, "Security Group Name")
      compliance_table.setval(0, 2, "Compliance Status")

      # Check each security group for SSH access open to the world
      for idx, sg in enumerate(security_groups, start=1):
          sg_id = sg.get('GroupId', 'Unknown')
          sg_name = sg.get('GroupName', 'Unknown')
          compliance_status = "COMPLIANT"
          for permission in sg.get('IpPermissions', []):
              # A rule covers SSH when its port range includes 22, or when
              # it allows all traffic (IpProtocol '-1' carries no port range)
              from_port = permission.get('FromPort')
              to_port = permission.get('ToPort')
              covers_ssh = (permission.get('IpProtocol') == '-1'
                            or (from_port is not None and to_port is not None
                                and from_port <= 22 <= to_port))
              if not covers_ssh:
                  continue
              for ip_range in permission.get('IpRanges', []):
                  if ip_range.get('CidrIp') == '0.0.0.0/0':
                      compliance_status = "NON_COMPLIANT"
              for ipv6_range in permission.get('Ipv6Ranges', []):
                  if ipv6_range.get('CidrIpv6') == '::/0':
                      compliance_status = "NON_COMPLIANT"
          # Set values in the table
          compliance_table.setval(idx, 0, sg_id)
          compliance_table.setval(idx, 1, sg_name)
          compliance_table.setval(idx, 2, compliance_status)

      print("Security Group Compliance Table Created Successfully")
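      Two details are easy to get wrong in an SSH check: a rule whose port range merely spans 22 (say 20-30), and an all-traffic rule (`IpProtocol` of `'-1'`) both expose SSH without `FromPort` equalling 22. A sketch of the coverage test over the `IpPermissions` entry shape (the example rules are invented):

      ```python
      def rule_exposes_ssh(permission):
          """True when the rule covers port 22 from 0.0.0.0/0 or ::/0."""
          # '-1' means all protocols and all ports; otherwise check the range
          if permission.get('IpProtocol') != '-1':
              from_port = permission.get('FromPort')
              to_port = permission.get('ToPort')
              if from_port is None or to_port is None or not (from_port <= 22 <= to_port):
                  return False
          open_v4 = any(r.get('CidrIp') == '0.0.0.0/0'
                        for r in permission.get('IpRanges', []))
          open_v6 = any(r.get('CidrIpv6') == '::/0'
                        for r in permission.get('Ipv6Ranges', []))
          return open_v4 or open_v6

      print(rule_exposes_ssh({'IpProtocol': 'tcp', 'FromPort': 20, 'ToPort': 30,
                              'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}))  # True
      print(rule_exposes_ssh({'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
                              'IpRanges': [{'CidrIp': '10.0.0.0/8'}]}))  # False
      ```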
  18. 18

    Audit of AWS Security Groups for Compliance with Inbound Traffic Restrictions


    The workflow involves analyzing AWS security groups across different regions to identify those that are non-compliant with security policies. Specifically, it focuses on security groups that allow inbound TCP traffic from unrestricted sources, such as 0.0.0.0/0 or ::/0. The process includes listing these non-compliant security groups along with the open ports and CIDR ranges that pose a security risk. The final step is to organize the non-compliant security groups into a table, categorizing them by region and compliance status. This helps in visualizing the distribution of security risks across the AWS infrastructure.

    1. 18.1

      This script lists the number of AWS security groups by region and identifies non-compliant groups allowing unrestricted inbound TCP traffic.

      import boto3
      import json

      def get_security_groups_by_region(regions):
          aws_access_key_id = getEnvVar('AWS_ACCESS_KEY_ID')
          aws_secret_access_key = getEnvVar('AWS_SECRET_ACCESS_KEY')
          security_group_summary = {}

          for region in regions:
              ec2_client = boto3.client('ec2', region_name=region,
                                        aws_access_key_id=aws_access_key_id,
                                        aws_secret_access_key=aws_secret_access_key)
              security_groups = ec2_client.describe_security_groups().get('SecurityGroups', [])
              total_groups = len(security_groups)

              # Collect rules that allow inbound TCP from anywhere
              non_compliant_groups = []
              for sg in security_groups:
                  group_id = sg.get('GroupId')
                  group_name = sg.get('GroupName')
                  for permission in sg.get('IpPermissions', []):
                      if permission.get('IpProtocol') != 'tcp':
                          continue
                      for ip_range in permission.get('IpRanges', []):
                          if ip_range.get('CidrIp') == '0.0.0.0/0':
                              non_compliant_groups.append({
                                  'GroupId': group_id,
                                  'GroupName': group_name,
                                  'Port': permission.get('FromPort'),
                                  'CidrIp': '0.0.0.0/0'
                              })
                      for ipv6_range in permission.get('Ipv6Ranges', []):
                          if ipv6_range.get('CidrIpv6') == '::/0':
                              non_compliant_groups.append({
                                  'GroupId': group_id,
                                  'GroupName': group_name,
                                  'Port': permission.get('FromPort'),
                                  'CidrIpv6': '::/0'
                              })

              security_group_summary[region] = {
                  'TotalSecurityGroups': total_groups,
                  'NonCompliantGroups': non_compliant_groups
              }

          return security_group_summary

      security_group_summary = get_security_groups_by_region(regions)
      print(json.dumps(security_group_summary, indent=4, default=str))
    2. 18.2

      Tabulate non-compliant security groups based on their regions and compliance


      This script tabulates non-compliant security groups by region, listing their details.

      # Tabulate the non-compliant groups gathered in the previous step
      table = context.newtable()

      # Count the non-compliant groups across all regions
      num_non_compliant = sum(len(region_data['NonCompliantGroups'])
                              for region_data in security_group_summary.values())

      if num_non_compliant > 0:
          table.num_rows = num_non_compliant + 1  # +1 for header
          table.num_cols = 5
          table.title = "Non-Compliant Security Groups by Region"
          table.has_header_row = True

          # Set header
          table.setval(0, 0, "Region")
          table.setval(0, 1, "GroupId")
          table.setval(0, 2, "GroupName")
          table.setval(0, 3, "Port")
          table.setval(0, 4, "CIDR")

          # Fill table with non-compliant security groups
          row = 1
          for region, region_data in security_group_summary.items():
              for group in region_data['NonCompliantGroups']:
                  table.setval(row, 0, region)
                  table.setval(row, 1, group['GroupId'])
                  table.setval(row, 2, group['GroupName'])
                  table.setval(row, 3, str(group['Port']))
                  table.setval(row, 4, group.get('CidrIp', group.get('CidrIpv6', '')))
                  row += 1
          print("Non-compliant security groups table created successfully.")
      else:
          print("No non-compliant security groups found.")