Various command lines that have helped me recently.
IAM
List a role's attached managed policies and its inline policies with:
aws iam list-attached-role-policies --role-name $ROLE_NAME
aws iam list-role-policies --role-name $ROLE_NAME
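Attached policies are standalone managed objects; inline policies live inside the role itself. To dump one of the inline documents (the policy name comes from the previous command; $INLINE_POLICY_NAME is a placeholder):
aws iam get-role-policy --role-name $ROLE_NAME --policy-name $INLINE_POLICY_NAME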
Whoami with:
aws sts get-caller-identity
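Handy in scripts too, e.g. to grab just the account ID:
aws sts get-caller-identity --query Account --output text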
Policies are collections of allowed (or denied) actions on services that can be attached to identities. List all homemade (customer-managed) policies with:
aws iam list-policies --scope Local --query 'Policies[].Arn' --output table
Similarly, list all roles with:
aws iam list-roles --query 'Roles[].RoleName' --output table
List all the Actions for a policy with:
aws iam get-policy-version --policy-arn $POLICY_ARN --version-id $(aws iam get-policy --policy-arn $POLICY_ARN --query 'Policy.DefaultVersionId' --output text) --query 'PolicyVersion.Document.Statement[].Action' --output json | jq -r '.[]' | sort -u
Show the trust policy for a given role, i.e. who is allowed to assume it:
aws iam get-role --role-name $ROLE_NAME --query 'Role.AssumeRolePolicyDocument' --output json
Note that assuming a role grants temporary credentials carrying that role's privileges, whereas attaching policies to a role defines what the role is actually allowed to do.
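For the temporary-elevation side, a minimal sketch of assuming a role from the CLI ($ROLE_ARN is yours; the session name is arbitrary):
# Grab the temporary credentials as tab-separated text
CREDS=$(aws sts assume-role --role-arn $ROLE_ARN --role-session-name debug --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
export AWS_ACCESS_KEY_ID=$(echo $CREDS | awk '{print $1}')
export AWS_SECRET_ACCESS_KEY=$(echo $CREDS | awk '{print $2}')
export AWS_SESSION_TOKEN=$(echo $CREDS | awk '{print $3}')
aws sts get-caller-identity   # should now report the assumed role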
List everything attached to a policy:
aws iam list-entities-for-policy --policy-arn $POLICY_ARN
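Useful before deleting a policy, since it must be detached from everything first. A sketch of the cleanup, assuming the only attachment is $ROLE_NAME (delete-policy also insists you remove any non-default policy versions):
aws iam detach-role-policy --role-name $ROLE_NAME --policy-arn $POLICY_ARN
aws iam delete-policy --policy-arn $POLICY_ARN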
Instance profiles each contain a single role. They act as a bridge to securely pass an IAM role to an EC2 instance, letting the instance access other AWS services without storing long-term, hard-coded credentials like access keys. See which instance profiles a role belongs to with:
aws iam list-instance-profiles-for-role --role-name $ROLE_NAME --query 'InstanceProfiles[].InstanceProfileName' --output text
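Creating one and putting a role in it is a two-step dance ($PROFILE_NAME is whatever you like):
aws iam create-instance-profile --instance-profile-name $PROFILE_NAME
aws iam add-role-to-instance-profile --instance-profile-name $PROFILE_NAME --role-name $ROLE_NAME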
Secrets
See access to K8s secrets (via the Secrets Store CSI driver's AWS provider) with:
kubectl logs -n kube-system -l app=csi-secrets-store-provider-aws
See an AWS secret with:
aws secretsmanager get-secret-value --secret-id $SECRET_ARN --region $REGION
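If the secret's value is JSON, pull out just the payload with something like:
aws secretsmanager get-secret-value --secret-id $SECRET_ARN --region $REGION --query SecretString --output text | jq .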
Deleting them is interesting: a deleted secret lingers for a recovery window (7 to 30 days) unless you force immediate deletion:
aws --region $REGION secretsmanager delete-secret --secret-id $SECRET_NAME --force-delete-without-recovery
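If you skipped the force flag, a secret scheduled for deletion can still be rescued during its recovery window:
aws secretsmanager restore-secret --secret-id $SECRET_NAME --region $REGION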
Infra
To see why your EKS deployments aren't working:
kubectl get events --sort-by=.metadata.creationTimestamp | tail -20
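Or narrow it to a single object, say a struggling pod ($POD_NAME is a placeholder):
kubectl get events --field-selector involvedObject.name=$POD_NAME --sort-by=.metadata.creationTimestamp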
Terraform seems to have a problem deleting load balancers in AWS. You can see the v2 (application/network) load balancers with:
aws elbv2 describe-load-balancers
List the classic load balancers with:
aws elb describe-load-balancers --region $REGION
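A dangling v2 load balancer can then be removed by hand, using the ARN from the elbv2 describe call above:
aws elbv2 delete-load-balancer --load-balancer-arn $LB_ARN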
List the VPCs:
aws ec2 describe-vpcs --region $REGION
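If the list is long, filter to a specific one by its Name tag ($VPC_NAME is a placeholder):
aws ec2 describe-vpcs --region $REGION --filters Name=tag:Name,Values=$VPC_NAME --query 'Vpcs[].VpcId' --output text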
Glue
Create a database with:
aws glue create-database --database-input '{"Name": "YOUR_DB_NAME"}' --region $REGION
Create an Iceberg table with:
aws glue create-table \
  --database-name YOUR_DB_NAME \
  --table-input '{
    "Name": "TABLE_NAME",
    "TableType": "EXTERNAL_TABLE",
    "StorageDescriptor": {
      "Location": "s3://ROOT_DIRECTORY_OF_TABLE/",
      "Columns": [
        { "Name": "id", "Type": "int" },
        ...
        { "Name": "randomInt", "Type": "int" }
      ],
      "SerdeInfo": {
        "SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"
      }
    },
    "Parameters": {
      "iceberg.table.default.namespace": "YOUR_DB_NAME"
    }
  }' \
  --open-table-format-input '{
    "IcebergInput": {
      "MetadataOperation": "CREATE",
      "Version": "2"
    }
  }' \
  --region $REGION
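Check the table landed where you expect with something like:
aws glue get-table --database-name YOUR_DB_NAME --name TABLE_NAME --query 'Table.StorageDescriptor.Location' --output text --region $REGION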
Get all the databases with:
aws glue get-databases --query 'DatabaseList[*].Name' --output table
Get tables with:
aws glue get-tables --database-name YOUR_DB_NAME
Drop a table with:
aws glue delete-table --name TABLE_NAME --database-name YOUR_DB_NAME
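Once its tables are gone, drop the database too. Note this only removes catalog metadata; the files in S3 survive:
aws glue delete-database --name YOUR_DB_NAME --region $REGION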