Thursday, September 11, 2025

Configuring Polaris for GCP

After successfully configuring Polaris to vend AWS credentials, I've moved on to the Google Cloud Platform.

As far as the Spark client and Polaris are concerned, there's not much difference from AWS. The only major change is that when you create the Catalog, the JSON you send must include a gcsServiceAccount. This refers to a service account you need to create.
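To make that concrete, here's a rough sketch in Scala (using the JDK's built-in HTTP client) of what the catalog-creation call might look like. The endpoint path, token handling, bucket, catalog name and exact payload shape are my assumptions from reading the Polaris management API and may differ in your version; the point is simply where gcsServiceAccount goes:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object CreateGcsCatalogSketch {
  def main(args: Array[String]): Unit = {
    // A bearer token for the Polaris management API (assumed to be in the environment)
    val polarisToken = sys.env("POLARIS_TOKEN")
    // Illustrative payload: the important bit is storageConfigInfo.gcsServiceAccount
    val catalogJson =
      """{
        |  "catalog": {
        |    "name": "my_gcs_catalog",
        |    "type": "INTERNAL",
        |    "properties": { "default-base-location": "gs://my_bucket/my_prefix" },
        |    "storageConfigInfo": {
        |      "storageType": "GCS",
        |      "gcsServiceAccount": "my-vendable-sa@philltest.iam.gserviceaccount.com",
        |      "allowedLocations": [ "gs://my_bucket/my_prefix" ]
        |    }
        |  }
        |}""".stripMargin

    val request = HttpRequest.newBuilder(URI.create("http://localhost:8181/api/management/v1/catalogs"))
      .header("Authorization", s"Bearer $polarisToken")
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(catalogJson))
      .build()

    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println(s"${response.statusCode()}: ${response.body()}")  // expect a 2xx if the catalog was created
  }
}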

You create the service account with something like:

$ gcloud iam service-accounts create my-vendable-sa --display-name "Vendable Service Account"

and then bind to it the user on whose behalf it will act:

$ gcloud iam service-accounts add-iam-policy-binding my-vendable-sa@philltest.iam.gserviceaccount.com   --member="user:phillhenry@gmail.com"   --role="roles/iam.serviceAccountTokenCreator"

where philltest is the project and phillhenry is the account on whose behalf the tokens will be vended.

Next, generate application-default credentials that impersonate the service account:

$ gcloud auth application-default login --impersonate-service-account=my-vendable-sa@philltest.iam.gserviceaccount.com

Note that it will warn you to run something like gcloud auth application-default set-quota-project philltest (substituting whatever your project is called).

This will open your browser. After you sign in, it will write the credential to a JSON file (it will tell you where). You must point the GOOGLE_APPLICATION_CREDENTIALS environment variable at this file before you run Polaris.

Note that it's the application-default subcommand that writes the credential as JSON so it can be used by your applications.

Not so fast...

My Spark code just barfed when it tried to write to GCS with:

my-vendable-sa@philltest.iam.gserviceaccount.com does not have serviceusage.services.use access

Note that if the error message mentions your user (in my case PhillHenry@gmail.com) rather than the service account, you've pointed GOOGLE_APPLICATION_CREDENTIALS at the wrong application_default_credentials.json file.
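A quick way to check this from inside the JVM is to load the Application Default Credentials and see what they resolve to. This is just a sketch, assuming the google-auth-library that the GCS connector pulls in is on your classpath: a personal login shows up as UserCredentials, whereas the impersonation file shows up as ImpersonatedCredentials.

import com.google.auth.oauth2.GoogleCredentials

object WhichGcpIdentity {
  def main(args: Array[String]): Unit = {
    // Resolves whatever GOOGLE_APPLICATION_CREDENTIALS (or gcloud's default) points at
    val adc = GoogleCredentials.getApplicationDefault()
      .createScoped(java.util.List.of("https://www.googleapis.com/auth/cloud-platform"))
    adc.refreshIfExpired()
    // UserCredentials         => you're on your personal login (the wrong file)
    // ImpersonatedCredentials => you're on the service account file you wanted
    println(adc.getClass.getSimpleName)
    println(s"token starts: ${adc.getAccessToken.getTokenValue.take(12)}...")
  }
}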

Basically, I could write to the bucket using the command line but not using the token. I captured the token by putting a breakpoint here in Iceberg's parsing of the REST response from Polaris. Using the token (BAD_ACCESS_TOKEN), I ran:

$ curl -H "Authorization: Bearer ${BAD_ACCESS_TOKEN}" https://storage.googleapis.com/storage/v1/b/odinconsultants_phtest
{
  "error": {
    "code": 401,
    "message": "Invalid Credentials",
    "errors": [
      {
        "message": "Invalid Credentials",
        "domain": "global",
        "reason": "authError",
        "locationType": "header",
        "location": "Authorization"
      }
    ]
  }
}

The mistake I was making was that the command line was associated with me (PhillHenry), not the service account - that's why I could upload via the CLI. Check your CLOUDSDK_CONFIG environment variable to see which credential files you're using.

It seems that the service account must have roles granting it access to the project:

$ gcloud projects get-iam-policy philltest --flatten="bindings[].members"  --format="table(bindings.role)"  --filter="my-vendable-sa@philltest.iam.gserviceaccount.com"
ROLE
roles/serviceusage.serviceUsageAdmin
roles/storage.admin
roles/storage.objectAdmin
roles/storage.objectCreator

This appears to fix it because the service account must be able to see the project before it can see the project's buckets [SO].

Now, Spark is happily writing to GCS.

Configuring Polaris for AWS

Polaris can vend credentials from the top three cloud providers. That is, your Spark instance does not need to be granted access to the cloud provider as long as it can connect to the relevant Polaris catalog.

There are 3 steps in configuring vended credentials for Spark:
  1. Configure your cloud account such that it's happy handing out access tokens
  2. Configure Polaris, both the credentials to access Polaris and the Catalog that is essentially a proxy to the cloud provider
  3. Configure Spark's SparkConf.
Here's what you must do for AWS:

Permissioning Polaris to use AWS tokens

The key Action you need to vend tokens is sts:AssumeRole. If you were doing this in the AWS GUI, you'd go to IAM/Roles, select the Role that can read your S3 bucket, click on Trust relationships and grant the user (or better, a group) the right to access it with some JSON like:

        {
            "Sid": "Statement2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::985050164616:user/henryp"
            },
            "Action": "sts:AssumeRole"
        }

This is only half the job. You then need a reciprocal relationship for, in my case, user/henryp. Here, I go to IAM/Users, find my user and create an inline entry under Permissions policies. This just needs the Statement:

{
    "Sid": "VisualEditor0",
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::985050164616:role/myrole"
}

where myrole is the role that can read the bucket.
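It's worth sanity-checking the trust relationship in isolation before involving Polaris. Here's a minimal sketch in Scala against AWS SDK v2, run with the same user/henryp credentials Polaris will use; the session name is arbitrary and the role ARN is the one above. If both policies are right, AssumeRole hands back temporary credentials:

import software.amazon.awssdk.services.sts.StsClient
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest

object AssumeRoleSmokeTest {
  def main(args: Array[String]): Unit = {
    // Uses the default credential chain, ie whatever user/henryp's keys are configured as
    val sts = StsClient.create()
    try {
      val request = AssumeRoleRequest.builder()
        .roleArn("arn:aws:iam::985050164616:role/myrole") // the role that can read the bucket
        .roleSessionName("polaris-vending-check")         // arbitrary
        .build()
      val creds = sts.assumeRole(request).credentials()
      // If the trust policy and the user's inline policy agree, we get short-lived keys back
      println(s"access key: ${creds.accessKeyId()}, expires: ${creds.expiration()}")
    } finally sts.close()
  }
}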

Ideally, this would all be automated in scripts rather than point-and-click but I'm at the exploratory stage at the moment.

What's going on in Polaris

Calling:

spark.sql(s"CREATE NAMESPACE IF NOT EXISTS $catalog.$namespace")

triggers these steps:

Spark's CatalogManager.load initializes an Iceberg SparkCatalog, which fetches a token with OAuth2Util.fetchToken via its HTTPClient.

Polaris's TokenRequestValidator will validateForClientCredentialsFlow and insist that the clientId and clientSecret are neither null nor empty.

These values come from SparkConf's spark.sql.catalog.$catalog.credential setting, which OAuth2Util.parseCredential in Iceberg splits into ID and secret on the colon. The validator also ensures the scope and grantType are something Polaris recognises.

The upshot is that, despite what some guides say, you don't want the credential setting to be BEARER.

Configuring Spark

Calling:

spark.createDataFrame(data).writeTo(tableName).create()

does trigger credential vending, as you can see by putting a breakpoint in AwsCredentialStorageIntegration.getSubscopedCreds. This invokes AWS's StsAuthSchemeInterceptor.trySelectAuthScheme and, if you have configured AWS correctly, you'll be able to get some credentials from a cloud call.

This all comes from a call to IcebergRestCatalogApi.createTable.

Note that it's DefaultAwsClientFactory's s3FileIOProperties where the vended credentials live, after being populated in the call to s3() in the Iceberg codebase.

After a lot of head scratching, this resource said my SparkConf needed:

.set(s"spark.sql.catalog.$catalog.header.X-Iceberg-Access-Delegation", "vended-credentials")

This is defined here. The other options are remote signing, where it seems Polaris will sign requests for you, and "unknown". But it's important for this to be set, as only then will the table's credentials lookup path be used.
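Pulling the whole thing together, this is roughly what my SparkConf ends up looking like for a Polaris catalog with vended credentials. It's a sketch: the URI, warehouse, client ID/secret and table names are placeholders, and the scope is the value the Polaris examples use.

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object PolarisVendedCredentialsExample {
  def main(args: Array[String]): Unit = {
    val catalog = "polaris"
    val conf = new SparkConf()
      .set("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
      .set(s"spark.sql.catalog.$catalog", "org.apache.iceberg.spark.SparkCatalog")
      .set(s"spark.sql.catalog.$catalog.type", "rest")
      .set(s"spark.sql.catalog.$catalog.uri", "http://localhost:8181/api/catalog")
      // clientId:clientSecret - exactly what OAuth2Util.parseCredential splits on the colon, not BEARER
      .set(s"spark.sql.catalog.$catalog.credential", "my-client-id:my-client-secret")
      // a scope Polaris recognises
      .set(s"spark.sql.catalog.$catalog.scope", "PRINCIPAL_ROLE:ALL")
      // the name of the Catalog you created in Polaris
      .set(s"spark.sql.catalog.$catalog.warehouse", "my_gcs_catalog")
      // without this, the table's credentials lookup path is never taken
      .set(s"spark.sql.catalog.$catalog.header.X-Iceberg-Access-Delegation", "vended-credentials")

    val spark = SparkSession.builder().master("local[*]").config(conf).getOrCreate()
    import spark.implicits._

    spark.sql(s"CREATE NAMESPACE IF NOT EXISTS $catalog.my_namespace")
    // it's the createTable call behind this that triggers the sub-scoped credential vending
    Seq((1, "a"), (2, "b")).toDF("id", "label").writeTo(s"$catalog.my_namespace.my_table").create()
  }
}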

Thursday, September 4, 2025

Three things about Docker

Most of the time, Docker just works. But sometimes, you need to be a bit clever. In my case, I want Polaris to have the permission to write to the host filesystem, not just to see it. This proved hard. These are some lessons I learned.

What's in an image?

You can break down what is in a Docker image with something like:

docker save ph1ll1phenry/polaris_for_bdd:latest -o polaris_for_bdd.tar
mkdir polaris_fs
tar -xf polaris_for_bdd.tar -C polaris_fs

Then you can start untarring the blobs (where each blob is a Docker layer). In my case, I was trying to find where the bash binary was:

for BLOB in $(ls  blobs/sha256/ ) ; do { echo $BLOB ; tar -vxf blobs/sha256/$BLOB | grep bash;  } done

How was an image built?

You can reconstitute the steps made to create a Docker image with something like:

docker history --no-trunc apache/polaris

Restoring OS properties

The apache/polaris Docker image had a lot of extraneous Linux binaries removed, presumably to make it smaller and more secure. However, I needed them back as I needed to grant the container certain permissions on the host.

First off, the su command had been removed. You can cannibalise binaries from other images in your Dockerfile like this:

FROM redhat/ubi9 AS donor
FROM apache/polaris AS final
...
COPY --from=donor /usr/bin/su /usr/bin/su

However, copying a binary over is, most of the time, a bit naive. Running su gave:

su: Critical error - immediate abort

Taking the parent Docker image before it was pruned, I could run:

[root@6230fb595115 ~]# ldd /usr/bin/su
linux-vdso.so.1 (0x00007fff87ae6000)
libpam.so.0 => /lib64/libpam.so.0 (0x0000733267a15000)
libpam_misc.so.0 => /lib64/libpam_misc.so.0 (0x0000733267a0f000)
libc.so.6 => /lib64/libc.so.6 (0x0000733267807000)
libaudit.so.1 => /lib64/libaudit.so.1 (0x00007332677d3000)
libeconf.so.0 => /lib64/libeconf.so.0 (0x00007332677c8000)
libm.so.6 => /lib64/libm.so.6 (0x00007332676ed000)
/lib64/ld-linux-x86-64.so.2 (0x0000733267a39000)
libcap-ng.so.0 => /lib64/libcap-ng.so.0 (0x00007332676e2000)

So, my Dockerfile had to COPY these files over too.
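Concretely, the extra lines in the Dockerfile looked something like this. It's a sketch based on the ldd output above: libc, libm and the loader should already be in the Polaris image for the JVM's sake, so it's only the PAM-related libraries that need donating, and your exact set may differ:

FROM redhat/ubi9 AS donor
FROM apache/polaris AS final
# su itself plus the shared objects ldd listed that the pruned image no longer has
COPY --from=donor /usr/bin/su /usr/bin/su
COPY --from=donor /lib64/libpam.so.0 /lib64/libpam.so.0
COPY --from=donor /lib64/libpam_misc.so.0 /lib64/libpam_misc.so.0
COPY --from=donor /lib64/libaudit.so.1 /lib64/libaudit.so.1
COPY --from=donor /lib64/libeconf.so.0 /lib64/libeconf.so.0
COPY --from=donor /lib64/libcap-ng.so.0 /lib64/libcap-ng.so.0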

These libpam* shared objects refer to Linux's Pluggable Authentication Modules, a centralized framework that delegates authentication to arbitrary pluggable modules - eg, MFA.

After a lot of faffing, I just COPYd the entire /etc/ folder from the donor to the final image. This is fine for integration tests but probably best avoided for prod :)