GKE and EKS forward audit logs from the Kubernetes API server to Cloud Audit Logs and CloudWatch respectively. Unfortunately, the logs from each provider have a slightly different format, which means that you can't simply apply the same rules to logs from both sources indiscriminately.
Taking a single operation - the creation of an nginx pod - in vanilla installations of GKE and EKS, I extracted the audit log record each provider created for it. From this, I have created a simple mapping between GKE and EKS, which can be found here, along with the raw log data that the mapping is based on.
I chose GKE and EKS because they are probably the most popular choices for managed k8s deployments, and there’s a possibility that for various reasons you might have clusters on both providers.
The logs themselves are fairly similar, with some key differences:
- I couldn't find a simple field in the CloudWatch log record to tell me exactly which cluster and region the operation was occurring in. I assume that you would need to correlate the operation with other data, such as IAM logs, in order to work that out, but it seems like an obvious nice-to-have. The data is clearly accessible in the `resource` field of the GKE Cloud Audit Logs.
- The Cloud Audit Log throws a bunch of log data into a `protoPayload` object, which potentially reflects the fact that the log is being pushed as a protobuf. It's a little bit messier than the EKS log, which is much easier to parse because fields are better named and better split up.
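To make the structural difference concrete, here's a sketch of the two shapes side by side. The records are heavily trimmed and the field names are my reading of the raw log data above - treat them as illustrative rather than a complete schema.

```python
# Trimmed, illustrative records -- not the full log schema from either provider.
gke_record = {
    "protoPayload": {
        "methodName": "io.k8s.core.v1.pods.create",
        "authenticationInfo": {"principalEmail": "alice@example.com"},
    },
    "resource": {
        "type": "k8s_cluster",
        # Cluster and region are right here in the GKE record.
        "labels": {"cluster_name": "prod-1", "location": "europe-west2"},
    },
}

eks_record = {
    # The EKS record looks like a raw Kubernetes audit event: flatter and
    # better split up, but it only describes the API call itself.
    "verb": "create",
    "user": {"username": "alice"},
    "objectRef": {"resource": "pods", "namespace": "default", "name": "nginx"},
}

# Cluster/region are directly addressable in the GKE record...
print(gke_record["resource"]["labels"]["cluster_name"])  # prod-1
# ...but nothing equivalent appears in the EKS record.
print(eks_record["objectRef"]["resource"])  # pods
```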
Regardless, based on the records above it should be easy to write some sort of translation layer that unifies GKE and EKS audit log data, so that you can compare consistently between the two.
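A minimal sketch of such a translation layer might look like the following. The target schema is something I've made up for illustration, and the source field names are assumptions based on the log records I collected, not an official mapping from either provider.

```python
def normalize(record: dict) -> dict:
    """Map a GKE or EKS audit log record onto one common, made-up schema."""
    if "protoPayload" in record:
        # GKE Cloud Audit Logs shape: the Kubernetes event lives under
        # protoPayload, and cluster metadata lives under resource.labels.
        payload = record["protoPayload"]
        labels = record.get("resource", {}).get("labels", {})
        return {
            "provider": "gke",
            "method": payload.get("methodName"),
            "user": payload.get("authenticationInfo", {}).get("principalEmail"),
            "cluster": labels.get("cluster_name"),
            "region": labels.get("location"),
        }
    # Otherwise assume the EKS / raw Kubernetes audit event shape.
    obj = record.get("objectRef", {})
    return {
        "provider": "eks",
        "method": f"{record.get('verb')} {obj.get('resource')}",
        "user": record.get("user", {}).get("username"),
        # As noted above, cluster/region aren't in the record itself, so
        # they would have to come from context such as the log group.
        "cluster": None,
        "region": None,
    }

print(normalize({
    "verb": "create",
    "objectRef": {"resource": "pods", "name": "nginx"},
    "user": {"username": "alice"},
})["method"])  # create pods
```

The point of the common schema is that downstream rules only ever match on one shape; the per-provider quirks are contained in this one function.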
However, as far as I'm aware you can't control the formatting of either log source, so you're probably still exposed to the whims of each provider in the long run.