Kubernetes


AWS IAM with SPIFFE & SPIRE

Recently I’ve been working on some material around workload identity and authentication in Kubernetes. SPIFFE and SPIRE are two really interesting projects in this area. SPIFFE (Secure Production Identity Framework For Everyone) is a standard spec defining a workload identifier (the SPIFFE ID) that can be encoded into a SPIFFE Verifiable Identity Document (SVID), in either X.509 or JWT form. The spec also defines a few APIs that must be satisfied in order to register nodes, workloads and so on. SPIRE (SPIFFE Runtime Environment) is the reference implementation of the SPIFFE spec.
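To make that concrete, a SPIFFE ID is just a URI like spiffe://example.org/ns/default/sa/my-workload, and a workload retrieves its SVID from the Workload API exposed by the local SPIRE agent. Here’s a minimal sketch using the go-spiffe v2 library; the trust domain, path and agent socket location are assumptions and will vary by deployment.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// Connect to the SPIRE agent's Workload API over its Unix socket.
	// The socket path here is an assumption; it depends on how the agent is deployed.
	source, err := workloadapi.NewX509Source(ctx,
		workloadapi.WithClientOptions(workloadapi.WithAddr("unix:///run/spire/sockets/agent.sock")))
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// Fetch the workload's X.509 SVID and print its SPIFFE ID,
	// e.g. spiffe://example.org/ns/default/sa/my-workload (an illustrative value).
	svid, err := source.GetX509SVID()
	if err != nil {
		log.Fatalf("unable to fetch SVID: %v", err)
	}
	fmt.Println(svid.ID)
}
```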

A longer-form introduction to the projects is out-of-scope here but check out this great video by Andrew Jessup to learn more.

In this post I’m going to walk through configuring SPIFFE & SPIRE to provide fine-grained identities to Kubernetes pods, allowing them access to AWS IAM roles. You may already be familiar with existing projects like kiam or kube2iam, which try to achieve the same thing (and which both contain fairly serious security issues). The model for both of these tools is to proxy the EC2 metadata API and return AWS credentials as appropriate, with the scope of access defined by annotations on the pod spec.
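That proxying works because the AWS SDKs fall back to the EC2 instance metadata credentials endpoint when no other credentials are configured; kiam and kube2iam intercept those requests and answer with temporary credentials for the role assigned to the pod. Roughly, the call they intercept looks like the sketch below (the role name is a placeholder):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// The EC2 instance metadata credentials endpoint. In a kiam/kube2iam setup
	// this request never reaches EC2; it is intercepted by the proxy, which
	// returns credentials scoped to the role assigned to the pod.
	const base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

	// "my-pod-role" is a placeholder for whatever role the pod is annotated with.
	resp, err := http.Get(base + "my-pod-role")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The response body is a JSON document containing temporary
	// AccessKeyId, SecretAccessKey, Token and Expiration fields.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```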


Least Privilege in Kubernetes Using Impersonation

Recently I implemented an auth[zn] solution for a customer using Dex & AD. I might write more about that implementation in another post (as there were some interesting new capabilities we needed to add to Dex for our use case), but in this post I’m going to cover the pretty simple but powerful RBAC setup that we designed and implemented to complement it.

Kubernetes supports the concept of ‘impersonation’, and we’re going to look at the user & group configuration we built around it to enable least-privilege access to the cluster, even as an administrator. The aim was to make it more difficult to accidentally perform unwanted actions while keeping the complexity level low.
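Under the hood, impersonation just adds Impersonate-User and Impersonate-Group headers to API requests, and the caller needs RBAC permission for the impersonate verb on the target users and groups. As a rough sketch of what that looks like from client-go (the user and group names below are placeholders, not the customer’s actual setup):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the caller's own kubeconfig credentials.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}

	// Impersonate a lower-privileged group for day-to-day work.
	// The user and group names are placeholders for whatever the RBAC design uses.
	config.Impersonate = rest.ImpersonationConfig{
		UserName: "jane",
		Groups:   []string{"cluster-readonly"},
	}

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Requests are now evaluated against the impersonated identity's RBAC bindings.
	pods, err := client.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("visible pods: %d\n", len(pods.Items))
}
```

The same effect is available interactively with kubectl via the --as and --as-group flags.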


Kubernetes in Docker: Kind of a Big Deal

I’ve been playing a little bit with the Cluster API project recently (posts on that coming soon), and using Kind as an ephemeral bootstrap cluster. Kind is a super cool and fairly new project that I figured I’d explore a little bit in this post, as some folks may not be aware of it or have had a chance to get hands-on with it.

Kind was born out of the necessity for a lightweight local Kubernetes setup that could be used for testing and conformance. It now has uses across several SIGs, and the goals of the project are laid out in the official docs.


Dynamic Configuration Discovery in Grafana

A few of my colleagues have written posts recently on the Prometheus stack so I thought I’d get in on the action.

In this post I’ll walk through how Grafana uses sidecar containers to dynamically discover datasources and dashboards declared as ConfigMaps in Kubernetes to allow easy and extensible configuration for cluster operators.

Let’s dive in!
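As a preview of the mechanism, the sidecar pattern boils down to finding ConfigMaps that carry an agreed-upon label and writing their contents into a directory that Grafana’s provisioning picks up. The label key and output directory below are assumptions (the common sidecar images make both configurable), and this is only a sketch of the discovery step, not the sidecar’s actual code.

```go
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Running in-cluster as a sidecar alongside Grafana.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Find ConfigMaps labelled as dashboards. The label key is an assumption;
	// real sidecars watch for changes rather than listing once.
	cms, err := client.CoreV1().ConfigMaps("").List(context.Background(),
		metav1.ListOptions{LabelSelector: "grafana_dashboard=1"})
	if err != nil {
		log.Fatal(err)
	}

	// Write each dashboard JSON into the directory Grafana provisions from
	// (the path here is a placeholder).
	for _, cm := range cms.Items {
		for name, data := range cm.Data {
			path := filepath.Join("/var/lib/grafana/dashboards", name)
			if err := os.WriteFile(path, []byte(data), 0o644); err != nil {
				log.Printf("writing %s: %v", path, err)
			}
		}
	}
}
```

The same idea applies to datasources, with a different label and a provisioning reload on change.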
