Author Archives: Ratan Tipirneni

Cisco Acquires Isovalent: A Big Win for Cloud-Native Network Security and a Validation of Tigera’s Vision

This week’s news of Cisco’s intent to acquire Isovalent sends an important message to the cloud security ecosystem: network security is no longer an afterthought in the cloud-native world. It’s now a critical component of any robust security posture for cloud-native applications. This move not only validates the work of the Isovalent team in evangelizing this essential category but also underscores the vision Tigera has pioneered since 2016 with Project Calico.

I would first like to extend heartfelt congratulations to Isovalent and its founders on their well-deserved exit and thank them for their invaluable contributions to cloud-native network security.

Cisco’s acquisition recognizes that traditional perimeter security solutions simply don’t translate to the dynamic, distributed nature of cloud-native architectures and that network security is a critical part of a good cloud-native security design. This is a fundamental truth that Tigera identified early on with Project Calico. We saw the need for a fundamentally different approach to network security, one tailored to the unique demands of containerized and distributed applications running in the cloud.

Calico Open Source, born from this vision, has become the industry leader in container networking and security. It now powers over 100 million containers across 8 million+ Continue reading

Accelerating cloud-native development brings opportunities and challenges for enterprises

By 2025, Gartner estimates that over 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021. The momentum behind these workloads and solutions presents a significant opportunity for companies that can meet the challenges of this burgeoning industry.

As digitalization continues pushing applications and services to the cloud, many companies discover that traditional security, compliance, and observability approaches do not transfer directly to cloud-native architectures. This is the primary takeaway from Tigera’s recent The State of Cloud-Native Security report. As 75% of companies surveyed are focusing on cloud-native application development, it is imperative that leaders understand the differences, challenges, and opportunities of cloud-native environments to ensure they reap the efficiency, flexibility, and speed that these architectures offer.

Containers: Rethinking security

The flexibility container workloads provide makes the traditional ‘castle and moat’ approach to security obsolete. Cloud-native architectures do not have a single vulnerable entry point but many potential attack vectors because of the increased attack surface. Sixty-seven percent of companies named security as the top challenge affecting the speed of deployment cycles. Further, 69% of companies identified container-level firewall capabilities, such as intrusion detection and prevention, web application firewalls, protection from “Denial of Service” Continue reading

Tigera 2023 predictions: Cloud native security and the shifting landscape in 2023

Cloud computing and the use of cloud native architectures enable unparalleled performance, flexibility, and velocity. The speed of innovation has driven significant advancements across industries, but as digitalization continues pushing applications and services to the cloud, bad actors’ intrusion techniques have also become more sophisticated. The burgeoning threat landscape is top of mind for enterprise and midmarket business and security leaders, and should lead their decision-making—from the right solutions to implement, to the right partners to engage.

Tightening economic conditions and macroeconomic forces will continue to introduce challenges in the coming year, but businesses that sustainably provide value to their customers and make security a foundational aspect of their organization will thrive.

Here are some trends I anticipate for 2023:

Cloud-native inflection point

While the last few years were dominated by early adopters who thrive in the technical playgrounds of emerging technologies, 2023 will see the ‘early majority’ of mainstream users begin adopting cloud-native architectures as the market reaches an inflection point. This inflection is driven by the accelerating accessibility and usability of the tools and technologies available, as the early majority prioritizes platforms that work easily over those with advanced functions that they likely won’t use.

“Shift left” has become Continue reading

3 container security best practices to strengthen your overall security posture

Container environments are highly dynamic and require continuous monitoring, observability, and security. Since container security is a continuous practice, it should be fully integrated into the entire development and deployment cycle. Implementing security as an integral part of this cycle allows you to mitigate risk and reduce the number of vulnerabilities across the dynamic and complex attack surface containers present.

Let’s take a look at three best practices for ensuring containers remain secure during build, deployment, and runtime.

Securing container deployments

Securing containers during the build and deployment stages is all about vulnerability management. It’s important to continuously scan for vulnerabilities and misconfigurations in software before deployment, and to block deployments that fail to meet security requirements. Assess container and registry images by scanning first- and third-party images for vulnerabilities and misconfigurations, using a tool that can scan multiple registries and check findings against databases such as the NVD. You also need to continuously monitor images, workloads, and infrastructure against common configuration security standards (e.g. CIS Benchmarks). This enables you to meet internal and external compliance standards, and to quickly detect and remediate misconfigurations in your environment, thereby eliminating potential attack vectors.
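As an illustration of what such a gate can look like in practice, here is a minimal sketch of a CI step that parses a scanner’s JSON report and blocks the deployment when critical or high-severity findings are present. The report structure and severity threshold are assumptions; adapt the parsing and policy to whatever scanner your pipeline actually uses.

```python
# Minimal sketch of a CI/CD gate that blocks a deployment when an image scan
# report contains blocking findings. The report format below is a simplified,
# hypothetical JSON structure, not the output of any specific scanner.
import json
import sys

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}  # policy threshold (assumption)


def gate(report_path: str) -> int:
    with open(report_path) as f:
        report = json.load(f)

    # Assume the report lists findings as {"id": ..., "severity": ...} objects.
    blocking = [
        finding for finding in report.get("findings", [])
        if finding.get("severity", "").upper() in BLOCKING_SEVERITIES
    ]

    if blocking:
        for finding in blocking:
            print(f"BLOCKED: {finding.get('id')} ({finding.get('severity')})")
        return 1  # non-zero exit fails the pipeline stage and blocks the deploy
    print("No blocking vulnerabilities found; deployment may proceed.")
    return 0


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```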

Securing containers at runtime

Containerized workloads require a Continue reading

Zero trust in the cloud: Best practices and potential pitfalls

Architecturally speaking, cloud-native applications are broken down into smaller components that are highly dynamic, distributed, and ephemeral. Because each of these components is communicating with other components inside or outside the cluster, this architecture introduces new attack vectors that are difficult to protect against using a traditional perimeter-based approach. A prudent way to secure cloud-native applications is to find a way to reduce the number of attack vectors, and this is where the principles of zero trust come into play.

With today’s multi-cloud and hybrid-cloud environments, networks are no longer restricted to a clear perimeter with clearly defined borders to defend—and cyber criminals are taking advantage of this fact by tricking users and systems into providing unauthorized access. While a lot of zero trust is focused on limiting access from users and devices, organizations are now also recognizing that in the world of distributed cloud-native applications, workloads themselves are communicating with each other and the same principles of zero trust need to be extended to cloud-native applications.

Because traditional security methods such as network firewalls rely on fixed network addresses, they are insufficient to protect dynamic, distributed, and ephemeral cloud-native workloads, which have no such fixed addresses. They simply Continue reading
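To make the contrast concrete, below is a minimal sketch, using the official Kubernetes Python client, of a zero-trust-style policy that authorizes traffic by workload identity (pod labels) rather than by IP address. The namespace, labels, and port are hypothetical placeholders, and the sketch assumes a cluster whose CNI (such as Calico) enforces NetworkPolicy.

```python
# Minimal sketch: a Kubernetes NetworkPolicy that authorizes traffic by
# workload identity (pod labels) instead of fixed IP addresses.
# Assumes the official `kubernetes` Python client and a reachable cluster;
# the namespace, labels, and port below are illustrative placeholders.
from kubernetes import client, config


def allow_frontend_to_backend(namespace: str = "demo") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-backend"),
        spec=client.V1NetworkPolicySpec(
            # The policy follows the labeled workloads wherever they are
            # scheduled; no IP addresses are referenced anywhere.
            pod_selector=client.V1LabelSelector(match_labels={"app": "backend"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[
                        client.V1NetworkPolicyPeer(
                            pod_selector=client.V1LabelSelector(
                                match_labels={"app": "frontend"}
                            )
                        )
                    ],
                    ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )


if __name__ == "__main__":
    allow_frontend_to_backend()
```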

Rethinking security roles and organizational structure for the cloud

As more and more applications and application development move to the cloud, traditional security roles and organizational structures are being shaken up. Why is that and what are the benefits of a cloud-first approach for business?

Traditional vs. cloud model

Application development in the traditional model, especially in larger companies, can be thought of as a linear process—similar to a baton being passed between teammates (e.g. the application team hands off the baton to the security team). In this model, each team has its own area of expertise, such as networking, infrastructure, or security, and the application development process is self-contained within each team.

The downside to this model is that responsibilities are siloed, and interactions and hand-offs between teams create friction. For example, if one team needs something from another, they need to submit a ticket and deal with wait time. In the traditional model, it’s not unusual for the application development and deployment process to last weeks or months, and then there are bug fixes and new release rollouts to contend with.

A cloud model, on the other hand, offers several benefits, including automation, abstraction, and simplicity. The high degree of automation in cloud-native infrastructure in general Continue reading

Why your security teams are not ready for containers and Kubernetes, and what you can do about it

From a people perspective and an organizational standpoint, many CISOs have said that their security teams are not ready for containers and Kubernetes. This isn’t surprising, given the stark contrast between where we were less than a decade ago and where we are today in terms of systems architecture. I am of course referring to the cloud-native era, which has ushered in a whole new architectural approach.

With Kubernetes at the center asserting its domination, it’s time to start thinking about how we can best prepare security teams for this new era. To do that, let’s look at why they’re struggling in the first place (spoiler alert: it’s because organizations are struggling, too).

Security and organizational structure in the era of cloud-native computing

In the traditional software development and deployment model, things were quite static. We can think of the traditional model as a relay race where the baton was passed from the development team to the platform team to the security team. While this model works well for traditional application architectures, this type of organizational structure is less effective for newer architectures built around container orchestration and Kubernetes-native applications, where everything is dynamic and highly automated.

But perhaps the most Continue reading

Why you need Tigera’s new active cloud-native application security

First-generation security solutions for cloud-native applications have been failing because they apply a legacy mindset where the focus is on vulnerability scanning instead of a holistic approach to threat detection, threat prevention, and remediation. Given that the attack surface of modern applications is much larger than in traditional apps, security teams are struggling to keep up and we’ve seen a spike in breaches.

To better protect cloud-native applications, we need solutions that focus on threat prevention by reducing the attack surface. With this foundation, we can then layer on threat detection and threat mitigation strategies.

I have exciting news to share on this front! Today, Tigera launched new capabilities in its Calico product line to help you address your most urgent cloud security needs. Before getting into a discussion about the features themselves, I’d like to talk about the driving force behind the changes, our thought process, and why we’re well-positioned to bring these to market.

A new runtime security model

To properly secure modern cloud-native applications, we need to use a modern architecture that aligns with them. At Tigera, we’ve created a model we call active cloud-native application runtime security. This model has three components:

Why securing internet-facing applications is challenging in a Kubernetes environment

Internet-facing applications are some of the most targeted workloads by threat actors. Securing this type of application is a must in order to protect your network, but the task is more complex in Kubernetes than in traditional environments. Not only are threats magnified in a Kubernetes environment, but internet-facing applications in Kubernetes are also more vulnerable than their counterparts in traditional environments. Let’s take a look at the reasons behind these challenges, and the steps you should take to protect your Kubernetes workloads.

Threats are magnified in a Kubernetes environment

One of the fundamental challenges in a Kubernetes environment is that there is no finite set of ways in which workloads can be attacked. This means there are a multitude of ways an internet-facing application could be compromised, and a multitude of ways that such an attack could propagate within the environment.

By default, Kubernetes allows anything inside a cluster to communicate with anything else inside the cluster, essentially giving an attacker who manages to gain a foothold unlimited access and a large attack surface. Because of this design, any time you have Continue reading
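One common way to remove that default allow-any-to-any behavior is a namespace-wide “default deny” policy, on top of which specific flows are then explicitly allowed. The sketch below, again using the Kubernetes Python client with an illustrative namespace, is one minimal way to express it; it likewise assumes a CNI that enforces NetworkPolicy.

```python
# Minimal sketch: a namespace-wide "default deny" NetworkPolicy that removes
# Kubernetes' default allow-any-to-any behavior, so a foothold does not
# automatically translate into free lateral movement.
# Assumes the official `kubernetes` Python client; the namespace is a placeholder.
from kubernetes import client, config


def apply_default_deny(namespace: str = "demo") -> None:
    config.load_kube_config()

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-all"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = every pod
            policy_types=["Ingress", "Egress"],     # no rules listed = deny both
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )


if __name__ == "__main__":
    apply_default_deny()
```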

Kubernetes Observability Challenges: The Need for an AI-Driven Solution

Kubernetes provides abstraction and simplicity with a declarative model to program complex deployments. However, this abstraction and simplicity create complexity when debugging microservices behind that abstraction layer. The following four vectors make it challenging to troubleshoot microservices.

  1. The first vector is the Kubernetes microservices architecture, where tens to hundreds of microservices communicate. Debugging such a componentized application is challenging and requires specialized tools.
  2. The second vector is the distributed infrastructure spread across heterogeneous on-premises and cloud environments.
  3. The third vector of complexity is the dynamic nature of Kubernetes infrastructure. The platform spins up required resources and provides an ephemeral infrastructure environment to scale the application based on demand.
  4. Lastly, in such a distributed environment, Kubernetes deployments need fine-grained security and an observability model with defense-in-depth to keep them secure. While modern security controls effectively protect your workloads, they can have unintended consequences by preventing applications from running smoothly and creating an additional layer of complexity when debugging applications.

Today, DevOps and SRE teams must stitch together an enormous amount of data from multiple, disparate systems that monitor the infrastructure and service layers in order to troubleshoot Kubernetes microservices issues. Not only is it overwhelming to stitch this data together, but troubleshooting using Continue reading
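To give a flavor of the manual stitching described above, here is a small, purely illustrative sketch that correlates flow records from one system with pod metadata from another to answer a single question. The record formats are hypothetical; real troubleshooting spans far more sources, fields, and time windows, which is what makes it overwhelming at scale.

```python
# Purely illustrative sketch of manual data stitching: join flow records
# (e.g. from a network observability tool) with pod metadata (e.g. from the
# Kubernetes API) to see which deployment is generating denied traffic.
# Both record formats below are assumed for the sake of the example.
from collections import defaultdict

flow_logs = [
    {"src_pod": "frontend-7d9f", "dst_pod": "backend-5c2a", "action": "deny"},
    {"src_pod": "frontend-7d9f", "dst_pod": "backend-5c2a", "action": "allow"},
]
pod_metadata = {
    "frontend-7d9f": {"namespace": "shop", "deployment": "frontend"},
    "backend-5c2a": {"namespace": "shop", "deployment": "backend"},
}

denied_by_deployment = defaultdict(int)
for flow in flow_logs:
    if flow["action"] != "deny":
        continue
    meta = pod_metadata.get(flow["src_pod"], {})
    denied_by_deployment[meta.get("deployment", "unknown")] += 1

for deployment, count in denied_by_deployment.items():
    print(f"{deployment}: {count} denied flow(s)")
```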