Archive

Category Archives for "Networking – The New Stack"

The Evolution to Service-Based Networking

Peter McCarron Peter is a senior product marketing manager for Consul at HashiCorp, based in San Francisco. If he's not studying the best way to discover and manage microservices or talking about cloud-based networking, you'll likely find him discovering real clouds in the great outdoors. At first glance, it seems clear that the cloud era has fundamentally changed the way we think about networking. We're now operating outside defined perimeters, and networks can span multiple data centers or clouds. But has networking really changed all that much from the days when everything lived in on-premises data centers? After all, it's still all about establishing consistent connectivity and enforcing security policies. So why does everything seem so different and complicated when it comes to the cloud? To better understand the evolution to modern networking, it's important to step back and identify the core workflows that have defined those changes, including:

- Discovering services
- Securing networks
- Automating networking tasks
- Controlling access

In this article, we will walk through each of these workflows and talk about how they are combined to achieve a modern service-based networking solution. Since I work at HashiCorp, I'm going to use …
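
The first workflow, discovering services, usually means registering each service in a central catalog that other services can query. As a hedged sketch of what that looks like in Consul (the service name, port and health check endpoint are invented for illustration), a service definition file might read:

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "tags": ["primary"],
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s"
    }
  }
}
```

Registered this way, the service can be looked up by name through Consul's DNS or HTTP interfaces, which is what turns a list of IP addresses into service-based networking.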

How Observability Helps Troubleshoot Incidents Faster

Savannah Morgan Savannah is a senior technical customer success manager at Honeycomb. She is passionate about helping users find creative solutions to complex problems. When she is off the clock, Savannah can be found at the park with her family, binge-watching Netflix or spoiling her big pup, Bruce. It all starts with the dreaded alert. Something went awry, and it needs to be fixed ASAP. Whether it's the middle of the night and you're the on-call responder, or it's the middle of the afternoon and your whole team is working together to ship a bundle of diffs, an incident is extremely disruptive to your business and often very expensive, making every minute count. So how can observability (o11y for short) help teams save precious time and resolve incidents faster? First, let's explore the changing landscape from monitoring to observability. Debugging Using Traditional Monitoring Tools The key to resolving an incident quickly is rapidly understanding why things went wrong, where in your code it's happening and, most of all, who it affects and how to fix it. Most of us learned to debug using static dashboards powered by metrics-based monitoring tools like Prometheus or Datadog, plus a whole …

What Is Zero Trust Security?

Zero trust is a framework for security in which all users of an application, software, system or network, inside or outside of an organization, must be authenticated, verified and frequently validated before being granted access to specific data or tools within the company's network. In the zero trust framework, networks can be in the cloud, hybrid or on-premises, with employees in any location. The assumption is that no users or devices are to be trusted with access without meeting the necessary validation requirements. In today's digital-transformation-driven environment, the zero trust security framework helps ensure that infrastructure and data are kept safe and that modern business challenges are handled appropriately. For example, as the pandemic has evolved, securing remote workers and their access will be of growing importance for organizations that want to scale their workforce. Ransomware threats and attacks are increasing, and a zero trust implementation can detect these threats, from novel ones to custom-crafted malware, long before they cause harm. What Foundation Makes up Zero Trust? Zero trust security is built on the architecture established by the National Institute of Standards and Technology (NIST). The …

Use Multi-Availability Zone Kubernetes for Disaster Recovery

Nicolas Vermandé Nicolas is the principal developer advocate at Ondat. He is an experienced hands-on technologist, evangelist and product owner who has been working in the fields of cloud native technologies, open source software, virtualization and data center networking for the past 17 years. Passionate about enabling users and building cool tech that solves real-life problems, you'll often see him speaking at global tech conferences and online events. Outages and degraded performance are inevitable: operators make mistakes, new protocols introduce errors, natural disasters damage equipment and more. That's why, rather than trust Amazon's ability to design a hurricane-proof data center, most platform managers opt to spread their application's infrastructure across multiple availability zones (AZs). AZ outages aren't terribly common, but …
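
Spreading replicas across AZs is something Kubernetes can enforce directly with topology spread constraints. A minimal sketch (the Deployment name, label and image are placeholders, and it assumes nodes carry the standard topology.kubernetes.io/zone label that cloud providers set):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                # at most 1 replica imbalance between zones
          topologyKey: topology.kubernetes.io/zone  # spread across availability zones
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.25
```

With `whenUnsatisfiable: DoNotSchedule`, the scheduler refuses to pile replicas into a single zone, so a single AZ outage leaves the other replicas serving traffic.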

Simple Load Testing with GitHub Actions

Michael Kalantar Michael is a senior software engineer who has contributed to the design and development of a number of scalable distributed and cloud-based enterprise systems. He is a co-founder of the Iter8 project. In this article, we show how to use GitHub Actions to load-test, benchmark and validate HTTP and gRPC services against service-level objectives (SLOs). When developing a new version of an HTTP or gRPC service, it is desirable to benchmark its performance and to validate that it satisfies the desired SLOs before upgrading the current version. We describe a no-code approach based on GitHub Actions that can be used to automate such testing at any point in a continuous integration/continuous delivery (CI/CD) pipeline. For example, at build time it can be used to validate the new version as soon as possible. Alternatively, at deployment time it can be used to validate SLOs in the production environment. HTTP Load Testing with the Iter8 GitHub Action The Iter8 GitHub Action, iter8-tools/iter8-action@v1, enables automated Iter8 experiments in a GitHub workflow. To use the action, specify an experiment chart and its configuration via a Helm valuesFile. No programming is necessary — all configuration is declarative. Typical use is to …
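
A workflow using the action might look like the following sketch. Only the action name, iter8-tools/iter8-action@v1, and the idea of an experiment chart plus a Helm valuesFile come from the article; the trigger, input keys, chart name and file path are illustrative assumptions:

```yaml
name: load-test
on: pull_request

jobs:
  validate-slos:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Iter8 experiment
        uses: iter8-tools/iter8-action@v1
        with:
          chart: load-test-http           # experiment chart (name assumed)
          valuesFile: iter8/values.yaml   # declarative SLO configuration (path assumed)
```

The values file would declare the target URL and SLO thresholds, so the job fails the pull request when the service misses its objectives.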

Tracing the History of the Internet, Layer by Layer

Grace Andrews An enthusiastic technologist with a cross-cultural focus and experience managing, facilitating and executing entrepreneurial training and processes, Grace has a keen eye for public relations, marketing, consulting and networking. Did you know that the fiber cables that helped bring you this web page may be buried inside a pipeline originally built to carry oil and gas? Or that Cold War military researchers were instrumental in birthing the concepts that gave rise to those cables in the first place? How about the fact that people once tried to build their own cellular phone networks using analog modems? Few of the people who use the internet daily, from those creating GitHub repos to those simply scrolling through Twitter, are aware of the fascinating backstory of the physical infrastructure that makes it all work. For more creative content by and about the humans that build and scale the internet, follow along on Twitter and Instagram. …

Modernizing Network Monitoring with InfluxDB and Telegraf

Charles Mahler Charles is a technical marketing writer at InfluxData. His background includes work in digital marketing and full-stack software development. As the technology landscape continues to change at a rapid pace, enterprise companies are in a rush to catch up and modernize their legacy IT and network infrastructure to capture the benefits of newly developed tools and best practices. By adopting modern DevOps techniques, they can reduce their operational costs, increase the reliability of their services and improve the overall speed and agility at which their IT teams are able to move. Background …
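
In a Telegraf-plus-InfluxDB setup, network metrics are typically gathered by a Telegraf input plugin and shipped to InfluxDB by an output plugin. A hedged sketch of such a configuration, assuming Telegraf's snmp input and influxdb_v2 output plugins (the agent address, community string, token, organization and bucket are all placeholders):

```toml
# Poll a router's uptime over SNMP (address and community are placeholders)
[[inputs.snmp]]
  agents = ["udp://192.0.2.1:161"]
  community = "public"
  [[inputs.snmp.field]]
    name = "uptime"
    oid = "1.3.6.1.2.1.1.3.0"

# Write the collected metrics to InfluxDB 2.x (credentials are placeholders)
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "example-org"
  bucket = "network"
```

Swapping inputs (SNMP, ping, gNMI and so on) without touching the output side is what makes this style of pipeline easy to extend as monitoring needs grow.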

Iter8 Unifies Performance Validation for gRPC and HTTP

Srinivasan Parthasarathy Sri is an applied machine learning researcher with a track record of creating scalable AI/ML/advanced-optimization-based enterprise solutions for hybrid cloud, cybersecurity and data-exploration problem domains. A co-founder of Iter8, he has presented at KubeCon 2020 and 2021, and at community meetups like Knative and KFServing. gRPC is an open source remote procedure call (RPC) system that is becoming increasingly popular for connecting microservices and for connecting mobile and web clients to backend services. Benchmarking and performance validation are essential building blocks in the continuous integration and delivery (CI/CD) of robust gRPC services. In this hands-on article, we show how Iter8 unifies performance validation for HTTP and gRPC services. What Is Iter8? …

Multifactor Authentication Is Being Targeted by Hackers

It was only a matter of time. While multifactor authentication (MFA) makes logging into systems safer, it doesn't make it "safe." As the well-known hackers at KnowBe4 showed in 2018, it's easy to defeat. Now Proofpoint has found phishing kits built around a transparent reverse proxy. Transparent reverse proxies typically enable man-in-the-middle (MitM) attacks that steal credentials and session cookies. Why go to this trouble? Because, as one MFA company reports, 78% of users now use MFA, compared to just 28% in 2017. That's good news, but it's also given cybercrooks the incentive they needed to target MFA. A Range of Kits To make it easy for wannabe hackers, Proofpoint found, today's phishing kits range from "simple open-source kits with human-readable code and no-frills functionality …

How to Make the Most of Kubernetes Environment Variables

In traditional systems, environment variables play an important role, but not always a crucial one. Some applications make more use of environment variables than others; some prefer configuration files over environment variables. When it comes to Kubernetes, however, environment variables are more important than you might think. That's partially due to the way containers work in general and partially due to the specifics of Kubernetes. In this post, you'll learn all about environment variables in Kubernetes. The Basics Let's start with the basics. What are environment variables, and why do they exist? Traditionally, environment variables are dynamic key-value variables that are accessible to any process running on the system. The operating system itself will set many environment variables that help running processes understand the specifics of the system. Thanks to this, software developers can include logic that makes their programs adjustable to a specific operating system. Environment variables also hold a lot of important information about the user: things like username, preferred language, user home directory path and many other useful bits of information. User-Defined Environment Variables Dawid Ziolkowski …
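
In a pod spec, both user-defined values and values injected from Kubernetes itself are declared under the container's env field. A minimal sketch (the pod name, image and variable names are invented for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo $GREETING from $POD_NAME"]
      env:
        - name: GREETING          # user-defined, static value
          value: "hello"
        - name: POD_NAME          # injected by Kubernetes from pod metadata
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
```

The valueFrom form is the Kubernetes-specific part: the same mechanism can also pull values from ConfigMaps and Secrets, which is why environment variables carry more weight here than in traditional systems.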

How We Built Preview Environments on Kubernetes and AWS

Romaric Philogène Romaric is CEO and co-founder of Qovery, with more than 10 years of experience in site reliability engineering and software development. For two years at Qovery, we built the Qovery Engine. Environments On Qovery, every application and database belongs to an environment, a logical entity that links all resources together. When turning on the Preview Environment feature, Qovery will duplicate an …

Microsoft Brings eBPF to Windows

If you want to run code to provide observability, security or network functionality, running it in the kernel of your operating system gives you a lot of power, because the kernel can see and control everything on the system. That's powerful, but potentially intrusive or dangerous if you get it wrong, whether that means introducing a vulnerability or just slowing the system down. If you're looking for a way to take advantage of that kind of privileged context without the potential danger, eBPF is emerging as an alternative, and now it's coming to Windows. Not Just Networking Originally, eBPF stood for "extended Berkeley Packet Filter," updating the open source networking tool that puts a packet filter in the Linux kernel for higher-performance packet tracing (now often called cBPF, for classic BPF). But it's now a generic mechanism for running many kinds of code safely in a privileged context by using a sandbox, with application monitoring, profiling and security workloads as well as networking, so it's not really an acronym anymore. That privileged context doesn't even have to be an OS kernel, although it still tends to be, with eBPF being a more stable and secure alternative to kernel modules …

WAF: Securing Applications at the Edge

Sheraline Barthelmy Sheraline is the head of product, marketing and customer success at Cox Edge, an edge cloud startup from Cox Communications. At Cox Edge, she's focused on developing the tools and systems that customers and developers rely on to build the next generation of edge applications. These days, brick-and-mortar or television-style bank robberies and heists seem old-fashioned no matter how well planned or executed. What the new "money" criminals are after is personal data, and the "banks" being attacked are the growing number of web applications. Studies show that web application attacks have become the single most significant cause of data breaches. According to NTT's 2020 Global Threat Intelligence Report (GTIR), more than half (55%) of all attacks in 2019 were a mix of web application and application-specific attacks, up from 32% the year before. As organizations move away from VPNs, virtual machines and centralized management systems to distributing and even running applications at the edge, conventional perimeter-based security like network firewalls isn't enough. The best defense is a firewall that can mitigate application-layer attacks. Web Application Firewall (WAF) A WAF helps protect web applications from application-layer attacks like cross-site scripting, SQL injection attacks, remote file inclusion and cookie …

Cisco Brings Webex Collaboration to SD-WAN Cloud Program

The dramatic shift to remote work brought on two years ago by the onset of the COVID-19 pandemic forced companies, almost overnight, not only to adapt their business models but also to focus on technologies that would allow them and their employees to operate productively and securely. That included embracing connectivity solutions to ensure access to the applications and data critical for getting the job done, and collaboration tools to enable employees to work together more easily even when located many miles apart. All that has accelerated the growth in such markets as software-defined wide-area networking (SD-WAN) and video conferencing and remote communications offerings like Microsoft Teams, Cisco Systems' Webex and Zoom. Reliance on such technologies will only grow, given that many companies expect to continue a hybrid work environment even after the pandemic lifts. A blog post this week pointed to numbers from Gartner showing that 48% of employees are expected to work remotely post-pandemic and that hybrid workplaces will become commonplace. "In this new norm, seamless communication and collaboration will be the bare minimum for enterprises to achieve workforce productivity …

Confluent’s Q1 Updates: ‘Data Mesh vs. Data Mess’

Confluent says it will release a series of updates to its data streaming platform every quarter. This quarter, the updates consist of a number of new features built on the Apache Kafka open source distributed event streaming platform: schema linking, new controls to shrink cluster capacity on demand and new fully managed Kafka connectors. The new capabilities "can make a huge difference in creating a data mesh versus a data mess," Rosanova told The New Stack. Schema Linking gives organizations the freedom to develop without the risk of damaging production. "Dev and prod generally don't talk to one another, because production environments are so sensitive, you don't want to give everyone access," Rosanova said. With Schema Linking, built on top of Cluster Linking, schemas can be shared and kept in sync in real time across teams, organizations and environments, such as hybrid and multicloud environments. "This is far more scalable and efficient compared to workarounds I've seen where people are literally sharing schemas through spreadsheets," Rosanova said. Much verbiage is devoted to scaling up, but dynamically scaling network resources back down to avoid waste is often not addressed. As Rosanova noted, organizations maintain high availability by beefing up their capacity to handle spikes in traffic and avoid downtime. "We added a simple, self-service way to scale back capacity so customers no longer have to worry about wasting resources on capacity they don't use. These clusters also automatically rebalance your data every time you scale up or down," Rosanova said. "This solves the really hard challenge of rebalancing workloads while they are running. It's like changing the tires on a moving car. Now you can optimize data placement without disrupting the real-time flow of information." New Connectors Confluent's new release features over 50 managed connectors for Confluent Cloud.
The idea behind Confluent's Apache Kafka connectors is to facilitate network connections for data streaming with the data sources and sinks that organizations select. In the last six months, Confluent more than doubled the number of managed connectors it offers, Rosanova said. "Once one system is connected, two more need to be added, and so on," he said. "We are bringing real-time data to traditional, non-real-time places to quickly modernize companies' applications. This is a significant need that continues to grow." Kafka has emerged as a leading data streaming platform, and Confluent continues to evolve with it, Rosanova said. "We are improving what businesses can accomplish with Kafka through these new capabilities. Real-time data streaming continues to play an important role in the services and experiences that set organizations apart," Rosanova said. "We want to make real-time data streaming within reach for any organization and are continuing to build a platform that is cloud native, complete, and available everywhere." Confluent's connector list now includes:

- Data warehouse connectors: Snowflake, Google BigQuery, Azure Synapse Analytics, Amazon Redshift
- Database connectors: MongoDB Atlas, PostgreSQL, MySQL, Microsoft SQL Server, Azure Cosmos DB, Amazon DynamoDB, Oracle Database, Redis, Google Cloud Bigtable
- Data lake connectors: Amazon S3, Google Cloud Storage, Azure Blob Storage, Azure Data Lake Storage Gen 2, Databricks Delta Lake

Additionally, Confluent has improved access to popular tools for network monitoring. The platform now offers integrations with Datadog and Prometheus. "With a few clicks, operators have deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use," according to a company blog post.

Makings of a Web3 Stack: Agoric, IPFS, Cosmos Network

Want an easy way to get started in Web3? In this discussion, Dietrich Ayala, IPFS ecosystem growth engineer; Rowland Graus, head of product for Agoric; and Marko Baricevic, software engineer for Cosmos Network, an open source technology that helps blockchains interoperate, each describe the role their respective technology plays in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts. TNS editor-in-chief Joab Jackson hosted the …

Anomaly Detection: Glimpse into the Future of IoT Data

Margaret Lee Margaret is senior vice president and general manager of digital service and operations management for BMC Software, Inc. She has P&L responsibility for the company's full suite of BMC Helix solutions for IT service management and IT operations management. Big data and the internet of things go hand in hand. With the continued proliferation of IoT devices, one prognosticator estimates there will be …
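
At its simplest, anomaly detection on an IoT data stream means flagging readings that deviate sharply from recent history. A toy sketch of that idea (the sensor feed, window size and threshold are invented for illustration; production systems use far more robust models):

```python
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings that sit more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.pstdev(recent)
        if stdev > 0 and abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A temperature feed that spikes at index 8
feed = [21.0, 21.2, 20.9, 21.1, 21.0, 21.1, 20.8, 21.0, 35.0, 21.1]
print(detect_anomalies(feed))  # → [8]
```

Even this crude rolling z-score captures the core trade-off in IoT analytics: the window bounds memory per sensor, which matters when the number of devices runs into the millions.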

Lessons Learned from 6 Years of IO Scheduling at ScyllaDB

Pavel (Xemul) Emelyanov Pavel is a principal engineer at ScyllaDB. He is an ex-Linux kernel hacker now speeding up row cache, tweaking the IO scheduler and helping to pay back technical debt for component interdependencies. Scheduling requests of any kind always serves one purpose: to gain control over the priorities of those requests. In a priority-less system there's no need to schedule; just putting whatever arrives into the queue and waiting until it finishes is enough. I'm a principal engineer for …
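
The contrast between a plain queue and a scheduler can be made concrete with a toy priority scheduler (this sketch is illustrative only and unrelated to ScyllaDB's actual implementation; the request names are invented):

```python
import heapq

class PriorityScheduler:
    """Toy IO scheduler: lower priority number is served first;
    requests within the same priority class are served FIFO."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # monotonic counter for FIFO tie-breaking

    def submit(self, priority, request):
        heapq.heappush(self._queue, (priority, self._counter, request))
        self._counter += 1

    def next_request(self):
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.submit(2, "compaction-read")   # background work, low priority
sched.submit(0, "user-query")        # latency-sensitive, high priority
sched.submit(1, "memtable-flush")
print(sched.next_request())  # → user-query
```

A priority-less system is the degenerate case where every request has the same priority, and the structure collapses into the plain FIFO queue the article describes.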

Using Rustlang’s Async Tokio Runtime for CPU-Bound Tasks

Despite the term async and its association with asynchronous network I/O, this blog post argues that the Tokio runtime is also a good fit for CPU-bound tasks. Tokio.rs describes it as "an asynchronous runtime for the Rust programming language. It provides the building blocks needed for writing network applications." While this description emphasizes Tokio's use for network communications, the runtime can be used for other purposes, as we will explore below. Why Use Tokio for CPU Tasks? It turns out that modern analytics engines invariably need to …

Solo.io Brings ‘Docker-Like Experience’ to eBPF with BumbleBee

Service mesh integration software provider Solo.io has released BumbleBee, a new open source project that uses extended Berkeley Packet Filter (eBPF) technology in order to "shortcut the HTTP stack." BPF Type Format (BTF), explained Levine, "(along with some smarts added to clang) enables the BPF program loader to fix the BPF byte code to work correctly on different versions of the kernel. For example, if a BPF program accesses a struct, clang now stores all these struct accesses in a special location in the BPF program binary. libbpf can go to each of these struct accesses, and use BTF information from the current kernel (obtained at runtime) to fix these accesses to the correct offset." BumbleBee to the Rescue With the addition of BTF, Solo.io created BumbleBee, which not only uses BTF to parse and bring to the user space the maps of eBPF programs, but also uses the …
