
Category Archives for "Networking – The New Stack"

Wireshark Celebrates 25th Anniversary with a New Foundation

No doubt, countless engineers and hackers remember the first time they used Wireshark: as with a newly developed microscope revealing cells for the first time, what was once just an inscrutable packet opened up to reveal a treasure trove of useful information. This year, the venerable Wireshark turned 25, and its creators are taking a step back from this massively successful open source project to let additional parties help govern it. This month, Sysdig, the current sponsor of Wireshark, launched a new foundation that will serve as the long-term custodian of the project.

This Week in Computing: Malware Gone Wild

Malware is sneaky AF. It tries to hide itself and cover up its actions. It detects when it is being studied in a virtual sandbox, and so it sits still to evade detection. But when it senses a less secure environment — such as an unpatched Windows 7 box — it goes wild, notes Tudor Dumitras in a recently posted talk. Dumitras describes “red pills,” techniques that help malware detect when it is running in a controlled environment and change its behavior accordingly. As a result, many of the signatures used by commercial malware detection packages may not Continue reading
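To make the idea concrete, here is a minimal, hypothetical sketch of a “red pill” check (not code from Dumitras’ talk) that merely peeks at the virtualization hints Linux exposes; real malware uses far subtler signals, such as timing skew:

```python
import os
import platform

def looks_like_a_sandbox() -> bool:
    """Crude 'red pill': report whether common hypervisor hints are present."""
    # On Linux, the kernel advertises virtualization in the CPU flags.
    if platform.system() == "Linux" and os.path.exists("/proc/cpuinfo"):
        with open("/proc/cpuinfo") as f:
            if "hypervisor" in f.read():
                return True
    # DMI product names often leak the virtualization vendor.
    dmi_path = "/sys/class/dmi/id/product_name"
    if os.path.exists(dmi_path):
        with open(dmi_path) as f:
            product = f.read().lower()
        if any(vendor in product for vendor in ("vmware", "virtualbox", "qemu", "kvm")):
            return True
    return False

if __name__ == "__main__":
    # A malware sample would "sit still" on True and "go wild" on False.
    print("analysis environment suspected" if looks_like_a_sandbox() else "looks like bare metal")
```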

JWTs: Connecting the Dots: Why, When and How

JSON web tokens (JWTs) are great — they are easy to work with and stateless, requiring less communication with a centralized authentication server. JWTs are handy when you need to securely pass information between services. As such, they’re often used as ID tokens or access tokens. This is generally considered a secure practice, as the tokens are usually signed and sometimes encrypted. However, when incorrectly configured or misused, JWTs can lead to broken object-level authorization or broken function-level authorization vulnerabilities. These vulnerabilities can expose a state where users can access other data or endpoints beyond their privileges. Therefore, it’s vital to follow best practices for using JWTs. Knowing and understanding the fundamentals of JWTs is essential when determining a behavior strategy. Curity is a leading IAM and API security technology provider that enables user authentication and authorization for digital services. The Curity Identity Server is highly scalable and handles the complexities of the leading identity standards, making them easier to use, customize and deploy. Continue reading
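To make the “correctly configured” part concrete, here is a minimal, hypothetical sketch of strict token validation using the PyJWT library; the key, issuer and audience values are placeholders, not anything prescribed by Curity:

```python
import jwt  # PyJWT: pip install pyjwt

# Placeholder key: in practice, fetch the issuer's public key from its JWKS endpoint.
PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"

def validate_access_token(token: str) -> dict:
    """Verify signature, algorithm, expiry, issuer and audience before trusting claims."""
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],                 # pin the algorithm; never accept "none"
        issuer="https://idp.example.com",     # assumed issuer
        audience="https://api.example.com",   # assumed audience
        options={"require": ["exp", "iss", "aud"]},  # reject tokens missing these claims
    )
```

Note that even a fully validated token does not prevent broken object-level authorization: the API must still check that the token’s subject is allowed to touch the specific resource being requested.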

Palo Alto Networks Adds AI to Automate SASE Admin Operations

Whether one pronounces SASE as “sassy” or “sayce,” a secure access service edge is fast becoming central to enterprise IT as increasing amounts of data come into systems from a multiplicity of channels. Palo Alto Networks this week revealed new capabilities that update its Prisma SASE platform by — you guessed it — adding AI, Matt De Vincentes told The New Stack. “You can mix and match these components from multiple different vendors, and you get a potential stack when you have these capabilities kind of integrated together,” De Vincentes said. “But increasingly, we’re seeing a movement toward what we call single-vendor SASE, which is all of these capabilities brought together by a single thing that you can simplify. That’s exactly what we’re doing. So all of the capabilities that a customer would need to build out this SASE deployment they can get through a single (SaaS) service. Then on top of that, with one vendor you can bring all the data together into one single data lake — and do some interesting AI on top of that.”

AIOps

Palo Alto Networks calls this Autonomous Digital Experience Management (ADEM), which also provides users end-to-end observability across their network, De Vincentes said. Since ADEM is integrated within Prisma SASE, it does not require additional appliances or agents to be deployed, De Vincentes said. According to De Vincentes, AIOps for ADEM: proactively remediates issues that can cause service interruption through AI-based problem detection and predictive analytics; isolates issues faster (reducing mean time to repair) through an easy-to-use query interface; and discovers network anomalies from a single dashboard. Palo Alto Networks also announced three new SD-WAN (software-defined wide-area network) features for users to secure IoT devices, automate branch management, and manage their SD-WAN via on-premises controllers. Capabilities, according to the company, include: Prisma SD-WAN Command Center, which provides AI-powered, segment-wise insights and always-on monitoring of networks and apps for proactive problem resolution at the branch level; Prisma SD-WAN with integrated IoT security, which enables existing Prisma SD-WAN appliances to help secure IoT devices and allows accurate detection and identification of branch IoT devices; and On-Prem Controller for Prisma SD-WAN, which helps meet customer regulatory and compliance requirements and works with on-prem and cloud controller deployments. Users can now elect to deploy Prisma SD-WAN using the cloud-management console, on-prem controllers, or both in a hybrid scenario, the company said. All new capabilities will be available by May 2023, except the Prisma SD-WAN Command Center, which will be available by July, the company said. The post Palo Alto Networks Adds AI to Automate SASE Admin Operations appeared first on The New Stack.

TrueNAS SCALE Network Attached Storage Meets High Demand

TrueNAS SCALE might not be a distribution on the radar of most cloud native developers, but it should be. Although TrueNAS SCALE is, by design, a network-attached storage solution (based on Debian), it is also possible to create integrated virtual machines and even Linux containers. TrueNAS SCALE can be deployed as a single node or even to a cluster. It can be expanded with third-party applications, offers snapshotting, and can be deployed on off-the-shelf hardware or as a virtual machine. It combines ZFS with Gluster for scalable storage features and data management. You’ll find support for KVM virtual machines, Kubernetes, and Docker. Even better, TrueNAS SCALE is open source and free to use.

Latest Release

Recently, the company launched TrueNAS SCALE 22.12.1 (Bluefin), which includes numerous improvements and bug fixes. The list of improvements in the latest release includes the following: an SMB Share Proxy to provide a redirect mechanism for SMB shares in a common namespace; improvements to rootless login; fixes to ZFS HotPlug; and an improved dashboard for both Enterprise HA and enclosure management. Continue reading

How Secure Is Your API Gateway?

Quick, how many APIs does your organization use? We’re talking for internal products, for external services and even for infrastructure management such as Amazon’s S3 object storage or Kubernetes. If you don’t know the answer, you are hardly alone. In survey after survey, CIOs and CISOs admit they don’t have an accurate catalog of all their APIs. Yet statistics shared in 2022 by Mark O’Neill, chief of research for software engineering at Gartner, show just how pervasive APIs have become: 98% of organizations use or are planning to use internal APIs, up from 88% in 2019; 94% of organizations use or are planning to use public APIs provided by third parties, up from 52% in 2019; 90% of organizations use or are planning to use private APIs provided by partners, up from 68% in 2019; and 80% of organizations provide or are planning to provide publicly exposed APIs, up from 46% in 2019.

API Gateways Remain Critical Infrastructure Components

To deal with this rapid growth and the management and security challenges it creates, CIOs,

Bullet-Proofing Your 5G Security Plan

With latency improvements and higher data speeds, 5G represents an exponential growth opportunity, with the potential to transform entire industries — fueling connected autonomous vehicles, smart cities, mixed-reality technologies, robotics and more. As enterprises rethink connectivity, 5G will be a major investment area. However, according to Palo Alto Networks’

What David Flanagan Learned Fixing Kubernetes Clusters

People are mean. That’s one of the first things David Flanagan learned by fixing 50+ deliberately broken Kubernetes clusters on his YouTube series, “Klustered.” In one case, the submitter replaced a ‘c’ character with a Unicode doppelgänger — it looked identical to a ‘c’ in the terminal output — causing an error that left Flanagan doubting himself and his ability to fix clusters. “I really hate that guy,” Flanagan confided at the Civo Navigate conference last week in Tampa. “That was a long episode, nearly two hours we spent trying to fix this. And what I love about that clip — because I promise you, I’m quite smart and I’m quite good with Kubernetes — but it had me doubting things that I know are not the fault. The fact that I thought a six digit number is going to cause any sort of overflow on a 64 bit system — of course not. But debugging is hard.” After that show, Klustered adopted a policy of no Unicode breaks. “You only learn when things go wrong,” Flanagan said. “This is why I really love doing Klustered. If you just have a cluster that just works, Continue reading
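To see how devious that break was, here is a small illustrative Python snippet (not from the episode) in which a Cyrillic ‘с’ (U+0441) masquerades as a Latin ‘c’:

```python
import unicodedata

latin = "etcd"           # all Latin letters
spoofed = "et\u0441d"    # U+0441: Cyrillic small letter es, renders like "c"

print(latin, spoofed)    # prints: etcd etcd -- visually identical
print(latin == spoofed)  # False: the strings differ at the code-point level
print([hex(ord(ch)) for ch in spoofed])  # the doppelgänger shows up here

# Unicode normalization does not help: Cyrillic and Latin stay distinct.
print(unicodedata.normalize("NFKC", spoofed) == latin)  # still False
```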

API Gateway, Ingress Controller or Service Mesh: When to Use What and Why

In just about every conversation on ingress controllers and service meshes, we hear some variation of the questions, “How is this tool different from an API gateway?” or “Do I need both an API gateway and an ingress controller (or service mesh) in Kubernetes?” This confusion is understandable for two reasons: ingress controllers and service meshes can fulfill many API gateway use cases, and some vendors position their API gateway tool as an alternative to using an ingress controller or service mesh — or they roll multiple capabilities into one tool. Here, we will tackle how these tools differ and which to use for Kubernetes-specific API gateway use cases. For a deeper dive, including demos, watch the webinar. An API gateway routes API requests from a client to the appropriate services. But a big misunderstanding about this simple definition is the idea that an API gateway is a unique piece of technology. It’s not. Rather, “API gateway” describes a set Continue reading
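As a toy illustration that “API gateway” names a job rather than a technology, here is a hypothetical Python sketch doing that one defining job, routing client requests to the appropriate backend, with made-up upstream addresses:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical routing table: path prefix -> upstream service.
ROUTES = {
    "/users": "http://users.internal:8001",
    "/orders": "http://orders.internal:8002",
}

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, upstream in ROUTES.items():
            if self.path.startswith(prefix):
                # Forward the request upstream and relay the response.
                with urlopen(upstream + self.path) as resp:
                    body = resp.read()
                self.send_response(resp.status)
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "no route for this path")

if __name__ == "__main__":
    HTTPServer(("", 8080), Gateway).serve_forever()
```

An ingress controller or service mesh performs this same routing job, which is exactly why the tools overlap.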

13 Years Later, the Bad Bugs of DNS Linger on

It’s 2023, and we are still copying code without fully debugging. Did we not learn from the Great DNS Vulnerability of 2008? Fear not: internet godfather Paul Vixie spoke about the cost of open source dependencies in an Open Source Summit Europe in Dublin talk. In 2008, Dan Kaminsky discovered a fundamental design flaw in DNS code that allowed for arbitrary cache poisoning and affected nearly every DNS server on the planet. The patch was released in July 2008, followed by the permanent solution, Domain Name System Security Extensions (DNSSEC), in 2010. The Domain Name System is the basic name-based global addressing system for the internet, so vulnerabilities in DNS could spell major trouble for pretty much everyone on the internet. Vixie and Kaminsky “set [their] hair on fire” to build the security vulnerability solution that, “13 years later, is not widely enough deployed to solve this problem,” Vixie said. All of this software is open source and inspectable, but the DNS bugs are still being brought to Vixie’s attention in the present day. “This is never going to stop if we don’t start writing down the lessons people should know before they write software,” Vixie said.

How Did This Happen?

It’s our fault: “the call is coming from inside the house.” Before internet commercialization and the dawn of the home computer room, the publishers of the Berkeley Software Distribution (BSD) of UNIX decided to support the then-new DNS protocol. “Spinning up a new release, making mag tapes, and putting them all in shipping containers was a lot of work,” so they published DNS as a patch and posted it to Usenet newsgroups, making it available to anyone who wanted it via an FTP server and mailing list. When Vixie began working on DNS at Berkeley, DNS was for all intents and purposes abandonware insofar as all the original creators had since moved on. Since there was no concept of importing code and making dependencies, embedded systems vendors copied the original code and changed the API names to suit their local engineering needs… does this sound familiar? And then Linux came along. The internet E-X-P-L-O-D-E-D. You get an AOL account. And you get an AOL account… Distros had to build their first C library and copied some version of the old Berkeley code, whether they knew what it was or not. It was a copy of a copy that some other distro was using, and they made a local version forever divorced from the upstream. DSL modems are an early example of this. Now the Internet of Things is everywhere, and “all of this DNS code in all of the billions of devices are running on some fork of a fork of a fork of code that Berkeley published in 1986.” Why does any of this matter? The original DNS bugs were written and shipped by Vixie. He went on to fix them in the ’90s, but some still appear today. “For embedded systems today to still have that problem, any of those problems, means that whatever I did to fix it wasn’t enough. I didn’t have a way of telling people.”

Where Do We Go from Here?

“Sure would have been nice if we already had an internet when we were building one,” Vixie said. But, try as we might, we can’t go backward; we can only go forward.
Vixie made it very clear: “If you can’t afford to do these things [below], then free software is too expensive for you.” Here is some of Vixie’s advice for software producers:

- Do the best you can with the tools you have, but “try to anticipate what you’re going to have.”
- Assume all software has bugs, “not just because it always has, but because that’s the safe position to take.”
- Machine-readable updates are necessary because “you can’t rely on a human to monitor a mailing list.”
- Version numbers are must-haves for your downstream. “The people who are depending on you need to know something more than what you thought worked on Tuesday.” It doesn’t matter what it is as long as it uniquely identifies the bug level of the software.
- Cite code sources in the README files and in source code comments. It will help anyone using your code and chasing bugs.
- Automate monitoring of your upstreams, review all changes, and integrate patches. “This isn’t optional.”
- Let your downstream know about changes automatically, “otherwise these bugs are going to do what the DNS bugs are doing.”

And here is the advice for software consumers:

- Your software’s dependencies are your dependencies. “As a consumer, when you import something, remember that you’re also importing everything it depends on… So when you check your dependencies, you’d have to do it recursively; you have to go all the way up.”
- Uncontracted dependencies can make free software incredibly expensive but are an acceptable operating risk because “we need the software that everybody else is writing.”
- Orphaned dependencies require local maintenance, and therein lies the risk, because that’s a much higher cost than monitoring the developments coming out of other teams. “It’s either expensive because you hire enough people and build enough automation or it’s expensive because you don’t.”
- Automate dependency upgrades (mostly), because sometimes “the license will change from one you could live with to one that you can’t or at some point someone decides they’d like to get paid” [insert adventure here].
- Specify acceptable version numbers. If versions 5+ have the fix needed for your software, say so to make sure you don’t accidentally get an older one.
- Monitor your supply chain and ticket every release. Have an engineer review every update to determine whether it’s “set my hair on fire, work over the weekend” or “we’ll just get to it when we get to it” priority.

He closed with “we are all in this together but I think we could get it organized better than we have.” And that sure is one way to say it. There is a certain humility and grace to someone who was on the tiny team that prevented a potential DNS collapse and has been a leader in the field for over a generation, yet still has their early-career bugs (which they fixed 30 years ago) brought to their attention at regular intervals because adopters aren’t inspecting the source code. The post 13 Years Later, the Bad Bugs of DNS Linger on appeared first on The New Stack.

EU Analyst: The End of the Internet Is Near

The internet as we know it may no longer be a thing, warns a European Union-funded researcher. If it continues to fray, our favorite “network of networks” will just go back to being a bunch of networks again. And it will be the fault of us all. “The idea of an open and global internet is progressively deteriorating and the internet itself is changing,” the researcher writes in “Internet Fragmentation: Why It Matters for Europe,” a report posted Tuesday by the

Turbocharging Host Workloads with Calico eBPF and XDP

In Linux, network-based applications rely on the kernel’s networking stack to establish communication with other systems. While this process is generally efficient and has been optimized over the years, in some cases it can create unnecessary overhead that affects the overall performance of the system for network-intensive workloads such as web servers and databases. Technologies such as eBPF and XDP can cut through that overhead, and Calico Open Source offers an easier way to tame them. Calico Open Source is a networking and security solution that seamlessly integrates with Kubernetes and other cloud orchestration platforms. While best known for its policy engine and security capabilities, it has many other features that can be used in an environment by installing Continue reading

Azure Went Dark

And down went all Microsoft 365 services around the world. One popular argument against putting your business trust in the cloud is that if your hyper-cloud provider goes down, so does your business. Well, early one U.S. East Coast morning, it happened. Microsoft Azure went down, and along with it went Microsoft 365, Exchange Online, Outlook, SharePoint Online, OneDrive for Business, GitHub, Microsoft Authenticator, and Teams. In short, pretty much everything running on Azure went boom. Microsoft acknowledged “issues impacting multiple Microsoft 365 services.” Of course, by that time, users were already screaming, including on the sysadmin subreddit. Microsoft soon reported that it had “rolled back a network change that we believe is causing impact. We’re monitoring the service as the rollback takes effect.” By 9:31 a.m., Microsoft said the disaster was over. “We’ve confirmed that

Performance Measured: How Good Is Your WebAssembly?

WebAssembly adoption is exploding. Almost every week at least one startup, SaaS vendor or established software platform provider is either beginning to offer Wasm tools or has already introduced Wasm options in its portfolio, it seems. But how do all of the different offerings compare performance-wise? The good news is that, given Wasm’s runtime simplicity, actual runtime performance can be compared directly among the different WebAssembly offerings. This direct comparison is certainly much easier than benchmarking distributed applications that run on or with Kubernetes, containers and microservices. Whether a Wasm application is running on a browser, an edge device or a server, the computing optimization that Wasm offers in each instance is end-to-end, and its runtime environment is a tunnel of sorts — obviously good for security — unaffected by the environments in which it runs, since it runs directly at the machine level on the CPU. Wasm has also been around for a while: the World Wide Web Consortium (W3C) named it a web standard in 2019, making it the fourth web standard alongside HTML, CSS and JavaScript. But while web browser applications have Continue reading
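As a small, hypothetical illustration of how directly a Wasm function can be exercised and timed, here via the wasmtime Python bindings (any runtime with an embedding API would do):

```python
import time
from wasmtime import Engine, Store, Module, Instance

# A trivial module in WebAssembly text format: add(i32, i32) -> i32.
WAT = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

engine = Engine()
store = Store(engine)
instance = Instance(store, Module(engine, WAT), [])
add = instance.exports(store)["add"]

# Micro-benchmark the call overhead of the exported function.
start = time.perf_counter()
for i in range(100_000):
    add(store, i, i)
print(f"100,000 calls in {time.perf_counter() - start:.3f}s")
```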

How to Overcome Challenges in an API-Centric Architecture

This is the second in a two-part series. For an overview of a typical architecture, how it can be deployed and the right tools to use, please refer to Part 1. Most APIs impose usage limits on the number of requests per month, as well as rate limits, such as a maximum of 50 requests per minute. A third-party API can be used by many parts of the system. Handling subscription limits requires the system to track all API calls and raise alerts if the limit will be reached soon. Often, increasing the limit requires human involvement, and alerts need to be raised well in advance. The system deployed must be able to track API usage data persistently to preserve data across service restarts or failures. Also, if the same API is used by multiple applications, collecting those counts and making decisions needs careful design. Rate limits are more complicated. If the problem is handed down to developers, they will invariably add sleep statements, which solve the problem in the short term; however, in the long run this leads to complicated issues when the timing changes. A better approach is to use a concurrent data structure that limits rates. Even then, if the Continue reading
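A minimal sketch of what such a concurrent structure might look like: a thread-safe token bucket, with made-up parameters matching the article’s 50-requests-per-minute example.

```python
import threading
import time

class TokenBucket:
    """Thread-safe token bucket: refills at `rate` tokens/second up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self) -> bool:
        """Take one token; False means the caller should back off or queue."""
        with self.lock:
            now = time.monotonic()
            # Refill in proportion to elapsed time, never exceeding capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

# At most 50 requests per minute to the third-party API:
bucket = TokenBucket(rate=50 / 60, capacity=50)
if bucket.acquire():
    ...  # safe to make the API call
else:
    ...  # defer the call and, if this keeps happening, raise an alert
```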

How to Use Time-Stamped Data to Reduce Network Downtime 

Increased regulations and emerging technologies have forced telecommunications companies to evolve quickly in recent years. These organizations’ engineers and site reliability engineering (SRE) teams must use technology to improve performance, reliability and service uptime. Learn how WideOpenWest handles challenges that vary depending on where the company is in its life cycle. Across the industry, businesses must modernize their infrastructure while also maintaining legacy systems. At the same time, new regulations at both the local and federal levels increase competition within the industry, and new businesses challenge the status quo set by current industry leaders. In recent years, the surge in people working from home has required more reliable internet connections to handle increased network bandwidth needs. The increased popularity of smartphones and other devices means there are more devices requiring network connectivity — all without a reduction in network speeds. Latency issues or poor uptime lead to unhappy customers, who then become flight risks. Add to this more frequent security breaches, which require all businesses to monitor their networks to detect potential breaches faster. InfluxData is the Continue reading
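As a hypothetical sketch of the pattern the article describes (capturing time-stamped network metrics), using the influxdb-client Python library; the URL, token, org and bucket are placeholders:

```python
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One time-stamped measurement per device; latency can later be alerted on.
point = (
    Point("network_health")
    .tag("device", "router-01")
    .field("latency_ms", 12.7)
    .time(datetime.now(timezone.utc))
)
write_api.write(bucket="telemetry", record=point)
```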

The Right Stuff for Really Remote Edge Computing

Suppose you operate popup clinics in rural villages and remote locations where there is no internet. You need to capture and share data across the clinic to provide vital healthcare, but if the apps you use require an internet connection to work, they can’t operate in these areas. Or perhaps you’re an oil and gas operator that needs to analyze critical warning data from a pressure sensor on a platform in the North Sea. If the data needs to be processed in cloud data centers, it has to travel incredible distances — at great expense — over unreliable networks. This incurs high degrees of latency, or network slowness, so by the time a result is sent back to the platform, it could be too late to take any action. These kinds of use cases represent a growing class of apps that require 100% uptime and real-time speed, guaranteed — regardless of where they are operating in the world. A fundamental challenge in meeting these requirements remains the network — there are still huge swaths of the globe with little or no internet — meaning apps that depend on connectivity cannot operate in those areas. Emerging advances in network technology are Continue reading

The Next Wave of Network Orchestration: MDSO

Demand for network automation and orchestration continues to rise as organizations reap the business and technical benefits it brings to their operations, including significant improvements in productivity, cost reduction and efficiency. As a result, many organizations are now looking to the next wave of network orchestration: orchestration across technology domains, more commonly known as Multi-Domain Service Orchestration (MDSO). Early adopters have learned that effectively leveraging automation and orchestration at the domain level doesn’t necessarily translate to the MDSO layer due to the different capabilities required to effectively coordinate and communicate across different technologies. While the potential benefits of MDSO are high, there are unique challenges in multidomain deployments that organizations must tackle. The most obvious difference when orchestrating across domains versus within specific domains is the need to design around the direction your network data will travel. Within a single domain, the activities are primarily focused north to south, and vice versa. Instructions are sent to the domain controller which executes the changes to the network functions. This makes single-domain orchestration relatively straightforward. When you start orchestrating across domains, however, things get a little more complex. Now you need to account for both north/south activities and also for a large Continue reading

Sidecars are Changing the Kubernetes Load-Testing Landscape

As your infrastructure scales and you start to get more traffic, it’s important to make sure everything works as expected. This is most commonly done through testing, with load testing being the optimal way of verifying the resilience of your services. Traditionally, load testing has been accomplished via standalone clients, like JMeter. However, as the world of infrastructure has gotten more modern, and organizations are using tools like Kubernetes, it’s important to have a modern toolset as well. With traditional load testing, you’ll commonly run into one of three major issues: scripting load tests takes a lot of time; load tests typically run in large, complex, end-to-end environments that are difficult to provision and expensive at production scale; and data and realistic use cases are impossible to mirror one-to-one unless you have production data. A more modern approach is to integrate your load-testing tools directly into your infrastructure. If you’re using Kubernetes, that can be accomplished via something like an
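To ground the “scripting takes time” point, here is about the smallest useful load-test script one can write with Locust, a Python load-testing tool (the endpoints and host are made up):

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Each simulated user waits 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)  # weighted: browsing happens three times as often as checkout
    def list_products(self):
        self.client.get("/products")

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"cart_id": "demo"})

# Run with: locust -f loadtest.py --host https://staging.example.com
```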

Confluent: Have We Entered the Age of Streaming?

Three years ago, when we posed the question, Apache Kafka was emerging as the default go-to publish/subscribe messaging engine for the cloud era. At the time, we drew comparisons with Databricks; Confluent has since IPO’ed, while Databricks continues to stay private. Pulsar recently emerged as a competing project, but is it game over? Hyperscalers are offering alternatives like Azure Event Hubs, and AWS co-markets Confluent Cloud, with similar arrangements elsewhere. Jay Kreps evangelized streaming using electricity as the metaphor, positioning streaming as pivotal to the next wave of apps in chicken-and-egg terms. That is, when electricity Continue reading