Archive

Category Archives for "Networking"

IBM embraces zero trust with upgraded Cloud Pak service

IBM has taken the wraps off a version of its Cloud Pak for Security that aims to help customers looking to deploy zero-trust security facilities for enterprise resource protection. IBM Cloud Paks are bundles of Red Hat's Kubernetes-based OpenShift Container Platform along with Red Hat Linux and a variety of connecting technologies that let enterprise customers deploy and manage containers on their choice of private or public infrastructure, including AWS, Microsoft Azure, Google Cloud Platform, Alibaba, and IBM Cloud.

Use Containerlab to emulate open-source routers

Containerlab is a new open-source network emulator that quickly builds network test environments in a devops-style workflow. It provides a command-line interface for orchestrating and managing container-based networking labs and supports containerized router images available from the major networking vendors.

More interestingly, Containerlab supports any open-source network operating system that is published as a container image, such as the Free Range Routing (FRR) router. This post will review how Containerlab works with the FRR open-source router.

While working through this example, you will learn about most of Containerlab’s container-based features. Containerlab also supports VM-based network devices so users may run commercial router disk images in network emulation scenarios. I’ll write about building and running VM-based labs in a future post.

While it was initially developed by Nokia engineers, Containerlab is intended to be a vendor-neutral network emulator and, since its first release, the project has accepted contributions from other individuals and companies.

The Containerlab project provides excellent documentation, so I don't need to write a tutorial. But Containerlab does not yet document all the steps required to build an open-source router lab that starts in a pre-defined state. This post covers that scenario.
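To give a flavor of the workflow, here is a minimal topology file for a two-node FRR lab. This is a sketch, not taken from the post: the lab name, node names, and image tag are illustrative assumptions, so check the Containerlab and FRR documentation for the exact kind and image to use.

```yaml
# frrlab.clab.yml -- two FRR containers connected back-to-back
name: frrlab

topology:
  nodes:
    router1:
      kind: linux
      image: frrouting/frr:latest
    router2:
      kind: linux
      image: frrouting/frr:latest
  links:
    # creates a veth pair linking eth1 on each router
    - endpoints: ["router1:eth1", "router2:eth1"]
```

With a file like this in place, `containerlab deploy -t frrlab.clab.yml` brings the lab up and `containerlab destroy -t frrlab.clab.yml` tears it down.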

Application Performance in the Age of SD-WAN

Mike Hicks is a principal solutions analyst at ThousandEyes, a part of Cisco, and a recognized expert with more than 30 years of experience in network and application performance.

In the olden days, users were in offices and all apps lived in on-premises data centers. The WAN (wide area network) was what connected all of them. Today, with the adoption of SaaS apps and associated dependencies such as cloud services and third-party API endpoints, the WAN is getting stretched beyond recognition. In its place, the internet is directly and exclusively carrying a large, if not majority, share of all enterprise traffic flows.

Enterprises are increasingly moving away from legacy WANs in favor of internet-centric, software-defined WANs (SD-WANs). Architected for interconnection with cloud and external services, SD-WANs can play a critical role in making enterprise networks cloud-ready, more cost-efficient, and better suited to delivering quality digital experiences to customers and employees at all locations. But the transformation brings new visibility needs, and ensuring that SD-WAN delivers on expectations requires a new approach to monitoring that addresses network visibility and application performance equally.

Lambada Community of Tamil Nadu Now Connected to the Internet

It’s been decades since the development of the Internet. Yet there are still many people around the world without any kind of connectivity. Some villages don’t know about popular services like Facebook, WhatsApp, and Instagram, and there are tribal communities who have lived their whole lives completely unconnected to the outside world. When information as […]

The post Lambada Community of Tamil Nadu Now Connected to the Internet appeared first on Internet Society.

Juniper takes SASE security control to the cloud

Juniper Networks has laid a key part of its Secure Access Service Edge (SASE) foundation with a cloud-based security-control service that provides a central way to control and protect on-premises or cloud-based enterprise resources. Called Security Director Cloud, the service focuses Juniper's SASE efforts by providing a central point to manage enterprise security services, including policy setting, threat detection, and threat prevention. Juniper, like other key enterprise networking vendors such as Cisco, Hewlett Packard Enterprise (Aruba), and VMware, as well as service providers including Cato Networks, Akamai, and Zscaler, has pledged allegiance to growing SASE support in its product families.

Avesha Deploys Machine Learning for More Efficient Load Balancing

When Avesha's founder, Raj Nair, took a fresh look at load balancing, he was surprised to find that the industry hadn't changed much over the past twenty years. This week, Avesha is demonstrating its new AI-based load-balancing technology at KubeCon+CloudNativeCon 2021.

Load balancing still mostly happens at a local level, within a particular cloud or cluster, and uses the same formulas that he helped popularize more than two decades ago. For example, a load balancer can use a "round-robin" formula, where requests go to each server in turn and then back to the first one. A "weighted round-robin" is similar, except that some servers get more requests than others because they have more available capacity. A "sticky cookie load balancer" is one where all the requests from a particular session are sent to the same server so that, say, customers don't get moved around in the middle of shopping sessions and lose their shopping carts.

"There are a few other variations, but they're all based on fixed settings," said Nair. "The state of the art hasn't moved much in this area." A very simple change that would make load balancers immediately more effective is to automatically adjust the weights based on server performance. "It's actually a very low-hanging fruit," he said. "I don't know why they aren't all doing this."

That's what Avesha started looking at. Then, in addition to server performance, the company also added in other factors, like travel path times. The resulting service, the Smart Application Cloud Framework, was launched Tuesday.

Deployment Structure

Avesha is deployed with an agent that sits in its own container inside a cluster or private cloud. It talks to its fellow agents and to Avesha's back-end systems via secure virtual private networks. The back-end system collects information about traffic paths and server performance, then uses machine learning to determine optimal routing strategies. The specific AI technique used is reinforcement learning.
The system makes a recommendation, looks at how the recommendation works in practice, then adjusts its model accordingly. "It is continuously tuning your network," said Nair. "The network is constantly undergoing lots of changes, with traffic and congestion." It also looks at the performance of individual servers, and if some are having problems handling requests, it automatically routes traffic elsewhere. And it works across all types of deployments — multiple public clouds, private clouds, and edge computing installations.

"The methods currently in use in Kubernetes are static," he said. "You set fixed thresholds with a lower bound and an upper bound. But nobody even knows how to set those thresholds." People wind up guessing, he said, setting some basic targets and then leaving them in place. "You end up wasting resources," he said.

The Avesha technology is more like a self-driving car, he said. There are still parameters and guard rails, but, within those constraints, the system continually optimizes for the desired outcome, whether it be the lowest latency, maximum cost savings, or even compliance-related data-movement restrictions. "You want your data traffic to be managed in accordance with your policies," he said. "For example, there might be regulations about where your data is and isn't allowed to go."

Performance Improvements

In internal studies, Avesha has seen improvements of 20% to 30% in the number of requests handled within their performance targets, compared to standard weighted round-robin approaches. When some clusters have hundreds of thousands of nodes, 30% is a big number, he said.
Companies will see improvements in customer experience, lower bandwidth costs, and less need for manual intervention when things go wrong in the middle of the night. And it's not just about the business bottom line, he added. "If you translate that into wasted energy, wasted natural resources, there are lots of benefits," he said. For some applications, like video streaming, better performance would translate into competitive advantage, he said. "It's like the difference between getting high definition and standard definition video."

There's no commercial product currently on the market that offers AI-powered load balancing, he said, though some companies probably have their own proprietary technology to do something similar. "Netflix is an example of a company that's a leader in the cloud native world," he said. "I would say there's a fairly good chance that they've already incorporated AI into their load balancing." Other large cloud native technology companies with AI expertise may have also built their own platforms, he said. "Nobody has said anything publicly," he said. "But it's such an obvious thing to do that I am willing to believe that they have something, but are just keeping it to themselves."

There are also some narrow use cases, like that of content delivery networks. CDNs typically deliver content, like web pages, to users. They work by distributing copies of the content across the internet and optimizing for the shortest possible distance between the end user and the source of the content. Avesha's approach is more general, supporting connections between individual microservices. "It's a little bigger than what a CDN is trying to do," he said.
"We are literally at the cutting edge with this."

AI-Powered Load Balancing as a Feature

At some point, cloud vendors and third-party service providers will begin offering intelligent load balancing to their enterprise customers, either by building their own technology or by buying or partnering with Avesha or any competitors who might appear on the scene. "One way or the other, you're going to be able to take advantage of it," said Nair. Avesha itself is currently working with partners, he said, including some major industry players, and he expects to make announcements this summer. But enterprises can also work directly with Avesha and get a jump on the competition, he added. Enterprises that deploy workloads to multiple clouds would find the technology of most interest.

Avesha is currently working with several companies on proof-of-concept projects. These are companies typically at $50 million in revenue or above, in verticals such as media, manufacturing, health care, and telecom. "We have also engaged with some partners who are big cloud players," he said. More information, as well as return-on-investment analyses, will be released in the next few months. One case study that has been made public involves Verizon and AWS serving doctors at the edge.
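As a concrete illustration of the fixed "weighted round-robin" scheme the article says Avesha improves on, here is a minimal sketch in shell. The server names and weights are hypothetical, not from the article; real load balancers typically also interleave the rotation rather than repeating each server back to back.

```shell
#!/usr/bin/env bash
# Weighted round-robin: each server appears in the rotation as many
# times as its weight, so app1 gets 3 of every 6 requests, app2 gets 2,
# and app3 gets 1. The weights are fixed -- nothing adapts to load.
servers=(app1 app2 app3)
weights=(3 2 1)

# Build the rotation list by repeating each server weight-many times.
rotation=()
for i in "${!servers[@]}"; do
  for ((j = 0; j < weights[i]; j++)); do
    rotation+=("${servers[i]}")
  done
done

# Dispatch requests by cycling through the rotation list:
# request n goes to rotation[n mod length].
for ((n = 0; n < 6; n++)); do
  echo "request $n -> ${rotation[n % ${#rotation[@]}]}"
done
```

The "very low-hanging fruit" Nair mentions would amount to recomputing the `weights` array from live server metrics instead of leaving it static.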

How to best set up command aliases on Linux

Used frequently, bash aliases can make working on the Linux command line a lot smoother and easier, but they can also be complicated and hard to remember. This post examines how you might make your aliases work for you rather than vice versa.

In general, aliases are especially good for:

- simplifying commands that are long and overly complex
- remembering commands with odd or complicated names
- saving time on commands that you use very often

What you need to keep in mind is that:

- aliases can themselves be hard to remember
- giving an alias the same name as a regular command can be a good thing or a bad thing (more on this shortly)

How to create an alias

Use the alias command, and remember to add the alias to your ~/.bashrc file so that it will still be waiting for you whenever you log in.
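As a quick sketch of the pattern, the alias names below are just examples (append the same `alias` lines to ~/.bashrc to make them permanent):

```shell
# Scripts need alias expansion switched on; interactive bash shells
# have it enabled by default.
shopt -s expand_aliases

alias ll='ls -alF'        # simplify a long, overly complex command
alias untar='tar -xvf'    # give a hard-to-remember flag combo a friendly name

# 'alias NAME' prints the current definition, handy for checking your work.
alias ll    # prints: alias ll='ls -alF'
```

Note that an alias defined in a running shell lasts only for that session, which is exactly why the post tells you to put it in ~/.bashrc.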

Google announces custom video transcoding chip

You know Google has more money than it could ever spend when it invests in a custom chip to do one task. And now it has done so for the third time. The search giant has developed a new chip and deployed it in its data centers to compress video content. The chips, called Video (Trans)Coding Units, or VCUs, do the job faster and more efficiently than traditional CPUs. In a blog post discussing the project, Jeff Calow, a lead software engineer at Google, said the VCU gives the highest YouTube video quality possible on your device while consuming less bandwidth than before.

Dell delivers lineup of on-prem, pay-per-use hardware

Dell is launching a new offering of managed storage, server, and hyperconverged infrastructure that can be deployed in a company's own data center, at edge locations, or in colocation facilities, with enterprises paying for capacity as needed. Dubbed Dell Apex, it includes storage, cloud services, and a console for streamlined management. The launch coincides with the kickoff of Dell Technologies World 2021, which is being held virtually this year. Pay-per-use hardware models such as Dell Apex and HPE GreenLake are designed to deliver cloud-like pricing structures and flexible capacity to private data centers. The concept of pay-per-use hardware isn't new, but the buzz around it is growing, as enterprises look for alternatives to buying equipment outright for workloads that aren't a fit for public cloud environments.

HPE kicks off software-defined storage-as-a-service

Hewlett Packard Enterprise took a big step toward delivering on its "entire-portfolio-as-a-service" strategy this week by unveiling a cloud-based storage and data service that will help manage storage needs in distributed IT enterprises. HPE said in 2019 that, by 2022, it wanted to remake itself into a more service-oriented company, and it announced plans to transition its entire portfolio to subscription-based, pay-per-use, and as-a-service offerings. It has since made headway, for example, recently adding HPE GreenLake cloud services for HPC.

Back to Basics: Do We Need Interface Addresses?

In the world of ubiquitous Ethernet and IP, it’s common to think that one needs addresses in packet headers in every layer of the protocol stack. We have MAC addresses, IP addresses, and TCP/UDP port numbers… and low-level addresses are assigned to individual interfaces, not nodes.

Turns out that’s just one option… and not exactly the best one in many scenarios. You could have interfaces with no addresses, and you could have addresses associated with nodes, not interfaces.
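As one familiar illustration, many routing stacks let a point-to-point link borrow the router's loopback address instead of burning a prefix on the interface itself. The Cisco IOS-style snippet below is a generic example; the interface names and address are made up:

```
interface Loopback0
 ip address 10.0.0.1 255.255.255.255
!
interface GigabitEthernet0/1
 ip unnumbered Loopback0
```

GigabitEthernet0/1 can then route IP traffic without an address of its own; for routing purposes, the node is effectively identified by its loopback.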