Archive

Category Archives for "Networking"

The Week in Internet News: Facebook Calls for New Internet Regulations

More regulation, please: Facebook CEO Mark Zuckerberg, in an op-ed in the Washington Post, called on governments to get more involved in Internet regulation, including defining harmful content and making rules on how sites should handle it. Governments should also look at new laws to protect elections, to improve consumer privacy, and to guarantee data portability, Zuckerberg said. His ideas weren’t universally embraced, however. The Electronic Frontier Foundation, in a blog post, said there were “fundamental problems” with governments policing harmful content, particularly in defining what’s harmful.

Hold my beer: Australia’s parliament didn’t take long to look at new regulations, with lawmakers passing legislation that would create three-year jail terms for social media executives and operators of other websites that do not remove violent content in an “expeditious” manner, NPR reports. Web-based services could also be fined up to 10 percent of their annual revenue for not complying with the law.

Even more laws: Singapore is the latest country to consider legislation targeting fake news. A proposed law there would require online news sites to publish corrections or warnings about stories the government decides are fake news, and to remove articles in extreme cases, the Straits Times reports. Continue reading

BIER Basics

Multicast is, at best, difficult to deploy in large-scale networks: PIM Sparse Mode and BIDIR-PIM are both complex, adding large amounts of state to intermediate devices. In the worst case, such as large-scale spine-and-leaf networks (variations on the venerable Clos fabric), there is no apparent way to deploy any existing version of PIM at all. BIER, described in RFC 8279, aims to eliminate the per-device state of traditional multicast.

In this network, assume A has some packet that needs to be delivered to T, V, and X. A could generate three packets, each one addressed to one of the destinations—but replicating the packet at A wastes network resources, at least on the A->B link. Using PIM, these three destinations could be placed in a multicast group (a multicast address can be created that describes T, V, and X as a single destination). After this, a reverse shortest path tree can be calculated from each of the destinations in the group towards the source, A, and the correct forwarding state (the outgoing interface list) can be installed at each of the routers in the network (or at least along the correct paths). This, however, adds a lot of state to the network.
Continue reading
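
The core BIER idea can be sketched in a few lines. This is an illustrative toy only, with made-up router names and table contents, and it glosses over the real RFC 8279 encoding (BIFT-ids, set identifiers, bitstring lengths): each egress router gets a bit position, the packet header carries a bitstring naming all destinations at once, and each hop replicates the packet per neighbor using a simple AND.

```python
# Toy sketch of BIER-style forwarding (the idea behind RFC 8279).
# Bit positions assigned to the egress routers T, V, and X (illustrative).
BIT_POSITION = {"T": 0b001, "V": 0b010, "X": 0b100}

# Per-router Bit Index Forwarding Table: for each neighbor, the set of
# egress bits reachable through it (values invented for this sketch).
BIFT = {
    "B": {"C": 0b011, "D": 0b100},  # T and V via C, X via D
}

def bier_forward(router, bitstring):
    """Replicate a packet at `router`, keeping only the bits each copy serves."""
    copies = []
    for neighbor, reachable in BIFT[router].items():
        subset = bitstring & reachable
        if subset:  # only send a copy if it still serves some egress router
            copies.append((neighbor, subset))
    return copies

# A sends ONE packet addressed to T, V, and X: bitstring 0b111.
print(bier_forward("B", 0b111))  # -> [('C', 3), ('D', 4)]  (T, V via C; X via D)
```

The point of the sketch: B holds no per-group state, only a per-neighbor bitmask, so the state in the network no longer grows with the number of multicast groups.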

Beyond SD-WAN: VMware’s vision for the network edge

VeloCloud has been a business unit within VMware since being acquired in December 2017. The two companies have had sufficient time to integrate their operations and fit their technologies together into a cohesive offering. In January, Neal Weinberg provided an overview of where VMware is headed with its reinvention. Now let’s look at it from the VeloCloud SD-WAN perspective.

I recently talked to Sanjay Uppal, vice president and general manager of the VeloCloud Business Unit. He shared with me where VeloCloud is heading, adding that it’s all possible because of the complementary products that VMware brings to VeloCloud’s table.

[Sponsored] Short Take – Network Reliability Engineering

In this Network Collective Short Take, Matt Oswalt joins us to talk about the value of network reliability engineering and the unique approach Juniper is taking to empower engineers to learn the tools and techniques of automation with NRE Labs.

Thank you to Juniper Networks for sponsoring today’s episode and supporting the content we’re creating here at Network Collective. If you would like to take the next steps in your automation journey, NRE Labs is a no-strings-attached resource to help you in that journey. You can find NRE Labs at https://labs.networkreliability.engineering.

 

Matt Oswalt, Guest
Jordan Martin, Host

The post [Sponsored] Short Take – Network Reliability Engineering appeared first on Network Collective.

How to quickly deploy, run Linux applications as unikernels

Building and deploying lightweight apps is becoming an easier and more reliable process with the emergence of unikernels. While limited in functionality, unikernels offer many advantages in terms of speed and security.

What are unikernels? A unikernel is a very specialized single-address-space machine image that is similar to the kind of cloud applications that have come to dominate so much of the internet, but considerably smaller and single-purpose. Unikernels are lightweight, providing only the resources needed. They load very quickly and are considerably more secure, having a very limited attack surface. Any drivers, I/O routines, and support libraries that are required are included in the single executable. The resulting virtual image can then be booted and run without anything else being present, and it will often run 10 to 20 times faster than a container.

IDG Contributor Network: Performance-Based Routing (PBR) – The gold rush for SD-WAN

BGP (Border Gateway Protocol) is considered the glue of the internet. Looking ahead, however, one question remains unanswered: will BGP ever be able to route on the best path rather than merely the shortest path?

There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as sending out pings to monitor the network and then modifying BGP attributes, such as AS-path prepending, to make BGP do performance-based routing (PBR). However, this falls short in a number of ways.

The problem with BGP is that it is not capacity- or performance-aware, and therefore its decisions can sink an application’s performance. The attributes that BGP relies upon for path selection, such as AS-path length and multi-exit discriminators (MEDs), do not always correlate with the network’s performance.
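
The mismatch is easy to demonstrate. In this minimal sketch (all ISP names, AS numbers, and latency figures are invented for illustration), BGP’s shortest-AS_PATH rule and a performance-aware rule pick different exits for the same prefix:

```python
# Two candidate paths to the same destination prefix (values made up).
paths = [
    {"via": "ISP-A", "as_path": ["64501", "64510"], "rtt_ms": 180},
    {"via": "ISP-B", "as_path": ["64502", "64520", "64530"], "rtt_ms": 40},
]

# BGP's tie-breaker here: fewest ASes in the AS_PATH.
bgp_choice = min(paths, key=lambda p: len(p["as_path"]))

# What a performance-based choice would do: lowest measured latency.
perf_choice = min(paths, key=lambda p: p["rtt_ms"])

print(bgp_choice["via"])   # ISP-A: shorter AS_PATH wins despite 180 ms RTT
print(perf_choice["via"])  # ISP-B: the path an application would prefer
```

AS-path prepending, as the vendors above use it, amounts to artificially inflating `as_path` on the slow route so that the first rule happens to agree with the second; it works around the protocol rather than fixing it.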

ACI MultiPod – Enable Standby APIC

APIC Controller Cluster: You need three APIC controller servers to get the cluster up and running in a complete and redundant ACI system. You can actually work with only two APICs: you will still have a cluster quorum and will be able to change the ACI fabric configuration. Losing One Site: In a MultiPod, those three controllers need to be distributed so that one of them is placed in the secondary site. The idea is that you still have a chance to keep your configuration on the one remaining APIC while completely losing the primary site with its two APICs. On the other
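
The quorum arithmetic behind this placement advice is simple majority voting, sketched below (the function name is my own, not Cisco’s API):

```python
# Majority quorum for a controller cluster: more than half must survive.
def has_quorum(total, alive):
    """True when the surviving controllers still form a majority."""
    return alive >= total // 2 + 1

# Three-APIC cluster, one controller lost:
print(has_quorum(3, 2))  # True: the fabric config remains writable

# Primary site with two APICs lost, one APIC left in the secondary site:
print(has_quorum(3, 1))  # False: the lone APIC keeps the config but cannot
                         # form a quorum, which is where the standby APIC helps
```

This is why the 2+1 split across sites matters: losing the two-APIC site leaves you below quorum, while losing the one-APIC site does not.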

The post ACI MultiPod – Enable Standby APIC appeared first on How Does Internet Work.

Last Week on ipSpace.net (2019W14)

Last Thursday I started another experiment: a series of live webinar sessions focused on the business aspects of networking technologies. The first session expanded on the idea of the three paths of enterprise IT. It covered the commoditization of IT, and of networking in particular, the vendor landscape, various attempts at segmenting customers, and potential long-term enterprise IT paths. The recording is already online and currently available with a standard subscription.

Although the attendance was lower than usual, attendees thoroughly enjoyed it – one of them sent me this: “the value of ipSpace.net is that you cut through the BS”. Mission accomplished ;)

DNS Privacy at IETF 104

From time to time the IETF seriously grapples with its role with respect to technology relating to users' privacy. Should the IETF publish standard specifications of technologies that facilitate third party eavesdropping on communications or should it refrain from working on such technologies? Should the IETF take further steps and publish standard specifications of technologies that directly impede various forms of third party eavesdropping on communications? Is a consistent position from the IETF on personal privacy preferred? Or should the IETF be as agnostic as possible and publish protocol specifications based solely on technical coherency and interoperability without particular regard to issues of personal privacy? This issue surfaced at IETF 104 in the context of discussions of DNS over HTTPS (DoH).
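
For readers unfamiliar with the technology under debate: DoH (RFC 8484) carries an ordinary DNS query inside HTTPS, so it is indistinguishable from web traffic to an on-path observer. A minimal sketch of the RFC 8484 GET form, where the raw DNS query is base64url-encoded into a `dns` URL parameter (the server name here is a placeholder, and real clients would use a library rather than hand-packing the message):

```python
import base64
import struct

def doh_url(server, name, qtype=1):
    """Build an RFC 8484 GET-form DoH URL for `name` (qtype 1 = A record)."""
    # DNS header: ID 0 (recommended for DoH cacheability), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, e.g. \x07example\x03com\x00.
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # class 1 = IN
    # base64url without padding, per the RFC.
    payload = base64.urlsafe_b64encode(header + question).rstrip(b"=")
    return f"https://{server}/dns-query?dns={payload.decode()}"

print(doh_url("dns.example", "example.com"))
```

The privacy tension in the paragraph above follows directly: the same encryption that blocks third-party eavesdropping also blocks network operators who relied on observing DNS.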

Celebrating 50 Years of the RFCs That Define How the Internet Works

First page of RFC 1

50 years ago today, on 7 April 1969, the very first “Request for Comments” (RFC) document was published. Titled simply “Host Software”, RFC 1 was written by Steve Crocker to document how packets would be sent from computer to computer in what was then the very early ARPANET. [1]

Steve and the other early authors were just circulating ideas and trying to figure out how to connect the different devices and systems of the early networks that would evolve into the massive network of networks we now call the Internet. They were not trying to create formal standards – they were just writing specifications that would help them be able to connect their computers. Little did they know then that the system they developed would come to later define the standards used to build the Internet.

Today there are over 8,500 RFCs whose publication is managed through a formal process by the RFC Editor team. The Internet Engineering Task Force (IETF) is responsible for the vast majority (but not all) of the RFCs – and there is a strong process through which documents move within the IETF from ideas (“Internet-Drafts” or “I-Ds”) into published standards or informational documents[2].

50 years Continue reading