Archive

Category Archives for "Networking"

Kathy Brown’s Op-Ed in the Hill Times: Canada’s Unique Opportunity to Lead the Future of the Internet

Kathy Brown, CEO of the Internet Society, recently penned an Op-Ed for Canada’s Hill Times calling for a multistakeholder approach to Internet governance: “an approach that is collaborative, one that engages the entire Internet community.” According to Brown, “The time has come to expand this inclusive model of governance to more places around the world.”

“No one party, government, corporation, or non-profit controls the Internet and we are all better for it. Nor does any one party have the knowledge or the ability to identify the solutions to these complex policy challenges. It has been this approach—what we call the multistakeholder model—that has allowed humankind’s most advanced and powerful communications tool to spread so far and so fast.”

She cites the partnership between the Internet Society, Innovation, Science and Economic Development Canada, the Canadian Internet Registration Authority (CIRA), CANARIE, and CIPPIC as an example of the multistakeholder approach working successfully. “[Canada] is addressing cybersecurity head-on by working with the Internet Society to engage the Canadian Internet community in a process to develop recommendations to secure the Internet of Things.”

Read the entire Op-Ed, then learn how you can participate in the Collaborative Governance Project. Continue reading

BrandPost: Introducing the Adaptive Network Vision

Why now? The networking industry is being disrupted. There is an explosion in network demand, driven by ultramobile users who want the ability to access the cloud and consume high-definition content, video, and applications when and where they choose. This disruption of the network will only be exacerbated by the adoption of the Internet of Things (IoT) and 5G, the use of which involves billions of devices interacting with machines, users, and clouds to drive consumer and business interactions.

For instance, what happens when users want to engage in a 4K-based virtual reality session hosted in the cloud, while traveling at high speed in their driverless cars? What happens when the physical devices currently used to support networking functions become virtual—and so do the user end-points? Network providers are now realizing the level of complexity and variability this type of demand will introduce, and that their current networks are not up to the challenge.

To read this article in full, please click here

Riga, Tallinn and Vilnius: Launching three new European Cloudflare data centers

Cloudflare announces the turn up of our newest data centers located in Riga (Latvia), Tallinn (Estonia) and Vilnius (Lithuania). They represent the 140th, 141st and 142nd cities across our growing global network, and our 37th, 38th and 39th cities in Europe. We are very excited to help improve the security and performance of over 7 million Internet properties across 72 countries including the Baltic states.

We will be interconnecting with local networks over multiple Internet exchanges: Baltic Internet Exchange (BALT-IX), Lithuanian Internet eXchange Point (LIXP), LITIX, Tallinn Internet Exchange (TLLIX), Tallinn Governmental Internet Exchange (RTIX), Santa Monica Internet Local Exchange (SMILE-LV), and potentially, the Latvian Internet Exchange (LIX-LV).

If you are an entrepreneur anywhere in the world selling your product in these markets, or a Baltic entrepreneur reaching a global audience, we've got your back.

Baltic Region

Photo by Siim Lukka / Unsplash
Latvia, Estonia and Lithuania join the list of other countries with shorelines along the Baltic Sea and Cloudflare data centers. That list includes Denmark, Finland, Germany, Poland, Russia and Sweden.

Of the five countries that are in the drainage basin but do not border the sea, Cloudflare has deployments Continue reading

Nvidia packs 2 petaflops of performance in a single compact server

At its GPU Technology Conference this week, Nvidia took the wraps off a new DGX-2 system it claims is the first to offer multi-petaflop performance in a single server, thus greatly reducing the footprint needed to get to true high-performance computing (HPC). DGX-2 comes just seven months after the DGX-1 was introduced, although it won’t ship until the third quarter. However, Nvidia claims it has 10 times the compute power of the previous generation thanks to twice the number of GPUs, much more memory per GPU, faster memory, and a faster GPU interconnect.

The DGX-2 uses the Tesla V100 GPU, the top of the line for Nvidia’s HPC and artificial intelligence cards. With the DGX-2, Nvidia has doubled the on-board memory per GPU to 32GB. Nvidia claims the DGX-2 is the world’s first single physical server with enough computing power to deliver two petaflops, a level of performance usually delivered by hundreds of servers networked into clusters.

To read this article in full, please click here
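
That two-petaflop figure is easy to sanity-check. As a rough back-of-the-envelope sketch (the per-GPU numbers are assumed from Nvidia's published V100 specs, not taken from this excerpt), sixteen V100s at roughly 125 tensor-core teraflops each land right at two petaflops:

```python
# Back-of-the-envelope check of the "two petaflops" claim.
# Assumptions (not stated in the excerpt above): a DGX-2 carries 16 Tesla V100s,
# and each V100 peaks at roughly 125 TFLOPS of mixed-precision tensor throughput.
GPUS_PER_DGX2 = 16
TENSOR_TFLOPS_PER_V100 = 125  # approximate peak tensor-core rate

total_tflops = GPUS_PER_DGX2 * TENSOR_TFLOPS_PER_V100
print(f"{total_tflops} TFLOPS ~= {total_tflops / 1000:.0f} PFLOPS")  # 2000 TFLOPS ~= 2 PFLOPS
```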

Simplifying Linux with … fish?

No, the title for this post is not a mistake. I’m not referring to the gill-bearing aquatic craniate animals that lack limbs with digits or the shark-shaped magnetic Linux emblem that you might have stuck to your car. The “fish” that I’m referring to is a Linux shell and one that’s been around since 2005. Even so, it’s a shell that a lot of Linux users may not be familiar with.

The primary reason is that fish isn't generally installed by default. In fact, on some distributions, the repository that provides it is one your system probably doesn't access. If you type "which fish" and your system responds simply with another prompt, you might be missing out on an interesting alternative shell. And, if your apt-get or yum command can't find what you're looking for, you will probably have to use commands like those shown below to get fish loaded onto your system.

To read this article in full, please click here
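
The excerpt stops before the article's own command listing, so the snippet below is only a hedged sketch of the idea rather than the article's commands: a small, hypothetical Python helper that looks for a package manager on the PATH and prints the usual install invocation. It assumes the package is simply named "fish", which holds for Debian, Ubuntu and Fedora; older yum-based systems may need an extra repository first.

```python
#!/usr/bin/env python3
"""Hypothetical helper: suggest an install command for the fish shell.

Assumption: the package is named "fish" and one of the managers below is on the PATH.
"""
import shutil

# Common package managers and the usual way to pull in fish with each of them.
INSTALL_COMMANDS = {
    "apt-get": "sudo apt-get install fish",  # Debian / Ubuntu
    "dnf": "sudo dnf install fish",          # Fedora
    "yum": "sudo yum install fish",          # older RHEL/CentOS (may need an extra repo)
}

def suggest() -> str:
    if shutil.which("fish"):
        return "fish is already installed at " + shutil.which("fish")
    for manager, command in INSTALL_COMMANDS.items():
        if shutil.which(manager):
            return command
    return "no known package manager found; see https://fishshell.com for other options"

if __name__ == "__main__":
    print(suggest())
```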

Cognitive Cloud Networking with Arista X3 Series

At Arista we have always embraced open networking trends by designing our hardware and software to be as programmable as possible, driving the use of merchant silicon and diversity for the broader industry. It has allowed our customers to select their favorite silicon architectures for the switch pipeline and choose the suite of software and hardware they want to form their cognitive network systems. 

eBPF, Sockets, Hop Distance and manually writing eBPF assembly

A friend gave me an interesting task: extract IP TTL values from TCP connections established by a userspace program. This seemingly simple task quickly exploded into an epic Linux system programming hack. The resulting code is grossly over-engineered, but boy, did we learn plenty in the process!

CC BY-SA 2.0 image by Paul Miller

Context

You may wonder why she wanted to inspect the TTL packet field (formally known as "Time To Live (TTL)" in IPv4, or "Hop Limit" in IPv6). The reason is simple: she wanted to ensure that the connections are routed outside of our datacenter. The "Hop Distance", the difference between the TTL value set by the originating machine and the TTL value in the packet received at its destination, shows how many routers the packet crossed. If a packet crossed two or more routers, we know it indeed came from outside of our datacenter.
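
The arithmetic behind that check is simple enough to sketch. The toy function below is only an illustration of the hop-distance idea, not the eBPF-based code the post goes on to build, and it assumes the sender started from one of the common initial TTLs (64, 128 or 255); in the post's setting the initial TTL is known exactly, since the originating machine is under the author's control.

```python
# Toy sketch of the "hop distance" idea: initial TTL minus observed TTL.
# Assumption (not from the post): the sender used one of the common initial
# TTL values, so we round the observed TTL up to the nearest of them.
COMMON_INITIAL_TTLS = (64, 128, 255)

def hop_distance(observed_ttl: int) -> int:
    """Estimate how many routers a packet crossed before arriving."""
    initial = next(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(hop_distance(64))   # 0 hops: likely originated inside the local network
print(hop_distance(61))   # 3 hops if the sender started at 64
print(hop_distance(122))  # 6 hops if the sender started at 128
```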

It's uncommon to look at TTL values (except for their intended purpose of mitigating routing loops by checking when the TTL reaches zero). The normal way to deal with the problem we had would be to blacklist IP ranges of our servers. But it’s not that Continue reading