BrandPost: Introducing the Adaptive Network Vision

Why now? The networking industry is being disrupted. There is an explosion in network demand, driven by ultra-mobile users who want the ability to access the cloud and consume high-definition content, video, and applications when and where they choose. This disruption of the network will only be exacerbated by the adoption of the Internet of Things (IoT) and 5G, which involve billions of devices interacting with machines, users, and clouds to drive consumer and business interactions.

For instance, what happens when users want to engage in a 4K virtual reality session hosted in the cloud while traveling at high speed in their driverless cars? What happens when the physical devices currently used to support networking functions become virtual, and so do the user endpoints? Network providers are now realizing the level of complexity and variability this type of demand will introduce, and that their current networks are not up to the challenge.

Riga, Tallinn and Vilnius: Launching three new European Cloudflare data centers

Cloudflare announces the turn-up of our newest data centers, located in Riga (Latvia), Tallinn (Estonia) and Vilnius (Lithuania). They represent the 140th, 141st and 142nd cities across our growing global network, and our 37th, 38th and 39th cities in Europe. We are very excited to help improve the security and performance of over 7 million Internet properties across 72 countries, including the Baltic states.

We will be interconnecting with local networks over multiple Internet exchanges: Baltic Internet Exchange (BALT-IX), Lithuanian Internet eXchange Point (LIXP), LITIX, Tallinn Internet Exchange (TLLIX), Tallinn Governmental Internet Exchange (RTIX), Santa Monica Internet Local Exchange (SMILE-LV), and potentially, the Latvian Internet Exchange (LIX-LV).

If you are an entrepreneur anywhere in the world selling your product in these markets, or a Baltic entrepreneur reaching a global audience, we've got your back.

Baltic Region

Photo by Siim Lukka / Unsplash
Latvia, Estonia and Lithuania join the list of countries that have both shorelines along the Baltic Sea and Cloudflare data centers. That list includes Denmark, Finland, Germany, Poland, Russia and Sweden.

Of the five countries that lie in the Baltic drainage basin but do not border the sea, Cloudflare has deployments …

Nvidia packs 2 petaflops of performance in a single compact server

At its GPU Technology Conference this week, Nvidia took the wraps off a new DGX-2 system it claims is the first to offer multi-petaflop performance in a single server, greatly reducing the footprint needed for true high-performance computing (HPC). The DGX-2 comes just seven months after the DGX-1 was introduced, although it won’t ship until the third quarter. Nvidia claims it has 10 times the compute power of the previous generation thanks to twice the number of GPUs, much more memory per GPU, faster memory, and a faster GPU interconnect. The DGX-2 uses the Tesla V100 GPU, the top of the line among Nvidia’s HPC and artificial intelligence cards, and doubles each GPU’s on-board memory to 32GB. Nvidia claims the DGX-2 is the world’s first single physical server with enough computing power to deliver two petaflops, a level of performance usually delivered by hundreds of servers networked into clusters.
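
The two-petaflop figure lines up with some quick arithmetic, under two assumptions that do not appear in the excerpt above: the DGX-2's published complement of 16 Tesla V100 GPUs, and the V100's roughly 125-teraflop tensor rating.

```python
# Back-of-the-envelope check of the two-petaflop claim.
# Assumptions (not stated in the excerpt): 16 Tesla V100 GPUs per
# DGX-2, ~125 teraflops of tensor performance per V100.
gpus = 16
tensor_tflops_per_gpu = 125
total_petaflops = gpus * tensor_tflops_per_gpu / 1000
print(total_petaflops)  # 2.0
```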

Simplifying Linux with … fish?

No, the title for this post is not a mistake. I’m not referring to the gill-bearing aquatic craniate animals that lack limbs with digits or the shark-shaped magnetic Linux emblem that you might have stuck to your car. The “fish” that I’m referring to is a Linux shell, and one that’s been around since 2005. Even so, it’s a shell that a lot of Linux users may not be familiar with.

The primary reason is that fish isn't generally installed by default. In fact, on some distributions, the repository that provides it is one your system probably doesn't access. If you type "which fish" and your system responds simply with another prompt, you might be missing out on an interesting alternative shell. And, if your apt-get or yum command can't find what you're looking for, you will probably have to use commands like those shown below to get fish loaded onto your system.

Aruba Networks Leads HPE to the Edge

When pre-split Hewlett-Packard bought Aruba Networks three years ago for $3 billion, the goal was to create a stronger and larger networking business that combined both wired and wireless networking capabilities and could challenge market leader Cisco Systems at a time when enterprises were more fully embracing mobile computing and public clouds.

Aruba was launched in 2002 and by the time of the acquisition had established itself as a leading vendor in the wireless networking market, with an enthusiastic following of users who call themselves “Airheads.” The worry among many of them was that once the deal was closed, …

Aruba Networks Leads HPE to the Edge was written by Nicole Hemsoth at The Next Platform.

Cognitive Cloud Networking with Arista X3 Series

At Arista we have always embraced open networking trends by designing our hardware and software to be as programmable as possible, driving the use of merchant silicon and diversity for the broader industry. It has allowed our customers to select their favorite silicon architectures for the switch pipeline and choose the suite of software and hardware they want to form their cognitive network systems. 

eBPF, Sockets, Hop Distance and manually writing eBPF assembly

A friend gave me an interesting task: extract IP TTL values from TCP connections established by a userspace program. This seemingly simple task quickly exploded into an epic Linux system-programming hack. The resulting code is grossly over-engineered, but boy, did we learn plenty in the process!

CC BY-SA 2.0 image by Paul Miller

Context

You may wonder why she wanted to inspect the TTL packet field (formally known as "Time To Live (TTL)" in IPv4, or "Hop Limit" in IPv6). The reason is simple: she wanted to ensure that the connections are routed outside of our datacenter. The "Hop Distance", the difference between the TTL value set by the originating machine and the TTL value in the packet received at its destination, shows how many routers the packet crossed. If a packet crossed two or more routers, we know it indeed came from outside of our datacenter.
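
The arithmetic is easy to sketch (a minimal illustration of the idea, not the article's actual eBPF code): operating systems typically initialize the TTL to one of a few well-known values, so the smallest common initial value at or above the observed TTL is a reasonable guess at what the sender used.

```python
# Minimal sketch of the "Hop Distance" estimate described above; the
# common initial TTL values (64, 128, 255) are an assumption about
# typical operating systems, not something stated in the excerpt.
COMMON_INITIAL_TTLS = (64, 128, 255)

def hop_distance(observed_ttl: int) -> int:
    """Estimate how many routers a packet crossed."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

# A packet arriving with TTL 62 most likely started at 64, so it
# crossed two routers, i.e. it came from outside the datacenter.
print(hop_distance(62))  # 2
```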

It's uncommon to look at TTL values (except for their intended purpose of mitigating routing loops by checking when the TTL reaches zero). The normal way to deal with the problem we had would be to blacklist IP ranges of our servers. But it’s not that …

VXLAN Limitations of Data Center Switches

One of my readers found this Cumulus Networks article that explains why you can’t have more than a few hundred VXLAN-based VLAN segments on every port of a 48-port Trident-2 data center switch.

Expect to see similar limitations in most other chipsets. There’s a huge gap between the millions of segments enabled by the 24-bit VXLAN Network Identifier and the reality of switching silicon: most switching hardware is also limited to 4K VLANs, because the VLAN ID it maps segments onto is only 12 bits wide.
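
To put rough numbers on that gap (simple arithmetic, not figures from the article itself):

```python
# The VXLAN VNI field is 24 bits; the 802.1Q VLAN ID is 12 bits.
vni_space = 2 ** 24   # 16,777,216 addressable VXLAN segments
vlan_space = 2 ** 12  # 4,096 VLAN IDs (a handful are reserved)
print(f"{vni_space:,} VNIs vs. {vlan_space:,} VLANs")
```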

Cloudflare is adding Drupal WAF Rule to Mitigate Critical Drupal Exploit

Drupal has recently announced an update to fix a critical remote code execution vulnerability (SA-CORE-2018-002/CVE-2018-7600). In response, we have just pushed out a rule to our Web Application Firewall (WAF) that blocks requests matching these exploit conditions. You can find this rule in the Cloudflare ruleset in your dashboard, under the Drupal category, with the rule ID D0003.

Drupal Advisory: https://www.drupal.org/sa-core-2018-002
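
For a sense of what such a rule has to catch, here is a deliberately crude sketch (an illustration only, with no claim to reflect Cloudflare's actual rule D0003): the vulnerability abuses Drupal's Form API render arrays, whose internal property keys begin with "#", so request parameter names that smuggle in such keys are a telltale sign.

```python
# Crude heuristic for SA-CORE-2018-002-style requests: flag any query
# parameter whose name carries a Drupal render-array property key
# (a key beginning with '#'). Illustrative sketch only; this is not
# Cloudflare's actual WAF rule logic.
from urllib.parse import parse_qs

def looks_like_drupalgeddon2(query_string: str) -> bool:
    params = parse_qs(query_string)
    return any(name.startswith('#') or '[#' in name for name in params)

print(looks_like_drupalgeddon2("mail[#markup]=whoami"))    # True
print(looks_like_drupalgeddon2("name=alice&mail=a@b.io"))  # False
```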

An Inside Look at What Powers Microsoft’s Internal Systems for AI R&D

Those who might expect Microsoft to favor its own Windows-centric platforms and tools to power comprehensive infrastructure for serving AI compute and software services to internal R&D groups should plan on being surprised.

While Microsoft does rely on some core Windows features and certainly its Azure cloud services, much of its infrastructure is powered by a broad suite of open source tools. As Jim Jernigan, senior R&D systems engineer at Microsoft Research, told us at the GPU Technology Conference (GTC18) this week, the highest volume of workloads running on the diverse research clusters Microsoft uses for AI development are running …

An Inside Look at What Powers Microsoft’s Internal Systems for AI R&D was written by Nicole Hemsoth at The Next Platform.