Riga, Tallinn and Vilnius: Launching three new European Cloudflare data centers

Cloudflare announces the turn-up of our newest data centers, located in Riga (Latvia), Tallinn (Estonia) and Vilnius (Lithuania). They represent the 140th, 141st and 142nd cities across our growing global network, and our 37th, 38th and 39th cities in Europe. We are very excited to help improve the security and performance of over 7 million Internet properties across 72 countries, including the Baltic states.

We will be interconnecting with local networks over multiple Internet exchanges: Baltic Internet Exchange (BALT-IX), Lithuanian Internet eXchange Point (LIXP), LITIX, Tallinn Internet Exchange (TLLIX), Tallinn Governmental Internet Exchange (RTIX), Santa Monica Internet Local Exchange (SMILE-LV), and potentially, the Latvian Internet Exchange (LIX-LV).

If you are an entrepreneur anywhere in the world selling your product in these markets, or a Baltic entrepreneur reaching a global audience, we've got your back.

Baltic Region

Photo by Siim Lukka / Unsplash
Latvia, Estonia and Lithuania join the list of countries that have both shorelines along the Baltic Sea and Cloudflare data centers. That list includes Denmark, Finland, Germany, Poland, Russia and Sweden.

Of the five countries that are in the drainage basin but do not border the sea, Cloudflare has deployments… Continue reading

Nvidia packs 2 petaflops of performance in a single compact server

At its GPU Technology Conference this week, Nvidia took the wraps off a new DGX-2 system it claims is the first to offer multi-petaflop performance in a single server, thus greatly reducing the footprint needed to get to true high-performance computing (HPC). DGX-2 comes just seven months after the DGX-1 was introduced, although it won’t ship until the third quarter. However, Nvidia claims it has 10 times the compute power of the previous generation thanks to twice the number of GPUs, much more memory per GPU, faster memory, and a faster GPU interconnect. The DGX-2 uses the Tesla V100 GPU, the top of the line among Nvidia’s HPC and artificial intelligence-focused cards. With the DGX-2, Nvidia has doubled the on-board memory per GPU to 32GB. Nvidia claims the DGX-2 is the world’s first single physical server with enough computing power to deliver two petaflops, a level of performance usually delivered by hundreds of servers networked into clusters. To read this article in full, please click here
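
As a back-of-the-envelope check on the two-petaflop claim: the excerpt doesn’t give the GPU count or per-GPU figure, but assuming Nvidia’s published specs of sixteen Tesla V100s in the DGX-2 (versus eight in the DGX-1) at roughly 125 tensor TFLOPS each, the arithmetic lines up:

```sh
# Rough arithmetic only: 16 GPUs x ~125 tensor TFLOPS per V100 (Nvidia's peak figure)
gpus=16
tflops_per_gpu=125
echo "$(( gpus * tflops_per_gpu )) TFLOPS, i.e. roughly 2 petaflops"
```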

Simplifying Linux with … fish?

No, the title for this post is not a mistake. I’m not referring to the gill-bearing aquatic craniate animals that lack limbs with digits, or to the shark-shaped magnetic Linux emblem that you might have stuck to your car. The “fish” that I’m referring to is a Linux shell, and one that’s been around since 2005. Even so, it’s a shell that a lot of Linux users may not be familiar with. The primary reason is that fish isn't generally installed by default. In fact, on some distributions, the repository that provides it is one your system probably doesn't access. If you type "which fish" and your system responds simply with another prompt, you might be missing out on an interesting alternative shell. And, if your apt-get or yum command can't find what you're looking for, you will probably have to use commands like those shown below to get fish loaded onto your system. To read this article in full, please click here
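
By way of illustration only (the exact commands from the article are not captured in this excerpt, and package names and repositories vary by distribution), a typical installation looks something like this:

```sh
# Debian/Ubuntu (you may first need to enable the repository that carries fish)
sudo apt-get update
sudo apt-get install fish

# Fedora and other dnf/yum-based distributions
sudo dnf install fish      # or: sudo yum install fish

# Try it out, and optionally make it your login shell
fish
chsh -s "$(command -v fish)"
```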

Aruba Networks Leads HPE to the Edge

When pre-split Hewlett-Packard bought Aruba Networks three years ago for $3 billion, the goal was to create a stronger and larger networking business that combined both wired and wireless networking capabilities and could challenge market leader Cisco Systems at a time when enterprises were more fully embracing mobile computing and public clouds.

Aruba was launched in 2002 and, by the time of the acquisition, had established itself as a leading vendor in the wireless networking market with an enthusiastic following of users who call themselves “Airheads.” The worry among many of them was that once the deal was closed, …

Aruba Networks Leads HPE to the Edge was written by Nicole Hemsoth at The Next Platform.

Cognitive Cloud Networking with Arista X3 Series

At Arista we have always embraced open networking trends by designing our hardware and software to be as programmable as possible, driving the use of merchant silicon and diversity for the broader industry. It has allowed our customers to select their favorite silicon architectures for the switch pipeline and choose the suite of software and hardware they want to form their cognitive network systems. 

eBPF, Sockets, Hop Distance and manually writing eBPF assembly

A friend gave me an interesting task: extract IP TTL values from TCP connections established by a userspace program. This seemingly simple task quickly exploded into an epic Linux system programming hack. The resulting code is grossly over-engineered, but boy, did we learn plenty in the process!

CC BY-SA 2.0 image by Paul Miller

Context

You may wonder why she wanted to inspect the TTL packet field (formally "IP Time To Live (TTL)" in IPv4, or "Hop Count" in IPv6). The reason is simple: she wanted to ensure that the connections are routed outside of our datacenter. The "Hop Distance" - the difference between the TTL value set by the originating machine and the TTL value in the packet received at its destination - shows how many routers the packet crossed. If a packet crossed two or more routers, we know it indeed came from outside of our datacenter.
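
As a rough illustration of the hop-distance idea (this is not the eBPF approach the article builds; it just reads the TTL that ping reports and assumes the sender started from one of the common initial TTLs of 64, 128 or 255; the address below is a documentation placeholder):

```sh
# Ping once and pull out the received TTL
recv_ttl=$(ping -c 1 198.51.100.7 | grep -oE 'ttl=[0-9]+' | cut -d= -f2)

# Assume the sender started from the next common initial TTL above what we saw
for init in 64 128 255; do
  if [ "$recv_ttl" -le "$init" ]; then
    echo "hop distance is roughly $(( init - recv_ttl )) routers"
    break
  fi
done
# A hop distance of 2 or more means the packet crossed at least two routers,
# i.e. it originated outside the local datacenter.
```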

It's uncommon to look at TTL values (except for their intended purpose of mitigating routing loops by checking when the TTL reaches zero). The normal way to deal with the problem we had would be to blacklist the IP ranges of our servers. But it’s not that… Continue reading

VXLAN Limitations of Data Center Switches

One of my readers found this Cumulus Networks article that explains why you can’t have more than a few hundred VXLAN-based VLAN segments on every port of a 48-port Trident-2 data center switch.

Expect to see similar limitations in most other chipsets. There’s a huge gap between the millions of segments enabled by the 24-bit VXLAN Network Identifier and the reality of switching silicon. Most switching hardware is also limited to 4K VLANs.
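
The size of that gap is easy to put in numbers: the VXLAN header carries a 24-bit VNI while an 802.1Q tag carries a 12-bit VLAN ID, so the theoretical ID spaces differ by a factor of 4,096, and the per-port hardware tables described in the article give out well before the VXLAN number is ever reachable. A quick check:

```sh
echo "VXLAN VNIs:   $(( 1 << 24 ))"   # 16,777,216 possible segment IDs
echo "802.1Q VLANs: $(( 1 << 12 ))"   # 4,096 -- the familiar 4K VLAN limit
```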

Read more ...

Cloudflare is adding Drupal WAF Rule to Mitigate Critical Drupal Exploit

Drupal has recently announced an update to fix a critical remote code execution exploit (SA-CORE-2018-002/CVE-2018-7600). In response, we have just pushed out a rule to our Web Application Firewall (WAF) that blocks requests matching these exploit conditions. You can find this rule in the Cloudflare ruleset in your dashboard under the Drupal category, with the rule ID D0003.

Drupal Advisory: https://www.drupal.org/sa-core-2018-002
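
If you drive Cloudflare through the API rather than the dashboard, a sketch along these lines can confirm the rule is present in your zone. It assumes the WAF packages endpoints of the v4 API as they existed at the time (since superseded), and $ZONE_ID, $PACKAGE_ID, $CF_EMAIL and $CF_API_KEY are placeholders for your own values:

```sh
# List the WAF packages available in the zone
curl -s "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/firewall/waf/packages" \
  -H "X-Auth-Email: $CF_EMAIL" \
  -H "X-Auth-Key: $CF_API_KEY"

# List the rules in a package and look for the Drupal entry (rule ID D0003)
curl -s "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/firewall/waf/packages/$PACKAGE_ID/rules?per_page=100" \
  -H "X-Auth-Email: $CF_EMAIL" \
  -H "X-Auth-Key: $CF_API_KEY" | grep -i drupal
```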

An Inside Look at What Powers Microsoft’s Internal Systems for AI R&D

Those who might expect Microsoft to favor its own Windows-centric platforms and tools to power comprehensive infrastructure for serving AI compute and software services to internal R&D groups should plan on being surprised.

While Microsoft does rely on some core Windows features and certainly its Azure cloud services, much of its infrastructure is powered by a broad suite of open source tools. As Jim Jernigan, senior R&D systems engineer at Microsoft Research, told us at the GPU Technology Conference (GTC18) this week, the highest volume of workloads running on the diverse research clusters Microsoft uses for AI development are running …

An Inside Look at What Powers Microsoft’s Internal Systems for AI R&D was written by Nicole Hemsoth at The Next Platform.

Aruba co-founder: We want to live on the edge

Tech companies of every stripe are staking their claim to the internet of things, and networking vendors like Aruba are no exception. But to hear co-founder and president Keerti Melkote tell it, his company’s pitch might have a little more heat on it than others. Aruba’s IoT credentials are based on a relatively simple premise – by definition, IoT devices have to be on the network, and they’re one of the bigger fish in that particular pool. The company has a lot of experience in onboarding devices – hard-won during the era of BYOD, covering provisioning, credentials, privilege levels and monitoring – which translates well to the world of IoT, particularly given the urgent need to secure those devices. To read this article in full, please click here

BrandPost: How network automation moves AI from science fiction to reality

Artificial intelligence (AI) has become a buzzword, and what once was realized only in sci-fi movies is now a burgeoning reality in IT processes. There are significant savings to be had, both in terms of time and money, as well as an increase in mission delivery. However, before organizations can take advantage of advancements like AI today, they must take a few key steps. One area is the network. Let’s explore how enterprises can begin to evolve their network technology to leverage AI capabilities in the near future. Automation: Network automation is a meaningful step towards AI that can provide enhanced mission delivery today. By leveraging automation capabilities within the network, immediate efficiencies can be realized. To read this article in full, please click here

BrandPost: Mobile user engagement apps: Trends & requirements

The mobile engagement app has emerged as a way to acquire, retain, and monetize loyal user bases. When designed properly, an app benefits everyone: users are more satisfied, productive, and even safer, while businesses enjoy larger and more predictable revenue streams. Executed poorly, mobile apps can have low download rates and become abandoned, forgotten or deleted. To learn more about how businesses are using these apps and their plans for the future, we surveyed companies across all industries. A high percentage of organizations have already determined they need an engagement app. To date, most of the apps in use are being developed in-house; commercial off-the-shelf versions are up and coming, but not yet well known. We learned there is still lots of room for improvement, and that an important requirement of the apps is to track location. To read this article in full, please click here