Building A Network (no, the other kind)

A conversation in the Network Collective Slack prompted a discussion about how to build a network. No, not the packet-switched networks we’re all so familiar with, but rather a personal network of peers. Not everyone has the privilege of attending trade shows and conferences throughout the year, and all of us have lacked that privilege for a while now due to Covid, so how does one build a professional network without in-person events? We also discuss some methods to set yourself apart from the crowd in ways that don’t rely on peer relationships.

 

Thank you to Bluecat Networks for sponsoring today’s episode. Bluecat is putting together some great content and a great community surrounding the topics of DNS, DHCP, and IPAM. You can join the Network VIP community and register for the next roundtable by going to bluecatnetworks.com/certainty.
Thank you to Unimus for sponsoring today’s episode. Unimus is a fast to deploy and easy to use Network Automation and Configuration Management solution. You can learn more about how you can start automating your network in under 15 minutes at unimus.net/nc.
Tony Efantis
Host
Jordan Martin
Host

Outro Music:
Danger Storm by Kevin MacLeod (incompetech.com)
Licensed under Continue reading

AWS enlists partners to encourage mainframe-to-cloud migration

AWS has turned up the drumbeat to move workloads off the mainframe and into its cloud. At its weeks-long re:Invent virtual event, Amazon Web Services said it would soon expand its AWS Competency Program to include even more services for migrating mainframe workloads to the cloud. The services are an expansion of the mainframe migration services AWS has had on its menu for the past few years.

[Get regularly scheduled insights by signing up for Network World newsletters.]

AWS says its Competency Program is designed to identify, validate, and promote AWS partners with demonstrated technical expertise in a given area. In this case, users looking to migrate will have access to products and services from core AWS partners, the company wrote in a blog about the new service.

To read this article in full, please click here

Pure Storage offers on-demand storage service

Pure Storage, the all-flash array storage provider, has expanded its Pure-as-a-Service offering to include flexible, pay-as-you-go options for bridging public and private clouds.

The company launched Pure-as-a-Service late last year, but it was based on its previous Evergreen service, which had a per-use model for clients looking to move from capex to opex economics. It provides block, file, and object data management capabilities under a single unified subscription.

The first stage of Pure-as-a-Service was formerly known as Evergreen Storage Service (ES2), which was launched out of a pilot program begun in 2016. The company notes that one of the challenges facing the industry is that "products on subscription" is often used interchangeably with true services; the difference is that the former is a financial model while the latter is more of a cloud economic, operational, and customer-experience model.

To read this article in full, please click here

Getting grounded in AWS cloud skills

With more and more data-center workloads being shifted to the cloud, it’s important for enterprise IT staff to learn cloud skills, not only to stay relevant within their organizations but also to prepare for career advancement and better salaries.

One way to accomplish this is to learn the ins and outs of working in specific cloud providers’ environments. This is a brief description of how to get grounded in AWS.

According to training firm Global Knowledge, the pay associated with two of the dozens of AWS certifications ranks among the top 15 IT certifications: AWS Certified Solutions Architect - Associate ($149,446) and AWS Certified Cloud Practitioner ($131,465).

To read this article in full, please click here

New White Paper: Considerations for Mandating Open Interfaces

People all around the world depend on the Internet to live their lives and do their jobs. Behind the surface of applications, online services depend on “interoperability” – the ability of software to work together.

For instance, this is what allows you to send a document from the Outlook account on your iPhone to a friend’s Gmail, then edit the document on a Samsung tablet before saving it in Alibaba cloud, and finally post it on Twitter using an application like Hootsuite.

But as we recognized in the 2019 Global Internet Report, trends of consolidation in the Internet economy, particularly at the application layer and in web services, have spurred concerns and public debates on the need to regulate Big Tech. Among the measures proposed by policymakers, academics, and other thought leaders across the world is a legal requirement for software services and systems to provide interoperability or open interfaces. Today we release a new white paper on this topic, with the aim of supporting and adding depth to the discussions about the key considerations involved.

The general sentiment among competition experts, policymakers and other stakeholders is that existing competition policy is not addressing the economic and societal Continue reading

An introduction to three-phase power and PDUs

Our fleet of over 200 locations comprises various generations of servers and routers. And with the ever-changing landscape of services and computing demands, it’s imperative that we manage power in our data centers right. This blog is a brief Electrical Engineering 101 session going over specifically how power distribution units (PDUs) work, along with some good practices on how we use them. It appears to me that we could all use a bit more knowledge on this topic, and more love and appreciation of something that’s critical but usually taken for granted, like hot showers and opposable thumbs.

A PDU is a device used in data centers to distribute power to multiple rack-mounted machines. It’s an industrial-grade power strip typically designed to power an average consumption of about seven US households. Advanced models have monitoring features and can be accessed via SSH or a web GUI to switch power outlets on and off. How we choose a PDU depends on which country the data center is in and what it provides in terms of voltage, phase, and plug type.
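The sizing behind a three-phase PDU is straightforward arithmetic. As a rough sketch (the 415 V / 32 A feed and unity power factor below are illustrative values, not the specs of any particular deployment):

```python
import math

def three_phase_power_watts(line_voltage: float, current_amps: float,
                            power_factor: float = 1.0) -> float:
    """Three-phase power: P = sqrt(3) * V_line-to-line * I * PF."""
    return math.sqrt(3) * line_voltage * current_amps * power_factor

# A common data-center feed: 415 V line-to-line at 32 A per phase.
pdu_watts = three_phase_power_watts(415, 32)
print(f"{pdu_watts / 1000:.1f} kW")  # roughly 23 kW
```

At around 23 kW, a single feed like this comfortably covers a rack of servers, which is why one three-phase circuit can replace several single-phase ones.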

For each of our racks, all of our dual power-supply (PSU) servers are cabled to one of the two vertically mounted PDUs. Continue reading

Diversity and The Digital Divide: Thoughts From Tech Leaders

Leaders from across the tech industry and beyond recently joined us for Cloudflare’s Birthday Week, helping us celebrate Cloudflare’s 10th birthday. Many of them touched on the importance of diversity and making the Internet accessible to everyone.

Here are some of the highlights.

On the value of soliciting feedback

Selina Tobaccowala
Chief Digital Officer at Openfit, Co-Founder of Gixo
Former President & CTO of SurveyMonkey

When you think about diversity and inclusion, unfortunately, it's often only the loudest voice, the squeakiest wheel [who gets heard]. And what a survey allows you to do is let people's voices be heard who are not always willing to raise their hand or speak the loudest.

So at SurveyMonkey, we always made sure that when we were thinking about user testing and we were thinking about usability testing — that it was that broad swath of the customer because you wanted people across all different segments to submit their opinion.

I think that collecting data in a way that can be anonymized, collecting data in a way that lets people have a thoughtful versus always off the cuff conversation is really important. And what we also provided was a benchmarking product, because if you Continue reading

AWS offers “bare-metal” Mac cloud services

Amazon Web Services has announced that it is offering what it calls bare-metal Macs in its cloud, although Amazon’s definition of “bare metal” doesn’t exactly jibe with the generally accepted definition.

“Bare metal” typically means no operating system. It’s very popular as a means of what is known as “lift and shift,” where a company takes its custom operating environment, starting with the operating system, libraries, apps, databases, and so on, and moves it from on-premises to the cloud without needing to make a modification to its software stack.

Here, Amazon is offering Macs running macOS 10.14 (Mojave) or 10.15 (Catalina) on an eighth-generation, six-core Intel Core i7 (Coffee Lake) processor running at 3.2 GHz. (Amusingly, the instances are run on Mac Minis. What I wouldn’t give to see a data center with racks full of Mac Minis.)

To read this article in full, please click here

One Way To Bring DPU Acceleration To Supercomputing

That is not a typo in the title. We did not mean to say GPU in the title above, or even make the joke that in hybrid CPU-GPU supercomputers these days, the CPU is more of a serial processing accelerator with a giant, slow DDR4 cache for the GPUs, therefore making the CPU a kind of accelerator for the GPU.

One Way To Bring DPU Acceleration To Supercomputing was written by Timothy Prickett Morgan at The Next Platform.

What Does A Good Network Design Look Like? – James Bensley, Senior Network Design Engineer

Is a good network design just about technical specifications or should you take into account business drivers and needs? James is a network design veteran and presented on this topic at UKNOF45. We talk about design considerations, tips and tricks, drivers and motivations, asking the question behind the question and even about a book that is ‘in the works’. James is very active on Twitter, LinkedIn and can be reached via [email protected].

Lenovo Spreads The AI Message Far And Wide

Artificial intelligence and machine learning are foundational to many of the modernization efforts that enterprises are embracing, from leveraging them to more quickly analyze the mountains of data they’re generating and to automate operational processes, to running the advanced applications (like natural language processing, speech and image recognition, and machine vision) needed by a broad array of industries, from financial services and agriculture to healthcare and automotive.

Lenovo Spreads The AI Message Far And Wide was written by Jeffrey Burt at The Next Platform.

What developers need to know about Docker, Docker Engine, and Kubernetes v1.20

The latest version of Kubernetes, v1.20.0-rc.0, is now available. The Kubernetes project plans to deprecate Docker Engine support in the kubelet; support for dockershim will be removed in a future release, probably late next year. The net-net is that support for your container images built with Docker tools is not being deprecated, and they will still work as before.

Even better news, however, is that Mirantis and Docker have agreed to partner to maintain the shim code standalone outside Kubernetes, as a conformant CRI interface for Docker Engine. We will start with the great initial prototype from Dims, at https://github.com/dims/cri-dockerd, and continue to make it available as an open source project, at https://github.com/Mirantis/cri-dockerd. This means that you can continue to build Kubernetes based on Docker Engine as before, just switching from the built-in dockershim to the external one. Docker and Mirantis will work together on making sure it continues to work as well as before and that it passes all the conformance tests and works just like the built-in version did. Docker will continue to ship this shim in Docker Desktop as this gives a great developer experience, and Mirantis will be Continue reading
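In practice, switching from the built-in dockershim to the external shim means pointing the kubelet at cri-dockerd’s CRI socket. A hedged sketch of what that looks like (the socket path and flag values reflect cri-dockerd’s defaults at the time of writing and may differ in your distribution, where both processes would normally run as systemd services):

```shell
# Run the external shim, which exposes a CRI endpoint backed by Docker Engine.
cri-dockerd --container-runtime-endpoint unix:///var/run/cri-dockerd.sock &

# Point the kubelet at the external CRI socket instead of the built-in dockershim.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock
```

From the cluster’s point of view nothing else changes: pods are still scheduled the same way, and images built with Docker tools still run, because the shim speaks CRI on one side and the Docker Engine API on the other.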

MANRS Welcomes 500th Network Operator

Today, we are glad to share a milestone for the Mutually Agreed Norms for Routing Security (MANRS) initiative: the number of participants in the network operator program has reached 500.

By joining the community-driven initiative, these network operators, big and small, from around the world have taken specific, concrete actions to improve the resilience and security of the Internet’s inherently insecure routing infrastructure.

Systemic security issues in how traffic is routed on the Internet make it a relatively easy target for criminals. MANRS helps reduce the most common routing threats and increase efficiency and transparency among Internet service providers (ISPs) in their peering relationships.

The growth of the network operator program – the oldest among three today – has been accelerating in recent years. Launched in 2014 with a group of nine operators, the number of participants in the program took four years to reach 100 in 2018 and has risen sharply in the last two years, with 156 joining in 2019 and 244 so far in 2020.

The 500 network operators manage 651 autonomous systems in total, as some of them manage multiple networks.

Meanwhile, the Internet Exchange Point (IXP) program, which we launched in 2018, now has 60 Continue reading

Isovalent Harnesses eBPF for Cloud Native Security, Visibility

Veteran networking pros at Isovalent are harnessing Extended Berkeley Packet Filter (eBPF) technology, which makes the Linux kernel programmable, to address the ephemeral challenges of Kubernetes and microservices. “If you think about the Linux kernel, traditionally, it’s a static set of functionality that some Linux kernel developer over the course of the last 20 or 30 years decided to build and they compiled it into the Linux kernel. And it works the way that kernel developer thought about, but may not be applicable to the use case that we need to do today,” said Isovalent CEO