Archive

Category Archives for "Networking"

SD-WAN – What it means for enterprise networking, security, cloud computing

There have been significant changes in wide-area networks over the past few years, none more important than software-defined WAN or SD-WAN, which is changing how network pros think about optimizing the use of connectivity that is as varied as Multiprotocol Label Switching (MPLS), frame relay and even DSL. What is SD-WAN? As the name states, software-defined wide-area networks use software to control the connectivity, management and services between data centers and remote branches or cloud instances. Like its bigger technology brother, software-defined networking, SD-WAN decouples the control plane from the data plane. To read this article in full, please click here

The biggest risk to uptime? Your staff

There was an old joke: "To err is human, but to really foul up you need a computer." Now it seems the reverse is true. The reliability of data center equipment is vastly improved, but the humans running it have not kept up, and that is a threat to uptime. The Uptime Institute has surveyed thousands of IT professionals throughout the year on outages and found that the vast majority of data center failures, 70 to 75 percent, are caused by human error. And some of them are severe. It found that more than 30 percent of IT service and data center operators experienced downtime they called a “severe degradation of service” over the last year, with 10 percent of the 2019 respondents reporting that their most recent incident cost more than $1 million. To read this article in full, please click here

Viewing files and processes as trees on Linux

Linux provides several handy commands for viewing both files and processes in a branching, tree-like format that makes it easy to see how they are related. In this post, we'll look at the ps, pstree and tree commands along with some options they provide to help focus your view on what you want to see. The ps command that we all use to list processes has some interesting options that many of us never take advantage of. While the commonly used ps -ef provides a complete listing of running processes, the ps -ejH command adds a nice effect: it indents related processes to make the relationship between them visually clearer, as in the excerpt shown in the full article. To read this article in full, please click here
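For readers who want to experiment before clicking through, here is a minimal sketch of the commands the article describes (the -p and -L 2 options, the username "chris" and the /etc path are illustrative choices, not taken from the article):

  # List every process, indented so children appear beneath their parents
  ps -ejH

  # Restrict the hierarchical view to one user's processes
  ps -jH -u chris

  # pstree draws the same parent/child relationships with branch characters; -p adds PIDs
  pstree -p

  # tree does the equivalent for directories; -L 2 stops after two levels
  tree -L 2 /etc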

Reimagining-the-Internet project gets funding

The Internet of Things and 5G could be among the beneficiaries of an upcoming $20 million U.S. government cash injection designed to come up with new architectures to replace the existing public internet. FABRIC, as the National Science Foundation-funded umbrella project is called, aims to build a proving ground for exploring new ways to move, keep and compute data in shared infrastructure such as the public internet. The project “will allow scientists to explore what a new internet could look like at scale,” says the lead institution, the University of North Carolina at Chapel Hill, in a media release. And it “will help determine the internet architecture of the future.” To read this article in full, please click here

Oracle Ups Ante Against Cloud Giants Amazon, Microsoft

The company plans to hire 2,000 employees worldwide to join its Cloud Infrastructure business as it...

Read More »

Versa SD-WAN License Sales Top 200,000

Since last year the SD-WAN vendor has sold 50,000 new licenses, doubled its service provider...

Read More »

Spend less time fumbling and more time landing sales with PipelineDeals

Common sense dictates that if your business wants to scale upwards, it needs to secure more sales. However, building a solid base of satisfied customers who will recommend your services is impossible if your sales team struggles with an overly complicated CRM platform. Rather than spending thousands of hours fumbling with complex CRM tools, you can optimize your sales efforts with PipelineDeals’ easy-to-use platform, and your business can sign up for a 14-day free trial or customized demo now. To read this article in full, please click here

AT&T Abandons Puerto Rico and US Virgin Islands

“This is one of the first transactions driven by climate change,” said industry analyst Roger...

Read More »

The case for open standards: an M&A perspective

Very few organizations use IT equipment supplied by a single vendor. Where heterogeneous IT environments exist, interoperability is key to achieving maximum value from existing investments. Open networking is the most cost-effective way to ensure interoperability between devices on a network.

Unless your organization was formed very recently, chances are that your organization’s IT has evolved over time. Even small hardware upgrades are disruptive to an organization’s operations, making network-wide “lift and shift” upgrades nearly unheard of.

While loyalty to a single vendor can persist through regular organic growth and upgrade cycles, organizations regularly undergo mergers and acquisitions (M&As). M&As almost always introduce some level of heterogeneity into a network, meaning that any organization of modest size is almost guaranteed to have to integrate IT from multiple vendors.

While every new type of device from every different vendor imposes operational management overhead, the impact of heterogeneous IT isn’t universal across device types. The level of automation within an organization for different device classes, as well as the ubiquity and ease of use of management abstraction layers, both play a role in determining the impact of heterogeneity.

The Impact of Standards

Consider, for a moment, the average x86 server. Each Continue reading

Cato and the Secure Access Service Edge: Where Your Digital Business Network Starts

In this blog, explore how the Secure Access Service Edge (SASE) converges enterprise security and...

Read More »

Terraforming Cloudflare: in quest of the optimal setup

This is a guest post by Dimitris Koutsourelis and Alexis Dimitriadis, working for the Security Team at Workable, a company that makes software to help companies find and hire great people.

This post is about our introductory journey into the infrastructure-as-code practice: managing Cloudflare configuration in a declarative and version-controlled way. We'd like to share the experience we've gained during this process: our pain points, the limitations we faced and the different approaches we took, and to provide parts of our solution and our experiments.

Terraform world

Terraform is a great tool that fulfills our requirements, and fortunately, Cloudflare maintains its own provider, which allows us to manage its service configuration hassle-free.

On top of that, Terragrunt is a thin wrapper that provides extra commands and functionality for keeping Terraform configurations DRY and managing remote state.

The combination of the two leads to a more modular and reusable structure for Cloudflare resources (configuration) by utilizing Terraform and Terragrunt modules.

We've chosen to use the latest versions of both tools (Terraform v0.12 and Terragrunt v0.19, respectively) and to upgrade constantly to take advantage of valuable new features and functionality which, at this point in time, remove important limitations.
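As a rough sketch of how the two tools fit together day to day (the directory names below are hypothetical, not Workable's actual layout): each Cloudflare zone gets its own folder containing a small terragrunt.hcl that points at a shared Terraform module, and Terragrunt is driven from the command line:

  # Work on a single zone's Cloudflare configuration
  cd live/production/example-zone
  terragrunt plan       # fetches the shared module and remote state, then runs "terraform plan"
  terragrunt apply      # applies only this zone's resources

  # From a parent folder, apply every zone's stack in dependency order (a Terragrunt v0.19 command)
  terragrunt apply-all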

Workable context

Our setup includes Continue reading

InfluxDB 2.0

Introducing the Next-Generation InfluxDB 2.0 Platform mentions that InfluxDB 2.0 will be able to scrape Prometheus exporters. Get started with InfluxDB provides instructions for running an alpha version of the new software using Docker:
docker run --name influxdb -p 9999:9999 quay.io/influxdb/influxdb:2.0.0-alpha
Prometheus exporter describes an application that runs on the sFlow-RT analytics platform and converts real-time streaming telemetry from industry-standard sFlow agents into Prometheus metrics. Host, Docker, Swarm and Kubernetes monitoring describes how to deploy agents on popular container orchestration platforms.
The screen capture above shows three scrapers configured in InfluxDB 2.0:
  1. sflow-rt-analyzer,
    URL: http://10.0.0.70:8008/prometheus/analyzer/txt
  2. sflow-rt-dump,
    URL: http://10.0.0.70:8008/prometheus/metrics/ALL/ALL/txt
  3. sflow-rt-flow-src-dst,
    URL: http://10.0.0.70:8008/app/prometheus/scripts/export.js/flows/ALL/txt?metric=flow_src_dst_bps&key=ipsource,ipdestination&value=bytes&aggMode=max&maxFlows=100&minValue=1000&scale=8
The first collects metrics about the performance of the sFlow-RT analytics engine; the second, all the metrics exported by the sFlow agents; and the third, a flow metric (see Flow metrics with Prometheus and Grafana).
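Before adding a scraper in the InfluxDB 2.0 UI, it is worth confirming by hand that the target is serving Prometheus-format text. A minimal check against the first URL in the list above (substitute your own sFlow-RT host and port):

  # Should return plain-text Prometheus exposition format, one metric per line
  curl http://10.0.0.70:8008/prometheus/analyzer/txt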

Updated 19 October 2019: native support for Prometheus export was added to sFlow-RT, and URLs 1 and 2 were modified to reflect the new API.
InfluxDB 2.0 now includes the data exploration and dashboard building capabilities that were previously in the separate Chronograf application. The screen Continue reading

Islamabad Chapter Brings First Internet Governance Event to Quetta, Pakistan

The 5th Pakistan School on Internet Governance (pkSIG 2019) was successfully held last month in Quetta, Pakistan. This represents a significant achievement for the Internet Society Pakistan Islamabad Chapter as it played an instrumental role in bringing the first-ever Internet Governance event to the provincial capital of Balochistan.

For those who may not know, Balochistan has the largest land area among the four provinces of Pakistan, yet it is the least populated and least developed. Only 27% of its population lives in urban areas and Internet penetration is low. Finding adequate sponsors and, more importantly, ensuring diversity among the participating students were critical concerns. But pkSIG 2019 in Quetta proved to be one of the best editions of this school.

Over 60 people (one-third of them female) registered for the event, including students, professionals, startup founders, speakers, and some guests who showed keen interest in the program. Following a four-week long process of registration and shortlisting, 35 students were selected for pkSIG 2019 and five were awarded fellowships. Since all the sessions were livestreamed, a sizeable audience participated online as well. (The sessions and presentations are available online.)

“It’s our fifth consecutive year conducting pkSIG – Continue reading

VMware NSX Killed My EVPN Fabric

A while ago I had an interesting discussion with someone running VMware NSX on top of VXLAN+EVPN fabric - a pretty common scenario considering:

  • NSX’s insistence on having all VXLAN uplinks from the same server in the same subnet;
  • Data center switching vendors being on a lemming-like run praising EVPN+VXLAN;
  • Non-FANG environments being somewhat reluctant to connect a server to a single switch.

His fabric was running well… apart from the weird times when someone started tons of new VMs.

Read more ...

Ansible + ServiceNow Part 3: Making outbound RESTful API calls to Red Hat Ansible Tower

Red Hat Ansible Tower offers value by allowing automation to scale in a checked manner - users can run playbooks for only the processes and targets they need access to, and no further. 

Not only does Ansible Tower provide automation at scale, but it also integrates with several external platforms. In many cases, this means that users can use the interface they are accustomed to while launching Ansible Tower templates in the background. 

One of the most ubiquitous self-service platforms in use today is ServiceNow, and many enterprise conversations with Ansible Tower customers focus on ServiceNow integration. With this in mind, this blog entry walks through the steps to set up your ServiceNow instance to make outbound RESTful API calls into Ansible Tower, using OAuth2 authentication.

This is part 3 in a multi-part series, feel free to refer to part 1 and part 2 for more context.

The following software versions are used:

  • Ansible Tower: 3.4, 3.5
  • ServiceNow: London, Madrid

If you sign up for a ServiceNow Developer account, ServiceNow offers a free instance that can be used for replicating and testing this functionality. Your ServiceNow instance needs to be able Continue reading
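Although the ServiceNow side is configured in later steps of the series, the Tower calls that ServiceNow will eventually make can be exercised by hand first. A hedged sketch with curl (the hostname, the admin credentials and the job template ID 10 are placeholders, not values from this series):

  # 1. Create an OAuth2 token in Tower (supported since 3.3); ServiceNow will later present this as a Bearer token
  curl -k -u admin:password -X POST \
       -H "Content-Type: application/json" \
       -d '{"description": "servicenow outbound", "scope": "write"}' \
       https://tower.example.com/api/v2/tokens/

  # 2. Launch a job template with that token, the same call an outbound REST message would make
  curl -k -X POST \
       -H "Authorization: Bearer <token-from-step-1>" \
       https://tower.example.com/api/v2/job_templates/10/launch/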

Dark Traffic

This is a report on a four-year-long experiment in advertising a 'dark' prefix on the internet and examining the profile of unsolicited traffic that is sent to a traffic collector.

Aparavi’s Storage Focus Is on Data, Not Devices

The offering features a hybrid and multi-cloud file backup tool that enables long-term retention...

Read More »

Schneider Electric launches wall-mounted server rack

Floor space is often at a premium in a cramped data center, and Schneider Electric believes it has a fix for that: a wall-mounted server rack. The EcoStruxure Micro Data Center Wall Mount is a 6U design, meaning it has the capacity of six rack units. Schneider is pushing its space-saving option as an edge solution. The company's EcoStruxure IT Expert remote management and vulnerability assessment service will be available for the wall-mount units, even when installed in non-secured edge locations. READ MORE: Micro-modular data centers set to multiply. To read this article in full, please click here