Interested in Ubuntu? Check Out Andrew Mallet’s New Course

Tune into the newest addition to our course library, Ubuntu Server (18.04): Installing and Managing openLDAP Directories with Andrew Mallet, and learn the basics of Ubuntu Server.

About the Course:

OpenLDAP is a directory service that predates many proprietary systems, providing a universal authentication mechanism for client systems to authenticate against and a white-pages directory to search. In this course we take you through the basics of OpenLDAP, leading up to an installation. From there you will learn how to develop your system further by adding and searching entries. Next we move on to authenticating to OpenLDAP from other Linux clients and from services such as Apache HTTPD. Finally, we look at scaling out the system by adding replication, bringing failover and fault tolerance to the directory.
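As a taste of what the course covers, adding an entry to an OpenLDAP directory typically starts with an LDIF file. The base DN, attribute values, and filename below are hypothetical, purely for illustration:

```
# newuser.ldif -- a hypothetical inetOrgPerson entry under an example base DN
dn: uid=jdoe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
uid: jdoe
cn: Jane Doe
sn: Doe
mail: jdoe@example.com
```

You would then load and query it with the standard client tools, e.g. `ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f newuser.ldif` followed by `ldapsearch -x -b "dc=example,dc=com" "(uid=jdoe)"`.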

IDG Contributor Network: 5 ways to derail IT transformation projects

Projects involving virtualization, cloud architectures, advanced networking and cutting-edge digital technologies are critical to pushing a company into the future. As a result, missteps can be costly: take a good idea on paper, execute it poorly, and your desire to create value will end up squandering value.

Given the complexity of IT transformation projects, there are many ways to get them wrong. If you've ever been called upon to assist companies stuck in the middle of such projects (as our team has, many times), it's easy enough to identify several sure-fire ways to derail them, and corresponding ways to keep them on track. Here are five:

1. Trivialize the effort required

Have you ever sat in a meeting and heard an executive dismiss the difficulty of a project? "That sounds easy!" he or she might say. Whether it's a desire for the project to be completed, a lack of knowledge about the details, the planning fallacy or some other error, following that lead is a good way to set yourself up for failure.

IDG Contributor Network: 4 ways next generation NPMD solutions reduce risk in network transitions

Forced to keep pace with rapidly emerging business requirements, networks are changing faster than ever. The business-facing side of networking is under continuous pressure to do more, in more places, faster. Challenging as it is, the network-to-business interaction is simpler than what is going on behind the scenes, as network professionals transform almost every area of their networks to meet new demands.

New technologies such as cloud, NFV and SDN are turning traditional networks into hybrid ones. In fact, Gartner predicts that cloud infrastructure services will grow 35.9 percent in 2018, and IDC predicts that SD-WAN adoption will grow at a 40.4 percent CAGR from 2017 to 2022. These numbers imply a great deal of change in networks, change that introduces significant risk of service disruption, from minor (a few inconvenienced users) to major (significant outages visible to customers and executives). Reducing risk during significant transitions is critical, and that is where network performance management and diagnostics (NPMD) products play a significant role.
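To put the IDC figure in perspective, a 40.4 percent CAGR compounds quickly. A short calculation (ours, not IDC's) shows the total growth multiple implied over the five years from 2017 to 2022:

```python
def compound_growth(rate: float, years: int) -> float:
    """Total growth multiple implied by compounding a CAGR over `years` years."""
    return (1 + rate) ** years

# IDC's SD-WAN figure: 40.4 percent CAGR, 2017 to 2022
multiple = compound_growth(0.404, 5)
print(f"{multiple:.2f}x")  # roughly a 5.5x larger market over five years
```

In other words, a steady 40.4 percent annual growth rate means the market more than quintuples over the period, which is why so many networks are in transition at once.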

Research: Tail Attacks on Web Applications

When you think of a Distributed Denial of Service (DDoS) attack, you probably think of an attack that overflows the bandwidth available on a single link, or one that exhausts the number of half-open TCP sessions a device can have open at once, preventing the device from accepting new sessions. In either case, a DoS or DDoS attack involves a lot of traffic being pushed at a single device, or across a single link.

TL;DR
  • Denial of service attacks do not always require high volumes of traffic
  • An intelligent attacker can exploit the long tail of service queues deep in a web application to bring the service down
  • These kinds of attacks would be very difficult to detect

 

But if you look at an entire system, there are a lot of places where resources are scarce, and hence are places where resources could be consumed in a way that prevents services from operating correctly. Such attacks would not need to be distributed, because they could take much less traffic than is traditionally required to deny a service. These kinds of attacks are called tail attacks, because they attack the long tail of resource pools, where these pools are much Continue reading
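The intuition can be sketched with a toy single-server queue: a short, periodic burst that adds only a few percent of average load still parks a large chunk of work in the queue at one instant, inflating 99th-percentile latency far more than the traffic volume suggests. This is a simplified illustration of the idea, not the paper's model:

```python
import random

def fifo_latencies(arrival_times, service_time):
    """Serve requests FIFO on one server; return each request's total latency."""
    finish = 0.0
    latencies = []
    for t in sorted(arrival_times):
        start = max(t, finish)          # wait until the server frees up
        finish = start + service_time
        latencies.append(finish - t)
    return latencies

def p99(xs):
    xs = sorted(xs)
    return xs[int(0.99 * (len(xs) - 1))]

random.seed(1)
service = 0.01                                          # 10 ms per request
normal = [random.uniform(0, 100) for _ in range(2000)]  # ~20% utilization
# Attacker: every 10 s, 40 back-to-back requests -- only ~4% extra average
# utilization, but each burst queues 0.4 s of work all at once.
bursts = [t + i * 1e-4 for t in range(0, 100, 10) for i in range(40)]

baseline = p99(fifo_latencies(normal, service))
attacked = p99(fifo_latencies(normal + bursts, service))
print(f"p99 baseline {baseline:.3f}s, under attack {attacked:.3f}s")
```

The average load barely moves, which is exactly why volume-based DDoS detection would miss this kind of attack.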

IDG Contributor Network: Cutting complexity at the edge

The edge is top of mind for many IT and OT professionals across a wide range of industries and sectors. This interest is driven by the need to use data more effectively to maintain operations, optimize performance and increase uptime.

Existing IT and OT infrastructures typically don't collect, store and analyze data at the edge. They instead send this data either to the cloud or to enterprise-level computing systems for storage and analysis, the domain of IT personnel.

A better solution, specifically for applications where access to data needs to happen quickly, is to perform data collection, storage and analysis at the edge using technologies designed for these specific tasks. The benefits of this approach include reduced latency, improved data security and more efficient use of bandwidth.

6G will achieve terabits-per-second speeds

The first of the upcoming 5G network technologies won't provide significant reliability gains over existing wireless such as 4G LTE, according to a developer involved in 5G.

Additionally, the millisecond levels of latency that the new 5G wireless will attempt to offer, when some of it is commercially launched (possibly later this year), won't be enough of an advantage for a society that is now completely data-driven and needs near-instant, microsecond connectivity.

"Ultra-reliability will be basically not there," Ari Pouttu, professor for Dependable Wireless at the University of Oulu, told me during a visit to the university in Finland. 5G's principal benefits over current wireless platforms are touted as latency reduction and improved reliability by marketers pitching the still-to-be-released technology.

Norfolk and Richmond, Virginia: Cloudflare’s 152nd and 153rd cities

Virginia has a very important place in Internet history, as well as the history of Cloudflare’s network.

Northern Virginia, in the area around Ashburn, VA, has long been core to Internet infrastructure. In the early 1990s, MAE-East (Metropolitan Area Exchange, East) was established there; MAE-East and MAE-West were among the earliest Internet Exchange Points (IXPs). Internet Exchange Points are crucial interconnection points where ISPs and other Internet networks interconnect and exchange traffic, and ecosystems have grown around them through new data-center offerings and new Internet platforms. Like many pieces of the Internet, MAE-East had a humble beginning, though not many humble beginnings grow to handle around 50% of Internet traffic exchange.

Cloudflare's second data center, and one that still plays a critical role in our global network, was Ashburn, Virginia. As with many organizations, the Northern Virginia area has become a data-center mecca: many of the largest clouds have a substantial amount of their footprint there. Although MAE-East no longer exists, other Internet Exchange Points have grown in its place.

Cloudflare's network has grown beyond traditional interconnection points like Ashburn and Northern Virginia, to a new edge of the Continue reading

Dell EMC puts big data as a service on premises

To get up and running efficiently on a self-service big-data analytics platform, many data-center and network managers these days would likely think about using a cloud service. But not so fast: there is some debate about whether the public cloud is the way to go for certain big-data analytics.

For some big-data applications, the public cloud may be more expensive in the long run and, because of latency issues, slower than on-site private-cloud solutions. In addition, having data storage reside on premises often makes sense for regulatory and security reasons.

[ Also see How to plan a software-defined data-center network and Efficient container use requires data-center software networking. ]

With all this in mind, Dell EMC has teamed up with BlueData, provider of a container-based software platform for AI and big-data workloads, to offer Ready Solutions for Big Data, a big-data-as-a-service (BDaaS) package for on-premises data centers. The offering brings together Dell EMC servers, storage, networking and services along with BlueData software, all optimized for big-data analytics.

IDG Contributor Network: Network engineers are from Mars, application engineers are from Venus

Application and network engineers see the world differently. Unfortunately, these differences often result in resentment, with each party keeping score. Recently, application engineers have encroached on networking in a much bigger way. Sadly, if technical history repeats itself, we will revisit many long-ago problems as application engineers rediscover wisdom long held by network engineers.

There are many areas of network engineering and application engineering with no overlap or contention. However, the number of overlapping areas is increasing as the roles of network and application engineers expand and evolve.

Application engineers will try to do anything they can with code. I've spoken to many network engineers who struggle to support multicast. When I ask them why they are using multicast, they nearly always say, "the application engineers chose it, because it's in the Unix Network Programming book." The Berkeley sockets programming interface permits using multicast, and application engineers then layer lost-packet recovery techniques on top to deliver files and real-time media over unicast and multicast. Berkeley sockets do not easily support VLANs, so VLANs have always been the sole property of the network engineer. Linux kernel network programming capabilities have in recent years become much more Continue reading
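As a concrete sketch of the Berkeley sockets multicast support mentioned above: joining a group comes down to packing an `ip_mreq` structure and handing it to `setsockopt`. The group address and port here are arbitrary examples, not values from any particular application:

```python
import socket
import struct

def make_membership_request(group: str, interface: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq struct: the multicast group plus the local interface."""
    return struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton(interface))

def open_multicast_receiver(group: str = "224.1.1.1",
                            port: int = 5007) -> socket.socket:
    """UDP socket bound to `port` and joined to `group` via IP_ADD_MEMBERSHIP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # The kernel joins the group on the given interface; the NIC then accepts
    # frames for the corresponding multicast MAC address.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock
```

Notice there is no socket option here for tagging a VLAN: VLAN membership is configured on the interface and the switch port, which is exactly why it has stayed in the network engineer's domain.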