Everything you need to know about 10G, the future of broadband technology

With the emergence of more connected devices and immersive content experiences, alongside an unparalleled boom in video streaming platforms, there’s never been a more critical time to have a powerful and reliable network that can meet the demands of the future. And that next big leap for digital mankind is 10G.

What is 10G?

Not to be confused with the cellular industry’s 5G (meaning fifth generation), the “G” in 10G means gigabits per second, a blazing fast internet speed. 10G is the cable industry’s vision for delivering a remarkable 10 gigabits per second to homes in the U.S. and around the globe. Home internet speeds will be 10 times faster than today’s fastest broadband networks, staying ahead of your digital demands, especially as technology becomes even more integral in our day-to-day lives.

Oracle Chases A Huge HPC Opportunity In The Cloud

A few years ago, about two dozen Oracle employees began work in downtown office space in Seattle to start mapping out how to make the enterprise software giant and occasional system maker a competitive player in a crowded public cloud space that is dominated by the likes of Amazon Web Services and Microsoft Azure, and also includes a host of other high-profile companies, from Google to Salesforce to IBM.

Oracle Chases A Huge HPC Opportunity In The Cloud was written by Jeffrey Burt at The Next Platform.

Observability in Data Center Networks


Observability in Data Center Networks: In this session, you’ll learn how the sFlow protocol provides broad visibility in modern data center environments as they migrate to highly meshed topologies. Our data center workloads are shifting to take advantage of higher speeds and bandwidth, so visibility into east-west traffic within the data center is becoming more important. Join Peter Phaal—one of the inventors of sFlow—and Joe Reves from SolarWinds product management as they discuss how sFlow differs from other flow instrumentation to deliver visibility in the switching fabric.
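To make the mechanism concrete: sFlow agents in switches export sampled traffic as UDP datagrams (conventionally to port 6343) that a collector decodes. Here is a minimal sketch of decoding the fixed header of an sFlow v5 datagram; the field layout follows the sFlow v5 spec, while the function name and return format are my own.

```python
import socket
import struct

def parse_sflow_v5_header(datagram: bytes) -> dict:
    """Decode the fixed header of an sFlow v5 datagram (IPv4 agent)."""
    # XDR encoding is big-endian; the header starts with the protocol
    # version and the agent address type (1 = IPv4, 2 = IPv6).
    version, addr_type = struct.unpack_from(">II", datagram, 0)
    if version != 5 or addr_type != 1:
        raise ValueError("not an sFlow v5 datagram with an IPv4 agent")
    agent_ip = socket.inet_ntoa(datagram[8:12])
    sub_agent, seq, uptime_ms, nsamples = struct.unpack_from(">IIII", datagram, 12)
    return {"agent": agent_ip, "sub_agent": sub_agent, "seq": seq,
            "uptime_ms": uptime_ms, "samples": nsamples}
```

A real collector would go on to decode the flow and counter sample records that follow this header; the sketch covers only the envelope.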
THWACKcamp is SolarWinds’ free, annual, worldwide virtual IT learning event connecting thousands of skilled IT professionals with industry experts and SolarWinds technical staff. This video was one of the sessions.

Qualcomm-Backed Particle Banks $40M to Expand IoT Platform

“We have the largest developer community in the [IoT] industry. Almost 200,000 folks build their...

© SDxCentral, LLC. Use of this feed is limited to personal, non-commercial use and is governed by SDxCentral's Terms of Use (https://www.sdxcentral.com/legal/terms-of-service/). Publishing this feed for public or commercial use and/or misrepresentation by a third party is prohibited.

Day Two Cloud 021: Nice Design; We Need To Change It – The Reality Of Building A Cloud Service

How do technical implementation and user feedback shape a cloud-based solution? When is it time to make a significant change in your design? And how do you know you’re headed in the right direction? This Day Two Cloud podcast episode tackles these questions with guest Michael Fraser, co-founder and CEO of Refactr.

The post Day Two Cloud 021: Nice Design; We Need To Change It – The Reality Of Building A Cloud Service appeared first on Packet Pushers.

The TLS Post-Quantum Experiment

In June, we announced a wide-scale post-quantum experiment with Google. We implemented two post-quantum (i.e., not yet known to be broken by quantum computers) key exchanges, integrated them into our TLS stack and deployed the implementation on our edge servers and in Chrome Canary clients. The goal of the experiment was to evaluate the performance and feasibility of deployment in TLS of two post-quantum key agreement ciphers.

In our previous blog post on post-quantum cryptography, we described differences between those two ciphers in detail. In case you didn’t have a chance to read it, we include a quick recap here. One characteristic of post-quantum key exchange algorithms is that the public keys are much larger than those used by "classical" algorithms. This will have an impact on the duration of the TLS handshake. For our experiment, we chose two algorithms: isogeny-based SIKE and lattice-based HRSS. The former has short key sizes (~330 bytes) but has a high computational cost; the latter has larger key sizes (~1100 bytes), but is a few orders of magnitude faster.
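As a rough illustration of that size difference, here is the extra payload each scheme adds relative to a classical X25519 public key, the common 32-byte baseline (all sizes approximate, and this ignores the other fields a real TLS handshake carries):

```python
# Approximate public-key sizes in bytes. The SIKE and HRSS figures are the
# rough numbers quoted above; X25519 is the classical baseline.
KEY_SIZES = {"X25519": 32, "SIKE": 330, "HRSS": 1100}

baseline = KEY_SIZES["X25519"]
for name in ("SIKE", "HRSS"):
    extra = KEY_SIZES[name] - baseline
    print(f"{name}: ~{KEY_SIZES[name]} B public key, ~{extra} B more than X25519")
```

Even HRSS’s ~1 KB keys fit comfortably in a handshake flight; the interesting trade-off, as the experiment measures, is size against computational cost.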

During NIST’s Second PQC Standardization Conference, Nick Sullivan presented our approach to this experiment and some initial results. Quite accurately, Continue reading

Learning certifiably optimal rule lists for categorical data

Learning certifiably optimal rule lists for categorical data Angelino et al., JMLR 2018

Today we’re taking a closer look at CORELS, the Certifiably Optimal RulE ListS algorithm that we encountered in Rudin’s arguments for interpretable models earlier this week. We’ve been able to create rule lists (decision trees) for a long time, e.g. using CART, C4.5, or ID3, so why do we need CORELS?
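For readers who haven’t met the model class: a rule list is an ordered sequence of if-then rules over categorical features, ending in a default prediction. A minimal sketch of evaluating one (the rules and feature values here are invented for illustration; CORELS itself searches for the provably optimal such list):

```python
# A toy rule list: check each rule in order, return the label of the first
# rule whose conditions all match, else fall through to the default.
RULE_LIST = [
    ({"age": "18-25", "priors": ">3"}, 1),
    ({"priors": ">3"}, 1),
    ({"age": ">45"}, 0),
]
DEFAULT = 0

def predict(x, rules=RULE_LIST, default=DEFAULT):
    """Return the prediction of the first rule whose conditions all hold for x."""
    for conditions, label in rules:
        if all(x.get(feature) == value for feature, value in conditions.items()):
            return label
    return default

print(predict({"age": "18-25", "priors": ">3"}))  # first rule fires -> 1
print(predict({"age": "30-45", "priors": "0"}))   # no rule fires -> default 0
```

The whole model is readable top to bottom, which is what makes rule lists attractive for the high-stakes settings Rudin discusses.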

…despite the apparent accuracy of the rule lists generated by these algorithms, there is no way to determine either if the generated rule list is optimal or how close it is to optimal, where optimality is defined with respect to minimization of a regularized loss function. Optimality is important, because there are societal implications for lack of optimality.

Rudin proposed a public policy that, for high-stakes decisions, no black-box model should be deployed when a competitive interpretable model exists. For the class of logic problems addressable by CORELS, CORELS’ guarantees provide a technical foundation for such a policy:

…we would like to find both a transparent model that is optimal within a particular pre-determined class of models and produce a certificate of its optimality, with respect Continue reading

The ease and importance of scaling in the enterprise

Networks are growing, and growing fast. As enterprises adopt IoT and mobile clients, VPN technologies, virtual machines (VMs), and massively distributed compute and storage, the number of devices—as well as the amount of data being transported over their networks—is rising at an explosive rate. It’s becoming apparent that traditional, manual ways of provisioning don’t scale. Something new is needed, and for that we look toward the hyperscalers: companies like Google, Amazon, and Microsoft, which have been dealing with huge networks almost since the very beginning.

The traditional approach to IT operations has focused on one server or container at a time. Attempts at management at scale frequently mean being locked into a single vendor’s infrastructure and technologies. Unfortunately, today’s enterprises are finding that even the expensive, proprietary management solutions provided by the vendors who have long supported traditional IT practices simply cannot scale, especially given the rapid growth of containerization and VMs that enterprises are now dealing with.

In this blog post, I’ll take a look at how an organization can use open, scalable network technologies—those first created or adopted by the aforementioned hyperscalers—to reduce growing pains. These issues are increasingly relevant as new Continue reading

Watson IoT chief: AI can broaden IoT services

IBM thrives on the complicated, asset-intensive part of the enterprise IoT market, according to Kareem Yusuf, GM of the company’s Watson IoT business unit. From helping seaports manage shipping traffic to keeping technical knowledge flowing within an organization, Yusuf said that the idea is to teach artificial intelligence to provide insights from the reams of data generated by such complex systems.

Predictive maintenance is probably the headliner in terms of use cases around asset-intensive IoT, and Yusuf said that it’s a much more complicated task than many people might think. It isn’t simply a matter of monitoring, say, pressure levels in a pipe somewhere and throwing an alert when they move outside of norms. It’s about aggregating information on failure rates and asset planning, so that a company can have replacements and contingency plans ready for potential failures.

Getting the Unified Cloud Experience

Learn how Lenovo Open Cloud (LOC) provides cloud deployment and cloud management services, and...

Adaptiv Delivers SD-WAN to SkySwitch uCaaS Customers

By leveraging Adaptiv Networks' SD-WAN, SkySwitch aims to capitalize on small-to-medium size...

Understanding Kubernetes Security on Docker Enterprise 3.0

This is a guest post by Javier Ramírez, Docker Captain and IT Architect at Hopla Software. You can follow him on Twitter @frjaraur or on Github.

Docker began including Kubernetes with Docker Enterprise 2.0 last year. The recent 3.0 release includes CNCF Certified Kubernetes 1.14, which has many additional security features. In this blog post, I will review Pod Security Policies and Admission Controllers.

What are Kubernetes Pod Security Policies?

Pod Security Policies are rules created in Kubernetes to control security in pods. A pod will only be scheduled on a Kubernetes cluster if it passes these rules. These rules are defined in the “PodSecurityPolicy” resource and allow us to manage host namespace and filesystem usage, as well as privileged pod features. We can use the PodSecurityPolicy resource to make fine-grained security configurations, including:

  • Privileged containers.
  • Host namespaces (IPC, PID, network, and ports).
  • Host paths, their permissions, and volume types.
  • The user and group for container process execution, and setuid capabilities inside the container.
  • Changes to default container capabilities.
  • The behaviour of Linux security modules.
  • Host kernel configuration via sysctl.
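Tying several of those knobs together, a restrictive policy might look like the following manifest. The policy name and specific choices here are illustrative; the resource schema is Kubernetes’ policy/v1beta1 PodSecurityPolicy.

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example        # illustrative name
spec:
  privileged: false               # no privileged containers
  allowPrivilegeEscalation: false
  hostNetwork: false              # deny all host namespaces
  hostIPC: false
  hostPID: false
  requiredDropCapabilities:
    - ALL                         # drop every default capability
  runAsUser:
    rule: MustRunAsNonRoot        # forbid running as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                        # allow only non-host-path volume types
    - configMap
    - secret
    - emptyDir
```

A pod that requests anything the policy forbids (a host path, a privileged container, running as root) is simply not admitted to the cluster.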

The Docker Universal Control Plane (UCP) 3.2 provides two Pod Security Policies by default – which is helpful Continue reading