W3C embraces DRM—puts itself on the wrong side of history

Last week, the World Wide Web Consortium (W3C)—the organization whose purpose is to standardize aspects of the "Web"—voted to endorse DRM on the web. It’s a move in direct opposition to the W3C's mission statement, and it puts the organization squarely on the wrong side of history.

Specifically, what the W3C is approving is a specification called Encrypted Media Extensions (EME)—an extension to existing HTML that makes implementing playback restrictions a "standard" across all web browsers.

Contradictory statements from the W3C

These sorts of restrictions (DRM) are, by definition, created for the sole purpose of making it harder for people to see, hear, or otherwise consume some piece of content—a movie, a song, a book, an image, etc.—often based on their hardware, software, or geographical location.

Ethernet Getting Back On The Moore’s Law Track

It would be ideal if we lived in a universe where it was possible to increase the capacity of compute, storage, and networking at the same pace, keeping all three elements expanding in balance. The irony is that over the past two decades, when the industry most needed networking to advance, Ethernet got a little stuck in the mud.

But Ethernet has pulled out of its boots, left them in the swamp, and is back to being barefoot on much more solid ground where it can run faster. The move from 10 Gb/sec

Ethernet Getting Back On The Moore’s Law Track was written by Timothy Prickett Morgan at The Next Platform.

Complexity and the Thin Waist

In recent years, we have become accustomed to—and often accosted by—the phrase software eats the world. It has become a mantra in the networking world that software defined is the future, full stop. This research paper from Microsoft, however, tells a different story. According to Baumann, hardware is the new software. Or, to put it differently, even as software eats the world, hardware is taking over an ever-increasing share of the functionality that software provides. In making this point, the paper also highlights the complexity problems involved in dissolving the thin waist of an architecture.

The specific example used in the paper is the Intel x86 Instruction Set Architecture (ISA). Many years ago, when I was a “youngster” in the information technology field, there were a number of different processor platforms; the processor wars raged in full. There was, primarily, the x86 platform from Intel, beginning with the 8086 and its subsequent generations: the 8088, 80286, 80386, then the Pentium, and so on. On the other side of the world were the RISC-based processors, the kind stuffed into Apple products, Cisco routers, and Sun SPARC workstations (like the one I used daily while in Cisco TAC). The argument Continue reading

ISOC Rough Guide to IETF 99: Internet Infrastructure Resilience

IETF 99 is next week in Prague, and I’d like to take a moment to discuss some of the interesting things happening there related to Internet infrastructure resilience in this installment of the Rough Guide to IETF 99.

Simple solutions sometimes have a huge impact. Take the simple requirement that “routes are neither imported nor exported unless specifically enabled by configuration”, as specified in the Internet draft “Default EBGP Route Propagation Behavior Without Policies”. The draft has been submitted to the IESG and is expected to be published as a Standards Track RFC soon.

Andrei Robachevsky

MIT IoT and wearable project foretells the future of industrial safety

The IoT in the commercial sector might better be called the Internet of Prototypes, the IoP. Few of the components for building the ubiquitous IoT that the future holds are available today. The best way to envision the future is by prototyping. Prototypes of mission-critical or high-ROI applications will tease money out of research budgets to build them. All the prototypes will lead to a greater understanding, and when the cost of the problem matches the development investment, the prototypes will become products. With cost reduction and standardization, products could become generalized, extensible platforms.

+ Also on Network World: How industrial IoT is making steel production smarter +

MIT built a fitting prototype that could, with further development, scale into a platform. A multidisciplinary team from the MIT Design Lab led by MIT Media Lab researcher Guillermo Bernal won best research paper at the Petra Conference last month for the team’s work applying IoT and wearables to industrial safety. The sophisticated and purpose-built prototype at the center of the research makes the paper “Safety++. Designing IoT and Wearable Systems for Industrial Safety through a User-Centered Design Approach” extremely tangible and predictive about how the IoT will unfold.

We created a culture of visionaries. Here’s how you can, too.

We’re both honored and thrilled to announce that Cumulus Networks has been recognized as a “Visionary” in the Gartner Magic Quadrant for Data Center Networking. You can download this highly anticipated report here and learn about other major trends in the industry.

So, what does it mean to be a visionary? According to Gartner, “Visionaries have demonstrated an ability to increase the features in their offerings to provide a unique and differentiated approach to the market. A visionary has innovated in one or more of the key areas of data center infrastructure, such as management (including virtualization), security (including policy enforcement), SDN and operational efficiency, and cost reductions.”

We couldn’t be happier to be recognized, and to us, it means our company vision has paid off. We’ve created a culture of visionaries through inquisitive, innovative and bold leadership, and these same traits are seen in both our philosophy and our technology. As more and more organizations embrace web-scale IT, we expect to keep pushing the technology forward — always striving for a better network.

With 96% of Gartner’s survey respondents finding open networking to be a relevant buying criterion, and with adoption of white-box switching expected to reach 22% by 2020, it’s Continue reading

Parameter Encoding on FPGAs Boosts Neural Network Efficiency

The key to creating more efficient neural network models is rooted in trimming and refining the many parameters in deep learning models without losing accuracy. Much of this work is happening on the software side, but devices like FPGAs that can be tuned for trimmed parameters are offering promising early results for implementation.

A team from UC San Diego has created a reconfigurable clustering approach to deep neural networks that encodes the parameters of the network according to the accuracy requirements and limitations of the platform—which are often bound by memory access bandwidth. Encoding the trimmed parameters in an FPGA resulted in

Parameter Encoding on FPGAs Boosts Neural Network Efficiency was written by Nicole Hemsoth at The Next Platform.
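
The excerpt above is cut off before the details, but the general idea behind this kind of parameter encoding can be sketched: cluster the trained weights into a small codebook so that each parameter is stored as a short index rather than a full-precision value, shrinking the memory traffic that typically bounds FPGA inference. The JavaScript below is a minimal, generic illustration (a simple 1-D k-means), not the UC San Diego team's method; the function name encodeWeights and all parameters are made up for the example.

// Generic sketch: cluster a weight vector into k centroids (a codebook) so each
// weight can be stored as a small index instead of a full-precision value.
// Illustration of the general parameter-encoding idea only, not the paper's method.
function encodeWeights(weights, k) {
  var lo = Math.min.apply(null, weights);
  var hi = Math.max.apply(null, weights);
  // initialize centroids evenly across the weight range
  var centroids = [];
  for (var i = 0; i < k; i++) centroids.push(lo + (i + 0.5) * (hi - lo) / k);
  var indices = new Array(weights.length);
  for (var iter = 0; iter < 10; iter++) {
    // assignment step: map each weight to its nearest centroid
    for (var j = 0; j < weights.length; j++) {
      var best = 0;
      for (var c = 1; c < k; c++) {
        if (Math.abs(weights[j] - centroids[c]) < Math.abs(weights[j] - centroids[best])) best = c;
      }
      indices[j] = best;
    }
    // update step: move each centroid to the mean of its assigned weights
    for (var c = 0; c < k; c++) {
      var sum = 0, n = 0;
      for (var j = 0; j < weights.length; j++) {
        if (indices[j] === c) { sum += weights[j]; n++; }
      }
      if (n > 0) centroids[c] = sum / n;
    }
  }
  // decode side: weight j is approximated by centroids[indices[j]]
  return { codebook: centroids, indices: indices };
}

In schemes like this, the small codebook can live in on-chip memory while only the narrow indices stream in from external memory, which is where the bandwidth savings come from.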

Using The Network To Break Down Server Silos

Virtual machines and virtual network functions, or VMs and VNFs for short, are the standard compute units in modern enterprise, cloud, and telecommunications datacenters. But varying VM and VNF resource needs as well as networking and security requirements often force IT departments to manage servers in separate silos, each with their own respective capabilities.

For example, some VMs or VNFs may require a moderate number of vCPU cores and lower I/O bandwidth, while VMs and VNFs associated with real-time voice and video, IoT, and telco applications require a moderate-to-high number of vCPU cores, rich networking services, and high I/O bandwidth,

Using The Network To Break Down Server Silos was written by Timothy Prickett Morgan at The Next Platform.

Real-time DDoS mitigation using sFlow and BGP FlowSpec

Remotely Triggered Black Hole (RTBH) Routing describes how native BGP support in the sFlow-RT real-time sFlow analytics engine can be used to blackhole traffic in order to mitigate a distributed denial of service (DDoS) attack. Black hole routing is effective, but there is significant potential for collateral damage since ALL traffic to the IP address targeted by the attack is dropped.

The BGP FlowSpec extension (RFC 5575: Dissemination of Flow Specification Rules) provides a method of transmitting traffic filters that selectively block the attack traffic while allowing normal traffic to pass. BGP FlowSpec support has recently been added to sFlow-RT and this article demonstrates the new capability.

This demonstration uses the test network described in Remotely Triggered Black Hole (RTBH) Routing. The network was constructed using free components: VirtualBox, Cumulus VX, and Ubuntu Linux. BGP FlowSpec on white box switch describes how to implement basic FlowSpec support on Cumulus Linux.

The following flowspec.js sFlow-RT script detects and blocks UDP-Based Amplification attacks:
// BGP peer, AS number, and control settings used by the script
var router = '10.0.0.141';
var id = '10.0.0.70';
var as = 65141;
var thresh = 1000;       // frames-per-second rate that triggers the 'attack' event
var block_minutes = 1;   // intended block duration, in minutes

// track flows keyed on destination IP and UDP source port, counting frames
setFlow('udp_target',{keys:'ipdestination,udpsourceport',value:'frames'});

// raise an 'attack' event when any individual flow exceeds the threshold
setThreshold('attack',{metric:'udp_target', value:thresh, byFlow:true});

// open a BGP session to the router with the FlowSpec capability enabled
bgpAddNeighbor(router,as,id,{flowspec:true});

var Continue reading
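
The listing is truncated above. As a rough sketch of how the rest of such a script might respond to the 'attack' events, the fragment below pushes a FlowSpec rule matching the attacked destination and UDP source port, then withdraws it after block_minutes. The event and interval handler hooks (setEventHandler, setIntervalHandler) are part of sFlow-RT's embedded scripting API, but the FlowSpec helper names (bgpAddFlowspec, bgpRemoveFlowspec), the evt.flowKey field, and the rule format shown are assumptions for illustration, not taken from the original flowspec.js.

// Sketch only: push a drop rule for each attacking flow, remove it after block_minutes.
var controls = {};

setEventHandler(function(evt) {
  var key = evt.flowKey;                          // 'ipdestination,udpsourceport'
  if(controls[key]) return;                       // already blocking this flow
  var parts = key.split(',');
  var rule = {
    match: { destination: parts[0], protocol: '=17', 'source-port': '=' + parts[1] },
    then: { 'traffic-rate': 0 }                   // drop matching traffic
  };
  controls[key] = Date.now();
  bgpAddFlowspec(router, key, rule);              // assumed helper name
}, ['attack']);

setIntervalHandler(function() {
  var now = Date.now();
  for(var key in controls) {
    if(now - controls[key] > 60000 * block_minutes) {
      bgpRemoveFlowspec(router, key);             // assumed helper name
      delete controls[key];
    }
  }
});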

48% off Anker 15W Dual USB Solar Charger – Deal Alert

This solar charger from Anker delivers the fastest possible charge, up to 2.1 amps, under direct sunlight. Its 15-watt SunPower solar array provides enough power to charge two devices simultaneously. Industrial-strength PET polymer-faced solar panels are sewn into a rugged polyester canvas for weather-resistant outdoor durability. Clip it to your backpack, or attach it to your tent or a tree. The charger currently averages 4.3 out of 5 stars from over 340 people on Amazon (read reviews), where its typical list price of $79.99 has been reduced 48% to $41.99. See this deal on Amazon.

NEC claims new vector processor speeds data processing 50-fold

It seems more vendors are looking beyond the x86 architecture for the big leaps in performance needed to power things like artificial intelligence (AI) and machine learning. Google and IBM have their own processor projects, Nvidia and AMD are positioning their GPUs as an alternative, and now Japan’s NEC has announced a vector processor that accelerates data processing by more than a factor of 50 compared to the Apache Spark cluster-computing framework.

+ Also on Network World: NVM Express spec updated for data-intensive operations +

The company said its vector processor, called the Aurora Vector Engine, leverages “sparse matrix” data structures to accelerate processor performance in executing machine learning tasks. Vector-based computers are basically supercomputers built specifically to handle large scientific and engineering calculations. Cray used to build them in previous decades before shifting to x86 processors.
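
The blurb doesn't spell out what a "sparse matrix" data structure buys you: many machine learning matrices are mostly zeros, so storing and multiplying only the non-zero entries saves both memory bandwidth and arithmetic. The JavaScript below is a minimal, generic sketch of compressed sparse row (CSR) storage to illustrate that idea; it is not NEC's format, and the function name csrMatVec is made up for the example.

// Generic sketch: compressed sparse row (CSR) keeps only non-zero values, so a
// matrix-vector product touches far less memory than a dense layout would.
// values/colIdx hold the non-zeros and their columns; rowPtr marks each row's slice.
function csrMatVec(values, colIdx, rowPtr, x) {
  var y = [];
  for (var row = 0; row + 1 < rowPtr.length; row++) {
    var sum = 0;
    for (var k = rowPtr[row]; k < rowPtr[row + 1]; k++) {
      sum += values[k] * x[colIdx[k]];     // only stored non-zeros are visited
    }
    y.push(sum);
  }
  return y;
}

// Example: the 2x3 matrix [[0,2,0],[3,0,4]] is values=[2,3,4], colIdx=[1,0,2], rowPtr=[0,1,3];
// csrMatVec([2,3,4], [1,0,2], [0,1,3], [1,1,1]) returns [2,7].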