Full Stack Journey 15 features the ever-informative Dr. J Metz. We talk about storage, technology changes in storage, and what these changes mean for IT professionals.
The post Full Stack Journey 015: J Metz appeared first on Packet Pushers.
Distributed telecommunications cloud environments offer service providers a way to more quickly, efficiently, and cost-effectively deliver services to end users, but they come with their share of complexity, management headaches, integration challenges, and the difficulty of coordinating operations among multiple cloud vendors.
In a recent survey by Juniper Networks, service providers cited a lack of visibility into all parts of the network cloud as the most difficult challenge they face as they migrate to the cloud, and more than half of respondents said they use two or more cloud vendors in their distributed environments, adding to the complexity and the lack …
Juniper Dons Red Hat To Ease Cloud Migration was written by Jeffrey Burt at The Next Platform.
See how this free tool can help identify applications or processes linked to packets in your trace.
Third vendor in this year’s series of data center switching updates: Cisco.
As expected, Cisco launched a number of new switches in 2017 and EOL'd older models … for pretty varying values of "old." For example, most of the original Nexus 9300 models are gone.
Read more ...
The Internet Society is excited to announce that four workshops will be held in conjunction with the upcoming Network and Distributed System Security (NDSS) Symposium on 18 February 2018 in San Diego, CA. The workshop topics this year are:
A quick overview of each of the workshops is provided below. Submissions are currently being accepted for emerging research in each of these areas. Watch for the final program details in early January!
The first workshop is a new one this year on Binary Analysis Research (BAR). It explores the reinvigorated field of binary code analysis in light of the proliferation of interconnected embedded devices. In recent years there has been a rush to develop binary analysis frameworks, but this has occurred in a mostly uncoordinated manner, with researchers meeting on an ad-hoc basis or working in obscurity and isolation. As a result, there is little sharing of results and little solution reuse among tools. The importance of formalized and properly vetted methods and tools for binary code analysis in order to deal with the scale of growth in these interconnected embedded devices cannot be overstated. Continue reading
As we approach IETF 100 in Singapore next week, this post in the Rough Guide to IETF 100 has much progress to report in the world of Internet Infrastructure Resilience. After several years of hard work, the last major deliverable of the Secure Inter-Domain Routing (SIDR) WG is done – RFC 8205, the BGPsec Protocol Specification, was published in September 2017 as a standard. BGPsec is an extension to the Border Gateway Protocol (BGP) that provides security for the path of autonomous systems (ASes) through which a BGP update message propagates.
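As a conceptual illustration only (not the RFC 8205 wire format or cryptography, which relies on ECDSA keys published in the RPKI), the hypothetical Python sketch below shows the chaining idea: each AS signs over the prefix, the AS it announces to, and the signature it received from the previous AS, so altering any ASN in the path invalidates the chain. HMAC stands in for a real signature purely to keep the example self-contained; all names and keys are invented.

```python
# Conceptual sketch of BGPsec-style chained path signing.
# NOT the real protocol: real BGPsec uses ECDSA signatures with keys
# published in the RPKI and a precise wire format (RFC 8205). HMAC is a
# stand-in for a signature, purely for illustration.
import hashlib
import hmac

# Hypothetical per-AS secrets standing in for private keys.
AS_KEYS = {65001: b"key-65001", 65002: b"key-65002", 65003: b"key-65003"}


def sign_hop(asn: int, target_asn: int, prev_sig: bytes, prefix: str) -> bytes:
    """Each AS 'signs' the prefix, the AS it announces to, and the
    signature it received, chaining the hops together."""
    msg = f"{prefix}|{asn}->{target_asn}|".encode() + prev_sig
    return hmac.new(AS_KEYS[asn], msg, hashlib.sha256).digest()


def validate_path(prefix: str, path) -> bool:
    """Recompute each hop's signature; any modified ASN breaks the chain."""
    prev_sig = b""
    for asn, target_asn, sig in path:
        expected = sign_hop(asn, target_asn, prev_sig, prefix)
        if not hmac.compare_digest(expected, sig):
            return False
        prev_sig = sig
    return True


if __name__ == "__main__":
    prefix = "192.0.2.0/24"
    # AS 65001 originates the route to 65002, which forwards it to 65003.
    s1 = sign_hop(65001, 65002, b"", prefix)
    s2 = sign_hop(65002, 65003, s1, prefix)
    print(validate_path(prefix, [(65001, 65002, s1), (65002, 65003, s2)]))  # True
```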
There are seven RFCs in the suite of BGPsec specifications:
You can read more Continue reading
When Hewlett Packard Enterprise bought supercomputer maker SGI back in August 2016 for $275 million, it had already invested years in creating its own "DragonHawk" chipset to build big-memory Superdome X systems that were to be the follow-ons to its PA-RISC and Itanium Superdome systems. The Superdome X machines did not support HPE's own VMS or HP-UX operating systems, but the venerable Tandem NonStop fault-tolerant distributed database platform was put on the road to Intel's Xeon processors four years ago.
Now, HPE is making another leap, as we suspected it would, and anointing the SGI UV-300 platform as its …
HPE’s Superdome Gets An SGI NUMAlink Makeover was written by Timothy Prickett Morgan at The Next Platform.
News summary: Dell EMC expands its No. 1 market-leading midrange storage portfolio with new SC All-Flash appliances offering premium performance and data services to help accelerate data center modernization. Free software upgrades for Dell EMC Unity customers include inline data deduplication, synchronous file replication, and the ability to perform online "data-in-place" storage controller upgrades. New Future-Proof... Read more →
The cards target data center deployments including cloud, security, and NFV.
The initial ONAP Amsterdam release is expected this month.
This is a liveblog of the session titled “How to deploy 800 nodes in 8 hours automatically”, presented by Tao Chen with T2Cloud (Tencent).
Chen takes a few minutes, as is customary, to provide an overview of his employer, T2cloud, before getting into the core of the session's content. Chen explains that the need to deploy such a large number of servers was driven in large part by the Spring Festival travel rush, an annual event that creates high traffic load for about 40 days.
The “800 servers” count included 3 controller nodes, 117 storage nodes, and 601 compute nodes, along with some additional bare metal nodes supporting Big Data workloads. All these nodes needed to be deployed in 8 hours or less in order to allow enough time for T2cloud’s customer, China Railway Corporation, to test and deploy applications to handle the Spring Festival travel rush.
To help with the deployment, T2cloud developed a "DevOps" platform consisting of six subsystems: CMDB, OS installation, OpenStack deployment, task management, automation testing, and health check/monitoring. Chen doesn't go into great detail about any of these subsystems, but the slide he shows does give away some information:
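The slide's specifics weren't captured in this liveblog, but purely as a rough mental model, here is a hypothetical Python sketch of how such a staged pipeline could be organized. The subsystem names come from the talk; every class, function, and hostname below is invented for illustration and is not T2cloud's actual tooling (task management is represented by the loop that walks the fleet through the stages).

```python
# Hypothetical sketch of a staged deployment pipeline modeled on the six
# subsystems named in the talk; none of this is T2cloud's actual code.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Node:
    hostname: str
    role: str                                  # "controller", "storage", "compute", ...
    completed: List[str] = field(default_factory=list)


# Toy stage implementations: each one just records that it ran.
def register_in_cmdb(node: Node) -> None: node.completed.append("cmdb")
def install_os(node: Node) -> None: node.completed.append("os")
def deploy_openstack(node: Node) -> None: node.completed.append("openstack")
def run_tests(node: Node) -> None: node.completed.append("tests")
def health_check(node: Node) -> None: node.completed.append("health")


STAGES: List[Callable[[Node], None]] = [
    register_in_cmdb, install_os, deploy_openstack, run_tests, health_check,
]


def deploy(fleet: List[Node]) -> List[Node]:
    """Task management: push every node through the stages in order,
    collecting failures so only failed nodes need to be retried."""
    failed = []
    for node in fleet:
        try:
            for stage in STAGES:
                stage(node)
        except Exception:
            failed.append(node)
    return failed


if __name__ == "__main__":
    fleet = [Node(f"compute-{i:03d}", "compute") for i in range(601)]
    print(f"{len(deploy(fleet))} nodes failed")  # 0 in this toy run
```

Tracking failures per node rather than aborting the whole batch is one plausible way to keep an eight-hour window achievable at this scale.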
5G use cases will rely heavily on multi-access edge computing.