Why IBM’s a Fan of Mellanox’s Innova-2 Network Adapter
The cards target data center deployments including cloud, security, and NFV.
The initial ONAP Amsterdam release is expected this month.
This is a liveblog of the session titled “How to deploy 800 nodes in 8 hours automatically”, presented by Tao Chen with T2Cloud (Tencent).
Chen takes a few minutes, as is customary, to provide an overview of his employer, T2Cloud, before getting into the core of the session’s content. Chen explains that the need to deploy such a large number of servers was driven in large part by the Spring Festival travel rush, an annual surge in travel that creates high traffic load for about 40 days.
The “800 servers” count included 3 controller nodes, 117 storage nodes, and 601 compute nodes, along with some additional bare-metal nodes supporting Big Data workloads. All these nodes needed to be deployed in 8 hours or less to leave enough time for T2Cloud’s customer, China Railway Corporation, to test and deploy applications to handle the Spring Festival travel rush.
To help with the deployment, T2Cloud developed a “DevOps” platform consisting of six subsystems: CMDB, OS installation, OpenStack deployment, task management, automated testing, and health check/monitoring. Chen doesn’t go into great detail about any of these subsystems, but the slide he shows does give away some information.
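Chen doesn’t describe the platform internals, but the arithmetic makes the case for parallelism: at even five minutes per node, deploying the 601 compute nodes serially would take roughly 50 hours. Below is a minimal sketch of the kind of parallel fan-out the task-management subsystem might perform; every name in it is hypothetical, not taken from the talk.

```python
# Hypothetical sketch of parallel node deployment; nothing here
# beyond the node count comes from Chen's talk.
from concurrent.futures import ThreadPoolExecutor, as_completed

def deploy_node(node: str) -> str:
    """Placeholder for the real work: install the OS on `node`,
    run the OpenStack deployment steps, then a health check."""
    # ... PXE-boot, configure, verify ...
    return f"{node}: ok"

nodes = [f"compute-{i:03d}" for i in range(601)]  # the 601 compute nodes

# Serial deployment at ~5 minutes/node would take ~50 hours, so the
# work has to fan out; 100 workers brings it under the 8-hour window.
with ThreadPoolExecutor(max_workers=100) as pool:
    futures = {pool.submit(deploy_node, n): n for n in nodes}
    for future in as_completed(futures):
        print(future.result())
```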
5G use cases will rely heavily on multi-access edge computing.
This is a liveblog of the OpenStack Summit Sydney session titled “IPv6 Primer for Deployments”, led by Trent Lloyd from Canonical. IPv6 is a topic with which I know I need to get more familiar, so attending this session seemed like a reasonable approach.
Lloyd starts with some history. IPv4 was released in 1980 and uses 32-bit addresses (a total address space of around 4 billion); as most people know, IPv4 still carries the majority of Internet traffic. IPv6 was released in 1998 and uses 128-bit addresses (a theoretical total address space of 3.4 x 10^38). IPv5 was an experimental protocol, which is why the IETF used version 6 for the next production version of the IP protocol.
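Both address-space figures are just powers of two; a quick worked check (my arithmetic, not a slide from the session):

```python
# Address spaces quoted above, computed directly.
ipv4_space = 2 ** 32    # 4,294,967,296, i.e. "around 4 billion"
ipv6_space = 2 ** 128   # roughly 3.4 x 10^38

print(f"IPv4: {ipv4_space:,}")
print(f"IPv6: {ipv6_space:.3e}")                           # 3.403e+38
print(f"IPv6/IPv4 ratio: {ipv6_space // ipv4_space:.2e}")  # ~7.92e+28
```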
Lloyd shows a graph of the depletion of the IPv4 address space to help attendees better understand the state of IPv4 address allocation. The next graph illustrates IPv6 adoption, which, according to Google, is now running around 20%. (Lloyd shared that he had naively estimated IPv4 would be deprecated by 2010.) In Australia, Lloyd notes, it’s still pretty difficult to get IPv6 support.
Next, Lloyd reviews decimal and…
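Since the review turns to notation, here is a short illustration of IPv6’s hexadecimal notation using Python’s standard ipaddress module (my example, not Lloyd’s):

```python
# IPv6 addresses are eight groups of 16-bit hexadecimal fields;
# runs of zero groups compress to "::".
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr)           # 2001:db8::1  (compressed form)
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001
print(int(addr))      # the same address as a plain decimal integer

net = ipaddress.IPv6Network("2001:db8::/64")
print(net.num_addresses)  # 2**64 addresses in a standard /64 subnet
```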
This is the first liveblog from day 2 of the OpenStack Summit in Sydney, Australia. The title of the session is “Battle Scars from OpenStack Deployments.” The speakers are Anupriya Ramraj, Rick Mathot, and Farhad Sayeed (two vendors and an end-user, respectively, if my information is correct). I’m hoping for some useful, practical, real-world information out of this session.
Ramraj starts the session, introducing the speakers and setting some context for the presentation. Ramraj and Mathot are with DXC, a managed services provider. Ramraj opens with a quick review of some of the tough battles encountered in OpenStack deployments.
Ramraj recommends using an OpenStack distribution rather than “pure” upstream OpenStack, and recommends using new-ish hardware as opposed to older hardware. Given the challenges just reviewed, rolling out OpenStack and resolving OpenStack issues can be complicated. A lack of DevOps skills and a lack of understanding of OpenStack APIs can impede the process of porting applications…
Much of the technology comes from recent cloud security acquisitions.
Kubernetes is the top container platform of OpenStack users.
Security and familiarity were cited as tipping the choice in favor of VMs.
Juniper realigns workforce; Dell EMC and Nutanix fight for No. 1 HCI spot; Cisco's $1.9B purchase.
Red Hat is no stranger to Linux containers, considering the work its engineers have done in creating the OpenShift application development and management platform.
As The Next Platform has noted over the past couple of years, Red Hat has rapidly expanded the capabilities within OpenShift for developing and deploying Docker containers and managing them with the open source Kubernetes orchestrator, culminating with OpenShift 3.0, which was based on Kubernetes and Docker containers. It has continued to enhance the platform since. Most recently, Red Hat in September launched OpenShift Container Platform 3.6, which added upgraded security features and more consistency across …
Red Hat Wraps OpenStack In Containers was written by Jeffrey Burt at The Next Platform.
In 2003, the world of network engineering was far different than it is today. For instance, EIGRP was still being implemented on the basis of its ability to support multi-protocol routing. SONET and other optical technologies were just starting to come into their own, and all-optical switching was just beginning to be considered for large-scale deployment. What Hartley says of history holds true when looking back at what seems like a former age: “The past is a foreign country; they do things differently there.”
In the midst of this change, the Association for Computing Machinery (the ACM) published a paper entitled “Will IP really take over the world (of communications)?” This paper, written during the ongoing debate within the engineering community over whether packet switching or circuit switching is the “better” technology, provides a lot of insight into the thinking of the time. Specifically, as the authors say, the belief that IP is better:
…is based on our collective belief that packet switching is inherently superior to circuit switching because of the efficiencies of statistical multiplexing, and the ability of IP to route around failures. It is widely assumed that IP is simpler than circuit switching…
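To make the statistical-multiplexing argument concrete, here is a toy simulation (my sketch, not from the ACM paper): 100 bursty sources, each active 10% of the time at 10 Mbps, need far less shared packet-switched capacity than the 1,000 Mbps a circuit switch would have to reserve for their combined peak.

```python
# Toy model: peak-rate circuit reservation vs. shared packet link.
import random

random.seed(42)
SOURCES, PEAK_MBPS, DUTY_CYCLE, SAMPLES = 100, 10, 0.10, 10_000

# Circuit switching reserves every source's peak rate up front.
circuit_capacity = SOURCES * PEAK_MBPS  # 1000 Mbps

# Packet switching shares one link; sample the actual offered load.
loads = [
    sum(PEAK_MBPS for _ in range(SOURCES) if random.random() < DUTY_CYCLE)
    for _ in range(SAMPLES)
]
loads.sort()
print(f"circuit reservation: {circuit_capacity} Mbps")
print(f"mean offered load:   {sum(loads) / SAMPLES:.0f} Mbps")   # ~100
print(f"99th-pct load:       {loads[int(SAMPLES * 0.99)]} Mbps") # ~170
```

A link sized for the 99th-percentile load is several times smaller than the circuit reservation, which is exactly the efficiency the quote appeals to.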
If accepted, it would create a WiFi, automotive, and location chip powerhouse.
The technology comes from HPE's $275 million SGI acquisition last year.

This is a guest post by Russell Sullivan, founder and CTO of Kuhirō.
Serverless is an emerging Infrastructure-as-a-Service solution poised to become an Internet-wide, ubiquitous compute platform. In 2014, Amazon Lambda started the Serverless wave, and in the few years since, Serverless has extended to the CDN edge and beyond the last mile to mobile, IoT, and storage.
This post examines recent innovations in Serverless at the CDN Edge (SAE). SAE is a sea change, a really big deal: it marks the beginning of moving business logic from a single cloud region out to the edges of the Internet, and it may eventually penetrate as far as servers running inside cell phone towers. When 5G arrives, SAE will be only a few milliseconds away from billions of devices, and the Internet will be transformed into a global-scale, real-time compute platform.
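As a concrete, if hypothetical, illustration: an edge function is just a small handler deployed to every edge node, in the general style Lambda popularized. The handler name and event fields below are illustrative assumptions, not any particular vendor’s API.

```python
# Hypothetical edge-function handler; the event shape is assumed,
# not taken from any real CDN provider's API.
import json

def handler(event: dict, context: object) -> dict:
    """Runs on a CDN edge node a few milliseconds from the user,
    instead of in a distant cloud region."""
    country = event.get("viewer_country", "US")  # assumed field
    # Business logic at the edge: localize the response without a
    # round trip to the origin region.
    body = {"greeting": "g'day" if country == "AU" else "hello"}
    return {
        "status": 200,
        "headers": {"content-type": "application/json"},
        "body": json.dumps(body),
    }

# Local smoke test with a synthetic event:
print(handler({"viewer_country": "AU"}, None))
```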
The journey of being a founder and then selling a NOSQL company, architecting three different NOSQL data-stores along the way, led me to realize that computation is currently confined to either the data center or the device: the vast space between the two is largely untapped. So I teamed up with some…