IPv6 Primer for Deployments

This is a liveblog of the OpenStack Summit Sydney session titled “IPv6 Primer for Deployments”, led by Trent Lloyd from Canonical. IPv6 is a topic with which I know I need to get more familiar, so attending this session seemed like a reasonable approach.

Lloyd starts with some history. IPv4 was released in 1980 and uses 32-bit addresses (for a total address space of around 4 billion). IPv4, as most people know, is still used for the majority of Internet traffic. IPv6 was released in 1998 and uses 128-bit addresses (for a theoretical total address space of about 3.4 x 10^38). IPv5 was an experimental protocol, which is why the IETF used IPv6 as the version number for the next production version of the IP protocol.
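
As a quick sanity check on those figures (plain arithmetic, not something from the talk), the address-space sizes follow directly from the address widths; a minimal sketch in Python:

```python
# Address space is 2 raised to the number of address bits.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4: {ipv4_space:,}")    # 4,294,967,296 -- about 4.3 billion
print(f"IPv6: {ipv6_space:.2e}")  # 3.40e+38
```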

Lloyd shows a graph of IPv4 address-space depletion to help attendees better understand the situation with IPv4 address allocation. The next graph Lloyd shows illustrates IPv6 adoption, which, according to Google, is now running around 20% or so. (Lloyd shared that he naively estimated IPv4 would be deprecated in 2010.) In Australia it’s still pretty difficult to get IPv6 support, according to Lloyd.

Next, Lloyd reviews decimal and Continue reading
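
The excerpt cuts off here, but the review presumably contrasts IPv4’s dotted-decimal notation with IPv6’s hexadecimal, zero-compressed notation. A minimal sketch of the difference, assuming nothing beyond Python’s standard ipaddress module (my illustration, not Lloyd’s slides):

```python
import ipaddress

# IPv4: 32 bits written as four decimal octets.
v4 = ipaddress.ip_address("192.0.2.1")
print(v4, int(v4))    # 192.0.2.1 3221225985

# IPv6: 128 bits written as eight hexadecimal groups,
# with leading zeros and runs of zero groups compressed.
v6 = ipaddress.ip_address("2001:db8::1")
print(v6.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
print(v6.compressed)  # 2001:db8::1
```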

Fujitsu, NetApp partner for hyperconverged systems

Smaller players in the IT hardware space can often be overlooked because dominant players cast such a long shadow. So as someone who roots for the underdog, I do enjoy shining a little light on an overlooked bit of news. Fujitsu is not the first name in data center hardware here in the U.S.; its primary place of business is its native Japan. For example, it built the RIKEN supercomputer, one of the 10 fastest in the world. But it has some good hardware offerings, such as its Primergy and Primequest server lines. Well, now the company has partnered with NetApp to offer converged and hyperconverged systems. (Also on Network World: Hyperconvergence: What’s all the hype about?) Converged and hyperconverged infrastructure (CI/HCI) is a fancy way of saying tightly integrated systems that combine compute, networking, and storage into pre-tested, pre-configured stacks sold as a single turnkey solution, rather than buying and assembling the pieces yourself. To read this article in full or to leave a comment, please click here

Battle Scars from OpenStack Deployments

This is the first liveblog from day 2 of the OpenStack Summit in Sydney, Australia. The title of the session is “Battle Scars from OpenStack Deployments.” The speakers are Anupriya Ramraj, Rick Mathot, and Farhad Sayeed (two vendors and an end-user, respectively, if my information is correct). I’m hoping for some useful, practical, real-world information out of this session.

Ramraj opens the session, introducing the speakers and setting some context for the presentation. Ramraj and Mathot are with DXC, a managed services provider. Ramraj then starts with a quick review of some of the tough battles in OpenStack deployments:

  • Months to deploy OpenStack at scale
  • Chaos during incidents due to lack of OpenStack skills and knowledge
  • Customers spend lengthy periods working with support to troubleshoot basic issues
  • Applications do not get onboarded to OpenStack
  • Marooned on earlier versions of OpenStack
  • OpenStack skills are hard to recruit and retain

Ramraj recommends using an OpenStack distribution versus “pure” upstream OpenStack, and recommends using new-ish hardware as opposed to older hardware. The last bullet above, the difficulty of recruiting and retaining OpenStack skills, further complicates rolling out OpenStack and resolving OpenStack issues. A lack of DevOps skills and a lack of understanding around OpenStack APIs can impede the process of porting applications Continue reading
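
As a rough illustration of the kind of OpenStack API familiarity Ramraj is describing (a minimal sketch assuming the openstacksdk Python library and a cloud entry named "mycloud" in clouds.yaml; not something shown in the session):

```python
import openstack

# Credentials come from a "mycloud" entry in clouds.yaml (hypothetical name).
conn = openstack.connect(cloud="mycloud")

# Listing instances is usually the first API call teams script; onboarding
# an application means automating operations like this rather than
# clicking through the Horizon dashboard.
for server in conn.compute.servers():
    print(server.name, server.status)
```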

Red Hat Wraps OpenStack In Containers

Red Hat is no stranger to Linux containers, considering the work its engineers have done in creating the OpenShift application development and management platform.

As The Next Platform has noted over the past couple of years, Red Hat has rapidly expanded the capabilities within OpenShift for developing and deploying Docker containers and managing them with the open source Kubernetes orchestrator, culminating with OpenShift 3.0, which was based on Kubernetes and Docker containers. It has continued to enhance the platform since. Most recently, Red Hat in September launched OpenShift Container Platform 3.6, which added upgraded security features and more consistency across

Red Hat Wraps OpenStack In Containers was written by Jeffrey Burt at The Next Platform.

A glance back at the looking glass: Will IP really take over the world?

In 2003, the world of network engineering was far different from what it is today. For instance, EIGRP was still being implemented on the basis of its ability to support multi-protocol routing. SONET and other optical technologies were just starting to come into their own, and all-optical switching was just beginning to be considered for large-scale deployment. What Hartley says of history holds true when looking back at what seems to be a former age: “The past is a foreign country; they do things differently there.”

In the midst of this change, the Association for Computing Machinery (the ACM) published a paper entitled “Will IP really take over the world (of communications)?” This paper, written during the ongoing discussion within the engineering community over whether packet switching or circuit switching was the “better” technology, provides a lot of insight into the thinking of the time. Specifically, as the authors say, the belief that IP is better:

…is based on our collective belief that packet switching is inherently superior to circuit switching because of the efficiencies of statistical multiplexing, and the ability of IP to route around failures. It is widely assumed that IP is simpler than circuit Continue reading
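
To make the statistical-multiplexing argument concrete, here is a toy calculation (my own numbers, not the paper’s): 100 bursty sources that each transmit 10% of the time need far less shared packet capacity than the 100 dedicated circuits a circuit-switched design would reserve.

```python
from math import comb

n, p, rate = 100, 0.10, 1.0   # sources, activity factor, Mb/s per source

# Circuit switching reserves a full circuit for every source.
circuit_capacity = n * rate

# Packet switching only needs enough capacity for the number of sources
# that are simultaneously active 99.9% of the time (binomial quantile).
def active_quantile(n, p, target=0.999):
    cumulative = 0.0
    for k in range(n + 1):
        cumulative += comb(n, k) * p**k * (1 - p)**(n - k)
        if cumulative >= target:
            return k

packet_capacity = active_quantile(n, p) * rate
print(circuit_capacity, packet_capacity)  # 100.0 vs roughly 19.0 here
```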

Birth of the NearCloud: Serverless + CRDTs @ Edge is the New Next Thing

Kuhiro 10X Faster than Amazon Lambda

This is a guest post by Russell Sullivan, founder and CTO of Kuhirō.

Serverless is an emerging Infrastructure-as-a-Service solution poised to become an Internet-wide ubiquitous compute platform. In 2014, Amazon Lambda started the Serverless wave, and in the few years since, Serverless has extended to the CDN-Edge and beyond the last mile to mobile, IoT, and storage.

This post examines recent innovations in Serverless at the CDN Edge (SAE). SAE is a sea change and a really big deal: it marks the beginning of moving business logic from a single cloud region out to the edges of the Internet, which may eventually penetrate as far as servers running inside cell phone towers. When 5G arrives, SAE will be only a few milliseconds away from billions of devices, and the Internet will be transformed into a global-scale, real-time compute platform.
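
Moving business logic to many edge locations also means replicating state across them, which is where the CRDTs in the title come in. As a minimal sketch of the idea (my own illustration, not Kuhirō’s implementation), a grow-only counter CRDT lets every edge location accept writes independently and merge later without conflicts:

```python
# Grow-only counter (G-Counter) CRDT: each replica increments only its own
# slot, and merging takes the element-wise maximum, so concurrent updates
# at different edge locations never conflict.
class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}

    def increment(self, amount=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + amount

    def merge(self, other):
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())

# Two edge PoPs take writes independently, then reconcile.
sydney, tokyo = GCounter("syd"), GCounter("nrt")
sydney.increment(3)
tokyo.increment(5)
sydney.merge(tokyo)
print(sydney.value())  # 8
```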

The journey of founding and then selling a NOSQL company, and architecting three different NOSQL data stores along the way, led me to realize that computation is currently confined to either the data center or the device: the vast space between the two is largely untapped. So I teamed up with some Continue reading

Can Wave 2 handle the wireless tsunami heading toward us?

There seems to be a shift in our industry from wireless N (802.11n) to AC (802.11ac), as we have seen large leaps forward in bandwidth and client-saturation handling. With more wireless options going into the workplace, widespread connectivity continues to rise and wireless requirements are becoming greater and greater. Now, with Wave 2 becoming more common, is AC really able to handle the tsunami-like wave of wireless internet requests to meet this growing demand? (Also on Network World: REVIEW: Early Wave 2 Wi-Fi access points show promise.) There's only one way to find out. We need to step out of the comfort zone provided by past wireless technologies and expand the idea of what wireless is capable of providing to meet these demands. To read this article in full or to leave a comment, please click here

IDG Contributor Network: ElectOS uses open source to restore trust in voting machines

When people doubt that an election will be conducted fairly, their trust in the outcome and their leaders naturally erodes. That’s the challenge posed by electronic voting machines. Technology holds the promise of letting people vote more easily and remotely, but these machines are also prone to hacking and manipulation. How can trust be restored in voting machines and election results? Voting demands the ultimate IoT machine (to borrow a line from BMW): these machines, with their combination of sensors, security, and data analysis, produce results that affect every aspect of our lives. To read this article in full or to leave a comment, please click here

How The Largest Tech Deal In History Might Affect Systems

Private equity firm Silver Lake Partners has an appetite for tech. Securing funding for Dell to take itself private and then go out and buy EMC and VMware is going to take a backseat, both in deal size and in potential ripple effects in the datacenter, now that chip giant Broadcom is making an unsolicited bid, backed by Silver Lake, to take over oftentimes chip rival Qualcomm.

Should this deal pass shareholder and regulatory muster, it could finally create a chip giant that can counterbalance Intel in the datacenter – something that Broadcom and Qualcomm both

How The Largest Tech Deal In History Might Affect Systems was written by Timothy Prickett Morgan at The Next Platform.