Lessons Learnt from Running a Container-Native Cloud

This is a liveblog of the session titled “Lessons Learnt from Running a Container-Native Cloud,” led by Xu Wang. Wang is the CTO and co-founder of Hyper.sh, a company that has been working on leveraging hypervisor isolation for containers. This session claims to discuss some lessons learned from running a cloud leveraging this sort of technology.

Wang starts with a brief overview of Hyper.sh. The information for this session comes from running a Hypernetes (Hyper.sh plus Kubernetes)-based cloud for a year.

So, what is a “container-native” cloud? Wang provides some criteria:

  • A container is a first-class citizen in the cloud. This means container-level APIs and the ability to launch containers without a VM.
  • The cloud offers container-centric resources (floating IPs, security groups, etc.).
  • The cloud offers container-based services (load balancing, scheduled jobs, functions, etc.).
  • Billing is handled on a per-container level (not on a VM level).
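The per-container billing criterion is worth making concrete. A minimal sketch of how per-container metering differs from per-VM billing; every rate and function name here is an invented assumption for illustration, not Hyper.sh’s actual pricing model:

```python
# Illustrative per-container billing: each container is metered on its own
# CPU/memory reservation and runtime, rather than rounding up to a host
# VM's flat rate. All rates below are made-up assumptions for the sketch.

def container_charge(cpus: float, memory_gb: float, seconds: int,
                     cpu_rate: float = 0.00001,
                     mem_rate: float = 0.000004) -> float:
    """Charge for one container: (cpu + memory) price per second, prorated."""
    per_second = cpus * cpu_rate + memory_gb * mem_rate
    return per_second * seconds

# A short-lived job: 1 CPU, 2 GB, running for 5 minutes (300 s).
job = container_charge(cpus=1, memory_gb=2, seconds=300)

# Per-VM billing would charge the same workload a full VM-hour minimum.
vm_hour = container_charge(cpus=2, memory_gb=4, seconds=3600)

assert job < vm_hour  # fine-grained billing undercuts the VM-hour floor
```

The point of the criterion is the granularity: a container that lives for minutes is billed for minutes, with no VM-hour floor underneath it.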

To be honest, I don’t see how any cloud other than Hyper.sh’s own offering could meet these criteria; none of the major public cloud providers (Microsoft Azure, AWS, GCP) currently satisfies Wang’s requirements, and a “standard” OpenStack installation doesn’t meet them either. This makes the session feel more like a …

Make Your Application Serverless

This is a liveblog from the last day of the OpenStack Summit in Sydney, Australia. The title of the session is “Make Your Application Serverless,” and it discusses Qinling, a project for serverless (Functions-as-a-Service, or FaaS) architectures/applications on OpenStack. The presenters for the session are Lingxian Kong and Feilong Wang from Catalyst Cloud.

Kong provides a brief background on himself and his co-presenter (Wang), and explains that Catalyst Cloud is an OpenStack-based public cloud based in New Zealand. Both presenters are active technical contributors to OpenStack projects.

Kong quickly transitions into the core content of the presentation, which focuses on serverless computing and Qinling, a project for implementing serverless architectures on OpenStack. Kong points out that serverless computing doesn’t mean there are no servers, only that the servers (typically VMs) are hidden from view. Functions-as-a-Service, or FaaS, is a better term that Kong prefers. He next provides an example of how a FaaS architecture may benefit applications, and contrasts solutions like AutoScaling Groups (or the equivalent in OpenStack) with FaaS.

Some key characteristics of serverless, as summarized by Kong:

  • No need to think about servers
  • Run your code, not the whole application
  • Highly available and horizontally scalable
  • Stateless/ephemeral
  • Lightweight/single-purpose functions
  • Event-driven
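The characteristics above map directly onto what a FaaS function looks like in code. A minimal, stateless, single-purpose handler is sketched below; the entry-point name and keyword-argument calling convention are assumptions for illustration, not necessarily the exact contract Qinling’s Python runtime expects:

```python
# A minimal single-purpose, stateless function of the kind a FaaS platform
# runs on demand. The entry-point name ("main") and the keyword-argument
# calling convention are assumptions here, not a documented Qinling contract.

def main(**kwargs):
    """Event handler: receives the invocation payload, returns a result."""
    name = kwargs.get("name", "world")
    # No server state to manage: everything the function needs arrives in
    # the invocation payload, and nothing persists after it returns.
    return {"message": f"Hello, {name}!"}

print(main(name="OpenStack"))
```

The platform, not the developer, decides where and how many copies of this run, which is what gives FaaS its horizontal scalability without the coarse-grained churn of adding whole VMs to an autoscaling group.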

IDG Contributor Network: Why speeds and feeds don’t fix your data management problems

For a very long time, IT professionals have made storage investments based on a few key metrics: how fast data can be written to a storage medium, how fast it can be read back when an application needs that information, and, of course, the reliability and cost of the system. The critical importance of storage performance led us all to fixate on latency and how to minimize it through intelligent architectures and new technologies.

Given the popularity of flash memory in storage, the significance of latency is not about to fade away, but a number of other metrics are rapidly rising in importance to IT teams. Yes, cost has always been a factor in choosing a storage investment, but with cloud and object storage gaining popularity, the price of storage per GB is more than a function of speed and capacity; it also includes the opportunity cost of having to power and manage that resource. When evaluating whether to archive data on premises or to send it offsite, IT professionals are now looking at a much wider definition of overall cost.
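That wider definition of cost can be made concrete with a back-of-the-envelope comparison between on-premises archiving and an offsite/cloud archive tier. Every rate below is an invented assumption for illustration, not real vendor pricing:

```python
# Back-of-the-envelope total cost of archiving 100 TB, comparing an
# on-premises purchase (hardware plus the ongoing "opportunity cost" of
# powering and managing it) with pay-as-you-go offsite archive pricing.
# All rates are invented assumptions for illustration only.

TB = 1000  # GB per TB (decimal, as storage vendors usually quote)

def on_prem_cost(tb: float, years: int, price_per_gb: float = 0.03,
                 power_admin_per_tb_year: float = 15.0) -> float:
    """Up-front hardware price plus yearly power/admin overhead."""
    return tb * TB * price_per_gb + tb * power_admin_per_tb_year * years

def offsite_cost(tb: float, years: int,
                 per_gb_month: float = 0.004) -> float:
    """Pure pay-as-you-go archive pricing, no power or admin overhead."""
    return tb * TB * per_gb_month * 12 * years

for years in (1, 3, 5):
    print(years, on_prem_cost(100, years), offsite_cost(100, years))
```

Under these made-up rates the answer flips with retention period: offsite is cheaper for a one-year archive, on-prem pulls ahead over several years. The point is that neither $/GB number alone decides it; the power-and-management term does.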

Riverbed enhances its SD-WAN performance-monitoring platform

Digital transformation is on every IT and business leader’s radar today. The path to it, though, may not be simple. While many industry pundits like to call out the likes of Uber and Airbnb, those digital natives didn’t have to worry about disrupting an existing business. To help mainstream businesses make the jump to a digital organization, Riverbed launched two new solutions at its Disrupt customer event last week in New York City.

Enhanced network performance management

The first is a new version of its network and application performance management platform, SteelCentral, enabling IT staff to better understand digital experiences. This aligns with a broader movement among NPM/APM vendors toward digital experience management (DEM), providing visibility into customer or worker experience regardless of whether the infrastructure is on premises, in the public cloud, or in a hybrid environment.

Amazon and Google make it easier to connect to the cloud

As more organizations look to enable hybrid cloud computing, a big question remains: how do I connect my network to the cloud? This week Google Cloud Platform and Amazon Web Services each released new products that make that process easier.

Google’s Dedicated Interconnect is now generally available

Dedicated Interconnect is an important way for customers to connect to the public cloud. It allows organizations to connect their on-premises resources to a colocation facility, and that co-lo facility in turn has a direct network connection to the public cloud. Public IaaS cloud providers like Google want to give their customers fast connections to their cloud, but they don’t want to connect to each individual customer’s site, so they’ve created this co-lo-based Interconnect. Google runs the Interconnect and offers either a 99.9% or 99.99% service-level agreement. Google is working with a handful of colocation vendors as middlemen, including Equinix, Digital Realty, and Infomart.
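The gap between a 99.9% and a 99.99% SLA is easy to underestimate; a quick calculation shows the downtime each target actually permits in a 30-day month:

```python
# Maximum downtime per 30-day month implied by an availability target.
# "Three nines" allows roughly ten times the outage of "four nines".

MONTH_MINUTES = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of outage per month still within the availability target."""
    return MONTH_MINUTES * (1 - availability)

print(round(allowed_downtime_minutes(0.999), 1))   # three nines
print(round(allowed_downtime_minutes(0.9999), 2))  # four nines
```

Roughly 43 minutes a month at 99.9% versus a bit over 4 minutes at 99.99%, which is why the tier of SLA matters when the Interconnect carries hybrid-cloud traffic.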

What is a Tunnel

It’s a bird! It’s a plane! It’s a… tunnel? In this video, I take on the age-old question: what is a tunnel? Is it a protocol, or is it something else?
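One way to pin the concept down before watching: a tunnel is less a protocol than a pattern, a complete packet carried as the payload of another packet. A toy sketch of that encapsulate/decapsulate shape; the 2-byte framing here is invented for illustration and stands in for real encapsulations like GRE, IP-in-IP, or VXLAN:

```python
import struct

# A tunnel, reduced to its essence: take a complete inner packet and make
# it the opaque payload of an outer packet. This toy framing (a 2-byte
# type tag plus the raw inner bytes) is invented for illustration; real
# tunnels use GRE, IP-in-IP, VXLAN, etc., but the shape is the same.

TUNNEL_TYPE = 0x6558  # made-up type tag for "encapsulated frame"

def encapsulate(inner_packet: bytes) -> bytes:
    """Wrap the inner packet: outer header first, inner bytes untouched."""
    return struct.pack("!H", TUNNEL_TYPE) + inner_packet

def decapsulate(outer_packet: bytes) -> bytes:
    """Strip the outer header; the original packet pops out intact."""
    (ptype,) = struct.unpack("!H", outer_packet[:2])
    assert ptype == TUNNEL_TYPE
    return outer_packet[2:]

frame = b"\x00\x01original packet bytes"
assert decapsulate(encapsulate(frame)) == frame  # round-trips unchanged
```

The defining property is that the inner packet survives the trip byte-for-byte; the network in the middle only ever sees the outer header.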

Sponsored Post: Loupe, Etleap, Aerospike, Stream, Scalyr, VividCortex, Domino Data Lab, MemSQL, Zohocorp

Who's Hiring? 

  • Need excellent people? Advertise your job here! 

Fun and Informative Events

  • On-demand Webinar. Fast & Frictionless - The Decision Engine for Seamless Digital Business. In this session, guest speakers Michele Goetz, Principal Analyst at Forrester Research and Matthias Baumhof, VP Worldwide Engineering at ThreatMetrix, discuss: How risk-based authentication leveraging digital identities is key to empowering customer transactions; How real-time customer trust decisions can reduce fraud and improve customer satisfaction; How a high performance Hybrid Memory Architecture (HMA) database helps continuously evaluate across a multitude of factors to drive decisioning at the lowest operational cost. View now

  • Advertise your event here!

Cool Products and Services

  • .NET developers dealing with Errors in Production: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Managers want to know what’s wrong right away, users don’t want to provide log data, and you spend more time gathering information than you do fixing the problem. To fix all that, Loupe was built specifically as a .NET logging and monitoring solution. Loupe notifies you about any errors and tells you all the information you need to fix them. It tracks performance metrics, identifies which errors cause …

IDG Contributor Network: How to solve the IoT data distribution dilemma

While the Internet of Things has enjoyed dizzying success when it comes to generating capital and public awareness for its further proliferation, several hurdles remain in its way. IoT enthusiasts familiar with its staggeringly quick development already know that the data distribution dilemma currently facing the IoT is perhaps its greatest obstacle to future growth, yet few of them have solutions for this crisis.

So, what exactly is the data distribution dilemma facing the IoT today, and what should governments, businesses, and private actors be doing to overcome this challenge? Solving the IoT’s data distribution nightmare will take serious investment and require new and better standards, but the costs of ignoring this growing predicament are too high.