Easing The Pain Of Prepping Data For AI

Organizations are turning to artificial intelligence and deep learning in hopes of making the right business decisions more quickly, remaking their business models to become more efficient, and improving the experience of their customers. These fast-emerging technologies let enterprises gain more insight into the massive amounts of data they generate and surface trends that would otherwise stay hidden. And enterprises are quickly moving in that direction.

A Gartner survey found that 59 percent of organizations are gathering information to help them build out their AI strategies, while the rest

Easing The Pain Of Prepping Data For AI was written by Jeffrey Burt at The Next Platform.

New AWS Instances Sport Customized Intel Skylakes, KVM Hypervisor

The global server market is increasingly driven by the hyperscalers, and the trendsetter for all of them is Amazon Web Services. The massive company dominates the fast-growing public cloud space, outpacing rivals like Microsoft Azure, Google Cloud Platform, and IBM Cloud, and is the top consumer of servers among a group of hyperscalers that are becoming the most powerful buyers of systems and new components, such as processors.

This can be seen in the numbers. According to IDC analysts, hyperscalers made a significant push to deploy servers in the first and second quarters of this year, with AWS accounting for more

New AWS Instances Sport Customized Intel Skylakes, KVM Hypervisor was written by Jeffrey Burt at The Next Platform.

BrandPost: FlexWare: Year Two

It’s been over a year since AT&T introduced its FlexWare offering. It was positioned as the next big thing in enterprise networking, and in the intervening months, AT&T has rolled out a number of important virtual network functions (VNFs) that run on its x86-based FlexWare devices. Those VNFs essentially replace proprietary boxes that historically have been costly to operate and replace in terms of time and money.

Light Reading’s Carol Wilson said that AT&T’s venture into “white box services” is “a clear signal to traditional telecom suppliers that the gig’s up on closed system sales.” That’s cheery news to enterprises that have long chafed over the relative inflexibility of on-site equipment solutions.

Deep Dive into Container Images in Kolla

This is a liveblog of my last session at the Sydney OpenStack Summit. The session title is “OpenStack images that fit your imagination: deep dive into container images in Kolla.” The presenters are Vikram Hosakote and Rich Wellum, from Cisco and Lenovo, respectively.

Hosakote starts with an overview of Kolla. Kolla is a project to deploy OpenStack services into Docker containers. There are two ways to use Kolla: with Ansible (referred to as Kolla-Ansible) or with Kubernetes (referred to as Kolla-Kubernetes). Hosakote mentions that Kolla-Kubernetes also uses Helm and Helm charts; this makes me wonder about the relationship between Kolla-Kubernetes and OpenStack-Helm.

Why Kolla? Some of the benefits of Kolla, as outlined by Hosakote, include:

  • Fast(er) deployment (Hosakote has deployed OpenStack in as little as 9 minutes)
  • Easy maintenance, reconfiguration, patching, and upgrades
  • Containerized services live in a container registry (see the sketch after this list)
  • One tool to do multiple things
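
On the registry point above: because Kolla packages each OpenStack service as a prebuilt image, fetching a service is an ordinary registry pull. A minimal sketch using the Docker SDK for Python; the repository name and tag follow Kolla’s published naming of the time, but treat them as assumptions:

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon.
client = docker.from_env()

# Kolla publishes one image per OpenStack service; "centos-binary-keystone"
# and the "pike" tag are assumed examples of that naming scheme.
image = client.images.pull("kolla/centos-binary-keystone", tag="pike")
print(image.tags)
```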

Hosakote briefly mentions his preference for Kolla over other tools, including Juju, DevStack, PackStack, Fuel, OpenStack-Ansible, TripleO, OpenStack-Puppet, and OpenStack-Chef.

Other benefits of using containers for OpenStack:

  • Reproduce golden state easily
  • No more “works in DevStack but not in production”
  • Production-ready images (this seems specific to Kolla, not just to containers for the OpenStack control plane)
  • Continue reading

Top 5 From The Last 3 Months

In the year 2017, news comes at you fast. So, it’s easy to miss the important or informational items that just weren’t on your radar when they first arrived. While we believe VMware NSX should be firmly on everyone’s virtualization radar, we understand that you may miss a few items from time to time. That’s why we’re putting together this VMware NSX news round-up.

This news round-up recaps the latest NSX-related material you may have missed over the past few months, for you to peruse at your leisure. We’ll compile these posts again from time to time, so be sure to keep your eye on this space for more VMware NSX news round-ups and informational posts!

Real World Use Cases for NSX and Pivotal Cloud Foundry

From the post: Pivotal Cloud Foundry (PCF) is the leading PaaS solution for enterprise customers today, providing a fast way to take their ideas from conception to production. It does this by providing a platform to run their code in any cloud and any language, taking care of all the infrastructure “stuff” for them.

From building the container image, compiling it with the required runtime, deploying it in a highly available mode and connecting Continue reading

Carrier-Grade SDN-Based OpenStack Networking Solution

This session was titled “Carrier-Grade SDN Based OpenStack Networking Solution,” led by Daniel Park and Sangho Shin. Both Park and Shin are from SK Telecom (SKT), and (based on the description) this session is a follow-up to a session from the Boston summit where SK Telecom talked about an SDN-based networking solution they’d developed and released for use in their own 5G-based network.

Shin starts the session with some presenter introductions, and sets the stage for the presentation. Shin first provides some background on SKT, and discusses the steps that SKT has been taking to prepare their network for 5G infrastructure/services. This involves more extensive use of virtual network functions (VNFs) and software-defined infrastructure based on open software and open hardware. Shin reinforces that the SKT project (which is apparently called COSMOS?) exclusively leverages open source software.

Diving into a bit more detail, Shin talks about SONA Fabric (which is used to control the leaf/spine fabric used as the network underlay), SONA (which handles virtual network management), and TACO (which is an SKT-specific version of OpenStack). The network monitoring solution is called TINA, and this feeds into an integrated monitoring system known as 3DV.

TACO (stands for SKT All Continue reading

Can OpenStack Beat AWS in Price

This is a liveblog of the session titled “Can OpenStack Beat AWS in Price: The Trilogy”. The presenters are Rico Lin, Bruno Lago, and Jean-Daniel Bonnetot. The “trilogy” refers to the third iteration of this presentation; each time the comparison has been done in a different geographical region (first in Europe, then in North America, and finally here in Asia-Pacific).

Lago starts the presentation with an explanation of the session, and the presenters introduce themselves, their companies, and their backgrounds. In this particular case, the presenters represent Catalyst (which runs Catalyst Cloud in New Zealand), OVH, and EasyStack—all three are OpenStack-powered public cloud offerings.

Lago explains that they’ll cover three common OpenStack scenarios:

  • Private cloud
  • Managed private cloud
  • OpenStack-powered public cloud

Lin takes point to talk a bit about price differences in different geographical regions. Focusing on AWS, Lin points out that AWS services are about 8% higher in Europe than in North America. Moving to APAC, AWS services are about 29% higher than in North America. With this 29% price increase, I can see where OpenStack might be much more competitive in APAC than in North America (and this, in turn, may explain why OpenStack seems much Continue reading
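
To put those percentages in perspective, here is a quick sketch. The baseline prices are hypothetical; only the 8% and 29% regional uplifts come from Lin’s figures:

```python
# Hypothetical hourly instance prices (USD); only the regional uplifts
# (8% for Europe, 29% for APAC, relative to North America) are from the talk.
AWS_NA_HOURLY = 0.100     # assumed AWS North America baseline
OPENSTACK_HOURLY = 0.110  # assumed flat price for an OpenStack cloud

regional_uplift = {"North America": 0.00, "Europe": 0.08, "Asia-Pacific": 0.29}

for region, uplift in regional_uplift.items():
    aws = AWS_NA_HOURLY * (1 + uplift)
    diff = (OPENSTACK_HOURLY - aws) / aws * 100
    print(f"{region:13}: AWS ${aws:.3f}/h vs OpenStack ${OPENSTACK_HOURLY:.3f}/h "
          f"({diff:+.1f}% relative to AWS)")
```

With these assumed numbers, the OpenStack cloud is 10% pricier in North America but almost 15% cheaper in Asia-Pacific, which is exactly the dynamic Lin describes.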

The Internet, Homemade

The following post originally appeared on the APNIC blog.

The Internet can enhance social inclusion and participation and can contribute to economic development. Therefore, it should be a commodity for every citizen, but, as RFC3271 says, ‘it will only be such if we make it so.’

Internet infrastructure and services do not even reach 50% of the global population. The three main issues affecting Internet growth are: not everyone wants or needs it, not everyone has access to it, and not everyone can provide it.

I respect people’s choices on the first issue, since the Internet is not something we naturally need in order to sustain or protect ourselves. However, many don’t want or need the Internet because there is a lack of locally relevant content and services, or of training in how to use them. Metaphorically speaking: why should I eat the same fast food made far away when I prefer my tasty local food, which isn’t offered here?

Without content and services adapted to my local taste and language, the Internet may not be attractive or digestible. At the same time, local access and education are necessary prerequisites for producing such relevant and meaningful content.

The second and third Continue reading

Widespread impact caused by Level 3 BGP route leak

For a little more than 90 minutes yesterday, internet service for millions of users in the U.S. and around the world slowed to a crawl.  Was this widespread service degradation caused by the latest botnet threat?  Not this time.  The cause was yet another BGP routing leak — a router misconfiguration directing internet traffic from its intended path to somewhere else.

While not a day goes by without a routing leak or misconfiguration of some sort on the internet, it is an entirely different matter when the error is committed by the largest telecommunications network in the world.

In this blog post, I’ll describe what happened in this routing leak and some of the impacts.  Unfortunately, there is no silver bullet to completely remove the possibility of these occurring in the future.  As long as we have humans configuring routers, mistakes will take place.

What happened?

At 17:47:05 UTC yesterday (6 November 2017), Level 3 (AS3356) began globally announcing thousands of BGP routes that had Continue reading
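
As background on why a leak like this redirects traffic (this is general BGP/IP routing behavior, not a detail taken from the post): routers forward each packet along the longest matching prefix, so when a leak introduces more-specific announcements, they win over the legitimate covering route. A minimal sketch with hypothetical prefixes and ASNs:

```python
import ipaddress

# Toy forwarding table: prefix -> origin. The /25s stand in for leaked
# more-specific routes covering the legitimate /24; all values hypothetical.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
    ipaddress.ip_network("203.0.113.0/25"): "AS64511 (leaked)",
    ipaddress.ip_network("203.0.113.128/25"): "AS64511 (leaked)",
}

def lookup(destination: str) -> str:
    """Longest-prefix match, the core of every router's forwarding decision."""
    addr = ipaddress.ip_address(destination)
    candidates = [net for net in routes if addr in net]
    return routes[max(candidates, key=lambda net: net.prefixlen)]

# The leaked /25 beats the legitimate /24, so traffic follows the leak.
print(lookup("203.0.113.77"))  # -> AS64511 (leaked)
```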

Lessons Learnt from Running a Container-Native Cloud

This is a liveblog of the session titled “Lessons Learnt from Running a Container-Native Cloud,” led by Xu Wang. Wang is the CTO and co-founder of Hyper.sh, a company that has been working on leveraging hypervisor isolation for containers. This session claims to discuss some lessons learned from running a cloud leveraging this sort of technology.

Wang starts with a brief overview of Hyper.sh. The information for this session comes from running a Hypernetes (Hyper.sh plus Kubernetes)-based cloud for a year.

So, what is a “container-native” cloud? Wang provides some criteria:

  • A container is a first-class citizen in the cloud. This means container-level APIs and the ability to launch containers without a VM.
  • The cloud offers container-centric resources (floating IPs, security groups, etc.).
  • The cloud offers container-based services (load balancing, scheduled jobs, functions, etc.).
  • Billing is handled on a per-container level, not on a VM level (a toy comparison follows this list)
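
Sticking with that last billing criterion for a moment: per-container billing charges for the seconds a container actually runs, while per-VM billing typically rounds each workload up to a whole VM-hour. A toy comparison (all rates, and the one-VM-per-container assumption, are hypothetical):

```python
import math

VM_HOURLY = 0.10                     # per-VM price, billed per started hour
CONTAINER_PER_SECOND = 0.10 / 3600   # per-container price, billed per second

containers, runtime_seconds = 10, 90  # ten containers, 90 seconds each

# VM-level billing: assume each container would otherwise need its own
# short-lived VM, rounded up to a full hour.
vm_cost = containers * math.ceil(runtime_seconds / 3600) * VM_HOURLY

# Container-level billing: pay only for the seconds actually consumed.
container_cost = containers * runtime_seconds * CONTAINER_PER_SECOND

print(f"per-VM billing:        ${vm_cost:.4f}")         # $1.0000
print(f"per-container billing: ${container_cost:.4f}")  # $0.0250
```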

To be honest, I don’t see how any cloud other than Hyper.sh’s own offering could meet these criteria; none of the major public cloud providers (Microsoft Azure, AWS, GCP) currently satisfy Wang’s requirements. A “standard” OpenStack installation doesn’t meet these requirements. This makes the session more like a Continue reading

Make Your Application Serverless

This is a liveblog from the last day of the OpenStack Summit in Sydney, Australia. The session, titled “Make Your Application Serverless,” discusses Qinling, a project for serverless (Functions-as-a-Service, or FaaS) architectures/applications on OpenStack. The presenters for the session are Lingxian Kong and Feilong Wang from Catalyst Cloud.

Kong provides a brief background on himself and his co-presenter (Wang), and explains that Catalyst Cloud is an OpenStack-based public cloud based in New Zealand. Both presenters are active technical contributors to OpenStack projects.

Kong quickly transitions into the core content of the presentation, which focuses on serverless computing and Qinling, a project for implementing serverless architectures on OpenStack. Kong points out that serverless computing doesn’t mean there are no servers, only that the servers (typically VMs) are hidden from view. Functions-as-a-Service, or FaaS, is a better term that Kong prefers. He next provides an example of how a FaaS architecture may benefit applications, and contrasts solutions like AutoScaling Groups (or the equivalent in OpenStack) with FaaS.

Some key characteristics of serverless, as summarized by Kong:

  • No need to think about servers
  • Run your code, not the whole application
  • Highly available and horizontally scalable
  • Stateless/ephemeral
  • Lightweight/single-purpose functions
  • Event-driven Continue reading
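
To ground the “run your code, not the whole application” idea: what you hand to a FaaS platform is just an entry-point function, with scaling and availability handled by the runtime. A minimal sketch of such a Python function (the main entry-point convention mirrors Qinling’s examples, but treat the exact signature as an assumption):

```python
def main(city="Sydney", **kwargs):
    """Entry point the platform invokes once per event.

    The runtime passes event parameters as keyword arguments and handles
    scaling, availability, and lifecycle (the characteristics listed above).
    """
    return {"greeting": f"Hello from {city}!"}

# Local test harness only; in a FaaS deployment the platform calls main().
if __name__ == "__main__":
    print(main(city="Auckland"))
```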

IDG Contributor Network: Why speeds and feeds don’t fix your data management problems

For a very long time, IT professionals have made storage investments based on a few key metrics: how fast data can be written to a storage medium, how fast it can be read back when an application needs that information, and, of course, the reliability and cost of that system. The critical importance of storage performance led us all to fixate on latency and how to minimize it through intelligent architectures and new technologies.

Given the popularity of flash memory in storage, the significance of latency is not about to fade away, but a number of other metrics are rapidly rising in importance to IT teams. Yes, cost has always been a factor in choosing a storage investment, but with cloud and object storage gaining popularity, the price of storage per GB is no longer just a function of speed and capacity; it also includes the opportunity cost of having to power and manage that resource. When evaluating whether to archive data on premises or to send it offsite, IT professionals are now looking at a much wider definition of overall cost.
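
To illustrate that wider definition of cost, here is a toy per-GB calculation (every figure is hypothetical) that folds power and management into the price of storage:

```python
# Toy cost-per-GB comparison; all figures are hypothetical.
CAPACITY_GB = 500_000        # usable capacity of the system
HARDWARE_COST = 150_000.0    # purchase price, USD
YEARS = 5                    # expected service life
POWER_PER_YEAR = 4_000.0     # electricity and cooling, USD/year
ADMIN_PER_YEAR = 10_000.0    # staff time spent managing the system, USD/year

capex_per_gb = HARDWARE_COST / CAPACITY_GB
opex_per_gb = (POWER_PER_YEAR + ADMIN_PER_YEAR) * YEARS / CAPACITY_GB

print(f"$/GB, hardware only:        {capex_per_gb:.3f}")                # 0.300
print(f"$/GB, with power and admin: {capex_per_gb + opex_per_gb:.3f}")  # 0.440
```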

Riverbed enhances its SD-WAN performance-monitoring platform

Digital transformation is on every IT and business leader’s radar today. The path to it, though, may not be simple. While many industry pundits like to call out the likes of Uber and Airbnb, those digital natives didn’t have to worry about disrupting an existing business. To help mainstream businesses make that jump to a digital organization, Riverbed launched two new solutions at its Disrupt customer event last week in New York City.

Enhanced network performance management

The first is a new version of its network and application performance management platform, SteelCentral, enabling IT staff to better understand digital experiences. This aligns with a new movement among the NPM/APM vendors to shift to digital experience management (DEM), providing visibility into customer or worker experience regardless of whether the infrastructure is on premises, in the public cloud, or in a hybrid environment.