Dynamic resource management for cloud computing is at a critical crossroads. The ultimate objective when provisioning software-defined infrastructure, synchronizing inter-cloud resources, or allocating network bandwidth is to let applications execute on demand without concern for capacity. While these approaches are effective at supplying applications with additional capacity on demand, the downside is that application performance may not be optimized in the process. Cloud applications and services have become so complex that the runtime synchronization of the resources required to support them drags down overall performance and leaves capacity unused. To tap this unused capacity and deliver the performance users expect, we need to enhance resource management with something like intelligent resource execution.
Serverless computing is one of today’s hottest technology topics. Now that Amazon has announced AWS Lambda and Microsoft is previewing Azure Functions, the concept is becoming real. Serverless is billed as a solution that dynamically creates cloud services to process events in ephemeral containers, executed on your behalf as a backend-as-a-service. Instead of leasing a virtual machine, then writing and deploying your code, you use a new “pay-per-event” pricing model while leveraging a catalogue of executable functions (building blocks) to construct your own service. It is a DIY cloud deployment model that promises to let clouds be used the same way we have become accustomed to using mobile applications on our smartphones: simply access the app (“function”) you need at any moment.
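The pay-per-event model described above can be sketched in code. Below is a minimal illustration in the AWS Lambda handler style (a Python function taking an event and a context); the event shape and the pricing figure are hypothetical assumptions for illustration, not Lambda’s actual schema or rates.

```python
# Sketch of a "pay-per-event" serverless function, Lambda handler style.
# The event fields and price_per_million value are illustrative assumptions.

def handler(event, context=None):
    """Process a single event in an ephemeral invocation; no VM to lease."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

def estimated_cost(invocations, price_per_million=0.20):
    """Hypothetical pay-per-event pricing: a flat price per million requests."""
    return invocations / 1_000_000 * price_per_million

if __name__ == "__main__":
    print(handler({"name": "serverless"}))          # one ephemeral invocation
    print(f"${estimated_cost(5_000_000):.2f} for 5M events")
```

The key contrast with the VM model is that cost tracks invocations, not provisioned capacity: zero events means zero compute billed.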
Microservices are a popular architectural approach for cloud-native applications. But the idea of deconstructing a large service into smaller components was originally conceived for clusters and distributed platforms, when applications needed more compute performance, storage, and network capacity than a single host could provide.
Once the boundary of a single server was crossed, an application’s software components had to interact via inter-server “east-west” communications. As this concept developed and was applied to modern-day cloud services, building blocks such as JSON, RESTful APIs and Thrift were added to create what we now know as microservices.
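The east-west interaction described above can be made concrete with a small sketch: two components that, because they may live on separate hosts, exchange every request through a serialization boundary (JSON here, as the article mentions). The inventory/order service names are invented for illustration.

```python
# Sketch of an inter-service ("east-west") call: once components span
# servers, every interaction crosses a wire format such as JSON.
# Service names and data are illustrative, not from any real system.
import json

def inventory_service(request_bytes: bytes) -> bytes:
    """One microservice: owns its data, speaks JSON over the wire."""
    req = json.loads(request_bytes)          # deserialize the request
    stock = {"widget": 12, "gadget": 0}      # service-local state
    item = req["item"]
    resp = {"item": item, "in_stock": stock.get(item, 0) > 0}
    return json.dumps(resp).encode()         # serialize the response

def order_service(item: str) -> bool:
    """Another component: composes the system via a RESTful-style exchange."""
    reply = inventory_service(json.dumps({"item": item}).encode())
    return json.loads(reply)["in_stock"]

print(order_service("widget"))   # True
print(order_service("gadget"))   # False
```

In a deployed system the function call would be an HTTP request between hosts, but the essential point survives: the components share no memory, only messages.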
Who doesn’t love the fundamental promise of containers? Simple development, segmented applications, rolling changes, etc. They are certainly a blessing to both developers and operations. But if not thoughtfully designed, container virtual networking could be the curse that plagues us for years. Let’s start with a little perspective. The rise and wide deployment of virtual machines and containers coincides with mainstream data center networking evolving from a hierarchical layer 2/3 design to a flatter layer 2 interconnect. Since cloud infrastructure is inherently multi-tenant, virtual LANs have traditionally been used to isolate applications and tenants sharing a common infrastructure. But as containerized applications explode in number, the VLAN limit of 4,096 IDs becomes grossly inadequate for very large cloud computing environments.
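The 4,096 figure falls straight out of the frame format: the 802.1Q VLAN tag carries a 12-bit ID, while overlay encapsulations such as VXLAN carry a 24-bit network identifier. A quick calculation shows the gap:

```python
# Why 4,096 VLANs run out: the 802.1Q VLAN ID is a 12-bit field,
# while a VXLAN Network Identifier (VNI) is 24 bits wide.
VLAN_ID_BITS = 12     # 802.1Q tag
VXLAN_VNI_BITS = 24   # VXLAN VNI

vlan_space = 2 ** VLAN_ID_BITS
vni_space = 2 ** VXLAN_VNI_BITS

print(vlan_space)                 # 4096 isolated segments
print(vni_space)                  # 16777216 segments
print(vni_space // vlan_space)    # 4096x more tenant segments
```

Roughly 16 million overlay segments versus 4,096 VLANs is why large multi-tenant clouds moved to overlay networking rather than stretching VLANs further.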
Driven by the need to partition databases into independent data sets that facilitate concurrent data access, NoSQL databases have been at the forefront of the “share-nothing” resource movement. But if NoSQL’s share-nothing philosophy is correct, then how do you explain the explosive growth and acceptance of Linux containers, which share resources on the same host, and of the clusters and data center operating systems that run over them? On the surface, these two movements appear to be at odds, but a deeper look shows merits for both.
Developers design apps natively for the cloud with the expectation that they will achieve massive scale, with millions or even billions of concurrent users. While many aspire to be the next Facebook, Twitter, Snapchat or Uber, plenty of app developers at banks, ecommerce sites and SaaS companies design for scale that is still far beyond what was even imagined a decade ago. Monitoring the performance of cloud applications at this scale, however, is daunting, and the traditional approach of periodically collecting and analyzing statistics is simply impractical. Only machine learning techniques, applied to intelligent performance data collection, can reduce data loads without inadvertently omitting context- and performance-sensitive data.
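One way to picture “intelligent collection” is a filter that learns what normal looks like and retains only surprising samples. The sketch below is an illustrative toy, not any vendor’s algorithm: it keeps a running mean and variance per metric (Welford’s online method) and stores a sample only when it deviates sharply from the baseline, so storage scales with anomalies rather than raw sample volume.

```python
# Illustrative sketch of anomaly-driven metric collection (a toy, not a
# product algorithm): keep only samples that deviate from a learned baseline.
import math

class AdaptiveCollector:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0              # running sum of squared deviations
        self.threshold = threshold # z-score cutoff for "surprising"
        self.kept = []

    def observe(self, x):
        # Score against the baseline *before* updating it, so a spike is
        # compared with normal history rather than with itself.
        if self.n < 2:
            surprising = True      # warm-up: keep the first samples
        else:
            std = math.sqrt(self.m2 / self.n)
            surprising = std > 0 and abs(x - self.mean) / std > self.threshold
        if surprising:
            self.kept.append((self.n + 1, x))
        # Welford's online update: mean/variance in O(1) memory per metric.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

collector = AdaptiveCollector()
for latency in [10, 11, 9, 10, 12, 10, 95, 11, 10]:   # 95 is the outlier
    collector.observe(latency)
print(collector.kept)   # the spike survives; steady-state samples are dropped
```

A production system would add per-metric state, decay for drifting baselines, and richer models, but the principle is the same: collect selectively without losing the performance-sensitive events.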
Containerization exploits the idea that cloud applications should be built on a microservices architecture and decoupled from their underlying infrastructure. That is not a new concept; software componentization dates back to service-oriented architecture (SOA) and the client-server paradigm. Decoupling applications from their underlying infrastructure aligns with today’s vision that efficient data centers should provide an on-demand resource pool offering instances of various software-definable resource types, spawned as needed. As demand for an application grows and additional resources are required to support it, its services can span multiple servers (a cluster) within a data center or across a globally distributed infrastructure.
Despite recent advancements in multi-core CPUs and improved parallelism, a big challenge remains in scaling out cloud applications. Put simply, Linux application performance scales poorly as CPU core count increases. A typical Linux application can be expected to see about a 1.5x performance improvement on a 2-core CPU, but the gains quickly plateau after that, with 4-core performance improving only around 2.5x, and per-core returns diminishing further as core counts rise. Given that, along with Intel’s announcement that its Xeon chips now carry up to 22 cores, scaling performance efficiently across cores is extremely important.
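The plateau shape described above is the classic Amdahl’s-law curve: if only a fraction p of the work parallelizes, speedup on n cores is 1 / ((1 − p) + p / n). The serial fraction below is an assumption chosen for illustration (it reproduces the 1.5x-on-2-cores figure; the article’s exact numbers need not fit a single fraction), not a measurement from the article.

```python
# Amdahl's law: with parallelizable fraction p, speedup on n cores is
# 1 / ((1 - p) + p / n). The p value here is a hypothetical illustration.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 2 / 3          # assume two-thirds of the work parallelizes
for cores in (1, 2, 4, 8, 22):
    print(f"{cores:2d} cores -> {amdahl_speedup(p, cores):.2f}x")
# With p = 2/3 the ceiling is 1 / (1 - p) = 3x, no matter the core count.
```

This is why a 22-core Xeon does not deliver anywhere near 22x for such workloads: the serial portion, including kernel locks and shared-state contention, caps the achievable speedup regardless of how many cores are added.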