With any new network monitoring and management software, the first step is to assess your existing inventory. Do this by allowing the software to discover all your devices. Your new software may alert you to things you didn’t know you had. Network monitoring and management tooling may be smart enough to propose ways to improve your system: it might suggest new configurations or highlight bottlenecks. Your last step is to let the Simple Network Management Protocol (SNMP) traps flow and inform you about day-to-day network usage.

One blessing of the OSI model is that it has enabled innovation at the individual layers. These layers are abstracted from one another, which prevents accidentally injecting dependencies into other layers. The curse of the OSI model is that it has socially separated the people who work at the individual layers from one another: someone working at one layer may never think of someone else working at a different layer. It’s a recurring problem in network monitoring and management in general.
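To make that last step concrete, here is a minimal sketch of an SNMP trap receiver built on the open-source pysnmp library for Python (v4-style API). The community string, listening address, and port are assumptions for illustration, not values from any particular monitoring product.

```python
# Sketch: a minimal SNMPv2c trap receiver using pysnmp.
# Assumptions: community string "public", standard trap port 162
# (binding to 162 may require elevated privileges).
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

snmp_engine = engine.SnmpEngine()

# Listen for incoming traps on UDP port 162.
config.addTransport(
    snmp_engine, udp.domainName,
    udp.UdpTransport().openServerMode(("0.0.0.0", 162)),
)
config.addV1System(snmp_engine, "monitoring-area", "public")

def on_trap(snmp_engine, state_reference, context_engine_id,
            context_name, var_binds, cb_ctx):
    # Each trap arrives as a list of (OID, value) variable bindings.
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")

ntfrcv.NotificationReceiver(snmp_engine, on_trap)
snmp_engine.transportDispatcher.jobStarted(1)
snmp_engine.transportDispatcher.runDispatcher()
```

In a real deployment the print statement would be replaced by whatever your tooling does with traps: raising alerts, updating dashboards, or feeding a usage database.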
It wasn’t that long ago that Amazon CTO Werner Vogels routinely said that any strategy that included on-premises data centers and the public cloud was really just a path to public cloud. Yet today, AWS touts architectures that include both.

Microsoft has gone a step further with Azure Stack, so that the public cloud and data center experiences are as seamless as possible for its customers. Google, meanwhile, continues to invest in technologies that admit some services will stay in private data centers (but you might as well make nice APIs for them, while also making it easier for on-premises business logic to consume public cloud services).
IDC tells us that most companies are using more than one cloud and that cloud usage isn’t just about cost savings. Three out of every four companies are using cloud to chase additional revenue in the form of new customers, risk mitigation, IoT enablement, or time-to-market gains. Most are using multiple external cloud services.

However, especially as microservices become the dominant approach to new application development because of the iteration speed improvements they provide, it has become important to distinguish the different ways that more than one cloud can be utilized. Specifically, the differences lie in where you sit in an organization and what you are trying to optimize from that seat. Although historically we’ve used the terms interchangeably, hybrid cloud and multicloud are not the same.
If we could go back in time and start using public cloud in 2009, we’d probably be better off today. The AWS beta started in 2006 and was entirely API-driven, without either a console or a command-line interface like the ones that make interacting with the service so easy now. Three years later, it was more mature. Early adopters started to solve real problems with it, padding their resumes and bringing value to their organizations in ways that seemed impossible before.

Serverless computing in 2018 is about where cloud computing was in 2009. But what exactly does serverless mean, and what are some easy ways to get started with it?

Function-as-a-Service: making serverless architectures possible
As cool as the technology is, serverless computing is a terrible name because (spoiler alert) there are, in fact, servers under the hood. The name comes from the idea that developers no longer have to worry about the server, or even a container, as a unit of compute, because public cloud services like AWS Lambda, IBM OpenWhisk, Google Cloud Functions, and Azure Functions handle the details.
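To see what that buys you, here is a minimal sketch of a handler as it might look on AWS Lambda’s Python runtime. The event fields used are illustrative assumptions; the real payload shape depends on whatever triggers the function.

```python
import json

# Sketch: a minimal AWS Lambda-style handler. The platform provisions,
# scales, and retires the underlying servers; the developer supplies
# only this per-invocation function.
def lambda_handler(event, context):
    # 'event' carries the trigger payload (the "name" field here is an
    # assumption for the example); 'context' exposes runtime metadata
    # such as the remaining execution time.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Everything that used to be capacity planning (how many servers, how big, how patched) disappears from the developer’s view; you pay per invocation rather than per idle machine.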
For the last six years running, the most important event in cloud computing has been AWS re:Invent, where the market leader announces its latest improvements. This year, 44,000 people descended upon a very crowded set of Las Vegas venues spread across multiple hotels for breakout sessions, certification exams, a diverse expo floor, and the all-important keynotes where the newest offerings were announced.

Increasingly, the public cloud arms race is being waged on four fronts, with a fifth quickly emerging. All five had a healthy set of announcements; here are some of the highlights.

1. IaaS/PaaS
AWS started the cloud revolution with its S3 object storage service in 2006, which was quickly followed by its EC2 compute offering and a set of other IaaS products. As time went by, PaaS services like load balancers, message queues, and databases emerged as key components as well. Both classifications of services are, of course, built on physical hardware that AWS organizes into availability zones and regions.
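For a sense of how that region-and-zone organization surfaces to developers, here is a small sketch using boto3, the AWS SDK for Python, to enumerate the availability zones in one region. The region name is an arbitrary assumption for the example.

```python
import boto3

# Sketch: list the availability zones AWS exposes in a single region.
# "us-east-1" is an arbitrary choice; credentials are assumed to be
# configured in the environment.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_availability_zones()
for zone in response["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```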
Public cloud or private cloud? Amazon or Azure? There once was a time when you could go to any bar in Las Vegas after a day of trade shows and hear people debating such topics, sometimes with great passion. But what has emerged more recently is the stance that you don’t have to choose one or the other, painting yourself into the figurative corner of vendor lock-in. Instead, more and more organizations are choosing not to choose at all.

Our friends at IDC call this hybrid cloud, but that terminology implies a single application using multiple clouds. It’s more accurate to say that organizations increasingly have a multiple-cloud mindset. What does that mean? Choose the right cloud for the right job, on an application-by-application basis.
A year ago, IDC told us that 68 percent of organizations have adopted cloud for enterprise applications, and that it’s not just about cost, but about revenue increases as well. That study also says that 73 percent of respondents, who spanned both IT and line-of-business users, have a hybrid cloud strategy in place.

But when you dig further into those numbers, you’ll find that to most of those respondents, “hybrid” means “subscribing to multiple external cloud services.” This can mean some applications in a portfolio run on one cloud while others run on a different cloud. To another 47 percent of those surveyed, “hybrid” means “using a mix of public cloud services and dedicated assets,” which conjures an image of an on-site database sending data to a web or application server on a public cloud.
Competition in the 21st-century economy is fierce. Consumers are more tech-savvy than ever (we all carry around more computing power in our pockets than Neil Armstrong took to the moon), and potential customers pay attention to differentiated experiences. Increasingly, for all businesses, investing in agile software development is a way to achieve that differentiation.

In other words, in the 21st century, every business is a software business.

All that software enabling this transformation toward digital experiences has to run somewhere, and it turns out a hybrid cloud strategy gives businesses maximum choice when it comes to what runs where.
Once upon a time, in a magic, faraway land called “The 1990s,” every application had its own set of physical servers. Citizens of this land, who sometimes called themselves “developers,” feared getting fired for not having enough capacity to handle peak loads. New physical servers took months to be delivered, so developers ordered more data center hardware than they probably needed. Because it was so difficult to get new machines, developers treated them like pets, gave them names, and took great care to keep them up and running at all times. Everybody was so excited about the “Internet Bubble” and the land grab that was going on that no one seemed to care about underutilized hardware.