Moving to the cloud is supposed to reduce headaches for IT administrators, but it often has the opposite effect of increasing their workload, especially around security and visibility. Not only do they have to make sure on-premises systems adhere to regulatory compliance, but that their cloud services do as well.

Security specialist Qualys addresses these issues of security and visibility with its new app framework, CloudView, which complements existing Qualys services for security, compliance and threat intelligence with real-time monitoring of all enterprise cloud services from a single dashboard.
"Accelerated cloud adoption requires new adaptive security solutions that support fast-moving digital transformation efforts," said Philippe Courtot, Qualys CEO, in a statement. "Our new CloudView and its apps add unparalleled visibility and continuous security of all cloud workloads to provide customers complete cloud security in a single, integrated platform and drastically reduce their spend."To read this article in full or to leave a comment, please click here
For many decades, the term "random numbers" meant "pseudo-random numbers" to anyone who thought much about the issue and understood that computers simply were not equipped to produce anything that was truly random. Manufacturers did what they could, grabbing some signals from the likes of mouse movement, keyboard activity, system interrupts, and packet collisions just to get a modest sampling of random data to improve the security of their cryptographic processes. And the bad guys worked at breaking the encryption. We used longer keys and better algorithms. And the bad guys kept at it. And life went on.

But something recently changed all that. No, not yesterday or last week. It was back in November of last year that something called the Entropy Engine won an Oscar of Innovation award for collaborators Los Alamos National Laboratory and Whitewood Security. The Entropy Engine is capable of delivering as much as 350 Mbps of true random numbers—sufficient to feed an entire data center with enough random data to dramatically improve all cryptographic processes.
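For a sense of how that entropy gathering works in practice, here is a minimal Python sketch of the general pattern the article describes: a handful of weak, timing-style sources are hashed together so that no single source has to be trusted on its own. The sources and the mixing scheme below are illustrative assumptions, not how the Entropy Engine or any particular operating system actually does it.

```python
import hashlib
import random
import time

def timing_jitter_sample(rounds: int = 64) -> bytes:
    """Collect low-order bits of high-resolution timer readings.

    This stands in for the mouse/keyboard/interrupt timings the article
    mentions; real systems harvest such events in the kernel's entropy pool.
    """
    bits = 0
    for _ in range(rounds):
        bits = (bits << 1) | (time.perf_counter_ns() & 1)
        time.sleep(0)  # yield so consecutive readings are not perfectly regular
    return bits.to_bytes((rounds + 7) // 8, "big")

def mix_into_seed(samples: list[bytes]) -> bytes:
    """Hash all samples together so no single weak source has to be trusted alone."""
    h = hashlib.sha256()
    for s in samples:
        h.update(s)
    return h.digest()

if __name__ == "__main__":
    seed = mix_into_seed([timing_jitter_sample(), timing_jitter_sample()])
    rng = random.Random(seed)  # demo only; use the `secrets` module for real keys
    print(seed.hex(), rng.randrange(10**6))
```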
This is a tale of two cloud players, both old-guard IT firms with vested interests in on-premises software sales. One is making a very successful transition to the cloud era, while the other is failing badly. And it's a familiar story: Microsoft is kicking butt, and IBM is getting its butt kicked.

In its most recent quarter, Microsoft announced revenues of $23.3 billion, of which $7.43 billion came from what it calls "the Intelligent Cloud," including Azure, which grew 97 percent year over year. There was another $8.45 billion from the Productivity and Business Process business, which includes Office and Office 365. The company did not separate out installed software sales from the on-demand version, but it did say that for the first time, Office 365 is outselling the on-premises version.
Hybrid cloud architectures are currently very popular as a way for enterprises to move to the cloud without abandoning their existing data center investments. At first glance, the strategy makes sense, but there's a very real danger that the hybrid cloud's popularity will turn out to be little more than a transitional stage, potentially distracting companies from optimizing either their on-premises data centers or their migration to the cloud.

The many meanings of 'hybrid cloud'
Making things more complicated, the term "hybrid cloud" can have a number of meanings, but at root it covers any combination of traditional and cloud architectures. That can mean anything from a traditional data center shop running a couple of non-strategic, standalone applications in the cloud to complex architectures with some core applications residing on-premises and others in various cloud implementations.
With the summer solstice in the rear-view mirror, those of us north of the equator are preparing for the true summer heat to arrive in force this next month. While BBQs, boating, and your preferred beverage may be the first things on your mind, many folks in the data center world greet summer with a different attitude entirely. For starters, the period from June to August is outage season. Data from previous years shows more data centers go offline during this period than in any other three-month span of the calendar, with problems ranging from poorly performing infrastructure to full-scale outages. In addition, data center managers often fight higher energy bills as hotter external temperatures drive up the heat inside the facility.
Skilled storage pros are in demand as enterprise IT teams take on exponential data growth and strategically migrate data assets from legacy systems to more modern options.

For professionals looking for a new job or aiming to advance in their current role, a certification could potentially differentiate them from other candidates. And for hiring managers, certifications can help trim some of the risk from the recruitment process by validating, to some extent, expertise in areas such as network-attached storage, storage area networks, and storage configuration and operations management.

Vendor neutral vs. vendor specific
While Amazon is raking in the lion's share of money spent by public-cloud users, Oracle is doubling down on its hybrid-cloud strategy, appealing to enterprises that want to put data and applications behind their firewall while taking advantage of cloud pricing models and technology.

Oracle has greatly expanded the services available through its on-premises Cloud at Customer offering so that they are essentially at parity with what the company has on its public cloud. The company announced Tuesday that a broad portfolio of SaaS (software as a service) applications as well as PaaS (platform as a service) and Oracle Big Data Machine services are now available via Cloud at Customer.
As any IT person knows and likely learned the hard way, you pay for every bit of data you transmit back and forth to your cloud provider. So, what do you do if you want to put a few petabytes in the cloud? The bill could run into the thousands of dollars, and it will take days to transfer it all—even under ideal circumstances.

Amazon introduced a decidedly low-tech but practical solution two years ago called Snowball. It was a storage appliance Amazon shipped to you; you connected it to your data center network, transferred all the data at very high speeds, then sent the device back to an Amazon data center, where the data was loaded for you. It's reminiscent of the old Sneakernet, but it worked.
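Why shipping an appliance can beat the network is easy to see with a back-of-the-envelope calculation. The sketch below uses an assumed link speed and utilization for illustration; it is not based on quoted AWS pricing or Snowball specifications.

```python
# Back-of-the-envelope look at why shipping an appliance can beat the wire.
# Link speed, utilization and data volumes are illustrative assumptions,
# not quoted AWS figures or Snowball specifications.

PETABYTE_BYTES = 10**15

def transfer_days(petabytes: float, link_gbps: float, utilization: float = 0.8) -> float:
    """Days needed to push `petabytes` over a `link_gbps` link at a given utilization."""
    bits_to_move = petabytes * PETABYTE_BYTES * 8
    seconds = bits_to_move / (link_gbps * 10**9 * utilization)
    return seconds / 86_400

if __name__ == "__main__":
    for pb in (1, 3):
        for gbps in (1, 10):
            print(f"{pb} PB over {gbps} Gbps: ~{transfer_days(pb, gbps):.0f} days")
    # Even an 80%-utilized 10 Gbps link needs more than a day per 100 TB,
    # which is why loading a shipped appliance at local-network speeds wins
    # once the data set reaches petabyte scale.
```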
Intent-based networking pioneer Apstra announced today that it has entered into a distribution agreement with Tokyo Electron Device (TED) for the Japanese market.

For those who don't know Apstra, the company came to market with an intent-based networking solution for the data center in June 2016. Since then, Cisco's "Network Intuitive" launch, which was all about intent-based networking, has made intent-based networking a household term (at least for households with Cisco engineers in them). Cisco's solution is focused on the campus and Apstra's on the data center, but the two companies are working with the same vision of automating network operations using intent rather than manual processes.
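The core idea behind intent-based networking is that operators declare the desired end state and a controller continuously reconciles the live network against it. The toy sketch below illustrates that loop; the data model and generated "commands" are invented for the example and do not reflect Apstra's or Cisco's actual APIs.

```python
# Toy illustration of intent-based networking: declare the desired state,
# then compute the changes needed to bring a drifted device back in line.
# The data model and command strings are invented for this sketch.

from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    vlan: int
    ports: frozenset  # ports that must carry this VLAN

def read_actual_state(switch: dict) -> dict:
    """Placeholder: a real controller would query the device or a telemetry feed."""
    return switch  # pretend the dict *is* the live state

def reconcile(intent: Intent, switch: dict) -> list[str]:
    """Compute the commands needed to move the switch toward the declared intent."""
    actual = set(read_actual_state(switch).get(intent.vlan, []))
    missing = intent.ports - actual
    extra = actual - intent.ports
    cmds = [f"add vlan {intent.vlan} to {p}" for p in sorted(missing)]
    cmds += [f"remove vlan {intent.vlan} from {p}" for p in sorted(extra)]
    return cmds

if __name__ == "__main__":
    intent = Intent(vlan=100, ports=frozenset({"eth1", "eth2", "eth3"}))
    live = {100: ["eth1", "eth4"]}  # configuration that has drifted from intent
    print(reconcile(intent, live))
    # ['add vlan 100 to eth2', 'add vlan 100 to eth3', 'remove vlan 100 from eth4']
```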
A few weeks back, I wrote that "choosing Microsoft Windows for your organization should get you fired." It's a statement that, while certainly a bit on the inflammatory side, I completely stand by—mostly due to the known insecure nature of running Windows as a server operating system.

What I didn't do was give specific examples of what to move your existing Windows-based infrastructure to. Sure, the obvious answer for most SysAdmins is simply "migrate the servers over to Linux." But what about specific server applications that your organization might already rely upon? That's a whole other can of worms.
Amazon Web Services (AWS) and VMware are reportedly in talks about possibly teaming up to develop data center software products, according to The Information, which cited anonymous sources.

Unfortunately, the article doesn't have much, if any, detail on what that product would be. The speculation is that it might be a stack-like product, since VMware already provides what would be the base software for such a product and stacks are becoming the in thing.

Already there is OpenStack, the open-source product that runs cloud services in a data center, and Microsoft just shipped Azure Stack, its answer to OpenStack that will allow the same features of its Azure public cloud to run within a company's private data center.
IBM has introduced the 14th generation of its Z series mainframes, which still sell respectably despite repeated predictions of their demise. One of the major features being touted is the simple ability to encrypt all of the data on the mainframe in one shot. The mainframe, called IBM Z or z14, introduces a new encryption engine that for the first time will allow users to encrypt all of their data with one click—in databases, applications or cloud services—with virtually no impact on performance.

The new encryption engine is capable of running more than 12 billion encrypted transactions every day. The mainframe comes with four times more silicon for processing cryptographic algorithms than the previous generation, along with encryption-oriented upgrades to the operating system, middleware and databases.
IBM wants businesses to use its new z14 mainframe to encrypt pretty much everything -- an approach to security it calls pervasive encryption.

Encrypting everything, and restricting access to the keys, is one way to reduce the risk and impact of data breaches. It can reduce the threat surface by 92 percent, according to research commissioned by IBM.

To make such pervasive encryption viable, the z14 has four times as much silicon devoted to cryptographic accelerators as its predecessor, the z13, giving it seven times the cryptographic performance.
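As a rough illustration of the "encrypt everything, guard the keys" pattern, here is a minimal sketch using Python's cryptography package. It shows the general idea of keeping every record encrypted at rest and gating plaintext access on a key; it says nothing about the z14's hardware crypto engine or IBM's key-management tooling.

```python
# Minimal sketch of pervasive encryption as a pattern: every stored record is
# ciphertext, and only code holding the key can recover plaintext.
# This is a generic illustration, not IBM's implementation.

from cryptography.fernet import Fernet

class RecordStore:
    """Stores every record encrypted; reads require the key used at write time."""

    def __init__(self, key: bytes):
        self._cipher = Fernet(key)
        self._rows: dict[str, bytes] = {}

    def put(self, record_id: str, plaintext: bytes) -> None:
        self._rows[record_id] = self._cipher.encrypt(plaintext)

    def get(self, record_id: str) -> bytes:
        return self._cipher.decrypt(self._rows[record_id])

if __name__ == "__main__":
    key = Fernet.generate_key()          # in practice the key lives in an HSM or KMS
    store = RecordStore(key)
    store.put("cust-42", b"card=4111-XXXX; name=Ada")
    print(store.get("cust-42"))          # readable only with the key
    print(store._rows["cust-42"][:20])   # what a breach without the key would expose
```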
IBM has revamped and restructured its services division to place greater emphasis on its Watson platform and artificial intelligence (AI).

IBM has been retrenching around Watson, a series of cognitive and AI applications in one coherent platform, for the last few years as traditional sales of mainframe hardware and software continue to dry up.

Bart van den Daele, general manager of IBM Global Technology Services in Europe, told Bloomberg that the new AI-centric services will help IBM's customers minimize disruptions such as server outages or other malfunctions by predicting problems before they occur and taking corrective action, such as adding cloud capacity or rerouting network traffic around bottlenecks.
It's not the server -- it's the system. That's the word from Cisco as it rolls out its new M5-generation Unified Computing System rack and blade servers, triggered by Intel's release of the Xeon Scalable Processor platform.

Cisco's new servers use the Xeon Scalable processors -- unveiled Tuesday in New York -- to fuel performance as well as increase server density and throughput. But the value in the UCS product family lies in how the hardware works with configuration management and optimization software to make data centers run at peak efficiency, company officials say.
Hyperconvergence is an IT framework that combines storage, computing and networking into a single system in an effort to reduce data center complexity and increase scalability. Hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking, and they typically run on standard, off-the-shelf servers. Multiple nodes can be clustered together to create pools of shared compute and storage resources, designed for convenient consumption.

The use of commodity hardware, supported by a single vendor, yields an infrastructure that's designed to be more flexible and simpler to manage than traditional enterprise storage infrastructure. For IT leaders who are embarking on data center modernization projects, hyperconvergence can provide the agility of public cloud infrastructure without relinquishing control of hardware on their own premises.
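A toy model makes the pooling idea concrete: each commodity node contributes CPU and storage, and the cluster exposes the aggregate as one shared pool that grows whenever a node is added. The class and field names below are invented for illustration and are not any vendor's API.

```python
# Toy model of the resource pooling behind hyperconvergence: commodity nodes
# are clustered, and their compute and storage are presented as one shared pool.
# Names and numbers are illustrative only.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_cores: int
    storage_tb: float

class HyperconvergedCluster:
    def __init__(self):
        self.nodes: list[Node] = []

    def add_node(self, node: Node) -> None:
        """Scaling out is just adding another off-the-shelf node to the cluster."""
        self.nodes.append(node)

    def pooled_capacity(self) -> tuple[int, float]:
        """The aggregate compute and storage the cluster exposes to workloads."""
        return (sum(n.cpu_cores for n in self.nodes),
                sum(n.storage_tb for n in self.nodes))

if __name__ == "__main__":
    cluster = HyperconvergedCluster()
    for i in range(3):
        cluster.add_node(Node(name=f"node-{i}", cpu_cores=32, storage_tb=20.0))
    cores, tb = cluster.pooled_capacity()
    print(f"{len(cluster.nodes)} nodes -> {cores} cores, {tb:.0f} TB pooled")
```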
Make no mistake: Intel's Xeon Processor Scalable Family, based on the company's Skylake architecture, is about much more than revving up CPU performance. The new processor line is essentially a platform for computing, memory and storage designed to let data centers -- groaning under the weight of cloud traffic, ever-expanding databases and machine-learning data sets -- optimize workloads and curb operational costs.

In order to expand the market for its silicon and maintain its de facto processor monopoly in the data center, Intel is even starting to encroach on server-maker turf by offering what it calls Select Solutions, generally referred to in the industry as engineered systems -- packages of hardware and software tuned to specific applications.