Archive

Category Archives for "Network World Data Center"

IDG Contributor Network: What you’ll need when the big Internet of Things breakout occurs

The Internet of Things (IoT) sometimes has the feel of a trend that’s forever on the cusp of a huge breakout. Figures fly around about the projected size of the IoT, and they’re always massive (such as the 50 billion devices Cisco predicted by 2020). But the number of things in the IoT is already counted in the 8 billion to 15 billion range. So, shouldn’t we be seeing more from the IoT by now? Based on what leaders are saying in a survey commissioned by Verizon, we soon will.

Former Intel CEO Paul Otellini remembered

Former Intel CEO Paul Otellini died in his sleep on Monday at the age of 66. His tenure was marked by a significant comeback for the company as it dealt with a number of business and technological challenges, making Intel a massive player in the data center even as it fumbled its opportunity in the mobile market. Unlike his predecessors, Otellini was not an engineer but held an MBA from the University of California, Berkeley. He joined Intel in 1974 right out of Berkeley and spent his entire career there, eventually becoming chief operating officer in 2002 and CEO in 2005, a position he held until his retirement in 2012. "We are deeply saddened by Paul’s passing,” CEO Brian Krzanich said in a statement. “He was the relentless voice of the customer in a sea of engineers, and he taught us that we only win when we put the customer first."

Server downtime is bad. Server slowness is worse

I've worked at my fair share of large corporations in my life, and like most of you, I've experienced more network and server outages than I can shake a stick at. Sometimes these outages are small and only mildly disruptive (a file server going down for a few minutes). Other times, an outage can cause massive, widespread work stoppages (such as when an email server goes offline for multiple hours — or days). These outages are, at least for the company, bad things. If your employees can no longer communicate, work all but grinds to a halt. One hour of total downtime multiplied by the average hourly pay of your employees can equal a pretty big amount of lost moolah.
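The article's back-of-the-envelope math can be sketched in a few lines. The figures below are made up for illustration; the formula is simply hours of downtime times idled headcount times average hourly pay.

```python
def downtime_cost(hours_down, employees, avg_hourly_pay):
    """Rough cost of an outage in lost wages alone (ignores lost
    revenue, SLA penalties, and recovery effort)."""
    return hours_down * employees * avg_hourly_pay

# One hour of total downtime for 500 employees averaging $40/hour:
print(downtime_cost(1, 500, 40))  # -> 20000
```

Even this deliberately conservative estimate, which counts only wages, makes the point that a multi-hour outage gets expensive fast.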

Will machine learning save the enterprise server business?

Nvidia and server makers Dell EMC, HPE, IBM and Supermicro have announced enterprise servers featuring Nvidia’s Tesla V100 GPU. The question is, can servers designed for machine learning stem the erosion of enterprise server purchases as companies shift to PaaS, IaaS and cloud services? The recent introduction of hardened industrial servers for IoT may indicate that server makers are looking for growth in vertical markets. There are very compelling reasons for moving enterprise workloads to Amazon, Google, IBM and other hosted infrastructures. The scalability of on-demand resources, operating efficiency at cloud scale and security are just three of many. For instance, Google has 90 engineers working on security alone, while most enterprises are understaffed.

Google, Scale Computing partner for easier hybrid cloud deployment

Google has partnered with Scale Computing, developer of infrastructure software for hyper-converged systems, to make it easier to deploy Google Cloud Platform as a backup for your own data center. The two companies have created a platform called Cloud Unity, which integrates Scale’s HC3 software environment with Google Compute Engine. HC3 is cluster software that merges server, storage and virtualization into a single appliance for easier converged infrastructure. With HC3, you can build a cluster on Google's infrastructure instead of buying your own, thus creating a backup of your data center in Google’s data centers. Cloud Unity creates an SD-WAN connection to your existing Scale environment, so the Google-hosted cloud version of your data center appears as just another cluster on the same LAN. It uses an encrypted VXLAN tunnel between your site and Google’s data center.

Few IT departments engage in future planning

It’s an old cliché: if you fail to plan, plan to fail. That seems to apply to a new study by CompTIA, which finds that only 34 percent of businesses surveyed plan their IT infrastructure beyond one year. The reasons are legitimate: the disruption brought about by the migration to cloud computing and Internet of Things (IoT) deployments. Both are seriously disruptive and can make long-term planning a challenge. To stay flexible as they undergo a digital transformation, businesses are reluctant to plan more than one year out. The report, titled "Planning a Modern IT Architecture," also found some of the usual problems dogging IT shops. Four in 10 companies said they lacked the budget for heavy investment in new architecture, and one-third said they don’t have the knowledge of emerging technologies and new trends to formulate an integration plan.

America’s first exascale supercomputer set for 2021 debut

The next step up in supercomputer performance is the exaflop, and there is something of an arms race between nations to get there first, although it’s far more benign than the nuclear arms race of the last century. If anything, it’s beneficial, because these monster machines will enable all kinds of advanced scientific research. An exascale computer is capable of one exaflop: one quintillion (1,000,000,000,000,000,000) floating point operations per second. That’s about a million times more powerful than the fastest consumer laptops. China has said it will have an exascale computer by 2020, one year sooner than the U.S.
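A quick sanity check on the scale involved, assuming (my figure, not the article's) that a high-end consumer laptop sustains on the order of one teraflop:

```python
EXAFLOP = 10**18        # floating point operations per second
LAPTOP = 10**12         # ~1 TFLOPS, an assumed high-end laptop figure

# How many laptops' worth of compute in one exascale machine?
print(EXAFLOP // LAPTOP)  # -> 1000000
```

Roughly a million laptops, which is why exascale is treated as a generational milestone rather than an incremental upgrade.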

Nvidia gets broad support for cutting-edge Volta GPUs in the data center

Data center workloads for AI, graphics rendering, high-performance computing and business intelligence are getting a boost as a Who's Who of the world's biggest server makers and cloud providers snap up Nvidia's Volta-based Tesla V100 GPU accelerators. Nvidia is rallying its entire ecosystem, including software makers, around the new Tesla V100, effectively consolidating its dominance in GPUs for data centers. IBM, HPE, Dell EMC and Supermicro announced at the Strata Data Conference in New York Wednesday that they are or will be using the GPUs, which are now shipping. Earlier this week at Nvidia's GPU Technology Conference in Beijing, Lenovo, Huawei and Inspur said they would use Nvidia's HGX reference architecture to offer Volta-based systems for hyperscale data centers.

Nvidia accelerates the path to AI for IoT, hyperscale data centers

It’s safe to say the Internet of Things (IoT) era has arrived, as we live in a world where things are being connected at a pace never seen before. Cars, video cameras, parking meters, building facilities and anything else one can think of are being connected to the internet, generating massive quantities of data. The question is how one interprets that data and understands what it means. Clearly, trying to process this much data manually doesn’t work, which is why most web-scale companies have embraced artificial intelligence (AI) as a way to create new services that can leverage the data. These include speech recognition, natural language processing, real-time translation, predictive services and contextual recommendations. Every major cloud provider and many large enterprises have AI initiatives underway.

A lack of cloud skills could cost companies money

A poll from Europe finds that most IT decision makers say their organization is losing out on revenue because their firm lacks specific cloud expertise. The report, compiled by cloud hosting provider Rackspace and the London School of Economics, polled 950 IT decision makers and 950 IT pros and found that nearly three quarters of IT decision makers (71 percent) believed their organizations have lost revenue due to a lack of cloud expertise. On average, this accounts for 5 percent of total global revenue, no small amount of money. The survey also found that 65 percent believed they could bring greater innovation to their company with “the right cloud insight,” and 85 percent said greater expertise within their organization would help them recoup the return on their cloud investment.

Your next servers might be a no-name brand

For years, white box PCs have accounted for a significant chunk of desktop sales. It was the same wherever I went: small mom-and-pop shops built their own PCs using components shipped in from Taiwan, and if there was a logo on the box, it was for the PC store (affectionately referred to as a “screwdriver shop”) that built the thing. On the server side, though, it remained a name-brand business. Data centers were filled with racks of servers that bore the logos of IBM (now Lenovo), Dell and HP. However, that’s changing. In its latest sales figures, IDC says ODM sales now account for the largest group of server sales, surpassing HPE. In the second calendar quarter of 2017, worldwide server sales increased 6.3 percent year over year to $15.7 billion, thanks in part to new Intel Skylake processors.

How a data center works, today and tomorrow

A data center is a physical facility that enterprises use to house their business-critical applications and information, so as data centers evolve, it’s important to think long-term about how to maintain their reliability and security.

Data center components

Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements such as routers, switches, security devices, storage systems, servers, application delivery controllers and more. These are the components IT needs to store and manage the most critical systems that are vital to the continuous operations of a company. Because of this, the reliability, efficiency, security and constant evolution of a data center are typically a top priority.

IDG Contributor Network: What’s your problem? Survey uncovers top sources of IT pain

It’s hardly surprising that IT professionals have their hands full in the age of IoT (Internet of Things) and big data. Supporting rapidly growing data volumes, new data types and many more data sources is making it harder than ever for IT to meet service level agreements (SLAs) while keeping spending in check. The complexity IT manages is clear in the results of a recent Storage Census of over 300 IT professionals that my company, Primary Data, conducted at VMworld 2017. The survey showcased the conflicting pressures currently faced by IT leaders. Those surveyed cited delivering performance, executing data migrations, meeting expectations with existing budgets and integrating the cloud into their infrastructure among the biggest challenges facing their departments today. Let’s examine the factors that contribute to these challenges and how IT can solve them.

Cisco Intersight brings cloud management to compute

I don’t think anyone would argue with the premise that data centers have grown significantly more complex over the past decade. Data centers used to be orderly, as each application had its own dedicated hardware and software. This was highly inefficient, but most data centers could be managed with a handful of people. Then something changed. Businesses were driven to improve the utilization of infrastructure and increase their level of agility, and along came a number of technologies such as virtualization, containers and the cloud. Also, organizations started to embrace the concept of DevOps, which necessitates a level of dynamism and speed never before seen in data centers.

Arista reaches for the hybrid clouds

Many years ago, when Arista Networks was in its infancy, its charismatic and sometimes controversial (at least to the folks at Cisco) CEO talked about how the company’s software-first approach would disrupt the networking industry. Just a few years later, the company stands as a $1.7 billion revenue company with a dominant position in the webscale industry and a market cap of over $13 billion, so clearly CEO Jayshree Ullal’s prophecy came true. Arista’s software rigor enabled the company to quickly jump into verticals where low latency and high performance mattered. And because of its software prowess, the company has been able to expand its addressable market to serve the networking needs of dense virtualization and containerized environments, as well as private cloud deployments, and to quickly adopt the latest and greatest silicon.

Report confirms on-premises data center spending declined

Just a month ago, research indicated that on-premises data center investments were dropping in priority as companies moved to the cloud. Now a second report confirms the suspicion that companies are de-emphasizing their on-premises data centers in favor of the cloud. The numbers come from Synergy Research, which shows that spending on traditional, non-cloud data center hardware and software dropped 18 percent between the second quarters of 2015 and 2017. During that same period, public cloud spending grew 35 percent. The overall market for data center equipment grew by 5 percent to a total of more than $30 billion.

IDG Contributor Network: What is a data fabric and why should you care?

What is a data fabric? The concept is emerging as an approach to help organizations better deal with fast-growing data, ever-changing application requirements and distributed processing needs. The term refers to technology that creates a converged platform supporting the storage, processing, analysis and management of disparate data. Data that is currently maintained in files, database tables, data streams, objects, images, sensor readings and even container-based applications can all be accessed through a number of standard interfaces. A data fabric makes it possible for applications and tools to access data through many interfaces, such as NFS (Network File System), POSIX (portable operating system interface), a REST API (representational state transfer), HDFS (Hadoop Distributed File System), ODBC (open database connectivity) and Apache Kafka for real-time streaming data. A data fabric must also be capable of being extended to support other standards as they grow in importance.
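The core idea, one store of record reachable through several standard-style interfaces, can be sketched in miniature. Everything below is illustrative: the `FabricStore` class and its record names are invented for this sketch and come from no real data-fabric product.

```python
import io
import json

class FabricStore:
    """Toy 'fabric': a single source of truth exposed two ways."""
    def __init__(self):
        self._records = {"sensor-42": {"temp_c": 21.5}}

    # File-style (POSIX/NFS-like) view: the record as a readable stream.
    def open(self, key):
        return io.StringIO(json.dumps(self._records[key]))

    # REST-like view: the record as a (status, body) response pair.
    def get(self, path):
        key = path.strip("/").split("/")[-1]
        if key in self._records:
            return 200, json.dumps(self._records[key])
        return 404, ""

store = FabricStore()
with store.open("sensor-42") as f:          # file-style access
    print(json.load(f)["temp_c"])           # -> 21.5
status, body = store.get("/data/sensor-42") # REST-style access
print(status, json.loads(body)["temp_c"])   # -> 200 21.5
```

The point of the sketch is that both callers see the same data without either one owning a private copy, which is what distinguishes a fabric from a pile of per-application silos.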

IDG Contributor Network: Measuring the economic value of data

Today, there are new and changing uses of data in the digital economy. The big questions, however, are: Who is winning with data? Where is this data being kept? What makes new data different? When should data be kept, moved, deleted or transformed? How should data be valued? And why is data so much more important than it used to be? Data is one of the most important assets any company has, yet it’s surprising that we don’t put the same rigor into understanding and measuring the value of our data that we put into more traditional physical assets. Furthermore, should data be depreciated as an asset? Or does it appreciate, like art? Counterintuitively, the answer to both is “yes.”

IDG Contributor Network: The power of machine learning reaches data management

Machine learning is a hot topic across the technology spectrum today. From self-driving cars, to catching nefarious content in the fight against terrorism, to apps that automatically retouch photos before you even take them, it is popping up just about everywhere. Each innovation creates a new wave of business opportunity while simplifying and automating tasks that are generally beyond what we human beings can process at once, or even in a lifetime.

VMware adds whitelist security to the hypervisor

Overlooked in the hoopla around the VMworld conference was the announced availability of AppDefense, a new product that lets companies restrict the types of operations applications are allowed to run on virtualized servers. AppDefense works with the VMware hypervisor and can also connect to third-party provisioning, configuration management and workflow automation platforms. It can send out alerts, quarantine apps, shut them down and even restore a VM from an image, all triggered when AppDefense catches unusual behavior, such as an attempt to modify the kernel or communicate with an unrecognized remote server. VMware already has some security features built into its NSX and vSAN products, but those center on networking and storage; AppDefense secures the core virtual machines in vSphere itself. It does this using behavior-based whitelisting, which is hard to do on desktops because they run a lot of apps. But on a server, especially a virtual server, it’s a much easier proposition. In some cases, virtual servers run only one or two apps, so shutting out everything else is simple.
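Behavior-based whitelisting in general reduces to a simple rule: anything not on the known-good list triggers a response. The sketch below shows the technique in the abstract; the manifest, VM names and operation labels are invented for illustration and are not AppDefense's actual data model.

```python
# Learned or declared "known good" behavior for one virtual server.
# A single-purpose VM has a tiny manifest, which is why this approach
# is far more tractable on servers than on desktops.
WHITELIST = {
    "web-vm": {
        ("exec", "/usr/sbin/nginx"),
        ("connect", "db.internal:5432"),
    }
}

def check(vm, action, target, alerts):
    """Allow whitelisted operations; flag anything else for response
    (alert, quarantine, or restore the VM from a clean image)."""
    if (action, target) in WHITELIST.get(vm, set()):
        return "allow"
    alerts.append((vm, action, target))
    return "quarantine"

alerts = []
print(check("web-vm", "exec", "/usr/sbin/nginx", alerts))      # -> allow
print(check("web-vm", "connect", "evil.example:443", alerts))  # -> quarantine
```

Note the inversion relative to blacklist-style antivirus: nothing needs to be recognized as malicious, only as unexpected.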
