Last December, I wrote a post looking at “What to expect from Cisco in 2017.” It’s a foregone conclusion that Cisco will make a number of acquisitions every year, so that’s not hard to predict. The tough part is guessing the potential targets. One of the easier acquisitions to predict was Springpath, because Cisco’s HyperFlex hyper-converged infrastructure (HCI) solution is an OEM of Springpath. The two companies have worked very closely together since Springpath was founded in 2012. The product has been extremely well received by customers and channel partners, resulting in a little more than 1,800 customers to date. In fact, nearly every customer and channel partner wanted the companies to join.
Software-defined networking (SDN) is defined by a decoupling of the control and packet-forwarding planes in a network, an architecture that can slash operational costs and speed the time it takes to make changes or provision new services. Since all the intelligence resides in software – not baked into monolithic specialty hardware – customers can replace traditional switches with commodity devices to save on capital costs. SDN also makes it possible for the network to interface with applications directly via APIs to improve security and application performance. So what is SDN?
Traditional networks are made up of devices with integrated control and data-forwarding planes, so each box needs to be configured and managed independently. Because of this, even simple network changes can take weeks or even months to complete, since the change has to be made on each device individually. That was acceptable when network changes were typically made independently of business changes.
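For a rough sense of what that API-driven model looks like in practice, here is a minimal, hypothetical sketch of an application pushing one policy change to an SDN controller’s northbound REST API rather than reconfiguring each switch by hand. The controller address, endpoint path, credentials and flow schema are all assumed for illustration and do not correspond to any particular product.

```python
# Hypothetical sketch: program the network once through an SDN controller's
# northbound REST API; the controller translates the intent into forwarding
# rules on every affected switch. All names and endpoints are illustrative.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"  # assumed controller address
AUTH = ("admin", "admin")                               # placeholder credentials

# One policy change, expressed as data instead of per-box CLI commands.
flow_rule = {
    "name": "steer-web-traffic-through-firewall",
    "match": {"vlan_id": 120, "ip_proto": "tcp", "tcp_dst": 443},
    "action": "forward",
    "next_hop": "firewall-cluster-1",
    "priority": 500,
}

resp = requests.post(
    f"{CONTROLLER}/api/v1/flows",  # hypothetical endpoint
    json=flow_rule,
    auth=AUTH,
    timeout=10,
)
resp.raise_for_status()
print("Flow rule accepted:", resp.json())
```

The point of the sketch is the workflow, not the schema: the change is declared once against the controller, which fans it out to the fabric, collapsing the device-by-device change process described above.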
Hypothetical: You need to set up the IT infrastructure (email, file sharing, etc.) for a new company. No restrictions. No legacy application support necessary. How would you do it? What would that ideal IT infrastructure look like? I decided to sit down, think through my ideal setup – based on quite a few years as a vice president of engineering at various companies – and document it here. Maybe you’ll find my choices useful; maybe you’ll think I’m crazy. Either way, these are good things to consider for any organization. Run services on your own servers
The first thing I’m going to decide on, right up front, is to self-host as many services as I possibly can.
Windows Server 2016 has been out for a year now, the “we’ll wait for the first service pack” delay is behind us, and there are clear features in Windows 2016 that enterprises are adopting and integrating into their network environment. Here's a look at five of those features. Windows Server 2016 as the base server operating system
This isn't a specific “feature” of Windows 2016, but there's a general acceptance among enterprises deploying Windows Server applications that they should be installed on the latest Windows Server 2016 operating system.
Microsoft pulled off a big get with its acquisition of Cycle Computing, the developer of a suite of high-performance computing (HPC) services called CycleCloud for cloud orchestration, provisioning and data management in the cloud. You may not know its name, but Cycle Computing is actually a major player. In 2012, it helped Amazon create the first massive cloud-based supercomputer, spanning 51,000 cores. For just one hour of run time, the bill was $5,000.
In 2013, Cycle Computing hit its biggest cloud run, creating a cluster of 156,314 cores with a theoretical peak speed of 1.21 petaflops that ran for 18 hours and spanned Amazon data centers around the world. The bill for that monstrosity was $33,000.
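To put those bills in context, a quick back-of-the-envelope calculation, using only the core counts, run times and prices quoted above, shows roughly what each run cost per core-hour:

```python
# Derived per-core-hour cost of the two Cycle Computing runs cited above.
# Core counts, durations and bills come from the article; the unit costs
# are simply computed from them.
runs = {
    "2012 Amazon run": {"cores": 51_000, "hours": 1, "bill_usd": 5_000},
    "2013 record run": {"cores": 156_314, "hours": 18, "bill_usd": 33_000},
}

for name, r in runs.items():
    core_hours = r["cores"] * r["hours"]
    cost = r["bill_usd"] / core_hours
    print(f"{name}: {core_hours:,} core-hours at ~${cost:.4f} per core-hour")

# 2012 Amazon run: 51,000 core-hours at ~$0.0980 per core-hour
# 2013 record run: 2,813,652 core-hours at ~$0.0117 per core-hour
```

In other words, the effective compute price fell by nearly an order of magnitude between the two runs, which helps explain why cloud-based HPC orchestration of the kind CycleCloud provides became attractive.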
Data center provider Equinix is making a big bet on fuel cells to power its facilities by installing natural gas-powered fuel cells at 12 of its U.S. data centers. It’s part of a push for the firm to be 100% reliant on renewable fuels, and it could set an example for other data centers in power management. Equinix uses fuel cells developed by Bloom Energy, a leader in the data center energy market that has been profiled by 60 Minutes and whose giant “batteries” are installed at data centers run by eBay, Apple, NTT, CenturyLink and Verizon.
Interconnection is the fuel of digital business, and organizations must understand its power if they hope to handle the global digital economy’s increasing demands. For such a pivotal business enabler, interconnection has long been tough to quantify. But new research from the company I work for, Equinix, does just that by looking at installed interconnection bandwidth capacity and projected growth. The Global Interconnection Index, published by Equinix and sourced from multiple analyst reports, is an industry-first look at how interconnection bandwidth is shaping and scaling the digital world. It aims to give digital business the insight needed to prepare for tomorrow.
Oracle is now offering its Exadata Cloud service on bare-metal servers it provides through its data centers. The company launched Exadata Cloud two years ago to offer its database as a cloud service and has upgraded it considerably to compete with Amazon Web Services (AWS) and Microsoft Azure. Exadata Cloud is basically the cloud version of the Exadata Database Machine, which features Oracle’s database software, servers, storage and network connectivity all integrated on custom hardware the company inherited from its acquisition of Sun Microsystems in 2010.
The upgrade to the Exadata Cloud infrastructure on bare metal means customers can now get their own dedicated database appliance in the cloud instead of running the database in a virtual machine, which is how most cloud services are offered. Bare metal means dedicated hardware, which should increase performance.
Imanis Data, formerly known as Talena, released version 3.0 of its eponymous backup and recovery platform last week, with an emphasis on supporting the very large datasets being generated in the era of big data. The company notes that three out of four companies have experienced a data loss over the last year, which carries an average cost of $900,000 and weeks of downtime. With Imanis Data 3.0, the company claims its architecture backs up, recovers and replicates terabyte- and petabyte-sized data sets up to 10 times faster than any other solution on the market, cutting the costly days or weeks of downtime that follow a data loss to minutes or hours and reducing secondary storage costs by up to 80%.
In any enterprise – large or small – bottom-line ROI is arguably the biggest factor driving business decisions. Whether switching from PCs to Macs, investing in new travel and expense management software or integrating data center solutions, every business unit, from HR to sales and IT, must prove the value that new processes and offerings will have on the enterprise’s bottom line. The problem? Getting buy-in from all business groups, from the C-suite on down, can be a serious undertaking, one that often stalls or halts implementation altogether. When it comes to the data center, no one knows this better than data center managers, who must work tirelessly with the C-suite to showcase the value and benefits of next-generation data center software solutions.
Hewlett Packard Enterprise is preparing to send a supercomputer where no supercomputer has gone before: into orbit. HPE and NASA have worked on what HPE calls the Spaceborne Computer for the better part of a year. It uses commercial off-the-shelf components, meaning it’s a fairly generic supercomputer. It’s decent; Ars Technica quotes HPE as saying it’s a 1-teraflop machine, but that wouldn’t get it onto the Top 500 list by a mile. The Spaceborne Computer is built on HPE's Apollo 40 system, a high-density server rack that houses compute, storage and networking in one case, much like a hyperconverged system. HPE Apollo is typically used for data analytics and high-performance computing (HPC).
Data center REITs have returned more than 100 percent over the last two years. Crown Castle recently purchased Lightower for one of the highest per-fiber-mile prices ever. What is driving the growth? Isn’t everyone using the cloud? The data center is still the safest place to store and process your data. The data center is where the cloud resides. Cross-connect capabilities like connections to financial markets, exchanges, Bloomberg, Reuters, and even cloud providers like AWS and Azure make data centers more functional for companies looking to take advantage of their geography. Geography impacts latency because light can only cover a given distance in a fixed amount of time, a limit set by physics. Tony Soprano said it best when talking about real estate with his son Anthony Junior: “Buy land AJ, ‘cause God ain’t making any more of it.” That couldn’t be more true of data centers. Areas like Ashburn, Virginia, are data center hot spots because of low taxes, bandwidth availability, and proximity to Washington, D.C. and New York.
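To make the physics point concrete, here is a small, illustrative propagation-delay calculation. It assumes signals in optical fiber travel at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum), and the distances are rough straight-line figures; real fiber routes are longer, and switching and queuing add further delay.

```python
# Best-case propagation delay over a few illustrative routes, assuming
# ~200,000 km/s in fiber. Distances are approximate straight-line figures,
# so real-world latency will be higher.
SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s expressed in km per millisecond

routes_km = {
    "Ashburn, VA <-> New York": 400,
    "Ashburn, VA <-> Chicago": 950,
    "New York <-> London": 5_600,
}

for route, km in routes_km.items():
    one_way_ms = km / SPEED_IN_FIBER_KM_PER_MS
    print(f"{route}: ~{one_way_ms:.1f} ms one way, ~{2 * one_way_ms:.1f} ms round trip")
```

Even this best case shows why proximity matters: a workload sitting near its exchange, counterparty or cloud on-ramp in a place like Ashburn saves milliseconds that no amount of software tuning can recover.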