Author Archives: Andy Patrizio

On-premises data center spending drops in priority

A survey has found that on-premises data centers are the lowest priority for investment by IT organizations, a reflection of the growing impact of cloud infrastructure and services. For Computer Economics’ annual IT Spending and Staffing Benchmarks report, the organization surveyed more than 200 IT organizations over the first half of this year. The top-line findings show that IT organizations continue on a path of “steady but modest growth in operational budgets while capital budgets and hiring are essentially flat.” That’s not to say data center spending will be cut; it just won’t get priority for increased spending, and its share of the budget is shrinking. IT capital spending now accounts for only 18% of total IT spending, down from 24% in 2013.

Docker brings containers to mainframes

Docker announced the first major update to its flagship Docker Enterprise Edition 17.06, with a clear eye to on-premises data centers and DevOps. Docker rolled out the rebranded Docker EE in March, based on the products previously known as Docker Commercially Supported and Docker Datacenter. With that launch, Docker added the ability to port legacy apps to containers without having to modify the code. The major new feature of this update, whose version number seems to borrow from Microsoft’s year/month naming convention for Windows 10 updates, is support for IBM z Systems mainframes running Linux. Containerized apps can now run on a mainframe, with all of the scale and uptime reliability a mainframe brings, and with no code modifications necessary.
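
The “no modifications” claim rests on multi-architecture images: the same image name resolves to an s390x build on a mainframe and an x86_64 build elsewhere. Here is a minimal sketch of that idea, assuming a host with the Docker daemon running and the Docker SDK for Python installed (pip install docker):

    import docker

    # Connect to the local Docker daemon using standard environment settings.
    client = docker.from_env()

    # "uname -m" prints the machine architecture: x86_64 on a typical server,
    # s390x on Linux running on an IBM z Systems mainframe. The image name and
    # this code are identical on both; Docker pulls the matching image variant.
    output = client.containers.run("ubuntu", "uname -m", remove=True)
    print(output.decode().strip())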

Microsoft acquires cloud-based HPC developer

Microsoft pulled off a big get with its acquisition of Cycle Computing, developer of CycleCloud, a suite of high-performance computing (HPC) services for cloud orchestration, provisioning and data management. You may not know its name, but Cycle Computing is a major player. In 2012, it helped Amazon create the first massive cloud-based supercomputer, spanning 51,000 cores; one hour of run time cost $5,000. In 2013, Cycle Computing hit its biggest cloud run, creating a cluster of 156,314 cores with a theoretical peak speed of 1.21 petaflops that ran for 18 hours and spanned Amazon data centers around the world. The bill for that monstrosity was $33,000.
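
Taken together, those figures make for a telling back-of-the-envelope comparison; the arithmetic below uses only the numbers quoted above:

    # Cost per core-hour for the two Cycle Computing runs described above.
    runs = {
        "2012 run": {"cores": 51_000, "hours": 1, "bill": 5_000},
        "2013 run": {"cores": 156_314, "hours": 18, "bill": 33_000},
    }

    for name, r in runs.items():
        core_hours = r["cores"] * r["hours"]
        print(f"{name}: {core_hours:,} core-hours at "
              f"${r['bill'] / core_hours:.4f} per core-hour")

    # 2012 run: 51,000 core-hours at $0.0980 per core-hour
    # 2013 run: 2,813,652 core-hours at $0.0117 per core-hour

That is roughly an eightfold drop in effective price in a single year, which goes a long way toward explaining why cloud HPC caught Microsoft’s eye.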

Data center provider Equinix bets big on fuel cells

Data center provider Equinix is making a big bet on fuel cells to power its facilities, installing natural gas-powered fuel cells at 12 of its U.S. data centers. It’s part of a push for the firm to be 100% reliant on renewable fuels, and it could set an example for other data centers in power management. Equinix uses fuel cells developed by Bloom Energy, a leader in the data center energy market that has been profiled by 60 Minutes and whose giant “batteries” are installed at data centers run by eBay, Apple, NTT, CenturyLink and Verizon.

Oracle expands database offering to its cloud services

Oracle is now offering its Exadata Cloud service on bare-metal servers provided through its data centers. The company launched Exadata Cloud two years ago to offer its database as a cloud service and has upgraded it considerably to compete with Amazon Web Services (AWS) and Microsoft Azure. Exadata Cloud is basically the cloud version of the Exadata Database Machine, which integrates Oracle’s database software, servers, storage and network connectivity on custom hardware the company inherited from its 2010 acquisition of Sun Microsystems. Moving the Exadata Cloud infrastructure to bare metal means customers can now get their own dedicated database appliance in the cloud instead of running the database in a virtual machine, which is how most cloud services are delivered. Bare metal means dedicated hardware, which should increase performance.
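
From the customer’s side, the bare-metal distinction should be invisible at the application level; the database presents an ordinary Oracle endpoint either way. A sketch using the cx_Oracle driver (pip install cx_Oracle); the host, service and credentials below are placeholders, not real Oracle endpoints:

    import cx_Oracle

    # Placeholder connection details -- substitute your own cloud endpoint.
    dsn = cx_Oracle.makedsn("exadata.example.com", 1521, service_name="PDB1")
    conn = cx_Oracle.connect(user="app_user", password="app_password", dsn=dsn)

    # The query is ordinary SQL; nothing in the client changes because the
    # service runs on bare metal rather than in a virtual machine.
    cur = conn.cursor()
    cur.execute("SELECT banner FROM v$version")
    for (banner,) in cur:
        print(banner)
    conn.close()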

Imanis Data focuses on big data backup and recovery

Imanis Data, formerly known as Talena, released version 3.0 of its eponymous backup and recovery platform last week, with an emphasis on the very large datasets generated in the era of big data. The company notes that three out of four companies have experienced a data loss in the past year, at an average cost of $900,000 and weeks of downtime. With Imanis Data 3.0, the company claims its architecture backs up, recovers and replicates terabyte- and petabyte-sized data sets up to 10 times faster than any other solution on the market, cutting days or weeks of costly downtime to minutes or hours and reducing secondary storage costs by up to 80%.

HPE looks to put a supercomputer in space

Hewlett Packard Enterprise is preparing to send a supercomputer where no supercomputer has gone before: into orbit. HPE and NASA have worked on what HPE calls the Spaceborne Computer for the better part of a year. It uses commercial off-the-shelf components, making it a fairly generic supercomputer. It’s decent; Ars Technica quotes HPE as saying it’s a 1-teraflop machine, but that wouldn’t get it onto the Top 500 list by a mile. The Spaceborne Computer is built on HPE’s Apollo 40 system, a high-density server rack that houses compute, storage and networking in one enclosure, much like a hyperconverged system. HPE Apollo is typically used for data analytics and high-performance computing (HPC).
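
To put “by a mile” in rough numbers: the slowest system on the June 2017 Top 500 list weighed in at around 430 teraflops (an approximate figure from that list, not from the article), so:

    # Ratio of the approximate Top 500 entry point to the Spaceborne Computer.
    spaceborne_tflops = 1.0       # HPE's figure, per the article
    top500_entry_tflops = 430.0   # approximate slowest system, June 2017 list

    shortfall = top500_entry_tflops / spaceborne_tflops
    print(f"Spaceborne would need ~{shortfall:,.0f}x more compute to make the list.")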
