Amazon Web Services has announced that it is offering what it calls bare-metal Macs in its cloud, although Amazon’s definition of “bare metal” doesn’t exactly jibe with the generally accepted one.
“Bare metal” typically means no operating system. It’s very popular as a means of what is known as “lift and shift,” where a company takes its custom operating environment, starting with the operating system, libraries, apps, databases, and so on, and moves it from on-premises to the cloud without modifying its software stack.
Here, Amazon is offering Macs running macOS 10.14 (Mojave) or 10.15 (Catalina) on an eighth-generation, six-core Intel Core i7 (Coffee Lake) processor running at 3.2 GHz. (Amusingly, the instances run on Mac minis. What I wouldn’t give to see a data center with racks full of Mac minis.)
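The announcement doesn't walk through setup, but a minimal sketch of the flow with the AWS CLI looks like the following. EC2 Mac instances run on Dedicated Hosts, so a host is allocated first; the Availability Zone and AMI ID below are placeholders, not values from the announcement.

  # Allocate a Dedicated Host sized for a Mac instance (AZ is a placeholder)
  aws ec2 allocate-hosts --instance-type mac1.metal \
    --availability-zone us-east-1a --quantity 1

  # Launch a macOS instance onto it; the AMI ID is a placeholder for a
  # macOS Mojave or Catalina AMI available to your account
  aws ec2 run-instances --instance-type mac1.metal \
    --image-id ami-0123456789abcdef0 \
    --placement Tenancy=host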
SUSE’s acquisition of Rancher Labs puts the Germany-based open-source software company in a much stronger position to offer flexible, edge-based services to its customers, according to an analyst at IDC.
The deal, which was originally announced this summer, essentially makes Rancher Labs into SUSE’s containerization “innovation center,” said IDC research director Gary Chen. Any customer working on digital transformation and rapid development is likely to appreciate the improved support for containerization, letting workloads function on whatever hardware is handy and communicate across different arrangements of edge, cloud, and local computing.
Terms of the deal were not publicly disclosed, but a CNBC report published after the initial announcement quoted sources familiar with the deal as saying that SUSE is paying between $600 million and $700 million.
Schneider Electric is better known in its native Europe than in the U.S., but it's looking to change that with a $40 million project to upgrade its U.S. manufacturing resources. The company, which specializes in energy management and automation technologies for data centers, shared its plans for U.S. expansion at its Innovation Summit North America 2020, held virtually this year.
Schneider also unveiled a new set of ruggedized data-center enclosures targeting the Industrial Internet of Things (IIoT). Designed for indoor industrial environments, the EcoStruxure Micro Data Center R-Series offers a fast and simple way to deploy and manage edge computing infrastructure in a place like a factory floor.
Next Pathway has announced the next generation of its cloud-migration planning technology, called Crawler360, which helps enterprises shift legacy data warehouses and data lakes to the cloud by telling them exactly how to cost, size, and start the journey.
Data warehouses and especially data lakes can get out of control, with poorly managed, siloed data and a mix of structured and unstructured data turning the warehouse and lake into a swamp.
Crawler360 addresses this problem by scanning data pipelines, database applications, and business-intelligence tools to automatically capture the end-to-end data lineage of the legacy environment. In doing so, Crawler360 maps relationships across siloed applications to understand their interdependencies, identifies redundant data sets that have swelled over time and can be consolidated, and pinpoints “hot and cold spots” to define which workloads to prioritize for migration.
Two very useful tools for extracting essential details about your Linux system's OS and hardware are screenfetch and neofetch.
Each of these tools is actually a lengthy bash script that fetches the information from your system for you and presents it in an attractive manner, with the distribution logo on the left and details on the right: essentially a "screen shot" of your system. Neither is likely to be installed on your system by default, but each can be installed with a single command.
screenfetch
You can install screenfetch with sudo apt install screenfetch or sudo yum install screenfetch. screenfetch is a script of nearly 6,500 lines. It automatically detects your distribution and displays its name along with the kernel, uptime, number of packages installed, the shell you're using, overall and available disk space, CPU, GPU, and memory (in use and available). It also displays an ASCII-art rendition of the logo of whatever distribution it's run on, though you can turn this off if you want to see just the list of details.
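A few typical invocations, based on flags in each tool's help output; check screenfetch -h and neofetch --help, since options vary by version:

  screenfetch        # logo on the left, system details on the right
  screenfetch -n     # suppress the ASCII logo, show only the details
  neofetch           # similar output with its own theming options
  neofetch --off     # neofetch's switch for hiding the logo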
While Todd Nightingale has been Cisco’s Enterprise Networking & Cloud business chief only since March, some of the directions he wants to take the company’s biggest business unit, namely superior cloud-neutral orchestration and automation, are already evident.
The COVID-19 pandemic and the enterprise response to it are big drivers for near-future enterprise networking technology. Cloud connectivity and the push for simplicity and agility in the network were already important, but the pandemic has accelerated their implementation for most customers, Nightingale said in a recent interview.
A data center is a physical facility that enterprises use to house their business-critical applications and information. As data centers evolve from centralized on-premises facilities to edge deployments to public cloud services, it’s important to think long-term about how to maintain their reliability and security.
What is a data center?
Data centers are often referred to as a singular thing, but in actuality they are composed of a number of technical elements. These can be broken down into three categories:
Compute: The memory and processing power to run the applications, generally provided by high-end servers
Storage: Important enterprise data is generally housed in a data center, on media ranging from tape to solid-state drives, with multiple backups
Networking: Interconnections between data center components and to the outside world, including routers, switches, application-delivery controllers, and more
These are the components IT needs to store and manage the systems most critical to the continuous operation of an organization. Because of this, the reliability, efficiency, security, and constant evolution of data centers are typically a top priority. Both software and hardware security measures are a must.
A data center is the physical facility providing the compute power to run applications, the storage capabilities to process data, and the networking to connect employees with the resources needed to do their jobs.
Experts have been predicting that the on-premises data center will be replaced by cloud-based alternatives, but many organizations have concluded that they will always have applications that need to live on-premises. Rather than dying, the data center is evolving.
It is becoming more distributed, with edge data centers springing up to process IoT data. It is being modernized to operate more efficiently through technologies like virtualization and containers. It is adding cloud-like features such as self-service. And the on-prem data center is integrating with cloud resources in a hybrid model.
Next-generation computational storage systems, which perform processing operations on the storage device itself, are coming to the fore, promising to cut internal system transport time, ease application bottlenecks, and pave the way to more intelligent edge devices.
Sensor power loss is the scourge of IoT.
Deploying millions of sensors is pretty much a useless endeavor if the devices continually run out of power; IoT sensors can't collect or transmit data without it.
That's one reason researchers are exploring ambient energy harvesting. Numerous projects have shown that small amounts of power can be generated by converting ambient energy in the environment, from stray magnetic fields, humidity, waste heat, and even unwanted wireless radio noise, into usable electrical energy to power the IoT.
The Millennial generation is becoming a driving force behind the circular economy of used IT equipment.
IT shops have typically bought used gear only when they needed to replace old equipment and couldn't get parts from the vendor; the idea of buying a low-mileage server with one or two years of use wasn't very popular. Companies typically bought new.
But that's changing. IT shops of all sizes are increasingly buying used gear, both brand-name and white-box brands from China, according to IDC. The research firm puts the market's CAGR at 5% and estimates sales of used IT infrastructure gear will reach $36 billion by 2024. The deals are being done through the major OEMs as well as resellers like ITRenew, which buys servers from hyperscalers, refreshes them, certifies they are functioning, and resells them.
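As a rough sanity check on those figures (assuming the 5% CAGR is measured from a 2020 base, which the report excerpt doesn't specify), the implied starting point is about $29.6 billion:

  # $36B in 2024, discounted back four years at 5% per year
  awk 'BEGIN { printf "%.1f\n", 36 / 1.05^4 }'   # prints 29.6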
Ceridian is betting on hybrid cloud, network virtualization, and automation as it aims to improve IT service delivery, weed out inefficiencies, and bolster security.
The human capital management (HCM) company recently completed its transition to a cloud architecture, shuttering its on-premises data centers and migrating its applications and back-office systems to multiple clouds. "We are a true consumer of hybrid cloud technology," says CIO Warren Perlman. "We have operations in both [VMware Cloud on AWS] as well as native AWS, and also native Azure."
After its purchase of cloud storage automation specialist Spot for $450 million this past June, NetApp is releasing its first new product under the brand. Called Spot Storage, it's a "storageless" solution designed to enable automated administration of cloud-native, container-based applications.
NetApp describes Spot Storage as a cloud-based, serverless offering for application-driven architectures that run microservices-based applications in Kubernetes containers.
"Serverless computing" is a bit of a misnomer. Your application and data still reside on servers, but they're not tied to one particular physical location. Just as "the cloud" doesn't refer to any single physical box, a serverless storage service means the cloud provider runs the servers and dynamically manages the allocation of machine resources.
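The announcement doesn't document Spot Storage's interface, but the pattern it describes is familiar from plain Kubernetes, where a PersistentVolumeClaim requests capacity and the platform, not the user, decides which physical volume backs it. A generic sketch with illustrative names:

  # pvc.yaml: request 10Gi; the cluster's default StorageClass dynamically
  # provisions a backing volume on whatever disk the platform chooses
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: demo-claim
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi

  # apply it to the cluster:
  kubectl apply -f pvc.yaml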
IBM continued enhancing its core Cloud Pak hybrid cloud software offerings, this week bolstering automation and data features that will let customers simplify everything from software provisioning and patching to data discovery and document processing.
IBM Cloud Paks are bundles of Red Hat’s Kubernetes-based OpenShift Container Platform along with Red Hat Linux and a variety of connecting technologies that let enterprise customers deploy and manage containers on their choice of private or public infrastructure, including AWS, Microsoft Azure, Google Cloud Platform, and Alibaba.
The driving idea behind Cloud Paks is to ease building, orchestrating, and managing multiple containers for enterprise workloads.
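The piece doesn't show Cloud Pak tooling itself, but the container workflow it gestures at, deploying an image onto OpenShift and letting the platform run it, looks roughly like this with the standard oc CLI (the cluster URL, project, and image are placeholders):

  oc login https://api.example-cluster.example.com:6443
  oc new-project demo-project
  oc new-app --docker-image=quay.io/example/app:latest --name=demo-app
  oc get pods   # watch the platform schedule and run the container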
Amazon Web Services (AWS) has announced the general availability of a new GPU-powered instance type called Amazon P4d that is based on Nvidia’s new Ampere architecture, and the two firms are making big performance claims.
AWS has offered GPU-powered instances for a decade now, the most recent generation being P3. AWS and Nvidia both claim that P4d instances offer three times faster performance, up to 60% lower cost, and 2.5 times more GPU memory for machine-learning training and high-performance computing workloads compared with P3 instances.
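Launching one is the same run-instances call as any other EC2 type; a minimal sketch, with the AMI and key name as placeholders (P4d availability varies by Region):

  # The AMI ID stands in for, e.g., an AWS Deep Learning AMI
  aws ec2 run-instances \
    --instance-type p4d.24xlarge \
    --image-id ami-0123456789abcdef0 \
    --key-name my-key \
    --count 1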
The reliability of services delivered by ISPs, cloud providers, and conferencing services (also known as unified communications-as-a-service, or UCaaS) is an indication of how well served businesses are via the internet.
ThousandEyes is monitoring how these providers are handling the performance challenges they face, and it provides Network World with a weekly roundup of interesting events in the delivery of these services, summarized here.