Dell EMC has updated its PowerMax line of enterprise storage systems to offer Intel's Optane persistent memory and NVMe-over-Fabrics, both of which should give the PowerMax a significant boost in performance.

Last year, Dell launched the PowerMax line with high-performance storage, specifically targeting industries that need very low latency and high resiliency, such as banking, healthcare, and cloud service providers.

The company claims the new PowerMax is the first to market with dual-port Intel Optane SSDs and the use of storage-class memory (SCM) as persistent storage. Optane is a new type of non-volatile storage that sits between SSDs and memory: it has the persistence of an SSD but nearly the speed of DRAM. Optane storage also carries a steep price tag; a 512 GB module, for example, costs nearly $8,000.
AMD's $5.4 billion purchase of ATI Technologies in 2006 seemed like an odd match. Not only were the companies in separate markets, but they were on opposite sides of the continent, with ATI in the Toronto, Canada, region and AMD in Sunnyvale, California.

They made it work, and arguably it saved AMD from extinction, because it was the graphics business that kept the company afloat while the Athlon/Opteron business was going nowhere. There were many quarters when graphics brought in more revenue than CPUs, and that business likely saved the company from bankruptcy.

But those days are over: AMD is once again a highly competitive CPU company, with quarterly sales approaching the $2 billion mark. And while the CPU business is on fire, the GPU business continues to do well.
Tests by the evaluation and testing site ServeTheHome found that a server with two AMD Epyc processors can outperform a four-socket Intel system that costs considerably more.

If you don't read ServeTheHome, you should. It's cut from the same cloth as Tom's Hardware Guide and AnandTech but with a focus on server hardware, mostly at the low end, though they cover some enterprise gear as well.

ServeTheHome ran tests comparing the AMD Epyc 7742, which has 64 cores and 128 threads, with the Intel Xeon Platinum 8180M and its 28 cores and 56 threads. The dollars, though, show a real difference. Each Epyc 7742 costs $6,950, while each Xeon Platinum 8180M goes for $13,011. So two Epyc 7742 processors cost $13,900, while four Xeon Platinum 8180M processors cost $52,044, nearly four times as much as the AMD chips.
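The price gap described above is easy to verify with a little arithmetic. A quick sketch using the list prices quoted in the article (core counts and prices as stated; the per-core figures are derived here, not from the article):

```python
# Cost comparison using the list prices quoted above.
epyc_price, epyc_cores = 6_950, 64    # AMD Epyc 7742 (64 cores / 128 threads)
xeon_price, xeon_cores = 13_011, 28   # Intel Xeon Platinum 8180M (28 cores / 56 threads)

two_epyc = 2 * epyc_price             # dual-socket AMD system
four_xeon = 4 * xeon_price            # quad-socket Intel system

print(two_epyc)                           # 13900
print(four_xeon)                          # 52044
print(round(four_xeon / two_epyc, 2))     # 3.74 -- nearly 4x the CPU spend
print(round(epyc_price / epyc_cores, 2))  # 108.59 dollars per core
print(round(xeon_price / xeon_cores, 2))  # 464.68 dollars per core
```

The per-core breakdown makes the gap even starker: on CPU list price alone, the Intel cores cost more than four times as much as the AMD cores.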
The USB Implementers Forum (USB-IF), the industry consortium behind the development of the Universal Serial Bus (USB) specification, announced this week that it has finalized the technical specification for USB4, the next generation of the spec.

One of the most important aspects of USB4 (the space between the acronym and the version number has been dropped with this release) is that it merges USB with Thunderbolt 3, an Intel-designed interface that hasn't really caught on outside of laptops despite its potential. For that reason, Intel gave the Thunderbolt specification to the USB consortium.

Unfortunately, Thunderbolt 3 support is listed as optional for USB4 devices, so some will have it and some won't. This will undoubtedly cause headaches; hopefully all device makers will include it.
A security group discovered a vulnerability in three models of Supermicro motherboards that could allow an attacker to remotely commandeer the server. Fortunately, a fix is already available.

Eclypsium, which specializes in firmware security, announced in its blog that it had found a set of flaws in the baseboard management controller (BMC) of three Supermicro server board models: the X9, X10, and X11.

BMCs are designed to give administrators remote access to the computer so they can perform maintenance and other updates, such as firmware and operating system patches. A BMC is meant to be a secure port into the computer while remaining walled off from the rest of the server.
HP Enterprise (HPE) has been aggressively promoting its GreenLake IT consumption model since introducing it last year. GreenLake is a pay-per-use model in which the customer does not take ownership of the hardware but merely leases it and pays only for metered usage.

Consumption models have become popular among OEMs looking to retain customers who are anxious to get out of owning expensive assets, such as servers. Dell EMC has its own program, called Flex on Demand, and Lenovo has ThinkAgile CP.
Intel announced this week that it has begun shipping its 10nm Agilex FPGAs to early-access customers, including Microsoft. The chips feature Compute Express Link (CXL), a cache- and memory-coherent CPU-to-anything interconnect backed by an industry consortium of more than 60 members. The company first announced the chips in April.

The Agilex FPGA is the product of the Altera group, which Intel bought in 2015 for $16.7 billion. Altera sold FPGAs under the Stratix brand name, but this line is the first to come out under Intel ownership. CXL replaces Omni-Path, a fabric Intel developed but no one else supported. The company ended support for Omni-Path earlier this month in favor of CXL, which has wide industry support.
If you were wondering what prompted Nvidia to shell out nearly $7 billion for Mellanox Technologies, here's your answer: the networking hardware provider has introduced a pair of processors for offloading network workloads from the CPU.

The ConnectX-6 Dx is a cloud SmartNIC and the BlueField-2 is an I/O Processing Unit (IPU); both are designed to take network processing off the CPU, freeing it for its primary workloads.

The company promises up to 200Gbit/sec throughput with ConnectX and BlueField. It said the market for 25Gbit and faster Ethernet was 31% of the total market last year and will grow to 61% next year. With the internet of things (IoT) and artificial intelligence (AI), a lot of data needs to be moved around, and Ethernet needs to get a lot faster.
There are a host of AI-oriented solutions for the data center, ranging from add-in cards to dedicated servers such as the Nvidia DGX-2. But a startup called Cerebras Systems has its own server offering that relies on a single massive processor rather than a slew of small ones working in parallel.

Cerebras has taken the wraps off its Wafer Scale Engine (WSE), an AI chip that measures 8.46 x 8.46 inches, making it almost the size of an iPad and more than 50 times larger than a typical CPU or GPU, which is about the size of a postage stamp.

Cerebras won't sell the chips to ODMs because of the challenges of building and cooling such a massive chip. Instead, the WSE will come as part of a complete server to be installed in data centers, which the company says will start shipping in October.
Nvidia is boasting of a breakthrough in conversational natural language processing (NLP) training and inference, enabling more complex exchanges between customers and chatbots with immediate responses.

The need for such technology is expected to grow: Juniper Research expects the number of digital voice assistants to climb from 2.5 billion to 8 billion within the next four years, while Gartner predicts that by 2021, 15% of all customer service interactions will be handled entirely by AI, a 400% increase from 2017.

The company said its DGX-2 AI platform trained the BERT-Large language model in less than an hour and performed AI inference in just over 2 milliseconds, making it possible "for developers to use state-of-the-art language understanding for large-scale applications."