Immersion cooling specialist LiquidStack has introduced a pair of modular data center units that use immersion cooling for edge deployments and advanced cloud computing applications. The units are called the MicroModular and the MegaModular. The former contains a single 48U DataTank immersion cooling system (the size of a standard server rack), while the latter comes with up to six 48U DataTanks. The products offer between 250kW and 1.5MW of IT capacity with a PUE of 1.02. (Power usage effectiveness, or PUE, is a metric for measuring data center efficiency: the ratio of the total energy used by a data center facility to the energy delivered to the computing equipment.)
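To make that ratio concrete, here is a minimal sketch of the arithmetic; the figures are illustrative, not taken from the vendor's spec sheet.

```python
# Illustrative PUE calculation (hypothetical figures, not vendor data)
it_load_kw = 1000.0          # power delivered to the IT equipment
facility_total_kw = 1020.0   # total facility draw: IT load plus cooling and losses

pue = facility_total_kw / it_load_kw
print(f"PUE = {pue:.2f}")    # 1.02, i.e. only ~2% overhead beyond the IT load
```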
This week's Network Break covers new HCI gear from the Cisco/Nutanix partnership, a sensor to detect Wi-Fi 6E performance, Intel financial engineering, Amazon shipping test satellites for a space broadband service, and more IT news.
Fortinet has expanded its campus network portfolio with two new switches that feature integration with Fortinet's security services and AIOps management tool. The FortiSwitch 600 is a multi-gigabit secure campus access switch that supports up to 5GE access and 25GE uplinks. The FortiSwitch 2000 is a campus core switch designed to support larger, more complex campus environments by aggregating high-performance access switches, including the FortiSwitch 600. The new switches are integrated with Fortinet's FortiGuard AI-Powered Security Services and the FortiAIOps management tool, which lets customers use security and operations features such as malware protection, device profiling, and role-based access control.
Dell Technologies is expanding its generative AI products and services offerings. The vendor introduced its generative AI lineup at the end of July, but that news centered on validating existing hardware designs for training and inferencing. Dell's new products are models made for customization and tuning. The name is a mouthful: Dell Validated Design for Generative AI with NVIDIA for Model Customization. The solutions are designed to help customers more quickly and securely extract intelligence from their data. There may be a race to move anything and everything to the cloud, but that doesn't include generative AI, according to Dell's research. Among enterprises surveyed by Dell, 82% prefer an on-premises or hybrid approach to AI processing, said Carol Wilder, Dell's vice president for cross-portfolio software and solutions.
Event-Driven Ansible became generally available in Ansible Automation Platform 2.4. As part of the release, the Red Hat Insights and Ansible teams collaborated to implement and certify a Red Hat Insights collection that integrates Insights events as an event source for Ansible Automation Platform. This provides a consistent way to receive and handle events triggered from Insights to drive and track automation within the Ansible Automation Platform. The collection is available for installation on Ansible automation hub.
In this article, we explore how to get, configure and use the Red Hat Insights collection for Event-Driven Ansible, and provide an end-to-end example of an automation flow using Ansible Automation Platform and its Event-Driven Ansible functionality. Our goal is to automate the creation of a ServiceNow incident ticket with all relevant information when malware is detected by Insights on one of our Red Hat Enterprise Linux (RHEL) systems. We show how we can track and audit all automation from the Event-Driven Ansible controller as part of Ansible Automation Platform.
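To make the last step of that flow concrete, here is a minimal Python sketch of what "create a ServiceNow incident from a malware event" amounts to, using ServiceNow's Table API directly. The event payload fields, instance URL, and credentials are placeholders, and in the actual setup this step is handled by an Ansible playbook launched from an Event-Driven Ansible rulebook rather than by hand-written code.

```python
import json
import urllib.request

def create_incident(event: dict) -> None:
    """Open a ServiceNow incident from a (hypothetical) Insights malware event."""
    url = "https://example.service-now.com/api/now/table/incident"  # placeholder instance
    body = {
        "short_description": f"Malware detected on {event['hostname']}",
        "description": json.dumps(event, indent=2),
        "urgency": "1",
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic <redacted>",  # placeholder credentials
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("Incident created:", resp.status)

create_incident({"hostname": "rhel-host-01", "signature": "example-malware-signature"})
```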
Introducing a new certified collection for Event-Driven Ansible
In a previous article, we looked at exposing and consuming Insights events to drive automation with Event-Driven Ansible. We validated that events triggered …
By default, processes run on the Linux command line are terminated as soon as you log out of your session. However, if you want to start a long-running process and ensure that it keeps running after you log off, there are a couple of ways you can make this happen. The first is to use the nohup command.
Using nohup
The nohup (no hangup) command will override the normal hangups (SIGHUP signals) that terminate processes when you log out. For example, if you wanted to run a process with a long-running loop and leave it to complete on its own, you could use a command like this one:
% nohup long-loop &
[1] 6828
$ nohup: ignoring input and appending output to 'nohup.out'
Note that SIGHUP is a signal that is sent to a process when the controlling terminal of the process is closed.
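The same idea, detaching a child process from the controlling terminal so the SIGHUP sent at logout never reaches it, can also be done programmatically. The snippet below is a rough Python analogue of nohup (an illustration, not from the article):

```python
# Rough Python analogue of nohup (illustrative only).
# start_new_session=True calls setsid() in the child, detaching it from the
# controlling terminal so the terminal's SIGHUP at logout never reaches it.
import subprocess

with open("nohup.out", "ab") as out:
    proc = subprocess.Popen(
        ["sleep", "3600"],            # stand-in for a long-running command
        stdin=subprocess.DEVNULL,     # ignore input, as nohup does
        stdout=out,                   # append output to a file, as nohup does
        stderr=subprocess.STDOUT,
        start_new_session=True,
    )
print(f"started pid {proc.pid}; it will keep running after this session ends")
```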
On Aug 25, 2023, we started to notice some unusually big HTTP attacks hitting many of our customers. These attacks were detected and mitigated by our automated DDoS system. It was not long, however, before they started to reach record-breaking sizes, eventually peaking just above 201 million requests per second. This was nearly three times bigger than our previous biggest attack on record.
What is concerning is that the attacker was able to generate such an attack with a botnet of merely 20,000 machines. There are botnets today that are made up of hundreds of thousands or millions of machines. Given that the entire web typically sees only between 1 and 3 billion requests per second, it's not inconceivable that this method could be used to focus an entire web's worth of requests on a small number of targets.
Detecting and Mitigating
This was a novel attack vector at an unprecedented scale, but Cloudflare's existing protections were largely able to absorb the brunt of the attacks. While initially we saw some impact to customer traffic — affecting roughly 1% of requests during the initial wave of attacks — today we've …
Earlier today, Cloudflare, along with Google and Amazon AWS, disclosed the existence of a novel zero-day vulnerability dubbed the “HTTP/2 Rapid Reset” attack. This attack exploits a weakness in the HTTP/2 protocol to generate enormous, hyper-volumetric Distributed Denial of Service (DDoS) attacks. Cloudflare has mitigated a barrage of these attacks in recent months, including an attack three times larger than any previous attack we’ve observed, which exceeded 201 million requests per second (rps). Since the end of August 2023, Cloudflare has mitigated more than 1,100 other attacks with over 10 million rps — and 184 attacks that were greater than our previous DDoS record of 71 million rps.
This zero-day provided threat actors with a critical new tool in their Swiss Army knife of vulnerabilities to exploit and attack their victims at a magnitude that has never been seen before. While at times complex and challenging to combat, these attacks allowed Cloudflare the opportunity to develop purpose-built technology to mitigate the effects of the zero-day vulnerability.
If you are using Cloudflare for HTTP DDoS mitigation, you are protected. And below, we've included more information on this vulnerability, and …
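At the protocol level, Rapid Reset boils down to a client opening HTTP/2 streams and immediately cancelling them with RST_STREAM, over and over, which lets it fire requests far faster than a server can account for per connection. As a purely illustrative sketch, and not a description of Cloudflare's actual mitigation, one simple server-side heuristic is to track how often a connection cancels the streams it opens and close connections whose cancel ratio is implausibly high:

```python
# Illustrative per-connection heuristic for Rapid Reset-style abuse
# (a sketch only; thresholds and approach are hypothetical, not Cloudflare's).
class ConnectionStats:
    def __init__(self, min_streams: int = 200, max_cancel_ratio: float = 0.8):
        self.opened = 0
        self.cancelled = 0
        self.min_streams = min_streams
        self.max_cancel_ratio = max_cancel_ratio

    def on_headers(self) -> None:        # client opened a new stream
        self.opened += 1

    def on_rst_stream(self) -> None:     # client cancelled a stream
        self.cancelled += 1

    def should_close(self) -> bool:
        # Flag connections that open many streams and cancel almost all of them.
        if self.opened < self.min_streams:
            return False
        return self.cancelled / self.opened > self.max_cancel_ratio

stats = ConnectionStats()
for _ in range(1000):                    # simulate a burst of open-then-cancel
    stats.on_headers()
    stats.on_rst_stream()
print("close connection?", stats.should_close())  # True
```

Real mitigations are considerably more involved, but the per-connection ratio captures the signature of the attack: many streams opened, almost all reset immediately.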
On August 25, 2023, we began to notice unusually large HTTP attacks hitting many of our customers. These attacks were detected and mitigated by our automated DDoS system. It was not long, however, before they took on record-breaking proportions, eventually peaking at just over 201 million requests per second. That made them almost three times as large as the biggest attack we had ever recorded up to that point.
Are you under attack or do you need additional protection? Click here to get help.
What is concerning is that the attacker was able to carry out such an attack with a botnet of just 20,000 machines. There are botnets today that consist of hundreds of thousands or millions of machines. Considering that the entire web typically sees only between 1 and 3 billion requests per second, it is not inconceivable that this method could be used to concentrate something like the web's entire request volume on a small number of targets.
Detecting and mitigating
This was a novel attack vector at an unprecedented scale, but Cloudflare's existing protection mechanisms were largely able to cope with the force of the attacks. At first, we saw some impact on our customers' traffic …
Earlier today, Cloudflare, together with Google and Amazon AWS, disclosed the existence of a new zero-day vulnerability dubbed the "HTTP/2 Rapid Reset" attack. This attack exploits a weakness in the HTTP/2 protocol to generate enormous, hyper-volumetric distributed denial-of-service (DDoS) attacks. Cloudflare has mitigated a barrage of these attacks in recent months, including one that exceeded 201 million requests per second (rps), three times larger than anything we had previously observed. Since the end of August 2023, Cloudflare has mitigated more than 1,100 other attacks exceeding 10 million rps, 184 of which were larger than our previous DDoS record of 71 million rps.
Are you under attack or in need of additional protection? Click here to get help.
This zero-day gave threat actors a critical new tool in their Swiss Army knife of vulnerabilities, allowing them to attack victims at a scale never seen before. While at times complex and challenging to combat, these attacks gave Cloudflare the opportunity to develop purpose-built technology to mitigate the effects of the zero-day vulnerability.
If you are using Cloudflare for HTTP DDoS mitigation, you are protected. Below, you can learn more about this vulnerability and find resources and recommendations for protecting yourself.
Attack analysis: what every CSO needs to know
In late August 2023, the Cloudflare team discovered a new zero-day vulnerability developed by an unknown threat actor. The vulnerability abuses the standard HTTP/2 protocol, which is fundamental to how the Internet and all websites operate. Dubbed Rapid Reset, this new zero-day attack works by repeatedly sending a request and then immediately cancelling it, abusing HTTP/2's …
On August 25, 2023, we began to observe some unusually large HTTP flood attacks affecting many of our customers. Our automated DDoS system detected and mitigated these attacks. However, it was not long before they started to reach unprecedented sizes, eventually peaking at 201 million requests per second, almost three times the largest attack recorded up to that point.
Are you being targeted by an attack or do you need more protection? Click here if you need help.
Most troubling is that the attacker was able to generate such an attack with a botnet of only 20,000 machines. Today there are botnets made up of hundreds of thousands or millions of machines. The web as a whole typically receives only between 1 and 3 billion requests per second, so it does not seem impossible that this method could be used to concentrate the request volume of an entire web on a small number of targets.
Detection and mitigation
This was a novel attack vector at an unprecedented scale, but Cloudflare's protection solutions were able to mitigate the worst effects of the attacks to a large extent. While …
GET / HTTP/1.1 CRLF
Host: blog.cloudflare.com CRLF
CRLF
GET /page/2/ HTTP/1.1 CRLF
Host: blog.cloudflare.com CRLF
CRLF
The responses would look something like this:
HTTP/1.1 200 OK CRLF
Server: cloudflare CRLF
Content-Length: 100 CRLF
text/html; charset=UTF-8 CRLF
CRLF
<100 bytes of data> CRLF
HTTP/1.1 200 OK CRLF
Server: cloudflare CRLF
Content-Length: 100 CRLF
text/html; charset=UTF-8 CRLF
CRLF
<100 bytes of data>
When this happens, the TLS proxy can no longer connect to the upstream proxy, which is why some clients saw a "502 Bad Gateway" error during the most serious attacks. Importantly, the logs used to generate HTTP analytics are today also emitted by the business logic proxy, so these errors do not show up in the Cloudflare dashboard. According to our internal dashboards, roughly 1% of requests were affected during the first wave of attacks (before mitigations were in place), with a peak of about 12% for a few seconds during the most serious attack on August 29. The following graph shows the error rate over the two hours in which this was happening: