Archive

Category Archives for "Networking"

Getting started on the Linux (or Unix) command line, Part 1

To get started as a Linux (or Unix) user, you need a good perspective on how Linux works and a handle on some of the most basic commands. This first post in a “getting started” series examines some of the first commands you need to be ready to use.

On logging in

When you first log into a Linux system and open a terminal window, or log into a Linux system from another system using a tool like PuTTY, you’ll find yourself sitting in your home directory. Some of the commands you will probably want to try first include these:

pwd -- shows you where you are in the file system right now (stands for “present working directory”)
whoami -- confirms the account you just logged into
date -- shows the current date and time
hostname -- displays the system’s name

Using the whoami command immediately after logging in might generate a “duh!” response, since you just entered your assigned username and password. But once you find yourself using more than one account, it’s always helpful to know a command that will remind you which one you’re using at the moment.
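A first session might look like the following short script, which simply runs the four commands above in order (the output will of course differ on your system):

```shell
#!/bin/sh
# A quick first session after logging in: each command prints one line of context.
pwd       # where you are in the file system (your home directory right after login)
whoami    # the account you are currently logged in as
date      # the current date and time
hostname  # the name of the system you are on
```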

Nvidia unveils new GPU-based platform to fuel generative AI performance

Nvidia has announced a new AI computing platform called Nvidia HGX H200, a turbocharged version of the company’s Nvidia Hopper architecture powered by its latest GPU offering, the Nvidia H200 Tensor Core.

The company is also teaming up with HPE to offer a supercomputing system, built on the Nvidia Grace Hopper GH200 Superchips, specifically designed for generative AI training.

A surge in enterprise interest in AI has fueled demand for Nvidia GPUs to handle generative AI and high-performance computing workloads. The Nvidia H200 is the company's first GPU to offer HBM3e, high-bandwidth memory that is 50% faster than current HBM3. It delivers 141GB of memory at 4.8 terabytes per second, providing double the capacity and 2.4 times more bandwidth than its predecessor, the Nvidia A100.

BrandPost: Combatting ransomware with layered Zero Trust Security

Ransomware is a growing threat to organizations, according to research independently conducted by Enterprise Strategy Group and sponsored by Zerto, a Hewlett Packard Enterprise company.

According to the report, 2023 Ransomware Preparedness: Lighting the Way to Readiness and Mitigation, 75% of organizations experienced ransomware attacks in the last 12 months; 46% experienced attacks at least monthly, and 11% reported daily attacks.

Cisco leans on partners, blueprints for AI infrastructure growth

Cisco is taking a collaborative approach to helping enterprise customers build AI infrastructures.

At its recent partner summit, Cisco talked up a variety of new programs and partnerships aimed at helping enterprises get their core infrastructure ready for AI workloads and applications.

“While AI is driving a lot of changes in technology, we believe that it should not require a wholesale rethink of customer data center operations,” said Todd Brannon, senior director of cloud infrastructure marketing with Cisco’s cloud infrastructure and software group.

Streaming and longer context lengths for LLMs on Workers AI

Workers AI is our serverless GPU-powered inference platform running on top of Cloudflare’s global network. It provides a growing catalog of off-the-shelf models that run seamlessly with Workers and enable developers to build powerful and scalable AI applications in minutes. We’ve already seen developers doing amazing things with Workers AI, and we can’t wait to see what they do as we continue to expand the platform. To that end, today we’re excited to announce some of our most-requested new features: streaming responses for all Large Language Models (LLMs) on Workers AI, larger context and sequence windows, and a full-precision Llama-2 model variant.

If you’ve used ChatGPT before, then you’re familiar with the benefits of response streaming, where responses flow in token by token. LLMs work internally by generating responses sequentially through a process of repeated inference: the full output of an LLM is essentially a sequence of hundreds or thousands of individual prediction tasks. For this reason, while it takes only a few milliseconds to generate a single token, generating the full response takes longer, on the order of seconds. The good news is that we can start displaying the response as soon as the first tokens are generated.
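As a concrete sketch, a streamed request from the command line might look like the following. The account ID, token, and model name are placeholders, and the endpoint shape is an assumption based on Cloudflare's general REST API conventions rather than a verbatim recipe from this post:

```shell
#!/bin/sh
# Hypothetical sketch of a streamed Workers AI request over the REST API.
# CF_ACCOUNT_ID and CF_API_TOKEN are placeholder environment variables;
# the model name is illustrative.
curl -N \
  "https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/ai/run/@cf/meta/llama-2-7b-chat-int8" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -d '{"prompt": "Explain response streaming in one sentence.", "stream": true}'
# With "stream": true, the body arrives as server-sent events, one small
# JSON chunk per generated token, so a client can render output immediately.
```

The `-N` flag disables curl's output buffering, so each event is printed as it arrives instead of after the response completes.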

Is Anyone Using netlab on Windows?

Tomas wants to start netlab with PowerShell, but it doesn’t work for him, and I don’t know anyone running netlab directly on Windows (I know people running it in an Ubuntu VM on Windows, but that’s a different story).

In theory, netlab (and Ansible) should work fine with Windows Subsystem for Linux. In practice, there’s often a gap between theory and practice – if you run netlab on Windows (probably using VirtualBox with Vagrant), I’d love to hear from you. Please leave a comment, email me, add a comment to Tomas’ GitHub issue, or fix the documentation and submit a PR. Thank you!

NB455: Extreme Announces ZTNA Offering; Palo Alto Networks Spends Big On A Browser Startup

Extreme Networks is jumping into Zero Trust Network Access, Palo Alto Networks is reportedly spending more than half a billion dollars to acquire a corporate browser startup, and Forrester predicts as many as 20% of VMware's customers may jump ship after the Broadcom acquisition completes. We cover these stories and more in today's Network Break podcast.

The post NB455: Extreme Announces ZTNA Offering; Palo Alto Networks Spends Big On A Browser Startup appeared first on Packet Pushers.

LiquidStack expands into single-phase liquid cooling

LiquidStack, one of the first major players in the immersion cooling business, has entered the single-phase liquid cooling market with an expansion of its DataTank product portfolio.

Immersion cooling is the process of dunking the motherboard in a nonconductive liquid to cool it. Cooling is primarily centered around the CPU but, in this case, involves the entire motherboard, including the memory and other chips.

Immersion cooling has been around for a while but has been something of a fringe technology. With server technology growing hotter and denser, immersion has begun to creep into the mainstream.

Aurora enters TOP500 supercomputer ranking at No. 2 with a challenge for reigning champ Frontier

Frontier maintained its top spot in the latest edition of the TOP500 for the fourth consecutive time and is still the only exascale machine on the list of the world's most powerful supercomputers. Newcomer Aurora debuted at No. 2 in the ranking, and it’s expected to surpass Frontier once the system is fully built.

Frontier, housed at the Oak Ridge National Laboratory (ORNL) in Tennessee, landed the top spot with an HPL score of 1.194 quintillion floating point operations per second (FLOPS), the same score it posted earlier this year. A quintillion is 10^18, so that score is 1.194 exaFLOPS (EFLOPS). The speed measurement used in evaluating the computers is the High Performance Linpack (HPL) benchmark, which measures how well systems solve a dense system of linear equations.
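The unit conversion in that paragraph can be checked directly: an HPL score in FLOPS divided by 10^18 gives the exaFLOPS figure.

```shell
# Check the unit conversion: 1.194 quintillion FLOPS expressed in exaFLOPS.
awk 'BEGIN {
  hpl = 1.194e18        # Frontier HPL score, in FLOPS
  exa = 1.0e18          # one exaFLOPS
  printf "%.3f EFLOPS\n", hpl / exa
}'
# prints "1.194 EFLOPS"
```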

BrandPost: Prioritizing the human behind the screen through end-user experience scoring

As digital landscapes evolve, so does the definition of network performance. It's no longer just about metrics; it's about the human behind the screen. Businesses are recognizing the need to zoom in on the actual experiences of end-users. This emphasis has given rise to advanced tools that delve deeper, capturing the essence of user interactions and painting a clearer picture of network health.

The rise of End-User Experience (EUE) scoring

End-User Experience (EUE) scoring has emerged as a game-changer in the realm of network monitoring. Rather than relying solely on traditional metrics like latency or bandwidth, EUE scoring provides a holistic measure of how a user perceives the performance of a network or application. By consolidating various key performance indicators into a single, comprehensible metric, businesses can gain actionable insights into the true quality of their digital services, ensuring that their users' experiences are nothing short of exceptional.
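As an illustration of that consolidation, a toy score can be computed by normalizing a few raw metrics against "worst acceptable" thresholds and blending them with weights. The thresholds and weights below are invented for this sketch and are not taken from any product:

```shell
#!/bin/sh
# Toy EUE score: collapse latency (ms), packet loss (%), and jitter (ms)
# into a single 0-100 number. Thresholds and weights are illustrative only.
score() {
  awk -v lat="$1" -v loss="$2" -v jitter="$3" '
    BEGIN {
      lat_s  = (lat    > 200) ? 0 : 100 * (1 - lat / 200)     # worst at 200 ms
      loss_s = (loss   > 5)   ? 0 : 100 * (1 - loss / 5)      # worst at 5%
      jit_s  = (jitter > 50)  ? 0 : 100 * (1 - jitter / 50)   # worst at 50 ms
      printf "%.0f\n", 0.5 * lat_s + 0.3 * loss_s + 0.2 * jit_s  # weighted blend
    }'
}
score 40 0.5 10    # a good session scores high
score 180 4 45     # a poor session scores low
```

Real EUE products fold in many more signals (page load time, transaction time, application errors), but the shape is the same: many indicators in, one comprehensible number out.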

Tech Bytes: Why AI Workloads Require Optimized Ethernet Fabrics (Sponsored)

Network engineers have a good grasp on how to build data center networks to support all kinds of apps, from traditional three-tier designs to applications built around containers and microservices. But what about building a network fabric to support AI? Today on the Tech Bytes podcast, sponsored by Nokia, we talk about the special requirements to build a data center fabric for AI use cases such as training and inference.

The post Tech Bytes: Why AI Workloads Require Optimized Ethernet Fabrics (Sponsored) appeared first on Packet Pushers.