Hashing out the hash command on Linux

When you type “hash” on a Linux system, you could get one of two very different responses depending on the shell you are using. If you are using bash or a related shell such as ksh, you should see a list of the commands that you have used since your terminal session began, sometimes with a count of how many times each command was used. This can be more useful than the history command if you just want to see your very recent command activity, but hash is not a single executable. Instead, it relies on your shell.

How to tell if you're using a bash built-in in Linux

hash in bash

Here’s an example of the hash command run in bash:
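As a rough illustration, a session might look something like this; the specific commands, hit counts, and paths are hypothetical and will differ on your system:

    $ type hash
    hash is a shell builtin
    $ hash
    hits    command
       2    /usr/bin/man
       4    /usr/bin/ls
       1    /usr/bin/which

The type command confirms that hash is a shell built-in rather than a separate executable, and the table lists each remembered command along with the number of times it has been looked up in the current session.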

Three Dimensions of BGP Address Family Nerd Knobs

Got into an interesting BGP discussion a few days ago, resulting in a wild chase through recent SRv6 and BGP drafts and RFCs. You might find the results mildly interesting ;)

BGP has three dimensions of address family configurability (a configuration sketch follows the list):

  • Transport sessions. Most vendors implement BGP over TCP over IPv4 and IPv6. I’m sure there’s someone out there running BGP over CLNS, and there are already drafts proposing running BGP over QUIC.
  • Address families enabled on individual transport sessions, more precisely a combination of Address Family Identifier (AFI) and Subsequent Address Family Identifier (SAFI).
  • Next-hop address family for the enabled address families.
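To make the three dimensions concrete, here is a minimal FRR-style configuration sketch; the AS numbers and addresses are made up, and the exact keywords vary by vendor. The session runs over TCP/IPv6 (dimension one), the IPv4 unicast AFI/SAFI is enabled on it (dimension two), and the extended-next-hop capability lets IPv4 prefixes be advertised with IPv6 next hops (dimension three):

    router bgp 65001
     ! Dimension 1: transport session over TCP/IPv6
     neighbor 2001:db8::2 remote-as 65002
     ! Dimension 3: allow IPv4 NLRI with IPv6 next hops (RFC 8950)
     neighbor 2001:db8::2 capability extended-nexthop
     ! Dimension 2: enable the IPv4 unicast address family on this session
     address-family ipv4 unicast
      neighbor 2001:db8::2 activate
     exit-address-family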

Meta plans the world’s fastest supercomputer for AI

Facebook’s parent company Meta said it is building the world's largest AI supercomputer to power machine learning and natural language processing for building its metaverse project. The new machine, called the Research Super Computer (RSC), will contain 16,000 Nvidia A100 GPUs and 4,000 AMD Epyc Rome 7742 processors. It has 2,000 Nvidia DGX-A100 nodes, with eight GPU chips and two Epyc microprocessors per node. Meta expects to complete construction this year.

World's fastest supercomputer is 3x faster than No. 2

RSC is already partially built, with 760 of the DGX-A100 systems deployed. Meta researchers have already started using RSC to train large models in natural language processing (NLP) and computer vision for research, with the goal of eventually training models with trillions of parameters, according to Meta.

Makings of a Web3 Stack: Agoric, IPFS, Cosmos Network

Want an easy way to get started in Web3? In this discussion, Dietrich Ayala, IPFS Ecosystem Growth Engineer; Rowland Graus, head of product for Agoric; and Marko Baricevic, software engineer for Cosmos Network, an open source technology to help blockchains interoperate, each describe the role their respective technologies play in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts. TNS editor-in-chief Joab Jackson hosted the discussion.

Cisco weds collaboration and SD-WAN

Looking to offer branch offices and hybrid workers secure access to corporate collaboration, Cisco is melding its Webex software with a key component of its SD-WAN package. Webex collaboration software is being added to the applications supported by Cloud OnRamp, a key part of Cisco’s enterprise SD-WAN offering that links branch offices or individual remote users to cloud applications. It includes application-aware firewalls, URL filtering, intrusion detection/prevention, DNS-layer security, and Advanced Malware Protection (AMP) Threat Grid, as well as network services such as load balancing and Wide Area Application Services, according to Cisco.

Designing Uber

 

This is a guest post by Ankit Sirmorya. Ankit is working as a Machine Learning Lead/Sr. Machine Learning Engineer at Amazon and has led several machine-learning initiatives across the Amazon ecosystem. Ankit has been working on applying machine learning to solve ambiguous business problems and improve customer experience. For instance, he created a platform for experimenting with different hypotheses on Amazon product pages using reinforcement learning techniques. Currently, he is in the Alexa Shopping organization where he is developing machine-learning-based solutions to send personalized reorder hints to customers for improving their experience.

Requirements

In Scope

Getting Started with Ansible.utils Collection for Playbook Creators: Part 2

Use Case: Operational state assessment using ansible.utils collection

In ansible.utils, there are a variety of plugins that we can use for operational state assessment of network devices. I gave an overview of the ansible.utils collection in part one of this two-part blog series. If you have not reviewed part one, I recommend you do so, since this second part builds on that information. Here we will see how the ansible.utils collection can be useful for operational state assessment as an example use case.

In general, a state assessment workflow has the following steps (a minimal playbook sketch follows the list):

  • Retrieve (Source of Truth)
    • Collect the current operational state from the remote host.
    • Convert it into normalized structured data (for example, JSON, YAML, or another format).
    • Store it as an inventory variable.
  • Validate
    • Define the desired state criteria in a standards-based format, for example as a JSON Schema document.
    • Retrieve the operational state at runtime.
    • Validate the current state data against the pre-defined criteria to identify any deviation.
  • Remediate
    • Implement the required configuration changes to correct drift.
    • Report on the change as an audit trail.

     

How can the ansible.utils collection help?
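A minimal sketch of the retrieve and validate steps, using the ansible.utils.cli_parse and ansible.utils.validate modules, might look like the following; the inventory group, command, parser, and criteria file are assumptions made for this example, and the criteria file would hold a JSON Schema describing the desired state:

    ---
    - name: Operational state assessment sketch
      hosts: routers                                 # hypothetical inventory group
      gather_facts: false                            # assumes a network_cli connection to the devices
      tasks:
        # Retrieve: collect raw CLI output and convert it into structured data
        - name: Parse interface state into structured data
          ansible.utils.cli_parse:
            command: show interface
            parser:
              name: ansible.netcommon.native         # assumes ansible.netcommon is installed
          register: parsed_state

        # Validate: compare the structured data against pre-defined criteria
        - name: Validate parsed state against JSON Schema criteria
          ansible.utils.validate:
            data: "{{ parsed_state.parsed }}"
            criteria: "{{ lookup('file', 'criteria/interfaces_up.json') | from_json }}"
            engine: ansible.utils.jsonschema

A remediation play would then follow, applying configuration changes and reporting on them whenever validation flags a deviation.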

Getting Started with Ansible.utils Collection for Playbook Creators: Part 1

The Ansible ansible.utils collection includes a variety of plugins that aid in the management, manipulation and visibility of data for the Ansible playbook developer. The most common use case for this collection is when you want to work with the complex data structures present in an Ansible playbook, inventory, or returned from modules. See each plugin's documentation page for detailed examples of how these utilities can be used in tasks. In this two-part blog series, part one gives an overview of the collection and part two works through an example use case in detail.

     

Plugins inside ansible.utils

Plugins are code that augments Ansible core functionality. This code executes on the control node and provides options and extensions for the core features of Red Hat Ansible Automation Platform. The ansible.utils collection includes:

  • Filter plugins
  • Lookup plugins
  • Test plugins
  • Modules

     

Filter plugins

Filter plugins manipulate data. With the right filter you can extract a particular value, transform data types and formats, perform mathematical calculations, split and concatenate strings, insert dates and times, and do much more. Ansible Automation Platform uses the standard filters shipped with Jinja2 and adds some specialized filter plugins.
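For example, a task can mix a standard Jinja2 filter with one of the specialized filters shipped in ansible.utils; the values and fact names below are made up for illustration, and the ipaddr filter additionally requires the Python netaddr library:

    - name: Demonstrate filter plugins (illustrative values)
      ansible.builtin.set_fact:
        network_id: "{{ '192.0.2.10/24' | ansible.utils.ipaddr('network') }}"  # specialized ansible.utils filter -> 192.0.2.0
        intf_label: "{{ 'gigabitethernet0/0' | upper }}"                       # standard Jinja2 filter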

Anomaly Detection: Glimpse into the Future of IoT Data

Margaret Lee is senior vice president and general manager of digital service and operations management for BMC Software, Inc. She has P&L responsibility for the company’s full suite of BMC Helix solutions for IT service management and IT operations management. Big data and the internet of things go hand in hand. With the continued proliferation of IoT devices — one prognosticator estimates there will be