Archive

Category Archives for "Networking"

IoT is everywhere

Technology pundits are often given to hyperbole, but when they claim that the Internet of Things (IoT) is changing everything, they may have a point. At least, the IoT is being used in just about everything you can think of, from deeply geeky applications such as industrial sensors to frivolous gimmicks like Wi-Fi enabled toothbrushes. Don’t believe me? Let’s take a look at some of the many, many different IoT use cases that people are actually using—or at least testing.

Fitness wearables: IoT is the key concept powering wearables from fitness trackers to smartwatches, but keeping weekend warriors fit is only the beginning. Elite athletes and professional sports franchises are using IoT to push their performance parameters. At the other end of the spectrum, IoT can track your pet’s location and health.

Capture w/Trace in Firepower Threat Defense

A few days ago I wrote an article demonstrating the Packet Tracer feature for troubleshooting Firepower Threat Defense. Another very cool tool for troubleshooting is the Capture w/Trace feature. The power of this tool comes from capturing a PCAP file (for Wireshark or your tool of choice) while also providing a separate window pane with a view of the device’s handling of the traffic (very similar to the Packet Tracer output).

Similar to Packet Tracer, to initiate Capture w/Trace in the Firepower Management Console, choose ‘Devices’ then ‘Device Management’. Next, select the device on which you want to perform the operation and select the icon that looks like a screwdriver and wrench.

[Screenshot: Devices > Device Management]

Note to reader: all of the Firepower articles can be found by choosing Firepower from the menu at the top of the page.

This will produce the screen that provides health monitoring and troubleshooting for the device. Selecting “Advanced Troubleshooting” will change the view to a multi-tab troubleshooting screen.

[Screenshot: Advanced Troubleshooting]

Select the Capture w/Trace tab. The Add Capture button will allow for selection of filter criteria for the capture.

[Screenshot: Capture w/Trace tab]

[Screenshot: Add Capture dialog]

After filling out this information and choosing “Save”, an entry will be created for… Continue reading
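For readers who prefer the command line, a similar capture with trace can be run from the device CLI. The snippet below is only a rough sketch, not the article’s method: it assumes ASA-style capture syntax reachable through the FTD diagnostic CLI (system support diagnostic-cli), a capture named capin on an interface named inside, and a hypothetical web server at 203.0.113.10; exact syntax and availability vary by software version.

! start a capture on the inside interface and trace matching packets
capture capin interface inside trace match tcp any host 203.0.113.10 eq 443
! view captured packets along with the trace output
show capture capin trace
! remove the capture when finished
no capture capin

As in the GUI workflow, the value is getting both the raw packets (exportable as a PCAP) and a trace of how the device handled them.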

Now you can get a bachelor’s degree in data center engineering

In an era where all the hot tech jobs seem to focus on application development and cloud computing, it can be hard to find fresh data center engineering talent. The Institute of Technology in Sligo, Ireland, is trying to rewrite that story with a new Bachelor’s Degree in Data Center Facilities Engineering, starting this fall. According to the school, “The purpose of this new engineering degree programme is to provide the Data Centre industry with staff who are qualified to provide the proficient and in-depth skills necessary for the technical management and operation of data centre facilities. Expert operation and maintenance of these facilities is crucial in order to maintain 24/7 services with optimum energy efficiency.”

Random Thoughts on Grey Failures and Scale

I have used the example of adding paths to the point where the control plane converges more slowly, increasing the Mean Time to Repair, to show that too much redundancy can actually reduce overall network availability. Many engineers I’ve talked to balk at this idea, because it seems hard to believe that adding another link could, in fact, impact routing protocol convergence in such a way. I ran across a paper a while back that provides a different kind of example of the trade-off around redundancy in a network, but I never got around to actually reading the entire paper and trying to figure out how it fits in.

In Gray Failure: The Achilles’ Heel of Cloud-Scale Systems, the authors argue that one of the main problems with building a cloud system is with grey failures—when a router fails only some of the time, or drops (or delays) only some small percentage of the traffic. The example given is—

  • A single service must collect information from many other services on the network to complete a particular operation
  • Each of these information collection operations represents a single transaction carried across the network
  • The more transactions there are, the… Continue reading
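To make the intuition concrete, here is a rough back-of-the-envelope calculation (my numbers, not the paper’s): if each transaction independently has a probability p of touching a component suffering a grey failure, then an operation that needs N transactions completes cleanly with probability (1 − p)^N. With p = 0.001 and N = 1,000 transactions, that is 0.999^1000 ≈ 0.37, meaning roughly 63% of operations hit at least one degraded component. A failure rate that looks negligible per device becomes the common case at scale.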

Will IoT party like 1999?

After I read Brian Bailey’s IoT semiconductor design article, IoT Myth Busting, I thought of Prince’s song 1999, in particular the line: “So tonight I'm gonna party like it's nineteen ninety-nine.”

Without a lot of irrational exuberance, we won’t see IoT edge and fog networks soon. Most IoT applications are prototypes and proofs of concept (PoC) designed to justify enterprise budget increases and follow-on venture investment rounds. Unless we return to and party like it is 1999, when telecoms over-invested in capacity ahead of demand, the telecom carriers are not going to build the new fog and edge networks that IoT needs to grow ahead of demand. At this stage, we would have to see a return of irrational exuberance, the term coined by Federal Reserve Chairman Alan Greenspan to describe the over-investment and over-valuation during the dot-com bubble.

Unix: How random is random?

On Unix systems, random numbers are generated in a number of ways, and random data can serve many purposes. From simple commands to fairly complex processes, the question “How random is random?” is worth asking.

EZ random numbers

If all you need is a casual list of random numbers, the RANDOM variable is an easy choice. Type "echo $RANDOM" and you'll get a number between 0 and 32,767 (the largest value a signed two-byte integer can hold).

$ echo $RANDOM
29366

Of course, this process is actually providing a "pseudo-random" number. As anyone who thinks about random numbers very often might tell you, numbers generated by a program have a limitation. Programs follow carefully crafted steps, and those steps aren’t even close to being truly random. You can increase the randomness of RANDOM's value by seeding it (i.e., setting the variable to some initial value). Some just use the current process ID (via $$) for that. Note that for any particular starting point, the subsequent values that $RANDOM provides are quite predictable.
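A minimal bash sketch of that predictability (assuming bash’s built-in RANDOM; the exact sequence differs between shells and bash versions):

RANDOM=42                      # seed with a fixed value
echo $RANDOM $RANDOM $RANDOM   # prints three pseudo-random numbers
RANDOM=42                      # reseed with the same value
echo $RANDOM $RANDOM $RANDOM   # prints the same three numbers again

RANDOM=$$                      # seed from the shell's process ID instead
echo $(( RANDOM % 100 ))       # constrain the range, e.g. 0 through 99

Run in the same shell, the two echo lines after the fixed seed produce identical output, which is exactly why $RANDOM is fine for casual use but not for anything security-sensitive.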
