Archive

Category Archives for "Networking"

Random Thoughts on Grey Failures and Scale

I have often used the example of adding parallel paths to the point where the control plane converges more slowly, increasing the Mean Time to Repair (MTTR), to show that too much redundancy can actually reduce overall network availability. Many engineers I’ve talked to balk at this idea; it seems hard to believe that adding another link could impact routing protocol convergence in such a way. A while back I ran across a paper that provides a different kind of example of this redundancy trade-off, but I never got around to reading the entire paper and working out how it fits in.
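To make the trade-off concrete, here is a back-of-the-envelope sketch using the standard availability formula, availability = MTBF / (MTBF + MTTR). The MTBF and MTTR figures are made-up illustrative numbers, not measurements from any real network:

```shell
# availability = MTBF / (MTBF + MTTR); all times in hours.
# The numbers below are hypothetical, chosen only to illustrate the trade-off.
avail() {
  awk -v mtbf="$1" -v mttr="$2" 'BEGIN { printf "%.6f\n", mtbf / (mtbf + mttr) }'
}

# Two parallel paths: the path set fails more often, but convergence is fast.
avail 8760 0.5     # prints 0.999943

# More paths: failures of the whole set become rarer (MTBF doubles), but
# slower convergence pushes MTTR up, and overall availability actually drops.
avail 17520 2.0    # prints 0.999886
```

The point of the arithmetic is that MTTR sits in the denominator: if extra redundancy slows convergence enough, the repair-time penalty can outweigh the reliability gain.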

In Gray Failure: The Achilles’ Heel of Cloud-Scale Systems, the authors argue that one of the main problems in building a cloud-scale system is gray failures: a router that fails only some of the time, or drops (or delays) only a small percentage of the traffic. The example given is:

  • A single service must collect information from many other services on the network to complete a particular operation
  • Each of these information-collection operations represents a single transaction carried across the network
  • The more transactions there are, the …
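The scaling problem the authors describe can be sketched with a little arithmetic: if each sub-request succeeds with probability p, an operation that fans out to n services succeeds with probability p^n, so even a tiny gray-failure drop rate becomes very visible at scale. (The 0.1% drop rate below is an assumed figure for illustration, not one taken from the paper.)

```shell
# Probability that all n fan-out requests succeed, given a per-request
# success probability p: p^n (assumes independent failures).
p_all_ok() {
  awk -v n="$1" -v p="$2" 'BEGIN { printf "%.3f\n", p ^ n }'
}

p_all_ok 10 0.999     # 10 sub-requests:   prints 0.990
p_all_ok 1000 0.999   # 1000 sub-requests: prints 0.368
```

A gray failure that drops one packet in a thousand is invisible to a single flow, but an operation fanning out to a thousand services fails well over half the time.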

Will IoT party like 1999?

After I read Brian Bailey’s IoT semiconductor design article, IoT Myth Busting, I thought of Prince’s song 1999, in particular the line: “So tonight I'm gonna party like it's nineteen ninety-nine.”

Without a lot of irrational exuberance, we won’t see IoT edge and fog networks soon. Most IoT applications are prototypes and proofs of concept (PoCs) designed to justify enterprise budget increases and follow-on venture investment rounds. Unless we return to 1999, when telecoms over-invested in capacity ahead of demand, the carriers are not going to build the new fog and edge networks that IoT needs ahead of demand. At this stage, we would have to see a return of irrational exuberance, the term coined by Federal Reserve Chairman Alan Greenspan to describe the over-investment and overvaluation of the dot-com bubble.

Unix: How random is random?

On Unix systems, random numbers are generated in a number of ways, and random data can serve many purposes. From simple commands to fairly complex processes, the question “How random is random?” is worth asking.

EZ random numbers

If all you need is a casual list of random numbers, the RANDOM shell variable is an easy choice. Type "echo $RANDOM" and you'll get a number between 0 and 32,767 (the largest value a signed 16-bit integer can hold).

$ echo $RANDOM
29366

Of course, this process is actually providing a "pseudo-random" number. As anyone who thinks about random numbers very often might tell you, numbers generated by a program have a limitation: programs follow carefully crafted steps, and those steps aren’t even close to being truly random. You can change the sequence RANDOM produces by seeding it (i.e., setting the variable to some initial value); some people just use the current process ID (via $$) for that. Note, however, that for any particular starting point, the subsequent values that $RANDOM provides are entirely predictable.
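That predictability is easy to demonstrate in bash: seeding $RANDOM with the same value twice reproduces the same sequence (the seed 42 below is arbitrary):

```shell
#!/bin/bash
# Seed RANDOM, draw two values, then re-seed with the same value and draw again.
RANDOM=42; a1=$RANDOM; a2=$RANDOM
RANDOM=42; b1=$RANDOM; b2=$RANDOM

# The two runs produce identical "random" sequences.
[ "$a1" = "$b1" ] && [ "$a2" = "$b2" ] && echo "same sequence"

# For values that are not predictable, read from the kernel's entropy
# pool instead (here, two bytes printed as an unsigned 16-bit integer):
od -An -N2 -t u2 /dev/urandom
```

This is fine for picking a random greeting, but exactly why $RANDOM should never be used for anything security-related.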

Hyper-converged infrastructure – Part 1 : Is it a real thing ?

Recently I was lucky enough to play with Cisco HyperFlex in a lab, and since it was fun to play with, I decided to write a basic blog post about the hyper-converged infrastructure concept (experts, you can move on and read something else). It has really piqued my interest. I know I may be […]

The post Hyper-converged infrastructure – Part 1 : Is it a real thing ? appeared first on VPackets.net.

IBM touts full data encryption in new Z series mainframes

IBM has introduced the 14th generation of its Z series mainframes, which still sell respectably despite repeated predictions of their demise. One of the major features being touted is the ability to encrypt all of the data on the mainframe in one shot. The mainframe, called IBM Z or z14, introduces a new encryption engine that for the first time will allow users to encrypt all of their data with one click, whether in databases, applications, or cloud services, with virtually no impact on performance.

The new encryption engine is capable of running more than 12 billion encrypted transactions every day. The mainframe comes with four times more silicon for processing cryptographic algorithms than the previous generation, along with encryption-oriented upgrades to the operating system, middleware, and databases.

Jet Propulsion Lab’s IT CTO tells enterprises how to network IoT

The internet of things combined with cloud computing is the platform for innovation used by NASA’s Jet Propulsion Laboratory, and one that enterprises should use as well, but it means setting up the right network infrastructure, JPL’s IT CTO says.

“Number one, build an IoT network that’s separate from the regular network,” says Tom Soderstrom, the JPL IT CTO. “That’s what we did, and we found that it was amazing.”
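In practice, “a separate IoT network” usually means ordinary segmentation. As one minimal Linux sketch (the interface name, VLAN ID, and subnets here are hypothetical illustrations, not anything JPL has described):

```shell
# Put IoT devices on their own VLAN and subnet, then block traffic from the
# IoT segment into the regular internal network while still allowing it out.
# All names and addresses below are illustrative only; requires root.
ip link add link eth0 name eth0.30 type vlan id 30   # IoT VLAN 30 on eth0
ip addr add 10.30.0.1/24 dev eth0.30                 # IoT subnet gateway
ip link set eth0.30 up

iptables -A FORWARD -s 10.30.0.0/24 -d 10.0.0.0/24 -j DROP   # IoT -> LAN: blocked
iptables -A FORWARD -s 10.30.0.0/24 -j ACCEPT                # IoT -> elsewhere: allowed
```

The design choice is the same one Soderstrom describes: a compromised sensor on the IoT segment cannot reach hosts on the regular network, while its telemetry still gets out to the cloud.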
