How Google Wants To Rewire The Internet

When all of your business is driven by end users coming to use your applications over the Internet, the network is arguably the most critical part of the infrastructure. That is why search engine and ad serving giant Google, which has expanded into media serving, hosted enterprise applications, and cloud computing, has invested tremendously in creating its own network stack.

But running a fast, efficient, hyperscale network for internal datacenters is not sufficient for a good user experience, and that is why Google has created a software-defined networking stack to do routing over the… Continue reading

How Google Wants To Rewire The Internet was written by Timothy Prickett Morgan at The Next Platform.

Random Thoughts on Grey Failures and Scale

I have used the example of adding paths to the point where the control plane converges more slowly, increasing the Mean Time to Repair, to show that too much redundancy can actually reduce overall network availability. Many engineers I’ve talked to balk at this idea, because it seems hard to believe that adding another link could impact routing protocol convergence in such a way. A while back I ran across a paper that provides a different kind of example of the trade-offs around redundancy in a network, but I never got around to reading the entire paper and figuring out how it fits in.

In Gray Failure: The Achilles’ Heel of Cloud-Scale Systems, the authors argue that one of the main problems in building a cloud system is grey failures—when a router fails only some of the time, or drops (or delays) only some small percentage of the traffic. The example given is—

  • A single service must collect information from many other services on the network to complete a particular operation
  • Each of these information collection operations represents a single transaction carried across the network
  • The more transactions there are, the… Continue reading
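Although the excerpt cuts off, the direction of the argument is clear enough for a back-of-the-envelope sketch (mine, not the paper's): if each transaction independently crosses a component that silently drops some small fraction p of requests, the probability that an operation touching N transactions is affected grows quickly with N. The numbers below assume a hypothetical 0.1 percent drop rate.

    # Hypothetical gray failure: a component silently drops p = 0.1% of requests.
    # An operation spanning N transactions is affected with probability 1 - (1 - p)^N.
    awk 'BEGIN {
      p = 0.001
      for (n = 1; n <= 10000; n *= 10)
        printf "N = %5d transactions -> P(operation affected) = %.3f\n", n, 1 - (1 - p) ^ n
    }'

At N = 1000 the probability is already about 0.63, so a drop rate far too small to trip a health check can still touch most operations. This is one way that more moving parts, including redundant ones, can reduce rather than improve availability.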

The Golden Grail: Automatic Distributed Hyperparameter Tuning

While it might not be front and center in AI conversations, efficient hyperparameter tuning for neural network training is a tough problem. There are some options that aim to automate the process, but for most users this is a cumbersome area, and one that can lead to bad performance when not done properly.

The problem with building automatic tuning tools is that many machine learning workloads depend on the dataset and the conditions of the problem being solved. For instance, some users might prefer to trade a little accuracy for speed or efficiency…
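To make the problem concrete, here is a minimal sketch of the most naive form of automation, a random search; the ./train.sh script, its flags, and its printed validation score are assumptions for illustration, not anything described in the article.

    #!/bin/bash
    # Naive random search over two hyperparameters. Assumes a hypothetical
    # ./train.sh that runs a short training job and prints one validation score.
    best=0
    for trial in $(seq 1 20); do
      # Sample a learning rate log-uniformly from roughly 1e-4 to 1e-1.
      lr=$(awk -v s="$RANDOM" 'BEGIN { srand(s); printf "%.5f", 10 ^ (-(1 + 3 * rand())) }')
      bs=$(( 2 ** (RANDOM % 5 + 4) ))   # batch size: 16, 32, 64, 128, or 256
      score=$(./train.sh --lr "$lr" --batch-size "$bs")
      echo "trial $trial: lr=$lr batch=$bs score=$score"
      # Keep the best score seen so far (awk handles the float comparison).
      if awk -v a="$score" -v b="$best" 'BEGIN { exit !(a > b) }'; then
        best=$score
        echo "  new best: lr=$lr batch=$bs"
      fi
    done

Even this crude loop exposes the article's point: the search space, the budget per trial, and the score being optimized all have to be chosen per dataset and per problem, which is exactly what resists one-size-fits-all automation.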

The Golden Grail: Automatic Distributed Hyperparameter Tuning was written by Nicole Hemsoth at The Next Platform.

You’ll Never Believe the Big Hairy Audacious Startup John Jacob Astor Created in 1808

Think your startup has a Big Hairy Audacious Goal? Along with President Thomas Jefferson, John Jacob Astor conceived (in 1808) and implemented (in 1810) a plan to funnel the entire tradable wealth of the westernmost sector of the North American continent north of Mexico through his own hands. Early accounts described it as “the largest commercial enterprise the world has ever known.”

Think your startup raised a lot of money? Astor put up $400,000 ($7,614,486 in today's dollars) of his own money, with more committed after the first prototype succeeded.

Think competition is new? John Jacob Astor dealt with rivals in one of three ways: he tried to buy them out; if that didn’t work, he tried to partner with them; if he failed to join them, he tried to crush them.

Think your startup requires commitment? Joining Astor required pledging five years of one’s life to a start-up venture bound for the unknown.

Think your startup works hard? Voyageurs paddled twelve to fifteen hours per day, with short breaks while afloat for a pipe of tobacco. During a single day, each voyageur would make more than thirty thousand paddle strokes. On the upper Great… Continue reading

The best way to learn Docker for Free: Play-With-Docker (PWD)

Last year at the Distributed Systems Summit in Berlin, Docker Captains Marcos Nils and Jonathan Leibiusky started hacking on an in-browser solution to help people learn Docker. A few days later, Play-with-Docker (PWD) was born.

PWD is a Docker playground that lets users run Docker commands in a matter of seconds. It gives the experience of having a free Alpine Linux virtual machine in the browser, where you can build and run Docker containers and even create clusters in Docker Swarm Mode. Under the hood, Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs. In addition to the playground, PWD also includes a training site composed of a large set of Docker labs and quizzes, from beginner to advanced level, available at training.play-with-docker.com.
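For a flavor of what a PWD session can do, here is a hedged sketch; the commands are standard Docker CLI, but the exact session workflow is an assumption rather than something the post spells out. Because each PWD instance behaves like an ordinary Docker host, the usual commands work unchanged:

    # Run a web server on a PWD instance, then bootstrap Swarm Mode.
    docker run -d -p 80:80 --name web nginx
    docker swarm init --advertise-addr eth0
    # Scale a trivial service across the swarm and check its state.
    docker service create --replicas 3 --name pinger alpine ping docker.com
    docker service ls

On a multi-instance PWD session, the join token printed by swarm init can be pasted into the other instances to build a multi-node cluster, which is the DinD trick doing the work of several machines.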

In case you missed it, Marcos and Jonathan presented PWD during the last DockerCon Moby Cool Hack session. Watch the video below for a deep dive into the infrastructure and roadmaps.

Over the past few months, the Docker team has been working closely with Marcos, Jonathan and other active members of the Docker community to add new features to the project and Docker labs to the training section.

PWD: the Playground

Here… Continue reading

Will IoT party like 1999?

After I read Brian Bailey’s IoT semiconductor design article, IoT Myth Busting, I thought of Prince’s song 1999, in particular the line: “So tonight I'm gonna party like it's nineteen ninety-nine.”

Without a lot of irrational exuberance, we won’t see IoT edge and fog networks soon

Most IoT applications are prototypes and proofs of concept (PoC) designed to justify enterprise budget increases and follow-on venture investment rounds. Unless we return to and party like it is 1999, when telecoms over-invested in capacity ahead of demand, the telecom carriers are not going to build the new fog and edge networks that IoT needs to grow ahead of demand. At this stage, we would have to see a return of irrational exuberance, a term coined by Federal Reserve Chairman Alan Greenspan to describe the over-investment and over-valuation of the dot-com bubble.

To read this article in full or to leave a comment, please click here

Unix: How random is random?

On Unix systems, random numbers are generated in a number of ways, and random data can serve many purposes. From simple commands to fairly complex processes, the question “How random is random?” is worth asking.

EZ random numbers

If all you need is a casual list of random numbers, the RANDOM variable is an easy choice. Type "echo $RANDOM" and you'll get a number between 0 and 32,767 (the largest value a signed two-byte integer can hold).

    $ echo $RANDOM
    29366

Of course, this process is actually providing a "pseudo-random" number. As anyone who thinks about random numbers very often might tell you, numbers generated by a program have a limitation: programs follow carefully crafted steps, and those steps aren’t even close to being truly random. You can increase the randomness of RANDOM's value by seeding it (i.e., setting the variable to some initial value); some scripts just use the current process ID (via $$) for that. Note that for any particular starting point, the subsequent values that $RANDOM provides are quite predictable.

To read this article in full, please click here
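That predictability is easy to demonstrate. Below is a minimal check (the specific numbers printed vary by bash version, so none are shown here): seed RANDOM twice with the same value and the two sequences come out identical.

    # Seed bash's RANDOM, draw three values, then re-seed with the same value:
    # the second draw repeats the first exactly (on the same bash build).
    RANDOM=42
    a=$RANDOM; b=$RANDOM; c=$RANDOM
    RANDOM=42
    echo "first draw:  $a $b $c"
    echo "second draw: $RANDOM $RANDOM $RANDOM"

This is why seeding from something that changes, such as the process ID or the time of day, matters whenever the values must differ from run to run.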

Hyper-converged infrastructure – Part 1 : Is it a real thing ?

Recently I was lucky enough to play with Cisco HyperFlex in a lab, and since it was fun to play with, I decided to write a basic blog post about the hyper-converged infrastructure concept (experts, you can move on and read something else). It has really piqued my interest. I know I may be […]

The post Hyper-converged infrastructure – Part 1 : Is it a real thing ? appeared first on VPackets.net.

IBM touts full data encryption in new Z series mainframes

IBM has introduced the 14th generation of its Z series mainframes, which still sell respectably despite repeated predictions of their demise. One of the major features being touted is the ability to encrypt all of the data on the mainframe in one shot.

The mainframe, called IBM Z or z14, introduces a new encryption engine that for the first time will allow users to encrypt all of their data with one click—in databases, applications or cloud services—with virtually no impact on performance. The new encryption engine is capable of running more than 12 billion encrypted transactions every day. The mainframe comes with four times more silicon for processing cryptographic algorithms than the previous generation, along with encryption-oriented upgrades to the operating system, middleware and databases.

To read this article in full or to leave a comment, please click here
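The "virtually no impact on performance" claim rests on dedicated crypto silicon. As a rough, hedged illustration on commodity hardware (a generic OpenSSL benchmark, not an IBM Z tool; the second command is x86-specific), you can compare bulk AES throughput with the hardware AES instructions enabled and masked off:

    # Bulk AES-256-GCM throughput with hardware AES (AES-NI) available:
    openssl speed -evp aes-256-gcm
    # Same benchmark with AES-NI masked off via OpenSSL's capability vector,
    # forcing the software fallback (x86 only):
    OPENSSL_ia32cap="~0x200000200000000" openssl speed -evp aes-256-gcm

The gap between the two runs, often several-fold, is the kind of cost that dedicated engines such as z14's are built to hide.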