Making AWS re:Invent More Family-Friendly

AWS re:Invent is just around the corner, and Spousetivities will be there to help bring a new level of family friendliness to the event. If you’re thinking of bringing a spouse, partner, or significant other with you to Las Vegas, I’d encourage you to strongly consider getting him or her involved in Spousetivities.

Want a sneak peek at what’s planned? Have a look:

  • Monday’s activity is a full-day trip to Death Valley, including a stop at Badwater Basin (significant because, at 282 feet below sea level, it is the lowest point in North America!). Lunch is included, of course.
  • On Tuesday, Spousetivities participants will get to visit a number of locations on the Las Vegas Strip, including Siegfried and Roy’s Secret Garden, the Wildlife Habitat at the Flamingo, and the Shark Reef at Mandalay Bay. Transportation is provided for longer connections, but there will be some walking involved—wear comfortable shoes!
  • Wednesday includes a visit to Red Rock Canyon and Hoover Dam. There will be some opportunities for short sightseeing walks in Red Rock Canyon (plus the 13-mile scenic drive), and the Hoover Dam tour includes access to the generator room (a very cool sight).
  • Wrapping up the Continue reading

NetDevOpEd: The power of network verification

Microsoft just published information on an internal tool called “CrystalNet,” which it describes as “a high-fidelity, cloud-scale network emulator in daily use at Microsoft. We built CrystalNet to help our engineers in their quest to improve the overall reliability of our networking infrastructure.” You can read more about the tool in this detailed ACM paper. But what I want to talk about is how this amazing technology is accessible to you, at any organization, right now, with network verification using Cumulus VX.

What Microsoft has accomplished is truly amazing. They can simulate their network environment and prevent nearly 70% of the network issues they experienced over a two-year period. They have the ability to spin up hundreds of nodes with the exact same configurations and protocols they run in production. Then, by applying network tests, they verify whether proposed changes will have a negative impact on applications and services. This work took the Microsoft research team over two years to develop. It’s really quite the feat!

What I find exciting about this is that it validates exactly what we at Cumulus have been preaching for the last two years as well. The ability to make a 1:1 mirror of Continue reading
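The nice thing is that the workflow described above does not require a research team to get started. Purely as an illustration (the Vagrant-launched Cumulus VX topology, node names, and loopback addresses below are assumptions, not anything from the post), a pre-change verification run can be as small as a script that asserts the reachability you expect to survive a change:

```python
# verify_change.py -- a minimal sketch of a pre-change verification run against
# an emulated topology (e.g. Cumulus VX nodes brought up with Vagrant).
# Node names and target addresses are hypothetical; adapt to your simulation.
import subprocess

# Reachability we expect to survive the proposed change: (source node, target IP)
EXPECTED_REACHABILITY = [
    ("leaf01", "10.0.0.12"),   # hypothetical loopback of leaf02
    ("leaf01", "10.0.0.21"),   # hypothetical loopback of spine01
]

def ping_from(node, target):
    """Run a short ping from an emulated node via 'vagrant ssh -c'."""
    result = subprocess.run(
        ["vagrant", "ssh", node, "-c", f"ping -c 3 -W 1 {target}"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def verify():
    ok = True
    for node, target in EXPECTED_REACHABILITY:
        reachable = ping_from(node, target)
        print(f"{node} -> {target}: {'OK' if reachable else 'FAILED'}")
        ok = ok and reachable
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if verify() else 1)
```

Run against the emulated topology before and after a candidate configuration change, even a simple check like this captures the spirit of the verification loop Microsoft describes.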

Learning to Ask Questions

One thing I’m often asked in email and in person is: why should I bother learning theory? After all, you don’t install SPF in your network; you install a router or switch, which you then configure OSPF or IS-IS on. The SPF algorithm is not exposed to the user, and does not seem to really have any impact on the operation of the network. Such internal functionality might be neat to know, but ultimately, who cares? Maybe it will be useful in some projected troubleshooting situation, but the key to effective troubleshooting is understanding the output of the device, rather than understanding what the device is doing.

In other words, there is no reason to treat network devices as anything more than black boxes. You put some stuff in, other stuff comes out, and the vendor takes care of everything in the middle. I dealt with a related line of thinking in this video, but what about this black box argument? Do network engineers really need to know what goes on inside the vendor’s black box?

Let me answer this question with another question. When you shift to a new piece of hardware, how do you know what you are Continue reading
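For what it’s worth, the “internal functionality” in question is not magic. Here is a minimal, illustrative sketch (the four-router topology and its link costs are invented) of the shortest path first computation that a link-state protocol such as OSPF or IS-IS runs over its link-state database:

```python
# spf.py -- a toy shortest-path-first (Dijkstra) computation over a
# hypothetical link-state database, purely for illustration.
import heapq

def spf(lsdb, root):
    """Return {router: (cost, first_hop)} for every router reachable from root."""
    dist = {root: 0}
    first_hop = {}
    pq = [(0, root, None)]          # (cost so far, router, first hop toward it)
    visited = set()
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        first_hop[node] = hop
        for neighbor, link_cost in lsdb.get(node, []):
            new_cost = cost + link_cost
            if neighbor not in dist or new_cost < dist[neighbor]:
                dist[neighbor] = new_cost
                # the first hop is the neighbor itself when we leave the root
                heapq.heappush(pq, (new_cost, neighbor, neighbor if node == root else hop))
    return {r: (dist[r], first_hop[r]) for r in visited}

if __name__ == "__main__":
    # Hypothetical link-state database: router -> [(neighbor, cost), ...]
    lsdb = {
        "R1": [("R2", 10), ("R3", 5)],
        "R2": [("R1", 10), ("R4", 1)],
        "R3": [("R1", 5), ("R4", 20)],
        "R4": [("R2", 1), ("R3", 20)],
    }
    for router, (cost, hop) in spf(lsdb, "R1").items():
        print(f"{router}: cost={cost} first_hop={hop}")
```

Seeing the computation even in toy form is part of what separates reading a routing table from understanding why it looks the way it does.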

The Enterprise IT Checklist for Docker Operations

At Docker, we believe the best insights come from the developers and IT pros using the Docker platform every day. Since the launch of Docker Enterprise Edition, we have learned three things from our customers.

  1. First, a top goal in enterprise IT is to deliver value to customers (internal business units or external clients)…and to do so fast.
  2. Second, most enterprises believe that Docker is at the center of their IT platform.
  3. Finally, most enterprises’ biggest challenge is moving their containerized applications to production in time to prove value. My DockerCon talk focused on addressing the third item, which seems to be a critical one for many of our customers.

In our recent customer engagements, we’ve seen a pattern of common challenges when designing and deploying Docker in an enterprise environment. In particular, customers are struggling to find best practices to speed up their move to production. To address some of these common challenges, we put together a production readiness checklist (https://github.com/nicolaka/checklist) for Docker Enterprise Edition. This list was discussed thoroughly during my DockerCon EU 2017 session. Here’s a video of that talk:

I cover 10 key topics (shown below) that a typical enterprise should go through when deploying Continue reading

5 tricks for using the sudo command

The sudoers file can provide detailed control over user privileges, but with very little effort, you can still get a lot of benefit from sudo. In this post, we're going to look at some simple ways to get a lot of value out of the sudo command in Linux.

Trick 1: Nearly effortless sudo usage

The default file on most Linux distributions makes it very simple to give select users the ability to run commands as root. In fact, you don’t even have to edit the /etc/sudoers file in any way to get started. Instead, you just add the users to the sudo or admin group on the system and you’re done. Adding users to the sudo or admin group in the /etc/group file gives them permission to run commands using sudo.

To read this article in full, please click here
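As a small companion to that first trick, the following Python sketch reports which (if any) sudo-capable group already lists a given user. It is illustrative only: the placeholder username "alice" and the exact group names are assumptions, since distributions differ (Debian and Ubuntu use sudo, older Ubuntu releases used admin, Red Hat derivatives use wheel).

```python
# check_sudo_group.py -- a minimal sketch: report which sudo-capable groups
# already list a user. Unix-only (uses the standard-library grp module).
import grp
import sys

CANDIDATE_GROUPS = ("sudo", "admin", "wheel")   # names vary by distribution

def sudo_groups_for(username):
    """Return the sudo-capable groups whose member list includes this user."""
    found = []
    for name in CANDIDATE_GROUPS:
        try:
            group = grp.getgrnam(name)          # raises KeyError if the group is absent
        except KeyError:
            continue
        if username in group.gr_mem:            # supplementary members from /etc/group
            found.append(name)
    return found

if __name__ == "__main__":
    user = sys.argv[1] if len(sys.argv) > 1 else "alice"   # "alice" is a placeholder
    groups = sudo_groups_for(user)
    if groups:
        print(f"{user} can already use sudo via group(s): {', '.join(groups)}")
    else:
        print(f"{user} is not in a sudo-capable group; add with: usermod -aG sudo {user}")
```

The change itself is the one-liner the article alludes to: usermod -aG sudo username, run as root or via sudo.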

ARM Benchmarks Show HPC Ripe for Processor Shakeup

Every year at the Supercomputing Conference (SC) an unofficial theme emerges. For the last two years, machine learning and deep learning were focal points; before that it was all about data-intensive computing and, stretching even farther back, the potential of the cloud to reshape supercomputing.

What all of these themes have in common is that they did not focus on the processor. In fact, they centered around a generalized X86 hardware environment with well-known improvement and ecosystem cadences. Come to think of it, the closest we have come to seeing the device at the center of a theme in recent years

ARM Benchmarks Show HPC Ripe for Processor Shakeup was written by Nicole Hemsoth at The Next Platform.

Turn Network Engineers into Software Engineers

Peyton Koran, Director of Technical Engagement at Electronic Arts, delivered a great session on why network vendors are losing to open source and whitebox. His view is that network engineers need to embrace software engineering and be flexible. Vendors and VARs are no longer working for the benefit of the customer but to benefit themselves with increased […]

Cassandra NoSQL Data Model Design

We at Instaclustr recently published a blog post on the most common data modelling mistakes that we see with Cassandra. This post was very popular and led me to think about what advice we could provide on how to approach designing your Cassandra data model so as to come up with a quality design that avoids the traps.

There are a number of good articles around with rules and patterns to fit your data model into: 6 Step Guide to Apache Cassandra Data Modelling and Data Modelling Recommended Practices.

However, we haven’t found a step-by-step guide to analysing your data to determine how to fit it into these rules and patterns. This white paper is a quick attempt at filling that gap.

Phase 1: Understand the data

This phase has two distinct steps that are both designed to gain a good understanding of the data that you are modelling and the access patterns required.

Define the data domain

The first step is to get a good understanding of your data domain. As someone very familiar with relational data modelling, I tend to sketch (or at least think) ER diagrams to understand the entities, their keys and relationships. However, Continue reading
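To make that first step concrete, here is a purely illustrative sketch for a made-up IoT-style domain (the entities, fields, and query are assumptions, not taken from the post). The entities and their relationship are captured first; the access pattern then points at the partition and clustering keys the eventual table would need:

```python
# A hypothetical "understand the data" pass for an IoT-style domain,
# used only for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sensor:
    sensor_id: str          # natural key of the entity
    location: str

@dataclass
class Reading:
    sensor_id: str          # relationship: each reading belongs to one sensor
    taken_at: datetime
    value: float

# The access pattern drives the physical model in Cassandra:
#   "fetch all readings for one sensor over a time range, newest first"
# That query suggests partitioning by sensor_id and clustering by taken_at,
# which the eventual CQL table (sketched below as a string) would encode.
READINGS_BY_SENSOR_CQL = """
CREATE TABLE readings_by_sensor (
    sensor_id text,
    taken_at  timestamp,
    value     double,
    PRIMARY KEY ((sensor_id), taken_at)
) WITH CLUSTERING ORDER BY (taken_at DESC);
"""

if __name__ == "__main__":
    s = Sensor("sensor-42", "warehouse-3")
    r = Reading(s.sensor_id, datetime(2017, 11, 20, 12, 0), 21.5)
    print(f"example entity: {s}")
    print(f"example reading: {r}")
    print(READINGS_BY_SENSOR_CQL)
```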

Mellanox Poised For HDR InfiniBand Quantum Leap

InfiniBand and Ethernet are in a game of tug of war and are pushing the bandwidth and price/performance envelopes constantly. But the one thing they cannot do is get too far out ahead of the PCI-Express bus through which network interface cards hook into processors. The 100 Gb/sec links commonly used in Ethernet and InfiniBand server adapters run up against bandwidth ceilings with two ports running on PCI-Express 3.0 slots, and it is safe to say that 200 Gb/sec speeds will really need PCI-Express 4.0 slots to have two ports share a slot.

This, more than any other factor, is

Mellanox Poised For HDR InfiniBand Quantum Leap was written by Timothy Prickett Morgan at The Next Platform.
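The arithmetic behind that bandwidth ceiling is easy to check. The sketch below uses the standard per-lane signalling rates and 128b/130b encoding for PCI-Express 3.0 and 4.0; the port scenarios are simply the ones described in the excerpt:

```python
# pcie_headroom.py -- back-of-envelope arithmetic for NIC ports vs. slot bandwidth.
GT_PER_LANE = {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}   # giga-transfers/sec per lane
ENCODING = 128.0 / 130.0                              # 128b/130b line coding
LANES = 16                                            # a typical x16 NIC slot

def slot_bandwidth_gbps(gen):
    """Usable one-direction bandwidth of an x16 slot, in Gb/s."""
    return GT_PER_LANE[gen] * ENCODING * LANES

if __name__ == "__main__":
    for gen in GT_PER_LANE:
        print(f"{gen} x16 ~= {slot_bandwidth_gbps(gen):.0f} Gb/s per direction")
    # Two 100 Gb/s ports want 200 Gb/s -- more than a Gen3 x16 slot (~126 Gb/s)
    # can carry, while a single 200 Gb/s port already calls for a Gen4 x16
    # slot (~252 Gb/s).
```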

Thwarting the Tactics of the Equifax Attackers

We are now three months on from one of the biggest, most significant data breaches in history, but has it redefined people’s awareness of security?

The answer to that is absolutely yes: awareness is at an all-time high. Awareness, however, does not always result in positive action. The fallacy often assumed is: "surely, if I keep my software up to date with all the patches, that’s more than enough to keep me safe?" It’s true that keeping software up to date defends against known vulnerabilities, but it’s a very reactive stance. The more important part is protecting against the unknown.

Something every engineer will agree on is that security is hard, and maintaining systems is even harder. Patching or upgrading systems can lead to unforeseen outages or unexpected behaviour due to other fixes that may be applied. In most cases this causes huge delays in the deployment of patches or upgrades, because it requires either regression testing or deployment in a staging environment. Whilst those processes are followed and tests are run, systems sit vulnerable, ready to be exploited if they are exposed to the internet.

Looking at the wider landscape, an increase in security research Continue reading

Reflections from Copenhagen: RIPE NCC IPv6 Hackathon and Danish IPv6 Day

On 4-5 November, a group of enthusiastic and skillful people gathered at the 6th RIPE NCC hackathon with a theme of IPv6. The event was organized by RIPE NCC and DKNOG, sponsored by Comcast, hosted by IT University of Copenhagen and aimed to bring together open-minded developers and network engineers to work on different ideas and projects from the IPv6 field.

I was honoured to be a jury member, and even before the hackathon we were quite busy rating all the submissions that came in, as the number of hackathon participants was limited. All potential participants had to submit a short bio, explain what kind of development (programming) knowledge they had, and describe their ideas or expectations for the hackathon. We selected 24 participants – and what a skillful bunch that was! In total there were 33 people in the room: 24 participants, 5 jurors, and 4 RIPE NCC staff for on-site support.

On Saturday, 4 November, the group came together at IT University of Copenhagen and after a short opening and update on logistics and rules of the hackathon, people got to work. First was a “speaker’s corner”, where everyone with an idea for a Continue reading

Supercomputing is becoming super-efficient, Top500 list shows

Supercomputing is becoming super-efficient. The highest climber in the latest Top500 list of the world's fastest supercomputers is also one of the highest scorers on the Green500 ranking of the world's most efficient. But the November 2017 edition of the Top500 and Green500 is also remarkable in other ways, as it marks a tipping point in U.S. dominance of the list.

Chinese systems now outnumber U.S. systems on the list by 202 to 144, a reversal of the situation just six months ago, when the U.S. had 169 systems in the Top500 vs. China's 160. It will still be a long while before third-placed Japan overtakes the U.S.: it has 35 systems in the list, followed by Germany with 20, France with 18, and the UK with 15.

To read this article in full, please click here

10 of the world’s fastest supercomputers

The semi-annual Top500 ranking of the world’s fastest supercomputers is in for fall 2018, with China claiming 227 of the 500 spots on the list, although it managed to take just two places in the top 10. The United States took five of the top 10, including first and second place. New to the Top500 rankings at number 205 is Astra, an HPE-built machine at Sandia National Laboratories that is the first powered by ARM chips to make the list. The top 10 highlighted in this slideshow demonstrate what might become available in corporate data centers.

To read this article in full, please click here