Hiding the information from network core

Hiding information from the network core is important. But why is hiding this information important? What type of information are we trying to hide? Why only from the network core? And how can we hide it from the network core? Let's start with why information hiding is important. One of the […]

The post Hiding the information from network core appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

How log rotation works with logrotate

Log rotation on Linux systems is more complicated than you might expect. Which log files are rotated, when and how often, whether or not the rotated log files are compressed, and how many instances of the log files are retained all depend on settings in configuration files.

Rotating log files is important for several reasons. First, you probably don't want older log files eating up too much of your disk space. Second, when you need to analyze log data, you probably don't want those log files to be extremely large and cumbersome. And last, organizing log files by date probably makes spotting and analyzing changes quite a bit easier (e.g., comparing last week's log data to this week's).
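For a concrete sense of what those settings look like, here is a minimal sketch of a per-application file that could live under /etc/logrotate.d/; the path and values are hypothetical, not taken from the article:

    # Rotate this app's logs weekly and keep four compressed copies.
    /var/log/myapp/*.log {
        weekly
        rotate 4
        compress
        # Leave the newest rotated file uncompressed for easy viewing.
        delaycompress
        # Don't error if the file is missing; skip it when empty.
        missingok
        notifempty
    }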

NAT64check proves popular

We’ve already mentioned this a few times this year, but we’ve just published a more in-depth article about NAT64check over on the RIPE Labs and APNIC websites.

NAT64check is a tool developed by the Internet Society, Go6, SJM Steffann and Simply Understand that allows you to enter the URL of a particular website, and then run tests over IPv4, IPv6 and NAT64 in order to check whether the website is actually reachable in each case, whether identical web pages are returned, and whether all the resources such as images, stylesheets and scripts load correctly. The article also explains the rationale behind NAT64check, how it works, and how you can use it.
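To illustrate the basic idea (this is a sketch of the concept, not the tool's actual implementation), a few lines of Python can fetch the same page over IPv4 and IPv6 and compare the results; example.com is just a placeholder:

    import http.client
    import socket

    def fetch(host, family, path="/"):
        # Resolve the hostname to an address of the requested family.
        infos = socket.getaddrinfo(host, 80, family, socket.SOCK_STREAM)
        addr = infos[0][4][0]
        # Connect to that specific address, but send the original
        # hostname so name-based virtual hosting still works.
        conn = http.client.HTTPConnection(addr, 80, timeout=10)
        conn.request("GET", path, headers={"Host": host})
        resp = conn.getresponse()
        return resp.status, resp.read()

    host = "example.com"  # placeholder site to check
    status4, body4 = fetch(host, socket.AF_INET)
    status6, body6 = fetch(host, socket.AF_INET6)
    print("IPv4:", status4, "IPv6:", status6, "identical:", body4 == body6)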

If you just want to take a look at the tool, then please go to either https://nat64check.go6lab.si/ or https://nat64check.ipv6-lab.net/, type the URL you wish to check into the box at the top of the page, and the result should be returned within a few seconds. It’s simple and easy, and will help you identify what needs to be done to make your website accessible with IPv6.

Deploy360 also wants to help you deploy IPv6, so please take a look at our Start Here page to learn more.


Introducing Host Pack — host software essentials for fabric-wide connectivity + visibility

From day one, Cumulus Networks has always believed in making data center networking easier and better. To us, that never stopped at just an operating system. Our goal has always been to unify the entire stack on Linux and bring web-scale principles to all aspects of the data center networking process — from network to operations; from host to switch. This was one of the many key drivers behind our introduction of NetQ, a fabric validation system designed to make network operators’ lives easier by ensuring the network is behaving as intended. Today, we launch the next critical step in web-scale networking — Cumulus Host Pack.

Host Pack offers software essentials that bring the host into the network. It optimizes visibility and connectivity into the Cumulus Linux network fabric from end to end. Your entire stack can now be unified with the same language and the same tooling using the Linux networking model. Host Pack ensures real-time reliability and uptime for containers by leveraging NetQ to enhance visibility of the host. In addition to visibility, Host Pack enhances network scalability and connectivity by enabling the host to be part of the layer 3 network, while completely supporting popular layer 2 Continue reading
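As background, making the host "part of the layer 3 network" is typically done by running a routing daemon such as FRR on the server itself. A minimal sketch of a BGP unnumbered configuration on a host could look like the following; the AS number, router ID, and interface name are made up for illustration:

    router bgp 65101
     bgp router-id 10.0.0.11
     ! Peer over the attached interface without configuring neighbor IPs.
     neighbor eth0 interface remote-as external
     address-family ipv4 unicast
      ! Advertise the host's own addresses into the fabric.
      redistribute connected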

VxLAN on the CSR1Kv

By now, VxLAN is becoming the standard way of tunneling in the datacenter.
Using VxLAN, I will show how to use the CSR1Kv to extend your datacenter L2 reach between sites as well.

First off, what is VxLAN?
It stands for Virtual Extensible LAN. Basically, it gives you a way of decoupling your VLANs into a new numbering scheme.

You map each VLAN into a VNI (Virtual Network Identifier), which in essence makes your VLAN numbering scheme locally significant.

Also, since a VNI is a 24-bit identifier, you get roughly 16 million possible segments, a lot more flexibility than the regular 4096 definable VLANs (12-bit 802.1Q tags).

Each endpoint that does the encapsulation/decapsulation is called a VTEP (VxLAN Tunnel EndPoint). In our example this would be CSR3 and CSR5.
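To make the VTEP role concrete, here is a hedged sketch of what a flood-and-learn VTEP definition can look like on a CSR1000v; the VNI, multicast group, bridge-domain, and interface numbers are illustrative, not taken from this lab:

    interface nve1
     no shutdown
     source-interface Loopback0
     ! Map the VNI to a multicast group for BUM traffic (no EVPN).
     member vni 10100 mcast-group 239.1.1.100
    !
    interface GigabitEthernet2
     service instance 100 ethernet
      encapsulation dot1q 100
      rewrite ingress tag pop 1 symmetric
    !
    bridge-domain 100
     member vni 10100
     member GigabitEthernet2 service-instance 100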

After the VxLAN header is added, the packet is further encapsulated in a UDP packet and forwarded across the network. This is a great solution, as it doesn't impose any technical restrictions on the core of the network. Only the VTEPs need to understand VxLAN (and probably have hardware acceleration for it as well).
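The resulting header stack is easy to see with a quick Scapy sketch (the addresses and VNI below are invented for illustration):

    from scapy.layers.inet import IP, UDP
    from scapy.layers.l2 import Ether
    from scapy.layers.vxlan import VXLAN

    # The original L2 frame from the workload.
    inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb")

    # Outer headers added by the sending VTEP: the frame rides inside
    # UDP (well-known destination port 4789) between the VTEP addresses.
    frame = (Ether() /
             IP(src="10.0.0.3", dst="10.0.0.5") /   # VTEP to VTEP
             UDP(sport=49152, dport=4789) /
             VXLAN(vni=10100) /                     # 24-bit segment ID
             inner)

    frame.show()  # prints the full layer stack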

Since we won't be using BGP EVPN, we will rely solely on multicasting in the network to establish who is the Continue reading

IDG Contributor Network: The future of disaster recovery lies in a future without the public internet

What is it that’s driving enterprises to the cloud? That’s a long list: web-based storage, stability, easier remote access and reductions in maintenance and associated costs are a few of the most frequently cited reasons. But the number one application on organizations’ minds when they’re mapping out cloud migration strategies is disaster recovery (DR). Consequently, disaster recovery has become a primary source of value for enterprises that are not only pursuing cloud adoption, but are also building out hybrid- or multi-cloud strategies.

ROI is not a cybersecurity concept

In the cybersecurity community, much time is spent trying to speak the language of business in order to communicate our problems to business leaders. One way we do this is by trying to adapt the concept of "return on investment," or "ROI," to explain why they need to spend more money. Stop doing this. It's nonsense. ROI is a concept pushed by vendors to justify why you should pay money for their snake-oil security products. Don't play the vendor's game.

The correct concept is simply "risk analysis". Here's how it works.

List out all the risks. For each risk, calculate:

  • How often it occurs.
  • How much damage it does.
  • How to mitigate it.
  • How effective the mitigation is (reduces chance and/or cost).
  • How much the mitigation costs.

If you have a risk of something that'll happen once per day on average, costing $1000 each time, then a mitigation costing $500/day that reduces the likelihood to once per week is a clear win for investment: expected losses drop from $1000/day to roughly $143/day, so even after paying for the mitigation you come out about $357/day ahead.
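A few lines of Python make that comparison explicit (the numbers are the hypothetical ones from the example above, not real data):

    incident_cost = 1000.0      # dollars of damage per incident
    baseline_rate = 1.0         # incidents per day, unmitigated
    mitigated_rate = 1.0 / 7    # once per week with the mitigation
    mitigation_cost = 500.0     # dollars per day for the mitigation

    baseline_loss = baseline_rate * incident_cost
    mitigated_total = mitigated_rate * incident_cost + mitigation_cost

    print(f"unmitigated expected loss: ${baseline_loss:.2f}/day")    # $1000.00
    print(f"mitigated loss plus cost:  ${mitigated_total:.2f}/day")  # ~$642.86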

Now, ROI should in theory fit directly into this model. If you are paying $500/day to reduce that risk, I could use ROI to show you hypothetical products that will ...

  • ...reduce the remaining risk to once-per-month for an additional $10/day.
  • ... Continue reading

Inside View: Tokyo Tech’s Massive Tsubame 3 Supercomputer

Professor Satoshi Matsuoka of the Tokyo Institute of Technology (Tokyo Tech) researches and designs large-scale supercomputers and similar infrastructures. More recently, he has worked on the convergence of big data, machine/deep learning, and AI with traditional HPC, as well as investigating post-Moore technologies towards 2025.

He has designed supercomputers for years and has collaborated on projects involving basic elements for current and, more importantly, future exascale systems. I talked with him recently about his work with the Tsubame supercomputers at Tokyo Tech. This is the first in a two-part article. For background on the Tsubame 3 system

Inside View: Tokyo Tech’s Massive Tsubame 3 Supercomputer was written by Nicole Hemsoth at The Next Platform.

Docker brings containers to mainframes

Docker announced the first major update to its flagship Docker Enterprise Edition 17.06, with a clear eye to on-premises data centers and DevOps. Docker rolled out the rebranded Docker EE in March, based on what was previously known as the Docker Commercially Supported and Docker Datacenter products. With that launch, Docker added the ability to port legacy apps to containers without having to modify the code.

The major new feature of this update — which seems to borrow from Microsoft’s year/month naming convention for Windows 10 updates — is support for IBM z Systems mainframes running Linux. Now containerized apps can be run on a mainframe, with all of the scale and uptime reliability it brings, and they run with no modifications necessary.

IoT is about to tell you when your food is spoiled

“It’s not getting any younger.”

In my house, that’s code for either eating or trashing something in the refrigerator that’s flirting with its “best-by” date — or just no longer looks as appetizing as it once did.

Sensors are the core of the Internet of Things

But what if Internet of Things (IoT) sensor technology could tell you whether that lasagna was still safe for dinner or whether it’s time to toss the hair-coloring product slowly drying out in the back of your medicine cabinet? That promise is what’s on the menu at the 254th National Meeting & Exposition of the American Chemical Society (ACS) in Washington, D.C., this week. So, what does the world’s largest scientific society, with more than 157,000 members, have to do with IoT?

An Early Look at Baidu’s Custom AI and Analytics Processor

In the U.S. it is easy to focus on our native hyperscale companies (Google, Amazon, Facebook, etc.) and how they design and deploy infrastructure at scale.

But as our regular readers understand well, the equivalent of Google in China, Baidu, has been at the bleeding edge with chips, systems, and software to feed its own cloud-delivered and research operations.

We’ve written much over the last few years about the company’s emphasis on streamlining deep learning processing, most notably with GPUs, but Baidu has a new processor up its sleeve called the XPU. For now, the device has just been demonstrated

An Early Look at Baidu’s Custom AI and Analytics Processor was written by Nicole Hemsoth at The Next Platform.