Natural Ways to Support Brain Health

June is Alzheimer’s and Brain Awareness Month. With some 50 million people worldwide living with Alzheimer’s disease and other forms of dementia, this sadly common condition is one of the biggest health problems in the world today.

All About Alzheimer’s

Alzheimer’s is a degenerative brain disease. Although its cruel effects are not noticeable at first, doctors believe that brain degeneration begins around 20 years before symptoms first appear, possibly earlier. This means that although younger people may assume that Alzheimer’s only affects “old people,” it is important to remember that this disease does not suddenly appear fully formed once you have reached senior age. Alzheimer’s can hit people in their early 50s in some cases, so what you do in your 20s and 30s will certainly have an impact on your long-term brain health.

Nobody fully understands the causes and risk factors for Alzheimer’s disease. There could be a genetic link to the condition, but growing evidence points towards dietary factors. Studies show that a diet rich in antioxidants, with plenty of fresh fruit, vegetables, and oily fish – such as the Mediterranean diet – does have a neuroprotective effect.

However, with the average American diet, which is Continue reading

Adopting the Default Route Table of an AWS VPC using Pulumi and Go

Up until now, when I used Pulumi to create infrastructure on AWS, my code would create all-new infrastructure: a new VPC, new subnets, new route tables, new Internet gateway, etc. One thing bothered me, though: when I created a new VPC, that new VPC automatically came with a default route table. My code, however, would create a new route table and then explicitly associate the subnets with that new route table. This seemed less than ideal. (What can I say? I’m a stickler for details.) While building a Go-based replacement for my existing TypeScript code, I found a way to resolve this duplication of resources. In this post, I’ll show you how to “adopt” the default route table of an AWS VPC so that you can manage it in your Pulumi code.

Let’s assume you are creating a new VPC using code that looks something like this:

// k8sTag is assumed to be a string variable defined earlier in the
// program (e.g., a Kubernetes cluster tag key).
vpc, err := ec2.NewVpc(ctx, "testvpc", &ec2.VpcArgs{
	CidrBlock: pulumi.String("10.100.0.0/16"),
	Tags: pulumi.Map{
		"Name": pulumi.String("testvpc"),
		k8sTag: pulumi.String("shared"),
	},
})

(Note that this snippet of code doesn’t show anything happening with the return values of the ec2.NewVpc function, which Go will complain about. Make Continue reading
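
For a sense of where this is headed, here’s a minimal sketch of adopting the default route table via the pulumi-aws ec2.NewDefaultRouteTable resource, assuming it continues inside the same pulumi.Run function as the snippet above. The 0.0.0.0/0 route and the gw Internet gateway are illustrative assumptions on my part, not code from the truncated post:

// Adopt the route table AWS created automatically with the VPC, so that
// Pulumi manages it instead of creating (and associating) a separate one.
_, err = ec2.NewDefaultRouteTable(ctx, "defrt", &ec2.DefaultRouteTableArgs{
	// DefaultRouteTableId ties this resource to the VPC's default route table.
	DefaultRouteTableId: vpc.DefaultRouteTableId,
	Routes: ec2.DefaultRouteTableRouteArray{
		&ec2.DefaultRouteTableRouteArgs{
			CidrBlock: pulumi.String("0.0.0.0/0"),
			GatewayId: gw.ID(), // assumes an *ec2.InternetGateway named gw created earlier
		},
	},
	Tags: pulumi.Map{
		"Name": pulumi.String("defrt"),
	},
})
if err != nil {
	return err
}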

Kernel of Truth season 3 episode 8: Cumulus Linux in action at Cloudscale

Subscribe to Kernel of Truth on iTunes, Google Play, Spotify, Cast Box, and Stitcher!

Click here for our previous episode.

If you’ve listened to the podcast before, you may have heard us reference our customers from time to time. In this episode we’re switching things up and instead of referencing a customer, you’re going to hear directly from one! Manuel Schweizer, CEO of Cloudscale, joins host Roopa Prabhu, Attilla de Groot, and Mark Horsfield to chat about Cloudscale’s firsthand experience with open networking and what they hope the near and distant future of open networking will look like.

Guest Bios

Roopa Prabhu: Roopa Prabhu is a Linux Architect at Cumulus Networks, now NVIDIA. At Cumulus, she and her team work on all things kernel networking and Linux system infrastructure. Her primary focus areas in the Linux kernel are the Linux bridge, Netlink, VXLAN, and lightweight tunnels. She is currently focused on building the Linux kernel dataplane for EVPN. She loves working with the Linux kernel networking and Debian communities. Her past experience includes Linux clusters, Ethernet drivers, and Linux KVM virtualization platforms. She has a BS and MS in Computer Science. You can find her on Twitter at Continue reading

Running a container in Microsoft Azure Container Instances (ACI) with Docker Desktop Edge

Earlier this month Docker announced our partnership with Microsoft to shorten the developer commute between the desktop and running containers in the cloud. We are excited to announce the first release of the new Docker Azure Container Instances (ACI) experience today and wanted to give you an overview of how you can get started using it.

The new Docker and Microsoft ACI experience allows developers to easily move between working locally and running in the cloud with ACI, using the same Docker CLI experience they use today! We have done this by expanding the existing docker context command to support ACI as a new backend. We worked with Microsoft to target ACI because we felt its performance and ‘zero cost when nothing is running’ model made it a great place to jump into running containers in the cloud.

ACI is a Microsoft serverless container solution for running a single Docker container or a service composed of a group of multiple containers defined with a Docker Compose file. Developers can run their containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. For production cases, you can Continue reading
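
As a rough sketch of what that workflow looks like from a terminal (myacicontext is a placeholder name; the context creation step prompts you to pick an Azure subscription and resource group):

# Log in to Azure so Docker can manage ACI resources on your behalf
docker login azure

# Create a Docker context backed by ACI, then point the CLI at it
docker context create aci myacicontext
docker context use myacicontext

# This now runs the container in ACI rather than on the local engine
docker run -p 80:80 nginx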

Data Is The New Solar Energy

You’ve probably been hearing a lot about analytics and artificial intelligence in the past couple of years. Every software platform under the sun is looking to increase its visibility into the way that networks and systems behave. It can then take that data, plug it into a model, and make recommendations about the way things need to be configured or designed.

Using analytics to aid troubleshooting is nothing new. We used to be able to tell when hard disks were about to go bad because of SMART reporting. Today we can use predictive analysis to determine when the disk has passed the point of no return and should be replaced well ahead of the actual failure. We can even plug that data into an AI algorithm to determine which drives on which devices need to be examined first based on a chart of performance data.

The power of this kind of data-driven network and systems operation does help our beleaguered IT departments feel as though they have a handle on things. And the power that data can offer us has led to it being tracked like a precious natural resource. More than a few times I’ve heard data referred to Continue reading

Seven Reasons Why Network Automation Is Important

Organizations today constantly seek greater agility and speed in their IT operations. They’re looking to seize market advantage by innovating with new technology and quickly responding to shifting market trends. Meanwhile, IT teams seek higher levels of simplicity and automation – and more efficient allocation of limited resources – in order to support these larger business goals.

Why Businesses Need Network Automation

A major roadblock many organizations face in the drive for efficiency is that their enterprise network is far more difficult to manage than ever before. Distributed workloads and distributed IT resources have led to extremely complex configurations and poor visibility across the environment. To make matters worse, much of the management work on these networks has traditionally been performed manually, via command-line entry. That’s proved to be tedious, costly, unnecessarily rigid, and prone to error. 

Industry reports find that 40-80% of network failures are the result of human error

Network outages are of course a large pain point in enterprise networking, but there are certainly others. Complex, hard-to-manage networks are hindering business innovation, making critical security improvements more difficult, and driving up costs. This set of drawbacks has naturally led to a search for better Continue reading

Adding integration tests to Ansible Content Collections

In the previous installment of our "let us create the best Ansible Content Collection ever" saga, we covered the DigitalOcean-related content migration process. What we ended up with was a fully functioning Ansible Content Collection that unfortunately had no tests. But not for long; we will be adding an integration test for the droplet module.


We do not need tests, right?

If we were able to write perfect code all of the time, there would be no need for tests. But unfortunately, this is not how things work in real life. Any modestly useful software has deadlines attached, which usually means that developers need to strike a compromise between polish and delivery speed.

For us, the Ansible Content Collections authors, having a semi-decent Collection of integration tests has two main benefits:

  1. We know that the tested code paths function as expected and produce desired results.
  2. We can catch the breaking changes in the upstream product that we are trying to automate.

The second point is especially crucial in the Ansible world, where one team of developers is usually responsible for the upstream product, and a separate group maintains Ansible content.
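
To make that concrete, here is a rough sketch of what an integration test target for the droplet module could look like; the file path follows the ansible-test convention, but the module parameters and the do_api_token variable are illustrative assumptions rather than the collection’s actual test:

# tests/integration/targets/digital_ocean_droplet/tasks/main.yml
- name: Create a test droplet
  digital_ocean_droplet:
    state: present
    name: ansible-integration-test
    size: s-1vcpu-1gb
    image: ubuntu-20-04-x64
    region: fra1
    oauth_token: "{{ do_api_token }}"
  register: result

- name: Verify that the droplet was created
  assert:
    that:
      - result is changed

Running ansible-test integration digital_ocean_droplet from the collection root would then exercise this target against the real DigitalOcean API.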

With the "why integration tests" behind us, we can Continue reading

BGP Navel Gazing on Software Gone Wild

This podcast introduction was written by Nick Buraglio, the host of today’s podcast.

As we all know, BGP runs the networked world. It is a protocol that has existed and operated in the vast expanse of the internet in one form or another since the early 1990s, and despite the fact that it has been extended, enhanced, twisted, and warped into performing a myriad of tasks that one would never have imagined in the silver era of internetworking, it has remained largely unchanged at its operational core.

The world as we know it would never exist without BGP, and because it is such a widely deployed protocol with such a solid track record of “just working,” the transition to a better security model surrounding it has been extraordinarily slow.

Accelerated Databases In The Fast Lane

Hardware accelerated databases are not new things. More than twenty years ago, Netezza was founded and created a hybrid hardware architecture that ran PostgreSQL on a big, wonking NUMA server running Linux and accelerated certain functions with adjunct accelerators that were themselves hybrid CPU-FPGA server blades that also stored the data.

Accelerated Databases In The Fast Lane was written by Timothy Prickett Morgan at The Next Platform.

Amateur packet radio walkthrough

An earlier version of this post that did data over D-Star was misleading. This is the new version.

This blog post aims to describe the steps for setting up packet radio on modern hardware with Linux. There’s lots of ham radio documentation out there about various setups, but it’s usually at least 20 years old, and you’ll find recommendations to use software that hasn’t been updated in just as long.

Specifically, here I’ll set up a Kenwood TH-D74 and an Icom 9700 to talk to each other over D-Star and AX.25. But for the latter you can also use cheap Baofengs just as well.

Note that 9600bps AX.25 can only be generated by a compatible radio. 1200bps can be sent to a non-supporting radio as audio, but 9600bps cannot. So both D-Star and AX.25 here will give only 1200bps. But with hundreds of watts you can get really far with it, at least.

I’ll assume that you already know how to set up APRS (and therefore KISS) on a D74. If not, get comfortable with that first by reading the manual.
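
To give you an idea of the Linux side before we get into the details, here is a sketch of bringing up a 1200bps AX.25 port with the ax25-tools/ax25-apps packages; the device path and callsigns are placeholders for your own setup:

# /etc/ax25/axports defines the port, one line per port:
#   name   callsign  speed  paclen  window  description
#   radio  M0XXX-1   1200   255     7       TH-D74 KISS TNC

# Attach the KISS TNC (here, the D74 over Bluetooth serial) to the
# kernel AX.25 stack as port "radio".
sudo kissattach /dev/rfcomm0 radio

# Watch raw AX.25 traffic on all attached ports.
sudo axlisten -a

# Make a connected-mode AX.25 call to another station.
axcall radio M0YYY-1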

DMR doesn’t seem to have a data mode, and SystemFusion radios don’t give the user access Continue reading

Ripple20 TCP/IP flaws can be patched but still threaten IoT devices

A set of serious network security vulnerabilities collectively known as Ripple20 roiled the IoT landscape when they came to light last week, and the problems they pose for IoT-equipped businesses could be both dangerous and difficult to solve.

Ripple20 was originally discovered by Israel-based security company JSOF in September 2019. It affects a lightweight, proprietary TCP/IP library created by a small company in Ohio called Treck, which has issued a patch for the vulnerabilities. Several of those vulnerabilities would allow remote code execution, opening the door to data theft, malicious takeovers and more, the security vendor said.

That, however, isn’t the end of the problem. The TCP/IP library that contains the vulnerabilities has been used in a huge range of connected devices, from medical devices to industrial control systems to printers, and actually delivering and applying the patch is a vast undertaking. JSOF said that “hundreds of millions” of devices could be affected. Many devices don’t have the capacity to receive remote patches, and Terry Dunlap, co-founder of security vendor ReFirm Labs, said that there are numerous hurdles to getting patches onto older equipment in particular.

To read this article in full, please click here

The Hedge Episode 41: Centralized Architectures with Jari Arkko

Consolidation is a well-recognized trend in the Internet ecosystem—but what does this centralization mean in terms of distributed systems, such as the DNS? Jari Arkko joins this episode of the Hedge, along with Alvaro Retana, to discuss the import and impact of centralization on the Internet through his draft, draft-arkko-arch-infrastructure-centralisation.

download

Day Two Cloud 054: Real Life VMware Cloud On AWS

We discuss the reality of running VMware Cloud (VMC) on AWS with Adam Fisher, Cloud & DevOps Engineer at RoundTower. Adam's been deploying VMC on AWS in the real world for customers since the product's early days, and has plenty of insights. VMC on AWS presents a VMware software defined data center (SDDC) hosted on bare metal in AWS data centers. If you're trying to vacate your own data centers or colos, but aren't going to refactor your applications to do it, VMC on AWS presents a compelling technical solution.

The post Day Two Cloud 054: Real Life VMware Cloud On AWS appeared first on Packet Pushers.

Getting AWS Availability Zones using Pulumi and Go

I’ve written several different articles on Pulumi (take a look at all articles tagged “Pulumi”), the infrastructure-as-code tool that allows users to define their infrastructure using a general-purpose programming language instead of a domain-specific language (DSL). Thus far, my work with Pulumi has leveraged TypeScript, but moving forward I’m going to start sharing more Pulumi code written using Go. In this post, I’ll share how to use Pulumi and Go to get a list of Availability Zones (AZs) from a particular region in AWS.

Before I proceed, I feel like it is important to provide the disclaimer that I’m new to Go (and therefore still learning). There are probably better ways of doing what I’m doing here, and so I welcome all constructive feedback on how I can improve.

With that disclaimer out of the way, allow me to first provide a small bit of context around this code. When I’m using Pulumi to manage infrastructure on AWS, I like to try to keep things as region-independent as possible. Therefore, I try to avoid hard-coding things like the number of AZs or the AZ names, and prefer to gather that information dynamically—which is what this code does.

Here’s the Continue reading
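
The code itself is cut off above, but a minimal self-contained sketch of the idea, using the pulumi-aws GetAvailabilityZones invoke (the “available” state filter and the exported names are my assumptions, not necessarily the post’s exact code), could look like this:

package main

import (
	"fmt"

	"github.com/pulumi/pulumi-aws/sdk/v2/go/aws"
	"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Ask AWS which AZs are currently available in the region the
		// provider is configured for, rather than hard-coding names.
		state := "available"
		azs, err := aws.GetAvailabilityZones(ctx, &aws.GetAvailabilityZonesArgs{
			State: &state,
		})
		if err != nil {
			return err
		}

		// Export the AZ names so they can be consumed elsewhere.
		for i, name := range azs.Names {
			ctx.Export(fmt.Sprintf("az%d", i), pulumi.String(name))
		}
		return nil
	})
}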

Top 5 Questions from “How to become a Docker Power User” session at DockerCon 2020

This is a guest post from Brian Christner. Brian has been a Docker Captain since 2016, is host of The Byte podcast, and is Co-Founder & Site Reliability Engineer at 56K.Cloud. At 56K.Cloud, he helps companies adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.

It was a fantastic experience hosting my first ever virtual conference session. The commute to my home office was great, and I even picked up a coffee on the way before my session started. No more waiting in lines, queueing for food, or sitting on the conference floor somewhere in a corner to check emails. 

The “DockerCon 2020 that’s a wrap” blog post highlighted that my session, “How to Become a Docker Power User using VS Code,” was one of the most popular sessions from DockerCon. Docker asked if I could write a recap and summarize some of the top questions that appeared in the chat. Absolutely.

Honestly, I liked the presenter/audience interaction more than at an in-person conference. Typically, a presenter broadcasts their content to a room full of participants, and if you are lucky and Continue reading