Dell EMC speeds up backups and restores in its storage appliances

Dell EMC has introduced new software for its Data Domain and Integrated Data Protection Appliance (IDPA) products that it claims will improve backup and restore performance by anywhere from 2.5 to four times over the previous version.

Data Domain is Dell EMC’s purpose-built data-deduplicating backup appliance, originally acquired by EMC long before the merger of the two companies. The IDPA is a converged solution that offers complete backup, replication, recovery, and deduplication, with cloud extensibility.

Performance is the key feature Dell is touting with Data Domain OS 6.2 and IDPA 2.3 software. Dell says Data Domain on-premises restores are up to 2.5 times faster than prior versions, while data restoration from the Amazon Web Services (AWS) public cloud to an on-premises Data Domain appliance can be up to four times faster.

To read this article in full, please click here

How to make CI/CD with containers viable in production

Continuous Integration and Continuous Delivery (CI/CD) and containers are both at the heart of modern software development. CI/CD developers regularly break up applications into microservices, each running in its own container. Individual microservices can be updated independently of one another, and CI/CD developers aim to make those updates frequently.

This approach to application development has serious implications for networking.

There are a lot of things to consider when talking about the networking implications of CI/CD, containers, microservices and other modern approaches to application development. For starters, containers offer more density than virtual machines (VMs); you can stuff more containers into a given server than is possible with VMs.

Meanwhile, containers have networking requirements just like VMs do, and more workloads per server means more networking resources are required per server: more MAC addresses, IPs, DNS entries, load balancers, monitoring, intrusion detection, and so forth. Network plumbing hasn’t changed, so more workloads means more plumbing to instantiate and keep track of.
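
A rough, hypothetical back-of-the-envelope calculation makes the scaling point; the density figures below are illustrative assumptions, not measurements from the article:

```python
# Illustrative only: assumed densities to show how container density multiplies
# the per-server networking state (MACs, IPs, DNS entries) that must be managed.
servers = 100
vms_per_server = 20          # assumed VM density
containers_per_server = 150  # assumed container density on the same hardware

# Each workload needs at least one MAC address, one IP, and one DNS entry.
vm_endpoints = servers * vms_per_server
container_endpoints = servers * containers_per_server

print(f"Endpoints to plumb with VMs:        {vm_endpoints:,}")         # 2,000
print(f"Endpoints to plumb with containers: {container_endpoints:,}")  # 15,000
```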

Containers can live inside a VM or on a physical server, which means they may have different networking requirements than traditional VMs and other workloads (only talking to other containers within the same VM, for example). Continue reading

Cumulus content roundup: January

The new year is now in full swing and we’re excited about all the great content we’ve shared with you so far! In case you missed some of it, here’s our Cumulus content roundup: January edition. As always, we kept busy last month with lots of great resources and news for you to read. One of the biggest things we announced was our new partnership with Nutanix, but wait, there’s so much more! We’ve rounded up the rest right here, so settle in and stay a while!

From Cumulus Networks:

Cumulus + Nutanix = Building and Simplifying Open, Modern Data Centers at Scale: We are excited to announce that Cumulus and Nutanix are partnering to build and operate modern data centers with open networking software.

Moving a Prototype Network to Production: By prototyping production networks, the network is elevated to a standard far superior to traditional approaches.

Operations guide: We thought it would be great Continue reading

IDG Contributor Network: Named data networking: names the data instead of data locations

Today, connectivity to the Internet is easy; you simply get an Ethernet driver and hook up the TCP/IP protocol stack. Then dissimilar network types in remote locations can communicate with each other. However, before the introduction of the TCP/IP model, networks had to be connected manually; with the TCP/IP stack, the networks can connect themselves up, nice and easy. This eventually caused the Internet to explode, followed by the World Wide Web.

So far, TCP/IP has been a great success. It’s good at moving data and is both robust and scalable. It enables any node to talk to any other node by using a point-to-point communication channel with IP addresses as identifiers for the source and destination. Ideally, a network ships the data bits. You can either name the locations to ship the bits to or name the bits themselves. Today’s TCP/IP protocol architecture picked the first option. Let’s discuss the second option later in the article.

To read this article in full, please click here
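
A minimal, purely conceptual Python sketch of that distinction (the names, addresses, and data are made up; this is not an NDN implementation):

```python
# Conceptual contrast only: location-addressed vs. name-addressed retrieval.

# Option 1: name the location. The requester must know which host holds the bits.
hosts = {
    "203.0.113.10": {"/videos/lecture1": b"lecture bits"},
}

def fetch_by_location(ip, path):
    # Breaks if the content moves to a different host or the host is unreachable.
    return hosts[ip][path]

# Option 2: name the data. Any node holding a copy can satisfy the request.
content_store = {
    "/university/videos/lecture1": b"lecture bits",  # could be cached anywhere
}

def fetch_by_name(name):
    # The name identifies the data itself, not where it currently lives.
    return content_store[name]

print(fetch_by_location("203.0.113.10", "/videos/lecture1"))
print(fetch_by_name("/university/videos/lecture1"))
```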

Get TotalAV Essential AntiVirus for $19.99 (80% off)

The term “computer virus” calls to mind imagery of pathogenic creepy-crawlies bringing down a device’s operating system, their flagella wriggling as they multiply into hordes that infiltrate its chips and wires. And while it’s true that our computers can be infected with literal biological bacteria like staphylococci, per Science Illustrated, the threat of malicious code and programs intent on corrupting data and files looms far larger: According to a recent study from the University of Maryland’s Clark School of Engineering, attacks on computers with internet access are virtually ceaseless, with an incident occurring every 39 seconds on average, affecting a third of Americans every year. To read this article in full, please click here

What is hyperconvergence?

Hyperconvergence is an IT framework that combines storage, computing and networking into a single system in an effort to reduce data center complexity and increase scalability.

Hyperconverged platforms include a hypervisor for virtualized computing, software-defined storage, and virtualized networking. They typically run on standard, off-the-shelf servers, and multiple nodes can be clustered to create pools of shared compute and storage resources, designed for convenient consumption.

The use of commodity hardware, supported by a single vendor, yields an infrastructure that's designed to be more flexible and simpler to manage than traditional enterprise storage infrastructure. For IT leaders who are embarking on data center modernization projects, hyperconvergence can provide the agility of public cloud infrastructure without relinquishing control of hardware on their own premises.

To read this article in full, please click here

Ansible Operator: What is it? Why does it matter? What can you do with it?

The Red Hat Ansible Automation and Red Hat OpenShift teams have been collaborating to build a new way to package, deploy, and maintain Kubernetes native applications: Ansible Operator. Given the interest in moving workloads to Kubernetes, we are happy to introduce a new tool that can help ease the move toward cloud native infrastructure.

What is Kubernetes? The simplest definition I’ve ever used is, “Kubernetes is a container orchestrator.” But that is, of course, a simplification.

What is OpenShift? Red Hat OpenShift Container Platform is an enterprise-grade Kubernetes distribution. It enables management of container applications across hybrid cloud and multicloud infrastructure.

First, let’s identify the problem operators can help us solve. Operators help simplify deployment, management, and operations of stateful applications in Kubernetes. But writing an operator today can be difficult because of the knowledge of Kubernetes components required to do so. The Operator SDK is a framework that uses the controller-runtime library to make writing operators simpler. The SDK enables Operator development in Go, Helm, or Ansible.
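
For a concrete sense of how the Ansible flavor works, here is a minimal, illustrative watches-file sketch; the group, kind, and role path are hypothetical placeholders, not taken from the article. The Operator watches a custom resource and runs the mapped Ansible role to reconcile it:

```yaml
# Hypothetical example: map a custom resource to the Ansible role that reconciles it.
- version: v1alpha1                    # CR API version to watch
  group: cache.example.com             # CR API group (placeholder)
  kind: Memcached                      # CR kind (placeholder)
  role: /opt/ansible/roles/memcached   # Ansible role run on each reconcile
```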

Why It Matters

What can an Ansible Operator give us that a generic operator doesn’t? The same things Ansible can give its users: a lower barrier to entry, faster iterations, Continue reading

The IPv6 Problem is IPv4

At the end of the day, most engineers want to implement IPv6 because they know, deep down, that it is an eventual necessity. One problem is that no one is talking about quitting IPv4. If you add IPv6 to your network, you increase cost, complexity, and operational overhead. IPv4 is going to be around for 25 […]

The post The IPv6 Problem is IPv4 appeared first on EtherealMind.

Give your automated services credentials with Access service tokens

Cloudflare Access secures your internal sites by adding authentication. When a request is made to a site behind Access, Cloudflare asks the visitor to login with your identity provider. With service tokens, you can now extend that same level of access control by giving credentials to automated tools, scripts, and bots.

Authenticating users and bots alike

When users attempt to reach a site behind Access, Cloudflare looks for a JSON Web Token (a JWT) to determine if that visitor is allowed to reach that URL. If the user does not have a JWT, we redirect them to the identity provider configured for your account. When they log in successfully, we generate the JWT.

When you create an Access service token, Cloudflare generates a unique Client ID and Secret scoped to that service. When your bot sends a request with those credentials as headers, we validate them ourselves instead of redirecting to your identity provider. Access creates a JWT for that service and the bot can use that to reach your application.
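
As a concrete illustration, here is a minimal sketch of a script presenting a service token. The URL and credential values are placeholders, and the header names follow Cloudflare's documented CF-Access-Client-Id / CF-Access-Client-Secret convention:

```python
# Minimal sketch: a bot authenticating to an Access-protected endpoint with a
# service token. The URL and credentials below are placeholders, not real values.
import requests

CLIENT_ID = "<service-token-client-id>"
CLIENT_SECRET = "<service-token-client-secret>"

resp = requests.get(
    "https://internal.example.com/api/report",   # an Access-protected URL
    headers={
        "CF-Access-Client-Id": CLIENT_ID,
        "CF-Access-Client-Secret": CLIENT_SECRET,
    },
)
print(resp.status_code)   # 200 once Access validates the token and issues a JWT
```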

Getting started

Within the Access tab of the Cloudflare dashboard, you’ll find a new section: Service Tokens. To get started, select “Generate a New Service Token.”

You’ll be asked to Continue reading

A Free and Open Course on Data Protection in the Post-GDPR World

Last year, we published “The Dawn of New Digital Rights for Finnish Citizens,” about the launch of the New Digital Rights MOOC, a collaboration between Open Knowledge Finland and the Internet Society’s Finland Chapter. Raoul Plommer wrote, “The aim of the project is to make citizens more aware of their digital rights, initially focusing on explaining GDPR (General Data Protection Regulation) and MyData…through a MOOC platform and series of workshops that create content and train people and organizations to use it.” Plommer has written an update on the project:

We have come a long way from the beginning of last year, when we were given funding for the project from the Internet Society’s Beyond the Net Funding Programme and Eurooppatiedotus, a sub-organization of the Finnish Foreign Ministry.

It took us several months to agree on what is essential to know about the General Data Protection Regulation (GDPR) and how we would present it to the general public. It was also challenging to get all the content done without actually paying everyone for all their hard work. Both of our funders had a strict limit on how much money could be spent on salaries (15% and 30%). On Continue reading