Determining whether backup and recovery systems work well is more complicated than just knowing how long backups and restores take; agreeing on a core set of essential metrics is the key to properly judging your system and deciding whether it succeeds or needs a redesign. Here are five metrics every enterprise should gather to ensure that their systems meet the needs of the business.

Storage capacity and usage
Let's start with a very basic metric: Does your backup system have enough storage capacity to meet your current and future backup and recovery needs? Whether you are talking about a tape library or a storage array, your storage system has a finite amount of capacity, and you need to monitor what that capacity is and what percentage of it you're using over time.
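To make the metric concrete, here is a minimal Python sketch, using hypothetical capacity and usage figures, that reports percent of capacity used and a rough linear projection of when the backup target will fill up:

```python
# Minimal sketch: track backup-storage utilization over time and project
# when capacity will run out. Capacity and usage samples are hypothetical.
from datetime import date

CAPACITY_TB = 500.0  # total capacity of the backup target (assumed)

# (date, TB used) samples collected from your backup system
usage_history = [
    (date(2023, 1, 1), 310.0),
    (date(2023, 2, 1), 322.5),
    (date(2023, 3, 1), 335.0),
]

def utilization(used_tb, capacity_tb=CAPACITY_TB):
    """Percent of total capacity currently in use."""
    return 100.0 * used_tb / capacity_tb

def days_until_full(history, capacity_tb=CAPACITY_TB):
    """Linear projection of days until the target is full, based on the
    first and last samples."""
    (d0, u0), (d1, u1) = history[0], history[-1]
    growth_per_day = (u1 - u0) / (d1 - d0).days
    return (capacity_tb - u1) / growth_per_day if growth_per_day > 0 else float("inf")

latest_used = usage_history[-1][1]
print(f"Utilization: {utilization(latest_used):.1f}%")
print(f"Days until full (linear estimate): {days_until_full(usage_history):.0f}")
```

Growth is rarely perfectly linear, so treat the projection as a trend indicator rather than a forecast.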
As the number of places where we store data increases, the basic concept known as the 3-2-1 rule often gets forgotten. This is a problem, because the 3-2-1 rule is easily one of the most foundational concepts for designing data protection. It's important to understand why the rule was created, and how it's currently being interpreted in an increasingly tapeless world.

What is the 3-2-1 rule for backup?
The 3-2-1 rule says there should be at least three copies or versions of data stored on two different pieces of media, one of which is off-site. Let's take a look at each of the three elements and what it addresses.
3 copies or versions: Having at least three different versions of your data over different periods of time ensures that you can recover from accidents that affect multiple versions. Any good backup system will have many more than three copies.
2 different media: You should not have all copies of your data on the same media. Consider, for example, Apple's Time Machine. You can fool it using Disk Utility to split your hard drive into …
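As a quick illustration of the rule itself, here is a minimal Python sketch, using hypothetical copy records, that flags a backup set violating any of the three elements:

```python
# Minimal sketch: check a set of backup copies against the 3-2-1 rule.
# The copy records and their fields are hypothetical, not from any product.
copies = [
    {"location": "onsite",  "media": "disk",  "versions": 14},
    {"location": "onsite",  "media": "tape",  "versions": 4},
    {"location": "offsite", "media": "cloud", "versions": 30},
]

def check_321(copies):
    """Return a list of 3-2-1 violations (an empty list means compliant)."""
    problems = []
    if sum(c["versions"] for c in copies) < 3:
        problems.append("fewer than 3 copies/versions of the data")
    if len({c["media"] for c in copies}) < 2:
        problems.append("all copies are on the same type of media")
    if not any(c["location"] == "offsite" for c in copies):
        problems.append("no copy is stored off-site")
    return problems

print(check_321(copies) or "3-2-1 compliant")
```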
Yes, your container infrastructure needs some type of backup. Kubernetes and Docker will not magically build themselves after a disaster. As discussed in a separate article, you don’t need to back up the running state of each container, but you will need to back up the configuration used to run and manage your containers. Here’s a quick reminder of what you’ll need to back up.
Configuration and desired-state information
The Dockerfiles used to build your images and all versions of those files
The images created from the Dockerfile and used to run each container
Kubernetes etcd and other K8s databases that store info on cluster state
Deployments - YAML files describing each deployment (see the sketch after this list)
Persistent data created or changed by containers
Persistent volumes
Databases
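As a concrete example of the Deployments item above, here is a minimal Python sketch, assuming the kubernetes and pyyaml packages and a working kubeconfig, that exports every Deployment manifest to YAML so an ordinary file backup can capture it. It illustrates the idea and does not replace an etcd backup:

```python
# Minimal sketch: dump every Deployment manifest to a YAML file so it can be
# swept up by a normal file backup. The backup directory is hypothetical.
import os
import yaml
from kubernetes import client, config

config.load_kube_config()               # or config.load_incluster_config()
apps = client.AppsV1Api()
serializer = client.ApiClient()

backup_dir = "k8s-backup/deployments"   # hypothetical backup location
os.makedirs(backup_dir, exist_ok=True)

for dep in apps.list_deployment_for_all_namespaces().items:
    manifest = serializer.sanitize_for_serialization(dep)
    path = os.path.join(backup_dir, f"{dep.metadata.namespace}--{dep.metadata.name}.yaml")
    with open(path, "w") as f:
        yaml.safe_dump(manifest, f)
    print(f"wrote {path}")
```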
Dockerfiles
Docker containers are run from images, and images are built from Dockerfiles. A proper Docker configuration would first use some kind of repository such as GitHub as a version-control system for all Dockerfiles. Do not create ad hoc containers using ad hoc images built from ad hoc Dockerfiles. All Dockerfiles should be stored in a repository that allows you to pull historical …
Containers are breaking backups around the world, but there are steps you can take to make sure that the most critical parts of your container infrastructure are protected against the worst things that can happen to your data center. At first glance it may seem that containers don’t need to be backed up, but on closer inspection it does make sense, both to protect against catastrophic events and for other, less disastrous eventualities.
Container basics
Containers are another type of virtualization, and Docker is the most popular container platform. A container is a specialized environment in which you can run a particular application; one way to think of containers is as lightweight virtual machines. Where each VM on a hypervisor server contains an entire copy of an operating system, containers share the underlying operating system, and each of them contains only the libraries needed by the application that will run in it. As a result, many containers on a single node (a physical or virtual machine running an OS and the container runtime environment) take up far fewer resources than the same number of VMs.
Data stored in a cloud block-storage service can be lost forever if it is not properly backed up. This article explains how object storage works very differently from block storage and how it offers better built-in protections.

What is object storage?
Each cloud vendor offers an object storage service; they include Amazon's Simple Storage Service (S3), Azure’s Blob Store, and Google’s Cloud Storage. Think of an object storage system as a file system with no hierarchical structure of directories and subdirectories. Where a file system uses a combination of a directory structure and file name to identify and locate a file, every object stored in an object storage system gets a unique identifier (UID) based on its content.
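To illustrate the flat, identifier-based namespace, here is a minimal Python sketch of content-addressed storage; it is purely conceptual, since real services such as S3 also let you choose arbitrary keys:

```python
# Minimal sketch of content-addressed object storage: the object's ID is a
# hash of its contents rather than a path in a directory tree.
import hashlib

object_store = {}   # flat namespace: UID -> object bytes

def put(data: bytes) -> str:
    """Store an object and return its content-derived unique identifier."""
    uid = hashlib.sha256(data).hexdigest()
    object_store[uid] = data          # identical content maps to one object
    return uid

def get(uid: str) -> bytes:
    return object_store[uid]

uid = put(b"quarterly-report contents")
print(uid[:16], get(uid) == b"quarterly-report contents")
```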
A recent Amazon outage resulted in a small number of customers losing production data stored in their accounts. This, of course, led to the typical anti-cloud comments that follow such events. The reality is that these customers’ data loss had nothing to do with the cloud and everything to do with them not understanding the storage they were using and not backing it up. Over Labor Day weekend there was a power outage in one of the availability zones in the AWS US-East-1 region. Backup generators came on, but quickly failed for unknown reasons. Customers’ Elastic Block Store (EBS) data is replicated among multiple servers, but the outage affected multiple servers. While the bulk of data stored in EBS was fine or was easily recovered after the outage, 0.5 percent of the data could not be recovered. Customers among that 0.5 percent who did not have a backup of their EBS data actually lost data.
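For those running workloads on EBS today, a scheduled snapshot job along these lines is the kind of basic protection that would have prevented that loss. This is a minimal boto3 sketch, assuming configured AWS credentials and a hypothetical Backup=true tag on the volumes to protect:

```python
# Minimal sketch: snapshot tagged EBS volumes so a zone-level failure
# doesn't mean data loss. Region, tag filter, and description are assumptions.
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

# Only snapshot volumes explicitly tagged for backup (hypothetical tag).
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
)["Volumes"]

stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%MZ")
for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"scheduled backup {stamp}",
    )
    print(f"{vol['VolumeId']} -> {snap['SnapshotId']}")
```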
The concept of instant recovery is relatively simple – the ability to run a virtual machine directly from a backup of that VM – but the possibilities offered by such a simple concept are virtually limitless, which explains why it’s considered one of the most important advances in backup and recovery in many years. Before the advent of instant recovery, all restores were basically the same, starting with how backups were stored – in some type of container or image. Prior to commercial backup-and-recovery software, backups were stored in formats such as tar, cpio, or dump.
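To see what instant recovery improves on, here is a minimal Python sketch of that traditional container-image model using tar; the paths are hypothetical, and the point is that nothing is usable until the whole image has been copied back out:

```python
# Minimal sketch of the traditional model instant recovery replaces: the
# backup is a container-format image (tar here), and the restore must
# extract everything before the data can be used again.
import tarfile

# Back up: pack the source directory into an image file.
with tarfile.open("vm-backup.tar.gz", "w:gz") as image:
    image.add("/var/lib/vm-disks/vm01", arcname="vm01")

# Restore: the entire image must be extracted to usable storage first --
# only after this copy finishes can the VM be powered on again.
with tarfile.open("vm-backup.tar.gz", "r:gz") as image:
    image.extractall(path="/var/lib/vm-disks/restored")
```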
Companies migrating to hyperconverged infrastructure (HCI) systems are usually doing so to simplify their virtualization environment. Since backup is one of the most complicated parts of virtualization, they are often looking to simplify it as well via their migration to HCI. Other customers have chosen HCI to reduce their hardware complexity while using a traditional backup approach for operational and disaster recovery. Here’s a look at both scenarios.
Everyone agrees that backups must be sent off site in order to protect your data from large disasters such as fire, earthquake, tornado, hurricane or flood.
Any backup expert worth their salt switched to disk as the primary target for backups many years ago. Tape still reigns in long-term archival, for the reasons laid out here. But tape is also quite problematic when it comes to day-to-day operational backup and recovery.
It might surprise some that despite the popularity of disk storage, the amount of data currently being placed on tape continues to rise.
One of the most basic things to understand in backup and recovery is the concept of backup levels and what they mean. Without a proper understanding of what they are and how they work, companies can adopt bad practices that range from wasted bandwidth and storage to actually missing important data in their backups. Understanding these concepts is also crucial when selecting new data-protection products or services.
Full backup
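The distinction the levels draw can be sketched in a few lines: a full backup copies everything, while an incremental copies only what has changed since a reference point. Here is a minimal Python illustration of that selection logic, using a hypothetical /data directory:

```python
# Minimal sketch of backup-level selection: a full backup takes every file,
# an incremental takes only files modified since the last backup.
import os
import time

def files_to_back_up(root, since=None):
    """since=None -> full backup; otherwise an incremental relative to
    `since` (a POSIX timestamp, e.g. the start of the previous backup)."""
    selected = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if since is None or os.path.getmtime(path) > since:
                selected.append(path)
    return selected

full = files_to_back_up("/data")                             # full backup
incr = files_to_back_up("/data", since=time.time() - 86400)  # changed in last 24h
print(len(full), "files in full;", len(incr), "in incremental")
```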
Deduplication is arguably the biggest advancement in backup technology in the last two decades. It is single-handedly responsible for enabling the shift from tape to disk for the bulk of backup data, and its popularity only increases with each passing day. Understanding the different kinds of deduplication, also known as dedupe, is important for anyone evaluating backup technology.

What is data deduplication?
Dedupe is the identification and elimination of duplicate blocks within a dataset. It is similar to compression, but where compression only identifies redundant data within a single file, deduplication can find redundant blocks between files in different directories, of different data types, even on different servers in different locations.
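Here is a minimal Python sketch of the idea behind block-level dedupe: split data into fixed-size blocks, fingerprint each block, and store each unique block only once. Real products typically use variable-size chunking and far more sophisticated indexing; the 4 KB block size is an arbitrary assumption:

```python
# Minimal sketch of block-level deduplication using fixed-size chunks.
import hashlib

BLOCK_SIZE = 4096
chunk_store = {}        # fingerprint -> unique block

def dedupe(data: bytes):
    """Store unique blocks and return the list of fingerprints (the "recipe")
    needed to reassemble the original data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        chunk_store.setdefault(fp, block)   # duplicate blocks stored only once
        recipe.append(fp)
    return recipe

def rehydrate(recipe):
    return b"".join(chunk_store[fp] for fp in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # repeated 4 KB blocks
recipe = dedupe(data)
print(len(recipe), "blocks referenced,", len(chunk_store), "blocks stored")  # 4 vs 2
print(rehydrate(recipe) == data)
```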
If you want to make a backup person apoplectic, call an old backup an archive. It’s just shy of saying that data on a RAID array doesn’t need to be backed up. The good news is that the differences between backup and archive are quite stark and easy to understand.
What is backup?
A backup is a copy of data created so that the data can be restored in case of damage or loss. The original data is not deleted after a backup is made.