Organizations today make use of many different types of infrastructure across their business. Over the past decade, the infrastructure backing business-critical workloads has shifted to virtualized environments capable of running many workloads on top of the physical hardware. Beyond operational efficiency, virtualization has opened up excellent capabilities for organizations in terms of high availability, scalability, and disaster recovery.
VMware vSphere is arguably the most popular and powerful enterprise virtualization platform, powering business-critical systems across industries. It provides excellent capabilities, including high availability of your workloads. However, as far as virtualization platforms such as VMware vSphere have come, protecting your data remains your responsibility.
This post will take a closer look at data backup in virtualized environments, with a keen focus on VMware vSphere virtualization. What best practices and other guidelines need attention to ensure business-critical data is protected?
Why data backup in virtualized environments is critically important
Why is data backup in virtualized environments critical in a day and age when hypervisors such as VMware ESXi are incredibly resilient and can be built into highly available infrastructure? The first key point to understand about virtualized environments is that high availability is not the same as disaster recovery.
Within a vSphere cluster, shared resources and vSphere High Availability allow failed workloads to restart on a healthy ESXi host. With vSphere 6.5, VMware introduced Proactive HA, which takes this a step further with proactive health checks that monitor host hardware. If an issue is detected, VMs are vMotioned off the unhealthy host before workloads are affected by a host failure.
However, as significant as these high-availability features are, they merely ensure that your data remains accessible despite an infrastructure failure. They do not protect your data from other types of data disasters, such as the following:
- Ransomware
- End-user deletion of data
- Rogue administrator
1. Ransomware
Ransomware is one of the most dangerous threats to business-critical data. Ransomware can encrypt even highly available data. Even with no underlying infrastructure failure related to the virtualized platform, your business can suffer disaster at the hands of ransomware. There are only two ways to recover from ransomware successfully:
- Restore from backup
- Pay the ransom
Modern ransomware variants perform other insidious actions, such as threatening to start deleting data if the ransom is not paid in a timely manner.
2. End-user deletion of data
When end users accidentally or intentionally delete business-critical data, it impacts business continuity until the data is recovered. End-users may delete a large amount of data accidentally by misidentifying the information they intended to delete. An unscrupulous end user leaving the organization on bad terms may attempt to sabotage access to critical data to harm the business.
3. Rogue administrator
What about a rogue administrator? An administrator who maliciously damages infrastructure or deletes data can wreak havoc on business continuity. The only way to get back data that has been deleted or corrupted through sabotaged infrastructure is to restore it from backups.
Data backup in virtualized environments
Due to the various risks to business-critical data, organizations must pay close attention to data backup in virtualized environments, including VMware vSphere. Focusing on your VMware vSphere environment, what data backup best practices do you need to follow to protect your data? Let’s take a look at the following:
- 3-2-1 backup rule
- Efficient backups and data storage
- Think about backup security
- Design backups in the context of applications
- Test your backups
3-2-1 backup rule
In the world of data backups, the 3-2-1 backup rule is the quintessential methodology that provides the foundational concepts for ensuring your data is protected. Following the 3-2-1 rule helps ensure that when you need your backups during a disaster, you will be able to recover your data.
What are the key features of this backup best practice methodology?
- You should have at least (3) copies of your data, stored on at least (2) different forms of media, with at least (1) copy stored offsite
Why are the methodologies found in the 3-2-1 backup rule essential to follow? The 3-2-1 backup rule underscores the importance of diversifying backup locations and separating backups from production infrastructure. You certainly don’t want to have backups of VMs running in your VMware vSphere environment stored on the same infrastructure as your production VMs. If you do this, losing production data will no doubt mean also losing your backup data.
Having backups stored on different media types also helps to ensure that you will have at least one good copy of your data. An excellent example of this is using hard disk storage and tape media for storing your backups. If you look at the situation where ransomware infects your environment, it usually spreads through the network. It can impact any data stored in online storage (hard disks) that is reachable over the network. However, if you also have a copy of your data in offline storage, such as a tape media library, this will be unaffected by ransomware traversing the network.
For offsite copies of your VMware vSphere backups, many are leveraging cloud storage for satisfying the (1) copy offsite requirement. Cloud offers many advantages over other offsite options. By leveraging cloud storage, you have access to virtually unlimited storage space for your VM backups, no infrastructure to maintain, and you can locate your offsite copies anywhere in the world.
A variation of the 3-2-1 backup rule, the 3-2-1-0 backup rule, extends the traditional rule. The extra (0) in this version denotes (0) data loss or 100% recoverability of data found in backups. The 3-2-1-0 rule provides even more emphasis on protecting your backups and validating the data contained therein.
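The 3-2-1 rule lends itself to a simple automated check. The sketch below, with an invented `BackupCopy` structure and illustrative location and media names (not tied to any backup product), validates whether a set of backup copies meets all three requirements:

```python
# Hypothetical sketch: check whether a set of backup copies satisfies the
# 3-2-1 rule (>= 3 copies, >= 2 media types, >= 1 offsite copy).
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str   # e.g. "primary-dc", "backup-nas", "cloud-bucket"
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool

def satisfies_3_2_1(copies):
    return (
        len(copies) >= 3                            # at least 3 copies
        and len({c.media for c in copies}) >= 2     # at least 2 media types
        and any(c.offsite for c in copies)          # at least 1 offsite
    )

plan = [
    BackupCopy("primary-dc", "disk", offsite=False),     # production copy
    BackupCopy("backup-nas", "disk", offsite=False),     # local backup
    BackupCopy("cloud-bucket", "object-storage", offsite=True),
]
print(satisfies_3_2_1(plan))  # True: 3 copies, 2 media types, 1 offsite
```

A plan with only local disk copies would fail the check, which is exactly the failure mode the rule is designed to catch.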
Efficient backups and data storage
When backing up your VMware vSphere environment, you want to strive for efficiency, both in the amount of data contained in your data backups as well as storing your backups. When it comes to ensuring the most efficient data backups, tracking incremental changes is hugely beneficial.
Changed Block Tracking (CBT) is a feature introduced by VMware to enable efficient, incremental backups. Essentially, CBT tracks and identifies the disk sectors in a VMDK that have changed between two points in time, identified by change IDs. Changed Block Tracking is made available to third-party backup applications as part of the vSphere APIs for Data Protection (VADP).
When data backup software initiates a backup, it can request that only the blocks that have changed since the last backup are copied across to backup storage. As a result, copies of data across the network are much smaller, backup intervals are much shorter, and much less backup storage is required.
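The mechanism can be illustrated with a toy model that compares block digests between two disk images, assuming tiny fixed-size blocks and SHA-256 as the change indicator. Real CBT operates at the VMDK layer through the vSphere APIs; this only shows why incremental backups move so much less data:

```python
# Toy model of changed-block tracking: only blocks whose digest differs
# from the previous backup need to be copied to backup storage.
import hashlib

BLOCK_SIZE = 4  # tiny blocks for the demo; real systems use much larger ones

def block_digests(data: bytes):
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old: bytes, new: bytes):
    """Return indices of blocks that differ between two disk images."""
    old_d, new_d = block_digests(old), block_digests(new)
    return [i for i, (a, b) in enumerate(zip(old_d, new_d)) if a != b]

full = b"AAAABBBBCCCCDDDD"          # "full backup" image: 4 blocks
incr = b"AAAAXXXXCCCCDDDD"          # one block changed since the full
print(changed_blocks(full, incr))   # [1] -> only block 1 is copied
```

Copying one block instead of the whole image is the same saving CBT delivers at scale across multi-terabyte VMDKs.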
Think about backup security
Backup security is essential to consider. Security is an increasingly important topic that affects every aspect of infrastructure today, including backups. Securing backups of your VMware vSphere environment involves a combination of best practices. First, let’s think about preventing someone from reading the data in your backups.
When you think about it, backups of your vSphere environment running business-critical workloads contain production data. If an attacker gets access to unprotected data backups, they, in essence, have compromised the data from your production environment. Encrypting data backups is a great way to ensure prying eyes cannot read the data contained in your backups. Make sure to encrypt your backups, both in-flight and at-rest.
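The effect of at-rest encryption can be demonstrated with the toy keystream cipher below. This is NOT production cryptography (a real deployment should use an authenticated cipher such as AES-256-GCM via a vetted library); it only shows that, without the key, the stored backup bytes reveal nothing readable:

```python
# Illustration only: a SHA-256-based XOR keystream standing in for a real
# cipher. Key, data, and names are invented for the demo.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

backup = b"customer-db dump: alice,bob,carol"
stored = xor_encrypt(b"backup-key", backup)            # what lands on disk
print(stored != backup)                                # True: unreadable at rest
print(xor_encrypt(b"backup-key", stored) == backup)    # True: key recovers data
```

In-flight encryption follows the same principle over the wire, typically via TLS between the backup software and the repository.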
Make sure to protect your backup infrastructure as well. Cybercriminals using ransomware know that good data backups make paying the ransom much less likely. So, they also go after your backups. Many ransomware variants look for specific file extensions for well-known backup solutions to prioritize locking your backup data up as well.
Ensure that backup infrastructure (storage, networks, credentials) does not share resources with production. Many administrators use a single domain administrator login to manage both production and backup infrastructure. The danger is that if an attacker compromises those credentials, they own both sides of your infrastructure.
Design backups in the context of applications
Today, more than ever, applications are the most crucial aspect to consider when protecting your vSphere environment. Applications, not virtual machines, are what business stakeholders, customers, and end-users use to carry out business-critical tasks. While the underlying infrastructure may consist of virtual machines running in vSphere, design and prioritize data backups in the context of your applications.
As you create backups of your vSphere virtual machines, create groups of VMs based on the application they serve. A backup job protecting an application may include a web server, an application server, and a database server. Designing your backups in this way helps to ensure application resources are adequately protected.
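Grouping by application can be as simple as keying backup jobs off an application tag. The VM names and tags below are hypothetical; the point is that one job protects every tier of an application together:

```python
# Sketch: build application-centric backup jobs from tagged VMs, so a
# single job covers the web, app, and database tiers of one application.
from collections import defaultdict

vms = [
    {"name": "web-01", "app": "webshop"},
    {"name": "app-01", "app": "webshop"},
    {"name": "db-01",  "app": "webshop"},
    {"name": "mail-01", "app": "mail"},
]

def backup_jobs(vms):
    jobs = defaultdict(list)
    for vm in vms:
        jobs[vm["app"]].append(vm["name"])
    return dict(jobs)

print(backup_jobs(vms))
# {'webshop': ['web-01', 'app-01', 'db-01'], 'mail': ['mail-01']}
```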
Test your backups
Backup testing is often one of the most overlooked components of a well-architected data backup strategy. When thinking back to the 3-2-1-0 backup rule, the (0) denotes ensuring 100% recoverability. Never testing backups leads to uncertainty on the recoverability of your data.
Testing the data backups of your vSphere environment helps ensure there are no issues with the backups themselves, the backups contain the expected data, and the information is consistent. The only way to know this for sure is by testing and verifying your data backups.
Manual backup testing tends to be inconsistent. A backup solution that performs automated testing of data backups takes the heavy lifting out of the process and alerts you to backup issues automatically. Backup testing automation validates the VM boot process and verifies the integrity of the data.
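The data-integrity half of automated testing can be sketched as a digest comparison: record a fingerprint in the backup catalog at backup time, then recompute it during the test run. Commercial products also boot the restored VM; this covers only the integrity check, and all names are illustrative:

```python
# Sketch of automated backup verification via stored checksums.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Taken at backup time and stored in a (hypothetical) backup catalog
vm_image = b"vmdk-contents-of-db-01"
catalog = {"db-01": fingerprint(vm_image)}

def verify_backup(name: str, restored: bytes) -> bool:
    """Recompute the digest of the restored data and compare to the catalog."""
    return fingerprint(restored) == catalog[name]

print(verify_backup("db-01", vm_image))                 # True: backup intact
print(verify_backup("db-01", b"corrupt" + vm_image))    # False: corruption caught
```

Running such a check on a schedule turns "we think the backups are good" into evidence.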
In addition to testing the backups themselves, testing replica VMs and the failover process is essential. This tests VMs replicated to a DR facility and validates the data. Using a backup solution to replicate VMs to a DR facility, organizations can keep a warm standby copy of virtual machines ready to assume workloads if there is a failure in the main production facility.
Testing replicated virtual machines and the failover mechanism is a great way to ensure the failover process and the replicated data are valid and ready in the event of a real disaster. Having a way to simulate a failover provides a great way to test the replicas’ viability. Also, since network address changes are generally needed, testing the orchestration of network address changes on replica VMs validates this workflow. Testing is essential to ensure you have good data backups.
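The network re-addressing step in a failover test can be sketched as a simple prefix remap from the production subnet to the DR subnet. The addressing scheme and VM names here are invented; real orchestration would push these mappings into the replica VMs before bringing them online:

```python
# Hypothetical re-IP plan for failover testing: production subnet
# prefixes map to DR-site prefixes.
reip_rules = {
    "10.0.1.": "192.168.50.",   # production subnet -> DR subnet
}

def remap_ip(ip: str) -> str:
    for prod_prefix, dr_prefix in reip_rules.items():
        if ip.startswith(prod_prefix):
            return dr_prefix + ip[len(prod_prefix):]
    return ip  # no matching rule: keep the address unchanged

replicas = {"web-01": "10.0.1.11", "db-01": "10.0.1.21"}
failover_plan = {vm: remap_ip(ip) for vm, ip in replicas.items()}
print(failover_plan)
# {'web-01': '192.168.50.11', 'db-01': '192.168.50.21'}
```

Exercising this mapping during a simulated failover confirms the replicas come up reachable at the DR site, not just powered on.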
Wrapping Up
With the numerous cybersecurity and other threats facing organizations today, data backup in virtualized environments is crucial to protecting business-critical data. Even with the powerful high-availability mechanisms modern hypervisors provide, these features do not protect data from ransomware or deletion by end users. Using a fully featured data backup solution like NAKIVO Backup & Replication allows your organization to easily meet best practice recommendations. As shown, best practice objectives include diversifying your data copies, achieving backup efficiency, securing your backups, designing backups around applications, and testing your backups. By achieving these and other best practice recommendations, your organization can protect your VMware vSphere environment and the business-critical data it houses.