Windows IT Pro

Best Practices for Protecting Your Windows Server 2012 and Hyper-V Based Infrastructures

Sponsored by Symantec
Many administrators remember the challenges of architecting a
backup solution for their datacenter: ensuring all the right data
was protected and finding solutions for files that were open
during backup processes. The Volume Shadow Copy Service (VSS)
completely changed how Windows operating systems are
backed up. Application vendors can now create components
(VSS writers) that the VSS backup framework calls, allowing
an application to flush all transactions and data to disk, thus
ensuring a backup that is usable in a restore scenario because
all application data is on disk in a consistent state.
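A quick way to see this framework in action is to list the VSS writers registered on a server; each backup-aware application (SQL Server, Exchange, Active Directory, and so on) surfaces one or more writers here. A minimal sketch, run from an elevated PowerShell or command prompt:

```powershell
# List every VSS writer registered on this server, along with its
# current state; a writer stuck in a failed state is a common cause
# of backup problems and is worth checking before anything else.
vssadmin list writers
```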
Looking at a datacenter today and that of 10 years ago demonstrates
the shift in how IT datacenters are architected. Organizations
have moved from one operating system per server to many
operating systems per server, which is achieved through server
virtualization. Windows Server 2012 provides many new features,
including a new version of Hyper-V. The new hypervisor has seen
large increases in its scalability, allowing for virtual machines
with 64 virtual processors, a terabyte of memory, and virtual hard
disks that are 64 terabytes in size. Additional new features include
virtual Fibre Channel, SMB 3.0 support that allows virtual machines
to be stored on SMB file shares, and shared-nothing live migration,
which allows virtual machines to be migrated with no downtime
between hosts that are not clustered and do not share storage. This
new scalability and functionality means systems that previously
could not be virtualized now can be. The percentage of virtualized
operating systems will increase, and very large, critical systems will
now be virtualized, making the backup of the virtualization
environment even more important.
The adoption of virtualization adds a new dimension to your
organization's backup plans, and this paper walks through some
key considerations when hosting services on Windows Server
2012 Hyper-V.
The importance of backups, even in a
virtual environment
Virtualization offers a number of very useful features related to
the state of a virtual machine that can sometimes seem to reduce
the need for backups; however, this is not the case. Likewise,
many services offer replication capabilities, but these too are not
replacements for solid backup processes.
Snapshots are a common feature of virtualization platforms,
including Hyper-V, that allow a point-in-time view of a virtual
machine to be taken. If a virtual machine is running when a
snapshot is taken, its current storage content is saved together
with its memory and device state. While snapshots provide a
point-in-time copy of a virtual machine, the operating system
within the virtual machine is unaware that a snapshot has been
taken, which means data on disk may not be in a consistent state
because the VSS backup framework is not utilized. Additionally,
when a snapshot is applied to a virtual machine it restores a virtual
machine to that point in time and the OS has no knowledge
that its state has been rolled back in time. This can cause serious
problems for certain types of services, including security and
data-integrity issues. Typically, snapshots should be avoided in any
production environment; they are best utilized in development
environments, where the ability to repeatedly revert an operating
system to a known state is very useful for testing and
troubleshooting.
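The snapshot behavior described above can be seen with the Hyper-V PowerShell module included in Windows Server 2012 (the virtual machine name below is illustrative):

```powershell
# Take a snapshot (checkpoint) of a virtual machine; its storage,
# memory, and device state are captured, but the guest's VSS writers
# are NOT invoked, so application data on disk may be inconsistent.
Checkpoint-VM -Name "DevSQL01" -SnapshotName "Before patch testing"

# List the snapshots that exist for the virtual machine.
Get-VMSnapshot -VMName "DevSQL01"

# Revert to the snapshot; the guest OS has no idea it has been rolled
# back in time, which is why this belongs in lab environments only.
Restore-VMSnapshot -VMName "DevSQL01" -Name "Before patch testing" -Confirm:$false
```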
Hyper-V also provides a feature called Hyper-V Replica, which is
an asynchronous replication of storage changes of enabled virtual
machines every five minutes to an alternate Hyper-V server in
a separate location. The goal of this feature is to provide disaster
recovery capability for organizations using inbox capabilities
without the need for separate storage replication technologies.
The typical Hyper-V Replica operation works by sending the content
of the Hyper-V Replica log file, which contains the changes
to the storage over the previous five minutes, to the alternate
Hyper-V server, which then merges the changes into its copy of
the virtual hard disks. Like a snapshot, Hyper-V Replica does not
typically notify the operating system of the storage replication,
so the data on disk may be inconsistent. Hyper-V Replica
does offer the capability to periodically initiate a VSS request to
the virtual machine, which forces data to be flushed to disk and
makes that specific replica application-consistent.
However, this feature is in no way designed to be a backup solution,
and it requires a completely separate Hyper-V server to
continuously receive the five-minute storage deltas.
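As a sketch, Hyper-V Replica, including the optional periodic VSS request, can be configured with the Hyper-V PowerShell cmdlets; the server and VM names are illustrative, and the exact parameter values should be adjusted for your environment:

```powershell
# Enable replication of a virtual machine to a Hyper-V server in a
# separate location, keeping 4 recovery points and requesting an
# application-consistent (VSS) snapshot of the replica every 4 hours.
Enable-VMReplication -VMName "FileSrv01" `
    -ReplicaServerName "hv-replica.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -RecoveryHistory 4 -VSSSnapshotFrequencyHour 4

# Send the initial copy of the virtual hard disks to the replica server;
# five-minute deltas flow automatically from then on.
Start-VMInitialReplication -VMName "FileSrv01"
```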
Both snapshots and Hyper-V Replica only work with virtual hard
disks, which means if applications are storing data using
pass-through storage or on storage accessed via iSCSI or virtual
Fibre Channel, then that data is not protected. Even more
importantly, both snapshots and Hyper-V Replica operate only at
the operating system level rather than understanding specific
applications and data groups within the virtual machine, which
severely limits the granularity of restoration.
Where to perform backups
Traditionally, a backup is performed via a backup agent running
on the operating system being backed up. The backup agent
communicates to a central backup service, sending the data to
be protected. For operating systems that are virtualized this approach
can still be used; however, there is another option.
For Hyper-V supported guest operating systems (Windows Server
2003 and above), integration services are provided that enhance
the functionality and performance of the operating systems running
within the virtual machines. Once installed, the integration
services add a number of specific integration points between the
operating system running in the virtual machine and the Hyper-V host.
One of these integrations is “Backup (volume snapshot),” which allows
the Hyper-V host to notify the guest operating system within
the virtual machine when a backup is taken of the virtual machine
at the Hyper-V host. The guest operating system then calls all the
registered VSS writers within the virtual machine, which causes all
the applications to flush information to disk, and then notifies the
Hyper-V host that the virtual machine's virtual hard disks can be
backed up, thus ensuring an application-consistent backup.
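Whether a host-level backup can be application-consistent therefore hinges on this integration component being enabled, which can be checked per virtual machine (the VM name is illustrative):

```powershell
# Show the state of the "Backup (volume snapshot)" integration service;
# if it is disabled, a host-level backup of this VM falls back to a
# saved-state (crash-consistent) backup instead.
Get-VMIntegrationService -VMName "SQL01" |
    Where-Object { $_.Name -eq "Backup (volume snapshot)" }

# Enable the component if it has been switched off.
Enable-VMIntegrationService -VMName "SQL01" -Name "Backup (volume snapshot)"
```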
Remember that the granularity of what you are protecting and,
therefore, what can be restored is the most important factor.
When a backup agent is running within the virtual machine on
the guest OS it has direct interaction with registered VSS writers
and knowledge of applications running within the operating
system. This enables the backup to be configured to back up
specific units of application data; for example, for a database
server, specific databases could be protected; for a mail server,
specific mailboxes. When the backup is application aware, the
restore can equally be application aware, offering
application-specific restoration. If, however, the virtual machine was backed
up at the Hyper-V host level—although the VSS writers in the
virtual machine are still called to ensure the data on disk is in a
consistent state—the data being backed up is the entire virtual
machine, which means at restoration time the only thing that
could be restored is the entire virtual machine or perhaps files
from the associated hard disks. This would mean that backing
up within the virtual machine would be the best option where
application-aware backups are required. However, some backup
solutions on the market take the backup pass-through capability
native to Hyper-V to another functional level by also exposing
application awareness from the virtual machines. This means that
even though backups are taken at the Hyper-V host level, the
backups can still be configured to back up particular application
data and restore at application data unit levels.
Another aspect of the virtual machine’s data must be considered
when performing backups: the actual location of the virtual
machine configuration and virtual hard disks. In basic scenarios
the virtual hard disks and configuration files for virtual machines
are stored on direct-attached storage. However, for environments
that leverage clusters of Hyper-V hosts or that wish to use consolidated
storage, then storage local to a host is not optimal and
shared storage must be used.
Windows Server 2008 R2 introduced Cluster Shared Volumes
(CSVs), which allow an NTFS-formatted LUN on a SAN to be concurrently
accessed by all hosts in a cluster. This removed previous
problems associated with dismounting and mounting LUNs
when a virtual machine is migrated between hosts. A special process
is required to back up CSV-enabled volumes, which means
it’s critical that your backup solution has CSV support. Windows
Server 2012 improves CSV processes by labeling
CSV volumes as CSVFS instead of NTFS, making them easy to
identify. In addition, backups of CSV volumes no longer have to
be performed on a specific member of the cluster, known as the
coordinator node. In Windows Server 2012 volume-level backup
of a CSV can be performed from any node connecting to the CSV
and backups do not interfere with running virtual machines.
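A quick way to confirm which volumes are CSVs, and which node currently owns (coordinates) each one, is from any cluster node; a sketch using the Failover Clustering and Storage cmdlets in Windows Server 2012:

```powershell
# List the Cluster Shared Volumes and the node that currently owns
# (coordinates) each one.
Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State

# CSV-backed volumes report CSVFS rather than NTFS as their file system.
Get-Volume | Where-Object { $_.FileSystem -eq "CSVFS" }
```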
Another new shared storage option for Windows Server 2012
Hyper-V virtual machines is to use a Server Message Block (SMB)
3.0 file share. This file share can be hosted on a Windows Server
2012 file server or cluster or a storage appliance with SMB 3.0
support. Windows Server 2012 includes a new “File Server VSS
Agent Service,” which must be enabled on all servers acting as
SMB 3.0 servers for Hyper-V. This enables remote VSS backups
to be performed. Because this is a very new feature, few backup
solutions support remote VSS SMB backups at the time
of writing; talk to your backup vendor to ascertain their
plans and timing for SMB 3.0 support. If your organization wishes
to leverage SMB 3.0 prior to support from the backup solution,
one option is to run the backup agent within the virtual machine
to ensure protection of the operating systems and applications.
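On the file-server side, the role service can be added with Server Manager or PowerShell; a sketch, assuming the FS-VSS-Agent feature name used by Windows Server 2012:

```powershell
# Install the File Server VSS Agent Service on a file server that hosts
# Hyper-V virtual machines on SMB 3.0 shares, so backup applications
# can request remote VSS snapshots of those shares.
Add-WindowsFeature FS-VSS-Agent

# Confirm the role service is now installed.
Get-WindowsFeature FS-VSS-Agent
```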
What to back up
With all the different components in a virtualized datacenter and
the replication capabilities of many services it can often seem
confusing to decide which operating system instances and which
copies of data should be backed up. Often there is no absolute
right or wrong answer, but there are certain must-haves, and
generally you can't back up too much: it's better to have
redundant backups of the same data than to miss data.
As previously discussed, it's critical that all data in the organization is
backed up, even if it's also replicated, because backup and
replication meet different needs. But if a data set is replicated three
times, is it necessary to back up all three copies? For example,
Exchange has the concept of Database Availability Groups (DAGs),
where mailbox databases are stored on multiple servers. The
general rule is to make sure all unique data is backed up
by at least one backup process, which means if a database is
replicated between three servers, make sure at least one of them is
backed up. Care must be taken to ensure that the backup is not lost
as databases are moved between servers; if backups are not
running on all servers, there is a risk that a data set falls out of
the scope of any backup. The same applies to domain controllers
within the same domain. Typically, domain controllers are highly
replaceable, so if a domain controller has a problem, another one
can be provisioned in its place very quickly. But it's important that
Active Directory is routinely backed up on at least one, and ideally
two, domain controllers. As a side note, if certain copies are not
backed up, it's important to ensure there are no negative side
effects, such as log files never being truncated or deleted, which
normally occurs as part of a backup.
It's important to back up the operating systems and the
installed applications that use the application data, because
performing a fresh installation of a server application can be very
time consuming. Restoring the server operating system and the
installed application is therefore the most expedient recovery possible.
Ensure any servers that are required for the primary workload
being backed up are also backed up and recoverable. For
example, a service may have requirements on Active Directory,
on DNS or another server running a middleware service. Make
sure all these systems are backed up to provide protection in a
disaster situation where whole racks of servers or even entire
datacenters are lost.
For the Hyper-V hosts themselves, it is important that a server
failure does not affect the virtual machines, so always use
clusters of Hyper-V hosts, which enable virtual machines to move
between hosts. It's critical to replace failed hosts quickly to restore
resiliency against further failure, and performing backups of the
Hyper-V hosts provides a very efficient way of restoring this protection.
When considering backing up both the Hyper-V host and the
virtual machines from the host, do not back up the same guest
operating system twice. If backups are being performed within
the virtual machine via a backup agent installed on the guest
operating system, do not also back up that virtual machine from
the Hyper-V host. Doing so wastes space and can cause conflicts.
Putting it all together
Given how critical backups are to every environment, one aspect
that is often overlooked in disk-based backup processes is
protection of the backup itself. While backing up to
disks local to the datacenter provides great performance and very
fast restores, it leaves the backup vulnerable to the same disaster
scenarios that could affect the protected servers themselves.
Therefore, always ensure backups are also stored offsite (e.g.,
replicated to a second location via disk replication, written to
tape, or replicated to public cloud-based storage).
Additionally, ensure backup and restore processes are frequently
tested and revised. Performing regular test recoveries helps
ensure that, when a backup is really needed, it contains the
required information and can be used as intended. Any time a new
system is added to the environment, ensure backup and restore
processes are updated to include the new system and any
systems it depends upon.
By following these basic guidelines you can help ensure that your data
and your organization are well protected in the most efficient way.