How Often Should I Be Taking Backups?

Now you understand your organization’s data protection needs and you have the means to implement them. To bring it all to life, you need to design your backup schedules. Unless you have very little data or a large backup budget, you will use more than one schedule. Three metrics will guide you:
  1. Value
  2. Frequency of change
  3. Application features

Understanding How the Value of Data Affects Backup Scheduling

The frequency of your full backup schedule directly determines how many copies of your data you will have over time. The more copies you have of any given bits, the greater the odds that at least one copy will survive a catastrophe. So, if you have data that you cannot lose under any circumstances, then your schedule should reflect that.

Understanding How the Frequency of Change Affects Backup Scheduling

Data that changes frequently may need an equally frequent backup. As you recall from part one, recovery point objectives (RPOs) set the maximum amount of time between backups, which establishes the boundary of how much recent data you can lose. Independently of the RPO, you must also consider how often that data actually changes.

If you have data that does not change often, then you might consider a longer RPO. If you only modify an item every few months, then it might not make sense to back it up every week. However, that might have unintended consequences.

As an example, you set a monthly-only schedule for your domain controller because you rarely have staff turnover and only replace a few computers per year. Then, you hire a new employee and supply them with a PC the day after a backup.

If anything happens to Active Directory during that month, then you will lose all that new information. Your schedule needs to consider such possibilities.

Understanding How Backup Application Features Affect Scheduling

You will find that modern commercial backup applications have more similarities than differences. They all provide some way to schedule jobs, and each offers ways to optimize backups. The exact features of the solution that you use will influence how you schedule.

The following list provides a starting point for you to determine how to leverage the features in your selected program:

Virtual machine awareness

If your backup software understands how to back up virtual machines, then you can allow it to handle efficient ordering. If not, then you will need to schedule the guest operating system backups such that the jobs do not overwhelm your resources.

Space-saving features

If your backup tool can preserve storage space, that has obvious benefits. Everything involves trade-offs – ensure that you know what you give up for that extra space.

Some common considerations:

  • Traditional differential and incremental backups complete more quickly than the full backups that they depend on, but they mean nothing without their source full backup. Design your schedule to accommodate full backups as time and space allow (the sketch after this list runs rough numbers);
  • Newer delta and deduplication techniques save even more space than differential and incremental jobs, but they require calculation and tracking in addition to the requisite full backups. They should not consume significant CPU time, but you need to test that. Also check whether and how your application tracks changes – some will use space on your active disks;
  • If you have extra space on your storage media, then do not depend overly on these technologies; create more full backups if you can.
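
To see how these trade-offs play out, run some rough numbers. The sketch below compares the media consumed over four weeks by daily fulls, weekly fulls with daily incrementals, and weekly fulls with daily differentials; every figure is an illustrative assumption, not a measurement from any particular product.

```python
# Rough estimate of media consumed over 28 days by three schemes.
# All figures are illustrative assumptions, not product measurements.

FULL_SIZE_GB = 500          # size of one full backup
DAILY_CHANGE_RATE = 0.05    # ~5% of data changes per day (assumption)

def full_only(days=28):
    """A full backup every day."""
    return days * FULL_SIZE_GB

def weekly_full_daily_incremental(days=28):
    """Weekly fulls; each incremental holds one day of changes."""
    fulls = (days // 7) * FULL_SIZE_GB
    incrementals = (days - days // 7) * FULL_SIZE_GB * DAILY_CHANGE_RATE
    return fulls + incrementals

def weekly_full_daily_differential(days=28):
    """Weekly fulls; each differential holds all changes since the last full."""
    total = (days // 7) * FULL_SIZE_GB
    for day in range(days):
        days_since_full = day % 7          # 0 on the day of the full itself
        total += FULL_SIZE_GB * DAILY_CHANGE_RATE * days_since_full
    return total

for scheme in (full_only, weekly_full_daily_incremental,
               weekly_full_daily_differential):
    print(f"{scheme.__name__}: {scheme():,.0f} GB over 28 days")
```

Even with these crude assumptions, the gap is striking: daily fulls consume several times the media of either mixed scheme, which is why nearly every real schedule blends job types.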

Time-saving features

Many of the features in the previous list save time as well as space. As with space, do not sacrifice protection for time savings that you do not actually need.

Replication

Replication functions require bandwidth, which can cause severe bottlenecks when crossing Internet links. If a replication job is not completed before the next job begins, then you might end up with unusable backups.

Media types

Due to the wide variance in performance of the various backup media types, the option(s) that you choose will determine how you schedule backups and what space-saving features they use. For instance, if you need to back up several terabytes to tape and a full backup requires twelve hours to perform, then you can only run a full backup when you have twelve hours available.
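
You can sanity-check any such window with simple arithmetic. A minimal sketch follows, assuming a sustained throughput figure that you would replace with a measurement of your own drive and network:

```python
# Simple backup-window check: will a full backup fit in the window?
# The throughput figure is an assumption; measure your own hardware.

TAPE_THROUGHPUT_MBPS = 300    # sustained MB/s (assumed)
DATASET_TB = 8                # data to protect
WINDOW_HOURS = 12             # time available for the job

dataset_mb = DATASET_TB * 1024 * 1024
hours_needed = dataset_mb / TAPE_THROUGHPUT_MBPS / 3600

verdict = "fits" if hours_needed <= WINDOW_HOURS else "does NOT fit"
print(f"Estimated full backup time: {hours_needed:.1f} h "
      f"({verdict} in a {WINDOW_HOURS} h window)")
```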

Snapshot features

If your backup application integrates with VSS (the Volume Shadow Copy Service, a feature of the Windows operating system) or uses some other technique to take crash-consistent or application-consistent backups, then you have greater scheduling options.

Backup uses system resources and you do not want one job to conflict with another, but snapshotting allows you to run backups while systems are in use.

You should have become well-acquainted with your backup program during the deployment phase. Take the time to fully learn how your backup program operates. Keep in mind the need for periodic full backups.

Putting It in Action

Since taking full backups every time would quickly exceed any rational quantity of time and media, you must make compromises. Remember that, if possible, you would take a complete backup of all your data at least once per day.

Guidelines for backup scheduling:

  • Full backups need time and resources, even with non-interrupting snapshot technologies. Try to schedule them during low activity periods.
  • Full backups do not depend on other backups. Therefore, they have greatest value after major changes. As an example, some organizations have intricate month-end procedures. Taking a backup immediately afterward could save a lot of time in the event of a restore.
  • Incremental, differential, delta, and deduplicated backups require relatively little time and space compared to full backups, but they depend on other backups. Use them as fillers between full backups.
  • If your backup scheme primarily uses online storage, make certain to schedule backups to offline media. If that is a manual process, implement an accountability plan.
  • Just as administrators tend to perform backups at night, they also like to schedule system and software updates at night. Ensure that schedules do not collide.
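
The last point lends itself to a quick automated check. Here is a minimal sketch that tests whether two nightly maintenance windows overlap, including windows that cross midnight; the window times are hypothetical:

```python
# Minimal overlap check for maintenance windows given as
# (start_hour, end_hour) in 24h time; windows may cross midnight.

def to_intervals(start, end):
    """Normalize a window to one or two intervals on a 0-24 scale."""
    if start <= end:
        return [(start, end)]
    return [(start, 24), (0, end)]    # crosses midnight

def windows_collide(a, b):
    return any(s1 < e2 and s2 < e1
               for s1, e1 in to_intervals(*a)
               for s2, e2 in to_intervals(*b))

backup_window = (22, 3)    # 22:00 - 03:00
update_window = (2, 4)     # 02:00 - 04:00

print("Collision!" if windows_collide(backup_window, update_window) else "Clear.")
```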

Grandfather-father-son sample plan

“Grandfather-father-son” (GFS) schemes are very common. They work best with rotating media such as tapes. One typical example schedule:

  • “Grandfather”: full backup taken once monthly. Grandfather media is rotated annually (overwrite the January 2020 tape with the January 2021 backup, the February 2020 tape with February 2021 data, etc.). One “grandfather” tape per year, typically the one that follows your organization’s fiscal year end, is never overwritten, in keeping with the data retention policy.
  • “Father”: full backup taken weekly. “Father” media is rotated monthly (i.e., you have a “Week 1” tape, a “Week 2” tape, etc.).
  • “Son”: incremental or differential backups are taken daily, and their media overwritten weekly (i.e., you have a “Monday” tape, a “Tuesday” tape, etc.).

The above example is not the only type of GFS scheme; the relationship between the rotation tiers is what qualifies a scheme as GFS. You have one set of very long-term full-backup media, one shorter-lived set of full-backup media, and rapidly rotated media.

Some implementations do not keep the annual media. Others do not rotate the monthly full, instead keeping them for the full backup retention period. Some do not rotate the daily media every week. Your organization’s needs and budget dictate your practices.

With a GFS scheme, you are never more than a few pieces of media away from a complete restore. Remember that a “differential” style backup needs the latest “son” media and the “father” immediately preceding whereas an “incremental” style backup needs the latest “father” media and all of its “sons”.

The downside of a GFS scheme is that you quickly lose the granular level of daily backups. Once you rotate the daily, then anything overwritten will, at best, survive on the most recent monthly or perhaps an annual backup. The greatest risk is to data that is created and destroyed between full backup cycles.
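
That restore rule is mechanical enough to express in code. Below is a small sketch of the media-selection logic just described, using hypothetical tape labels:

```python
# Which media does a restore need under the sample GFS scheme?
# A sketch of the rule described above, with hypothetical labels.

def media_for_restore(style, last_full, sons_since_full):
    """style: 'differential' or 'incremental'.
    last_full: label of the most recent full backup.
    sons_since_full: daily media labels since that full, oldest first."""
    if style == "differential":
        # Latest full plus only the newest differential.
        return [last_full] + sons_since_full[-1:]
    if style == "incremental":
        # Latest full plus every incremental since, in order.
        return [last_full] + sons_since_full
    raise ValueError(style)

sons = ["Mon", "Tue", "Wed"]
print(media_for_restore("differential", "Week 2 full", sons))  # full + Wed
print(media_for_restore("incremental", "Week 2 full", sons))   # all four
```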

Online media sample plan

If your backup solution primarily uses online media, then the venerated GFS approach might not work well. Most always-online systems have no real concept of “rotation”. Instead, they age out old data once it passes the configured retention period.

For these, your configuration will depend on how your backup program stores data. If it uses a deduplication scheme and only keeps a single full backup, then you have little to do except configure backup frequency and retention policy.
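
Commercial products handle this aging internally, but a minimal sketch shows the idea: delete backup files once they pass the retention period. The directory layout and file extension here are assumptions.

```python
# Sketch of an age-out pass for online backup storage: delete backup
# files older than the retention period. Paths are hypothetical.

import time
from pathlib import Path

RETENTION_DAYS = 90
BACKUP_ROOT = Path("/srv/backups")    # assumed layout: one file per job

cutoff = time.time() - RETENTION_DAYS * 86400

for item in BACKUP_ROOT.glob("*.bak"):
    if item.stat().st_mtime < cutoff:
        print(f"Expiring {item.name}")
        item.unlink()
```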

Continuous backup sample plan

Many applications have some form of “continuous” backup. They capture data in extremely small time increments. As an example, Hornetsecurity’s VM Backup has a “Continuous Data Protection” (CDP) feature that allows you to set a schedule as short as five minutes.

Scheduling these types of backups involves three considerations:

  1. How does the backup application store the “continuous” backup data?
  2. How quickly does the protected data change?
  3. How much does the protected data change within the target time frame?

If your backup program takes full, independent copies at each interval, then you could run out of media space very quickly. If it uses a deduplication-type storage mechanism, then it should use considerably less. Either way, your rate of data churn will determine how much space you need.

For systems with a very high rate of change, your backup system might not have sufficient time to make one backup before the next starts. That can lead to serious problems, not least of which is that it cannot provide the continuous backup that you want.

You can easily predict how some systems will behave; others need more effort. You may need to spend some time adjusting a setting, watching how it performs, and adjusting again.
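
You can estimate feasibility before you deploy. The rough sketch below compares one interval’s worth of churn against the time needed to move it; both rates are assumptions that you would replace with measurements:

```python
# Will a 5-minute continuous-backup interval keep up with churn?
# Both rates are assumptions; measure your own workload and link.

INTERVAL_MIN = 5
CHURN_MB_PER_MIN = 400    # how fast the protected data changes
LINK_MB_PER_SEC = 30      # effective throughput to backup storage

changed_mb = CHURN_MB_PER_MIN * INTERVAL_MIN
transfer_min = changed_mb / LINK_MB_PER_SEC / 60

if transfer_min >= INTERVAL_MIN:
    print(f"Backlog: each pass needs {transfer_min:.1f} min; jobs will stack up.")
else:
    print(f"OK: each pass finishes in {transfer_min:.1f} of {INTERVAL_MIN} min.")
```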

Mixed backup plan example

You do not need to come up with a one-size-fits-all schedule. You can set different schedules. Use your RTOs, RPOs, retention policies, and capacity limits as guidance.
One possibility:

  • Domain controllers: standard GFS with one-year retention
  • Primary line-of-business application server (app only): monthly full, scheduled after operating system and software updates, with three-month retention
  • Primary line-of-business database server: continuous, six-month retention
  • Primary file server: standard GFS with five-year retention
  • E-mail server: uses a different backup program that specializes in Exchange, daily full, hourly differential, with five-year retention
  • All: replicated to remote site every day at midnight
  • All: monthly full offline, following retention policies

To properly protect your virtualization environment and all its data, use Hornetsecurity VM Backup to securely back up and replicate your virtual machines.

We ensure the security of your Microsoft 365 environment through our comprehensive 365 Total Protection Enterprise Backup and 365 Total Backup solutions.

For complete guidance, get our comprehensive Backup Bible, which serves as your indispensable resource containing invaluable information on backup and disaster recovery.

To keep up to date with the latest articles and practices, pay a visit to our Hornetsecurity blog now.

Sum Up

Remember to document everything!

FAQ

What is a backup schedule?

A backup schedule defines the frequency of data backups and the required backup media. Each hardware type offers various rotation schemes, including industry-standard strategies, and these schemes can be customized after creating the backup job.

What is a good schedule to run a backup?

Typically, performing incremental backups of user files during the day is recommended; set maximum speed limits on those jobs to avoid saturating bandwidth. Full backups are best scheduled at night or on weekends.

What is the importance of backup schedule?

The importance of a backup schedule lies in its ability to mitigate data loss in the event of a computer or system failure. By scheduling nightly or weekly backups, you can minimize the potential loss of data. Having a scheduled backup provides peace of mind, ensuring that all your information is regularly backed up, thereby reducing the risk of substantial data loss.

The Pros and Cons of All Backup Storage Targets Explained

The days of tape-only solutions have come to an end. Other media have caught up to it in cost, capacity, convenience, and reliability. You now have a variety of storage options. Backup applications that can only operate with tape have little value in modern business continuity plans.

Unless you buy everything from a vendor or service provider that designs your solution, make certain to match your software with your hardware.

Use the software’s trial installation or carefully read through the manufacturer’s documentation to determine which media types it works with and how it uses them. Backup software targets the following media types:

  • Magnetic tape;
  • Optical disc;
  • Direct-attached hard drives and mass media devices;
  • Media-agnostic network targets;
  • Cloud storage accounts.

Magnetic Tape in Backup Solutions

IT departments have relied on tape for backup since the dawn of the concept of backup. This technology has matured well and mostly kept up with the pace of innovation in IT systems. However, the physical characteristics of magnetic tape place a severe speed limit on backup and restore operations.

Pros of magnetic tape

  • Most backup software can target it;
  • Tape media has a relatively low cost per gigabyte;
  • Reliable for long-term storage;
  • Lightweight media, easy to transport offsite;
  • Readily reusable.

Cons of magnetic tape

  • Extremely slow;
  • Tape drives have a relatively high cost;
  • Media susceptible to magnetic fields, heat, and sunlight.

For most organizations, the slow speed of tape presents its greatest drawback. You can find backup applications that support on-demand features such as operating directly from backup media. That will not happen from a tape.

Having said that, tape has a good track record of reliability. Tapes stored on their edges in cool locations away from magnetic fields can easily survive ten years or more. Sometimes, the biggest problem with restoring data from old tape is finding a suitable, functioning tape drive. I have seen many techniques for tape management through the years.

One of the worst involved a front desk worker who diligently took the previous night’s tape offsite each day – and left it on their car dashboard, where it baked in the sunlight for hours. So, even though the company and its staff meant well and dutifully followed the recommendation to keep backups offsite, they wound up with warped tapes that had multiple dead spots.

At the opposite end, one customer used a padded, magnetically shielded carrying case to transport tapes to an alternative site. There, they placed the tapes into a fireproof safe in a concrete room.

I was called upon once to try to restore data from a tape that was ten years old. It took almost a week to find a functioning tape drive that could accommodate it. That was the only complication. The tape was still readable.

Optical Media in Backup Solutions

For a brief time, advances in optical technology made it attractive. Optical equipment carries a low cost and interfaces well with operating systems; it even supports drag-and-drop interactivity with Windows Explorer. Optical storage was most popular in the home market, though some systems found their way into datacenters. However, magnetic media quickly regained the advantage as its capacities outgrew optical media exponentially.

Pros of optical media

  • Very durable media;
  • Shelf life of up to ten years;
  • Inexpensive, readily interchangeable equipment;
  • Drag-and-drop target in most operating systems;
  • Lightweight media, easy to transport offsite.

Cons of optical media

  • Very limited storage capacity;
  • Extremely slow;
  • Few enterprise backup applications will target optical drives;
  • Poor reusability;
  • Wide variance in data integrity after a few years.

When recordable optical media first appeared on the markets, people found its reliability attractive. CDs and DVDs do not care about magnetic fields at all and have a higher tolerance for heat and sunlight. Also, because the media itself has no mechanism, they survive rough handling better than tape.

However, they have few other advantages over other media types. Even though the ability to hold 700 megabytes on a plastic disc was impressive when recordable CDs first appeared, optical media capacities did not keep pace with magnetic storage.

By the time recordable DVDs showed up with nearly five gigabytes of capacity, hard drives and tapes were already moving well beyond that limit.

Furthermore, people discovered – often the hard way – that even though optical discs have little observable structural material, their data-retaining material has a much shorter life. Even though a disc may look fine, its contents may have become unreadable long ago.

Recordable optical media has a wide range of data life, from a few years to several decades. Predicting media life span has proven difficult.

Because of its speed, low capacity, and need for frequent testing, you should avoid optical media in your disaster recovery solution.

Direct-Attached Storage and Mass Media Devices in Backup Solutions

You do not need to limit your backup solutions to systems that distinguish between devices and media. You can also use external hard drives and multi-bay drive chassis. Some attach temporarily, usually via USB. Others, especially the larger units, use more permanent connections such as Fibre Channel.

These types of systems have become more popular as the cost of magnetic disks has declined. They have a somewhat limited scope of applications in a disaster recovery solution, but some organizations can put them to great use.

Pros of directly attached external devices

  • Fast;
  • Reliable for long-term storage;
  • Inexpensive when using mechanical drives;
  • Easily expandable;
  • High compatibility;
  • Usable as a standard file system target.

Cons of directly attached external devices

  • Difficult to transport;
  • Additional concerns when disconnecting;
  • Mechanical drives have many failure points;
  • Expensive when using solid-state drives;
  • Not a valid target in every backup application.

Portability represents the greatest concern when using directly attached external devices for backup. Unlike tapes and discs, the media does not simply eject once the backup concludes.

With USB devices, you should notify the operating system of pending removal so that it has a chance to wrap up any writes, which could include metadata operations and automatic maintenance.

Directly connected Fibre Channel devices usually do not have any sort of quick-detach mechanism. In an emergency, people should concern themselves more with evacuation than with a lengthy detach process. In normal situations, people tend to find excuses to avoid tedious processes. Expect these systems to remain stationary and onsite.

Once upon a time, such restrictions would have precluded these solutions from a proper business continuity solution. However, as you will see in upcoming sections, other advances have made them quite viable. With that said, you should not use a directly attached device alone. Any such equipment must be part of a larger solution.

You may run into some trouble using external devices with some backup applications. Fortunately, you should never encounter a modern program that absolutely cannot back up to a disk target. However, some may only allow you to use disk for short-term storage.

Others may not operate correctly with removable disks. If you purchase your devices before your software, make certain to test interoperability thoroughly.

Even though mechanical hard drives have advanced significantly in terms of reliability, they still have a lot of moving parts. Furthermore, the designers of the typical 3.5-inch drive did not build them for portability. They can travel, but not as well as tapes or discs. Even if you don’t transport them, they still have more potential failure points than tapes. Do not overestimate this risk, but do not ignore it, either.

Networked Storage in Backup Solutions

Network-based solutions share several characteristics with directly attached storage. Where you find differences between the two, you also find trade-offs. You could use the same pro/con list for networked solutions as you saw above for direct attached systems. We emphasize different points, though.

In the “pros” column, networked storage gets even higher marks for expandability. Almost every storage unit built for the network provides multiple bays. You can start with a few drives and add more as needed. Some even allow you to connect multiple chassis, physically or logically. In short, you can extend your backup storage indefinitely with such solutions.

The network components result in a higher cost per gigabyte for network-attached storage. However, the infrastructure necessary to enable a storage device to participate on a network tends to have a side effect: more features.

Almost all these systems provide some level of security filtering. Less expensive devices, typically marketed simply as “Network-Attached Storage” (NAS), may not provide much more than that.

Higher-end equipment, commonly called “Storage Area Network” (SAN), boasts many more features. You can often make SAN storage show up in connected computers much like directly attached disks. All in all, the more you pay, the more you get. Unfortunately, though, cost increases more rapidly than features.

What you gain in capacity and features, you lose in portability. Many NAS and SAN systems are rack mounted, so you cannot transport them offsite without significant effort.

But, because these devices have a network presence, you can place them in remote locations. Using remote storage requires some sort of site-to-site network connection, which introduces higher costs, complexity, security concerns, possible reduction in speed, and more points of failure.

Even though placing networked storage offsite involves additional risks, it also presents opportunity. Most NAS and SAN devices include replication technology. You can back up to a local device and configure it to replicate to one or more remote sites automatically.

If your device cannot perform replication, or if you have different devices and they cannot replicate to each other, your backup software may have its own replication methods.

In the worst case, you can use readily available free tools such as XCOPY and RSYNC with your operating system’s built-in scheduler.
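
As an illustration of that worst case, here is a hedged sketch of a fallback replication job: a Python wrapper around rsync that an OS scheduler (cron, Task Scheduler) could run. The host name and paths are placeholders for your environment.

```python
# Fallback replication sketch: mirror a local backup folder to a
# remote site with rsync. Host and paths are placeholders.

import subprocess
import sys

SRC = "/srv/backups/"                             # trailing slash: copy contents
DEST = "backup@dr-site.example.com:/srv/backups/"

result = subprocess.run(
    ["rsync", "--archive", "--delete", "--partial", SRC, DEST],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print(result.stderr, file=sys.stderr)
    sys.exit("Replication failed - investigate before the next cycle.")
print("Replication completed.")
```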

Using commodity computing equipment as backup storage

Up to this point, we have talked about network-attached devices only in terms of dedicated appliances. SANs have earned a reputation for carrying price tags that exceed their feature sets. In the best case, that reduces your budget’s purchasing power. More commonly, an organization cannot afford to put a SAN to its fullest potential – if they can afford one at all.

As a result, you now have choices in software-based solutions that run on standard server-class computing systems. Some backup applications can target anything that presents a standard network file protocol, such as NFS or SMB.

Software vendors and open-source developers provide applications that provide network storage features on top of general-purpose operating systems. These solutions fill the price and feature space between NAS and SAN devices. They do require more administrative effort to deploy and maintain than dedicated appliances, however.

When I built my first backup solution with the intent of targeting a dedicated appliance, I quickly learned that hardware vendors emphasize the performance features of their systems. Since I only needed large capacity, I priced a low-end rack-mount server with many drive bays filled with large SATA drives. I saved quite a bit over the appliance options.

The role of hyper-converged infrastructure in backup

A comparatively new type of system, commonly known as “hyper-converged infrastructure” (HCI), has taken on a growing role in datacenter infrastructure. In the traditional scale-out model, server-class computers handle the compute work, SAN or NAS devices hold the data, and physical switches and routers connect them all together.

In HCI, the server-class computers take over all the roles, even much of the networking.

Few organizations will design an HCI just for backup. Instead, they will deploy HCI as their foundational datacenter solution.

Originally, datacenters used purpose-built hosts for specific roles, such as domain controllers and SQL servers. As technologies matured, vendors and administrators enhanced their resilience by clustering hosts.

These clusters stayed on the purpose-built path of their constituent hosts. In the second generation, server virtualization started breaking down the pattern of single-use physical hosts. However, for the sake of organization and permission scoping, most administrators continued to deploy hosts and storage around themes.

HCI supersedes that paradigm by enabling true “cloud” concepts. With HCI, we can still define logical boundaries for compute, storage, and networking groups, but the barriers only exist logically. We may not know which physical resource hosts a particular server or database file.

Even if we find out, it could move in response to an environmental event.

With files, the storage tier can scatter the bits across the datacenter – possibly even between well-connected datacenters. In short, HCI administrators only need to concern themselves with the organization’s overall capacity.

If some resource runs low, they purchase more equipment and extend their HCI footprint. When done well, hardware purchases and allocations occur in different cycles and levels than server provisioning and storage allocation.

All this gives you two considerations for backup with HCI:

  1. You could place your infrastructure for on-premises backup hosting and public cloud relays in HCI just like any other server role;
  2. You may have concerns about mixing the things that you back up with the backup itself.

The first viewpoint has the strongest supportable argument. You should have multiple independent copies of backup anyway, so pushing data to offsite locations reduces the impact of dependence on HCI.

Also, many administrators (and the non-technical people above them in the reporting chain) struggle to accept that coexistence does not automatically mean line-of-sight.

You can architect your HCI such that the production components have no effective visibility into backup. It works the same basic way that we have always set up datacenter backup, except that the dividers exist in software instead of hardware. However, it matters little whether anyone can justify those fears.

If you encounter significant resistance to bundling backup in with the rest of your HCI deployment, then architect traditionally. It sacrifices some efficiency, but not to a crippling degree.

Cloud Storage in Backup Solutions

Several technological advances in the past few years have made Internet-based storage viable. Most organizations now have access to reliable, high-speed Internet connections at low cost. You can leverage that to solve one of the most challenging problems in backup: keeping backup data in a location safe from local disasters. Of course, these rewards do not come without risk and expense.

Pros of cloud backup

  • Future-proof;
  • Offsite from the beginning;
  • Wide geographical diversity;
  • Highly reliable;
  • Effectively infinite expandability;
  • Access from anywhere;
  • Security.

Cons of cloud backup

  • Dependencies outside your control;
  • Expensive to switch to another vendor;
  • Possibility of unrecoverable interruptions;
  • Speed.

To keep their promises to customers, cloud vendors replicate their storage across geographical regions as part of the service (cheaper plans may not offer this protection).

So, even though you do need to worry about failures in the chain of network connections between you and your provider, and about outages within the cloud provider itself, you know that you will eventually regain access to your data. That gives cloud backup an essentially unrivaled level of reliability.

The major cloud providers all go to great lengths to assure their customers of security. They boast of their compliance with accepted, standardized security practices. Each has large teams of security experts with no other role than keeping customer data safe.

That means that you do not need to concern yourself much with breaches at the cloud provider’s level.

Yet, you will need to maintain the security of your account and access points. As with any other Internet-based resource, the provider must make your data available to you somehow.

Malicious attackers might target your entryway instead of the provider itself. So, you still accept some responsibility for the safety of your cloud-based data.

When using cloud storage for backup, two things have the highest probability of causing failure. Your Internet provider presents the first.

If you cannot maintain a reliable connection to your provider, then your backup operations may fail too often. Even if you have a solid connection, you might need more bandwidth to support your backup needs.

For the latter problem, you can choose a backup solution such as Hornetsecurity’s VM Backup that provides compression and deduplication features specifically to reduce the network load.

Your second major concern is interim providers. While you can trust your cloud provider to exercise continuous security diligence, many third-party providers follow less stringent practices.

If your backup system transmits encrypted data directly to a cloud account that you control, then you have little to worry about beyond the walls of your institution. Verify that your software uses encryption, and keep up with updates.

However, some providers ship your data to an account under their control that they resell to customers. If they fall short on security measures, then they place your data at great risk. Vet such providers very carefully.
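
Whatever the provider chain looks like, encrypting data before it leaves your network keeps control in your hands. A minimal sketch follows, assuming the third-party cryptography and boto3 packages and a bucket that you control; the file paths and bucket name are placeholders, and a production job would stream data rather than read whole files into memory:

```python
# Sketch: encrypt a backup file locally, then upload only the
# ciphertext to an S3-compatible bucket. Names are placeholders.

import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # store this key safely, offline!
cipher = Fernet(key)

with open("/srv/backups/fileserver-full.bak", "rb") as f:
    ciphertext = cipher.encrypt(f.read())    # whole-file read: sketch only

with open("/tmp/fileserver-full.bak.enc", "wb") as f:
    f.write(ciphertext)

boto3.client("s3").upload_file(
    "/tmp/fileserver-full.bak.enc",
    "my-backup-bucket",
    "fileserver/fileserver-full.bak.enc",
)
```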

“Cost” did not appear on either the pro or con list. Cost will always be a concern, but how it compares to onsite storage will differ between organizations. Using cloud storage allows you to eliminate so-called “capital expenditures”: payments, usually substantial, made up-front for tangible goods.

If you have an Internet connection, you will not need to purchase any further equipment. You also wipe out some “operational expenses”: recurring costs to maintain goods and services.

You will need to pay your software licensing fees, and your cloud provider will regularly bill you for storage and possibly network usage.

However, you will not need to purchase storage hardware, nor will your employees need to devote their time to maintaining it. You transfer all the hassle and expense of hardware ownership to your provider in exchange for a recurring fee.

Unfortunately, you should not transfer your entire backup load to a cloud provider. Due to the risks and speed limits of relying on an internet connection, it still makes the most sense to keep at least some of your solution on site. So, you should still expect some capital expense and local maintenance activities.

Putting It in Action

The previous section helped you to work through your software options. If you have made a final selection, then that choice exerts at least some control over your hardware purchase. If not, then you can explore your hardware options and work backward to picking software.

The exact deployment style that you use, especially for the on-premises portion of your solution, only matters to the degree that it enables your backups to function flawlessly.

Prioritize satisfying your needs above aligning with any paradigm. You need space to store your backups, software to capture them, and networking and transport infrastructure to move them from live systems.

Four steps to performing hardware selection

Truthfully, your budget acts as the largest restrictor of your hardware options, so start there. Then work through the features that you want to arrive at your project scope. The general process looks like this:

1. Determine budget

2. Establish other controlling parameters:

  • Non-cloud replication only works effectively if you have multiple, geographically distant sites;
  • Inter-site and cloud replication need sufficient bandwidth to carry backup data without impeding business operations;
  • Rack space.

3. Decide on preferred media type(s). The above explanations covered the pros and cons of the types. Now you need to decide what matters to your organization:

  • Cost per terabyte;
  • Device/media speed;
  • Media durability;
  • Media transportability.

4. Prioritize desired features:

  • Deduplication;
  • Internal redundancy (RAID, etc.);
  • External redundancy (hardware-based replication);
  • Security (hardware-based encryption, access control, etc.).

If you find that the cost of a specific hardware-based feature exceeds your budget, then your software might offer it. That can help you to achieve the coverage that you need at a palatable expense.
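
One way to make steps 3 and 4 concrete is a weighted scoring matrix. In the sketch below, the weights, candidates, and scores are purely illustrative; substitute your own priorities:

```python
# Hedged sketch: weighted scoring of media/hardware options against
# the priorities above. All weights and scores are examples.

WEIGHTS = {"cost_per_tb": 4, "speed": 3, "durability": 2, "transportability": 1}

candidates = {    # 1 (poor) to 5 (excellent), illustrative only
    "LTO tape": {"cost_per_tb": 5, "speed": 1, "durability": 4, "transportability": 5},
    "NAS":      {"cost_per_tb": 3, "speed": 4, "durability": 3, "transportability": 1},
    "Cloud":    {"cost_per_tb": 3, "speed": 2, "durability": 5, "transportability": 5},
}

def total(scores):
    return sum(WEIGHTS[k] * v for k, v in scores.items())

for name, scores in sorted(candidates.items(), key=lambda kv: -total(kv[1])):
    print(f"{name:10s} {total(scores)}")
```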

Once you have concluded your hardware selection, you could proceed to acquiring your software and equipment. However, it makes sense to work through the next article on security before making any final decision. You might decide on a particular course for securing data that influences your purchase.

Conclusion

In summary, exploring various backup storage options reveals both advantages and disadvantages. Each target, whether it’s cloud-based, on-premises, or a hybrid approach, offers unique benefits and challenges. The choice ultimately depends on specific business needs, budgets, and data security concerns.

By carefully considering the factors mentioned in this article, organizations can make informed decisions to ensure data protection and accessibility align with their objectives.

FAQ

What is the main purpose of storage?

Storage serves as a system that empowers a computer to store data, whether temporarily or permanently. Devices like flash drives and hard disks constitute a foundational element in most digital devices, enabling users to safeguard a wide array of data, including videos, documents, images, and raw information.

What are the benefits of backup storage?

Backups provide the means to recover deleted files or retrieve data that may have been unintentionally overwritten. Moreover, backups often represent the most reliable choice for recuperating from incidents like ransomware attacks or significant data loss events, such as a data center fire.

What are the benefits of more storage?

Here are several notable benefits associated with additional space that will persuade you to maximize your storage capacity:

  • Enhance organization and minimize clutter;
  • Boost efficiency and productivity;
  • Ensure safety;
  • Enhance accessibility for everyone;
  • Elevate comfort and ergonomics.

These advantages underscore the importance of making the most of available storage space.

The Foolproof Method of Maintaining Your Backup System

As you might expect, setting up backup is just the beginning. You will need to keep it running into perpetuity. Similarly, you cannot simply assume that everything will work. You need to keep constant vigilance over the backup system, its media, and everything that it protects.

Monitoring Your Backup System

Start with the easiest tools. Your backup program almost certainly has some sort of notification system; configure it to send messages to multiple administrators. If it creates logs, use operating system or third-party monitoring software to track those as well. Where available, prefer programs that repeatedly send notifications until someone manually stops them or they detect that the problem has been resolved.
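
An independent freshness check makes a useful backstop to the product’s own alerting. Below is a minimal sketch that emails a group of administrators if no new job log has appeared recently; the paths, addresses, and mail server are placeholders:

```python
# Independent "is backup alive?" check: alert if the newest job log
# is older than expected. Paths and addresses are placeholders.

import smtplib
import time
from email.message import EmailMessage
from pathlib import Path

LOG_DIR = Path("/var/log/backup")
MAX_AGE_HOURS = 26    # daily job plus some slack

newest = max((p.stat().st_mtime for p in LOG_DIR.glob("*.log")), default=0)
age_h = (time.time() - newest) / 3600

if age_h > MAX_AGE_HOURS:
    msg = EmailMessage()
    msg["Subject"] = f"Backup silent for {age_h:.0f} hours"
    msg["From"] = "monitor@example.com"
    msg["To"] = "admins@example.com"    # a group address, not one person
    msg.set_content("No recent backup log found. Check the backup server.")
    with smtplib.SMTP("mail.example.com") as s:
        s.send_message(msg)
```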

Set up a schedule to manually check on backup status. Partially, you want to verify that its notification system has not failed. Mostly, you want to search through job history for things that didn’t trigger the monitoring system. Check for minor warnings and correct what you can. Watch for problems that recur frequently but work after a retry. These might serve as early indications of a more serious problem.

Testing Backup Media and Data

You cannot depend on even the most careful monitoring practices to keep your backups safe. Data at rest can become corrupted. Thieves, including insiders with malicious intent, can steal media. You must implement and follow procedures that verify your backup data. After all, a backup system is only valuable if the data can be restored when needed.

Keep an inventory of all media. Set a schedule to check on each piece. When you retire media due to age or failure, destroy it. Strong magnets work for tapes and spinning drives. Alternatively, drill a hole through mechanical disks to render them unreadable. Break optical media and SSDs any way that you like.

Organizations that do not track personal or financial information may not need to keep such meticulous track of media. However, anyone with backup data must periodically check that it has not lost integrity. The only way you can ever be certain that your data is good is to restore it.

Establish a regular schedule to try restoring from older media. If successful, make spot checks through the retrieved information to make sure that it contains what you expect.
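
You can make those spot checks systematic by recording checksums at backup time and comparing them after a test restore. A sketch follows; the manifest format and restore path are assumptions:

```python
# Spot-check sketch: verify restored files against SHA-256 digests
# recorded at backup time. The manifest format is an assumption.

import hashlib
from pathlib import Path

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# manifest.txt lines: "<hex digest>  <relative path>"
restore_root = Path("/mnt/restore-test")
failures = 0
for line in open(restore_root / "manifest.txt"):
    digest, rel = line.strip().split(maxsplit=1)
    if sha256(restore_root / rel) != digest:
        print(f"MISMATCH: {rel}")
        failures += 1
print("OK" if failures == 0 else f"{failures} file(s) failed verification")
```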

Use this article as a basic discussion on testing best practices. We will revisit the topic of testing in a dedicated post towards the end of this article series.

The activities in this article will take time to set up and perform. Do not allow fatigue to prevent you from following these items or tempt you into putting them off. You need to:

  • At minimum, configure your backup system to send alerts on failed jobs;
  • Establish accountability for regularly verifying that the backup program is functioning;
  • Configure a monitoring system to notify you if your backup software stops running;
  • Establish a regular schedule and accountability system to test that you can restore data from backup. Test a representative sampling of online and offline media.

Too many organizations do not realize until they have lost everything that their backup media did not successfully preserve anything. Some have had backup systems sit in a failed state for months without discovering it. A few minutes of occasional checking can prevent such catastrophes.

Monitoring backup, especially testing restores, is admittedly tedious work. However, it is vital. Many organizations have suffered irreparable damage because they found out too late that no one knew how to restore data properly.

Maintaining Your Systems

The intuitive scope of a business continuity plan includes only its related software and equipment. When you consider that the primary goal of the plan is data protection, it makes sense to think beyond backup programs and hardware. Furthermore, all the components of your backup belong to your larger technological environment, so you must maintain them accordingly.

Fortunately, you can automate common maintenance. Microsoft Windows will update itself over the Internet. The package managers on Linux distributions have the same ability. Windows also allows you to set up an update server on-premises to relay patches from Microsoft. Similarly, you can maintain internal repositories to keep your Linux systems and programs current.

In addition to the convenience that such in-house systems provide, you can also leverage them as a security measure. You can automatically update systems without allowing them to connect directly to the Internet. In addition to software, keep your hardware in good working order.

Of course, you cannot simply repair modern computer boards and chips. Instead, most manufacturers will offer a replacement warranty of some kind.

If you purchase fully assembled systems from a major systems vendor, such as Dell or Hewlett-Packard Enterprise, they offer warranties that cover entire systems as a whole. They also have options for rapid delivery or in-person service by a qualified technician. If at all possible, do not allow out-of-warranty equipment to remain in service.

Putting It into Action

Most operating systems and software have automated or semi-automated updating procedures. Hardware typically requires manual intervention. It falls to system administrators to keep everything current.

  • Where available, configure automated updating. Ensure that it does not coincide with backup, or that your backup system can successfully navigate operating system outages.
  • Establish a pattern for checking for firmware and driver updates. These should not occur frequently, so you can schedule updates as one-off events.
  • Monitor the Internet for known attacks against the systems that you own. Larger manufacturers have entries on common vulnerabilities and exposures (CVE) lists. Sometimes they maintain their own, but you can also look them up at: https://cve.mitre.org/. Vendors usually release fixes in standard patches, but some will issue “hotfixes”. Those might require manual installation and other steps.
  • If your hardware has a way to notify you of failure, configure it. If your monitoring system can check hardware, configure that as well. Establish a regular routine for visually verifying the health of all hardware components.

Final Words

Maintenance activities consume a substantial portion of the typical administrator’s workload, so these procedures serve as a best practice for all systems, not just those related to backup. However, since your disaster recovery plan hinges on the health of your backup system, you cannot allow it to fall into disrepair.

FAQ

What is a data backup system?

A data backup system is a method or process designed to create and maintain duplicate copies of digital information to ensure its availability in the event of data loss, corruption, or system failures.

What is an example of a data backup?

An example of a data backup is storing copies of files, documents, or entire systems on external hard drives, cloud services, or other storage media. This safeguards against potential data loss and facilitates recovery if the original data is compromised.

How do companies backup their data?

Companies use a variety of methods to backup their data, including regular backups to external servers, cloud-based solutions, tape drives, or redundant storage systems. Automated backup software is often employed to streamline and schedule the backup process, ensuring data integrity and accessibility.

Hornetsecurity’s cloud security experts are here to assist global organizations and to empower IT professionals with the necessary tools, all delivered with a positive and supportive attitude.

How to Get the Absolute Most Out of Your Backup Software

In the past, we could not capture a consistent backup. Operations would simply read files on disk in order as quickly as possible.

But, if a file changed after the backup copied it but before the job completed, then the backup’s contents were inconsistent. If another program had a file open, then the backup would usually skip it.

Microsoft addressed these problems with the Volume Shadow Copy Service (VSS). A backup application notifies VSS when it starts a job. In response, VSS briefly pauses disk I/O and creates a “snapshot” of the system.

The snapshot isolates the state of all files, as they were at that moment, from any changes that occur while the backup job runs. The backup application signals VSS when it has finished, and VSS releases the snapshot and returns the system to normal operation.

With this technique, on-disk files are completely consistent.

However, it cannot capture memory contents. If you restore that backup, it will be exactly as though the host had crashed at the time of backup. For this reason, we call this type of backup “crash-consistent”. It only partially addresses the problem of open files.
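
On Windows Server, you can watch this mechanism in action with the built-in vssadmin tool. The Windows-only sketch below creates a snapshot and extracts the shadow device path, from which files can be read in a crash-consistent state; client editions of Windows restrict this command, and error handling is omitted:

```python
# Windows Server only: create a VSS snapshot with the built-in
# vssadmin tool and report the crash-consistent shadow device path.

import re
import subprocess

out = subprocess.run(
    ["vssadmin", "create", "shadow", "/for=C:"],
    capture_output=True, text=True, check=True,
).stdout

# vssadmin prints the shadow device, e.g.
#   \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12
device = re.search(
    r"(\\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy\d+)", out)
print("Read crash-consistent files from:", device.group(1))
```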

VSS-aware applications can ensure complete consistency of the files that they control. Their authors can write a component that registers with VSS (called a “VSS Writer”). When VSS starts a snapshot operation, it will notify all registered VSS writers. In turn, they can write all pending operations to disk and prevent others from starting until the checkpoint completes.

Because it has no active I/O (sometimes called “in-flight”) at the time the backup is taken, the backup will capture everything about the program. We call this an “application-consistent” backup.

As you shop for backup programs, keep in mind that not everyone uses the terms “crash-consistent” and “application-consistent” in the same way. Also, Linux distributions do not have a native analog to VSS. Research the way that each candidate application deals with open files and running applications.

Hypervisor-Aware Backup Software

If you employ any hypervisors in your environment, you should strongly consider a backup solution that can work with them directly.

You can back up client operating systems using agents installed just like physical systems if you prefer. However, hypervisor-aware backup applications can appropriately time guest backups to not overlap and employ optimization strategies that greatly reduce time, bandwidth, and storage needs.

When it comes to your hypervisors, investigate applications with the same level of flexibility as Hornetsecurity VM Backup.

You can install it directly on a Hyper-V host and operate it from there, use a management console from your PC, or make use of Hornetsecurity’s Cloud Management Console to manage all of your backup systems from a web browser. Such options allow you to control your backup in a way that suits you.

Agent-Based Versus Agentless

Usually, backup solutions require you to install a software component on each system that you want to protect. That component gathers data from its system and sends it directly to media or to a central server. You saw examples of both in “The Golden Rules to Choosing a Backup Provider” article. The software piece that you install on the targets is called an “agent”.

Other products can back up a system without installing an agent. You won’t find much in that category for taking complete backups of physical servers. Some software will back up networked file storage.

These “agentless” products rule the world of virtualization. Hornetsecurity VM Backup serves as a prime example. You install the software in your Hyper-V or VMware environment, and it backs up virtual machines without modifying them.

While VM Backup and similar programs can interact with guest operating systems to give them an opportunity to prepare for a backup operation, they can also work on virtual machines without affecting them.

Without such an agentless solution, you would need to place some piece of software inside every virtual machine. That introduces more potential failure points, increases your attack surface, and burdens you with more overhead.

With per-machine agents, you need to schedule all backup jobs carefully so that they do not interfere with each other; agentless systems coordinate operations automatically. They also have greater visibility over your data, making it easier for them to perform operations such as deduplication for smaller, faster backups.

Standard Physical Systems Backup Software

Few organizations have moved fully to virtualized deployments. So, you likely have physical systems to protect in addition to your virtual machines. Some vendors, such as Hornetsecurity, provide a separate solution to cover physical systems.

Others use customized agents or modules within a single application. However, some companies have chosen to focus on one type of system and cannot protect the other.

Single Vendor vs. Hybrid Application Solutions

In small environments, administrators rarely even consider solutions that involve multiple vendors. Each separate product has its own expertise requirements and licensing costs, and you cannot manage backup software from multiple vendors through a single pane of glass.

You may not be able to find an efficient way to store backup data from different manufacturers. Using a single vendor allows you to cover most systems with the least amount of effort.

On the other hand, organizations with more than a handful of servers almost invariably have some hybridization – in operating systems, third-party software, and hardware. Using different backup programs might not pose a major challenge in those situations. Using multiple programs allows you to find the best solution for all your problems instead of accepting one that does “enough”.

I once had a customer that was almost fully virtualized. They placed high priority on a granular backup of Microsoft Exchange with the ability to rapidly restore individual messages. Several vendors offer that level of coverage for Exchange in addition to virtual machine backup.

Unfortunately, no single software package could handle both to the customer’s satisfaction.

To solve this problem, we selected one application to handle Exchange and another to cover the virtual machines. The customer achieved all their goals and saved substantially on licensing.

Putting It in Action

Using the above guidance and the plan that you created in earlier articles in this series, you have enough information to start investigating programs that will satisfy your requirements.

Phase one: Candidate software selection

Begin by collecting a list of available software. You will need to find a way to quickly narrow down the list.

To that end, you can apply some quick criteria while you search, or you can build the list first and work through it later. Maintain this list and the reasons that you decided to include or exclude a product.

Create a table to use as a tracking system. As an example:

(Example tracking table: one row per product, with columns for the vendor, each quick criterion as a yes/no entry – e.g., “Hypervisor-aware?”, “Within budget?” – and an include/exclude decision with its reason.)

It might seem like a bit much to create this level of documentation, but it has benefits:

  • Historical purposes: Someone might want to know why a program was tested or skipped
  • Reporting: You may need to provide an accounting of your selection process
  • Comparisons: Such a table forms a feature matrix

Because this activity only constitutes the first phase of selection, use criteria that you can verify quickly. To hasten the process, check for any deal-breaking problems first; once a product fails one, you can skip its remaining checks. While the table above shows simple yes/no options, you can use a more nuanced grading system where it makes sense.
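
That short-circuit logic is easy to encode if you keep your tracking data in a machine-readable form. A sketch with illustrative, hypothetical criteria:

```python
# Phase-one filter sketch: test cheap deal-breakers first and stop at
# the first failure, recording the reason. Criteria are illustrative.

DEAL_BREAKERS = [
    ("supports_hyper_v", "No Hyper-V support"),
    ("within_budget", "Exceeds budget"),
    ("offline_media_target", "Cannot write to offline media"),
]

def evaluate(product):
    for key, reason in DEAL_BREAKERS:
        if not product.get(key, False):
            return ("exclude", reason)    # skip the remaining checks
    return ("shortlist", "passed all quick criteria")

print(evaluate({"supports_hyper_v": True, "within_budget": False}))
```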

Keep in mind that you want to shorten this list, not make a final decision.

Phase two: In-depth software testing

You will spend the most time in phase two. Phase one should have left you with a manageable list of programs to explore more completely. Now you need to spend the time to work through them to find the solution that works best for your organization.

Keep in mind that you can use multiple products if that works better than a single solution.

For this phase, you will need to acquire and install software trials. Some recommendations:

  • Install trialware on templated virtual machines that you can quickly rebuild;
  • Use test systems that run the same programs as your production systems;
  • Test backing up multiple systems;
  • Test encryption/decryption;
  • Test complete and partial restores.

Extend the table that you created in phase one. If you used spreadsheet software to create it, consider creating tabs for each program that you test. You could also use a form that you build in a word processor.

Make sure to thoroughly test each program. Never assume that any given program will behave like any other.

Phase three: Final selection

Hopefully, you will end phase two with an obvious choice. Either way, you will need to notify the key stakeholders from phase one of your selection status. If you need additional input or executive sign-off to complete the process, work through those processes.

Unless you choose a completely cloud-based disaster recovery approach, you will still need to acquire hardware. Remember that, due to threats of malware and malicious actors, all business continuity plans should include some sort of in-house solution that you can take offline and offsite.

Conclusion

Optimizing your backup software is crucial for ensuring the integrity and consistency of your data. When dealing with virtualization and hypervisors, consider solutions that are hypervisor-aware and agentless, as they can offer greater flexibility and efficiency.

For organizations with both physical and virtual systems, it’s essential to select a solution that can cover both adequately.

When deciding between a single-vendor or hybrid approach, weigh the pros and cons carefully against your unique needs. The phased approach to selecting backup software (candidate selection, in-depth testing, and final selection) helps ensure you make the best choice for your organization’s data protection and recovery needs.

FAQ

What is backup software?

Backup software is a type of computer program designed to create and manage copies of data, files, or entire systems for the purpose of data protection, disaster recovery, and data preservation. These software applications automate the process of backing up data to ensure that it can be restored in case of data loss, hardware failure, or other unforeseen events.

What is an example of backup software?

An example of backup software is our Hornetsecurity VM Backup, a comprehensive virtual machine backup solution provided by Hornetsecurity.

It’s designed specifically for virtualized environments and focuses on creating backups of virtual machines. This type of backup software is essential for protecting and recovering data in virtualized server environments.

What is free backup software?

Free backup software refers to backup solutions that are available at no cost, typically with limited features compared to their paid counterparts. These free backup software options are suitable for individuals or small organizations with basic backup needs.

The Golden Rules to Choosing a Backup Provider

The Golden Rules to Choosing a Backup Provider

The connection point is usually when you have received the bulk of your hardware and software purchases and can put them to use. If you have not even submitted orders yet, that’s ideal. If you already have everything, that’s fine as well.

You must design the architecture, which you might find easier to perform before you decide what to buy.

In simple terms, you must move on from deciding what to protect to deciding how to protect it. For some things, your organization might choose to use printed hard copies. Those survive power outages, need no technical expertise, and can last essentially forever. You will need to find a way to keep these items adequately safe.

Consider their risk from events such as fire, flood, and theft. If the contents of the documents are vital but not a risk to security, then perhaps creating and distributing multiple copies is the best answer. Technology may not help much for these types of problems.

To guard your digital information, you need three major things:

  • Backup software
  • Backup storage
  • Security strategy

If you start by selecting your backup application, that can guide you toward the most appropriate hardware platform and security approach. You could also start with a physical storage system that you like, but this may restrict your options for software solutions.

In the past, companies rarely put much thought or effort into backup security. Soon, they learned – the hard way – that bad actors found enough value in data backups to steal them. That prompted the backup industry to introduce security features into their products.

Later, ransomware authors began targeting backup applications to prevent them from saving victims’ data, or even worse, corrupting that data so it can’t be recovered.

This article focuses on the topic of choosing the right backup and disaster recovery provider for you and your business.

Choosing the right backup and recovery software

Your software selection will have a monumental long-term impact on your disaster recovery and business continuity operations. Once you successfully implement your choice of application(s), inertia will set in almost immediately.

Most vendors offer renewal pricing substantially below their first-year cost, which makes loyalty attractive. Switching to another provider might prove prohibitively expensive. Even if you get attractive pricing from a competitor, you still need to invest considerable time and effort to make the switch. For these reasons, you should not rush to a determination.

At its core, every single backup application has exactly one purpose: make duplicate copies of bits. Any reasonably talented scripter can build a passable bit duplication system in a short amount of time. Due to the ease of satisfying that core function, the backup software market has a staggering level of competition.
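To illustrate how low that bar sits, a few lines of Python can already produce a naive, timestamped full copy; the paths here are hypothetical:

```python
import shutil
from datetime import datetime
from pathlib import Path

source = Path("/data/finance")  # hypothetical directory to protect
target = Path("/backup") / datetime.now().strftime("%Y-%m-%d_%H%M%S")

# A naive "full backup": duplicate every file into a timestamped folder.
shutil.copytree(source, target)
print(f"Copied to {target}")
```

Everything beyond that core duplication, such as scheduling, retention, consistency, deduplication, encryption, and reporting, is where commercial products actually compete.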

With so many available choices, you get some good and some bad news. The good news: you have no shortage of feature-rich, mature options to choose from. The bad news: you have no shortage of feature-rich, mature options to choose from.

You likely will not try out more than a few vendors before you either run out of time or become overwhelmed. In the upcoming sections, you will find many pointers to help you quickly pare down your options to a reasonable subset before installing your first trial package.

Backup application features

To distinguish themselves in a marketplace crowded with dozens of other companies trying to sell a product that performs the same fundamental role, backup program manufacturers spend a great deal of time on the supporting features.

Like anyone else, they tend to brag about whatever they feel that they do especially well. So, you can often get an overall feeling about a product just by looking at its marketing literature.

If they frequently use words like “simple” and “easy”, then you should expect to find a product that will not need a lot of effort to use. If you see several references to “fast” and “quick” and the like, then the application likely focuses on optimizations that reduce the amount of time to perform backup or restore operations.

Businesses that work from a value angle tend to use words like “affordable” and “economical”. Words like “trusted” and “leader” tend to indicate a mature product with a dedicated following.

So, if you go to the homepage of a backup vendor and see phrases containing words that speak to you, then you are almost certainly in that company’s target market. At the very least, they think that they have something to offer that fits your needs.

You will have to do more work to determine if their product lives up to the promise. However, if you see nothing that addresses your primary concerns, take that as a warning sign.

For instance, if you mostly want a stable product with responsive support that you can afford, you might want to avoid a company that prides itself on bleeding-edge capabilities, places its support links after everything else, and makes it difficult to even find pricing.

It’s important to match the scope of the solution to your unique deployment characteristics and business requirements rather than simply opting for the cheapest or most feature-rich option.

Trial and free software offerings – what to look for

Every major backup application manufacturer offers a trial, and most offer a limited but free version of their product. You should take advantage of these opportunities. With so many quality products on the market, avoid anything that you cannot try prior to purchase.

As you test software, use your plan from the earlier article as your guide. If the program cannot satisfy anything on that list, then you must gauge the importance of that deficit. Find out if the program provides an alternative method to achieve the goal.

If it does not, then you must choose between augmenting this program with another or skipping the product altogether.

As for free software, it works perfectly well for trial purposes. However, exercise extreme caution if you intend to use it long-term. Commercial software companies need income to survive, so they invariably build their free tiers in some way that showcases the power of their software but still makes the paid tiers desirable.

You can even find a few completely free programs provided by contributors out of the goodness of their hearts. These are rarely enterprise-ready and almost never maintained for very long. In all cases, you cannot expect to receive significant support for free products.

Think long and hard before deciding to entrust your organization’s disaster recovery and business continuity to such tools.

Security considerations for backup

Organizations have always needed to consider the security of their data, whether on a live system or on backup media. However, “security” and “backup” mostly stayed separate. When security crossed into the backup conversation, it mostly meant protecting the media from data thieves. The world has changed.

Various disasters have always threatened systems and data. The appearance of ransomware has forced the world to rethink the nature of those threats. Once upon a time, backup was the security blanket for catastrophe; now backup itself has become a target. At the same time, nothing else can guarantee survival of a ransomware infestation.

Hornetsecurity VM Backup v9, for example, offers ransomware protection through immutability. Know what the software will handle and what will fall to you before deciding.

As you look through your software options, you will find considerable differences in deployment and management behaviors. Take note of their installation requirements and procedures. Common options:

  • Per-host installation, data direct to storage, no centralization
  • Per-host installation, data direct to storage, managed from a central console
  • Central installation, agents on hosts, data direct to storage
  • Central installation, agents on hosts, data funneled through a central system
  • Appliance-based installation, agents on hosts, data stored on or funneled through appliance

You will find other architectures. Before you purchase anything, ensure that you understand how to deploy it. If you need to rack a physical appliance or make capacity for a virtual appliance, you do not want that to catch you by surprise.

If your preferred program requires a dedicated server instance, that may have licensing implications beyond the backup application’s cost.

To properly protect your virtualization environment and all the data, use Hornetsecurity VM Backup to securely back up and replicate your virtual machine.

We ensure the security of your Microsoft 365 environment through our comprehensive 365 Total Protection Enterprise Backup and 365 Total Backup solutions.

For complete guidance, get our comprehensive Backup Bible, which serves as your indispensable resource containing invaluable information on backup and disaster recovery.

To keep up to date with the latest articles and practices, pay a visit to our Hornetsecurity blog now.

Conclusion

Selecting the right backup and disaster recovery provider is a critical decision for the long-term security and resilience of your data. It’s essential to move beyond merely choosing what to protect and focus on how to protect it.

This article has highlighted the key considerations, including architecture design and the choice between digital and hard-copy data protection. To safeguard your digital information, you need a robust combination of backup software, storage solutions, and a security strategy.

It’s crucial to make an informed decision, as your software selection will significantly impact your disaster recovery and business continuity operations.

FAQ

What is a cloud backup service provider?

A cloud backup service provider is a company that offers cloud-based storage and data backup solutions to help users protect and recover their digital information.

What is backup as a service?

Backup as a service (BaaS) is a cloud computing service that provides data backup and recovery capabilities. It allows users to back up their data to a remote, cloud-based server managed by a third-party provider.

What is an example of an online backup provider?

There is no better example than Hornetsecurity as we are a leading backup provider worldwide!

How to Make the Undeniable Business Case for Backup

How to Make the Undeniable Business Case for Backup

With the input of business-oriented personnel, you can determine how IT will deliver an appropriate business continuity design. To that end, you need to discover the capabilities of the technologies available to you.

Once you know that, you can predict the costs. You can take that analysis back to the business groups to build a final plan that balances what your organization wants for disaster recovery against its willingness to pay for it.

Mapping out your backup requirements will then help you plan software subscriptions to fulfill your needs. Hornetsecurity recognizes the necessity for multiple backup solutions and as such provides data backup and recovery services for all your critical Microsoft 365 services (Exchange mailboxes, SharePoint, OneDrive, Teams, etc.) as well as virtual machine backup.

Discovering the Technological Capabilities of Data Protection Systems

At this point, you have an abstract list of high-level business items. Few backup solutions target line-of-business (LOB) applications, so you need to break that list down into items that backup and replication programs understand.

To attract the widest range of customers, backup manufacturers target the services and products that most organizations use. Common protections include:

  • Windows Server and Windows desktop;
  • UNIX/Linux systems;
  • Database servers;
  • Mail servers;
  • Virtual machines;
  • Cloud-based resources;
  • Physical hardware configurations.

You’ll need to create a map from the prioritized business-level items to their underlying technologies. Bring in technical experts to ensure that you don’t miss anything. Gather input on what needs to happen in order to recover the various systems in use at your organization.

Many require more effort than a simple restore-from-backup procedure. Some examples:

  • Active Directory;
  • Log-based SQL recovery;
  • Mail servers;
  • Multi-tier systems;
  • Cluster nodes.

Take input from line-of-business application experts as well as server and infrastructure experts. Seek out the experience of those who have faced a recovery situation with the systems that you rely on most. You might find exceptions or special procedures that would surprise generalists.

First Line of Defense: Fault-Tolerant Systems

Ideally, you would never need to enact a recovery plan. While you can never truly eliminate that possibility, you can reduce its likelihood with fault-tolerant systems. “Fault-tolerance” refers to the ability to continue functioning with a failed component.

Most fault-tolerant systems largely function at a low level, usually on the internal components of computer systems. To provide protection, they usually employ some method of hardware-level data duplication.

In the event of a failure, they use the redundant copy to continue providing expected functionality. Examples include multiple power supplies, disks, Network Interface Cards (NICs) and so forth.

However, until someone replaces the defective part, the system does not provide redundancy. Further failures will result in an outage and possibly data loss.


Storage technologies make up the bulk of fault-tolerant systems. Not coincidentally, storage also has the highest failure rate. You can protect short-term storage (main system memory) and long-term storage (spinning and solid-state disks).

System memory fault tolerance

To provide full fault tolerance, memory controllers allow you to pair memory modules. Every write to one module makes an identical copy to the other. If one fails, then the other continues to function by itself.

If the computer also supports memory hot-swapping and technicians have a way to access the inside without unplugging anything, then a replacement can be installed without halting the system.

Of course, system memory continues to be one of the more expensive components, and each system has a limited number of slots. So, to use fault-tolerant memory, you must cut your overall density in half.

Doubling the number of hosts presents more of a cost than most organizations want to undertake. Fortunately, memory modules have a low rate of total failure. It is much more likely that one will experience transient problems, which can be addressed with cheaper solutions.

Server-class computer systems usually support error-correcting code (ECC) memory modules. ECC modules incorporate technologies that allow for detection and correction of memory errors.

Some vendors provide proprietary technologies to defend against problems.

In most cases, you will choose ECC memory over fully fault-tolerant schemes. ECC cannot defend against module failure, but such faults occur rarely enough to make the risk worthwhile. ECC costs more than non-ECC memory, but it still has a substantially lower price tag than doubling your host purchase.

Hard drive fault tolerance

Hard drives, especially the traditional spinning variety, have a high failure rate. Since they hold virtually all of an organization’s live data, they require the most protection. Due to the pervasiveness of the problem, the industry has produced an enormous number of fault-tolerant solutions for hard drives.

RAID (redundant array of independent disks) systems make up the bulk of hard drive fault-tolerance designs. These industry-standard designs use a combination of the following technologies to protect data:

Mirroring

Every bit written to one disk is written to the same location on at least one other disk. If a disk fails, the array uses the mirror(s).

Striping

Data is divided into blocks and written across two or more disks in sequence. Striping improves read and write performance, but on its own it provides no redundancy.

Parity

Parity also uses a striping pattern, with a major difference. One or more blocks in each stripe holds parity data instead of live data. The operating system or array controller calculates parity data from the live data as it writes the stripe.

If any disk in the array fails, it can use the parity data in place of the live data. A parity array can continue to function with the loss of one disk per parity block per stripe.
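You can see the parity idea in miniature with XOR, the calculation most parity schemes build on: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. A minimal sketch:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# One stripe across three data disks plus a parity block (contents illustrative).
stripe = [b"DATA", b"MORE", b"BITS"]
parity = xor_blocks(stripe)

# Simulate losing the second disk: rebuild its block from the survivors and parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
print(f"rebuilt block: {rebuilt}")
```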

If you wish to use RAID, you can choose from a number of “levels”. Each level of RAID provides its own balance of redundancy, speed, and capacity. With the exception of RAID-0 (pure striping for performance, no redundancy), all RAID levels require you to sacrifice some available space for protection.

Disks present a relatively low expense when compared to system memory, and you have many expansion options beyond the base capacity of a system chassis. So, while RAID presents a higher cost per stored bit than single disk systems, it is usually not prohibitive.

You have several choices when it comes to RAID. Many levels have fallen out of favor due to insufficient protection in comparison to others, and some simply consume too much space for cost efficiency. You will typically encounter these types; a short capacity comparison follows the list:

  • RAID-1 – A simple mirror of two disks. Provides adequate protection, slightly lower than normal write speeds, higher than normal read speeds, and a 50% loss of capacity.
  • RAID-5 – A stripe with a single parity block. Requires at least three disks. Each stripe alternates which disk holds the parity data so that in a failure scenario, parity calculations only need to occur for 1/n stripes. Can withstand the loss of a maximum of one disk. Provides adequate protection, above normal write speeds, above normal read speeds, and a loss of 1/nth capacity. Not recommended for arrays that use very large disks due to the higher probability of additional disk failure during rebuilds and the higher odds of a failure occurring between patrol reads (scheduled reads that look for bit failures).
  • RAID-6 – Like RAID-5, but with two parity blocks per stripe. Requires at least four disks. Safer than RAID-5, but with similar concerns on large disks. Slower than RAID-5 and a capacity loss of 2/n.
  • RAID-10 – Disks are first paired into mirrors, then a non-parity stripe is written on one side of the mirror set, which is then duplicated to the corresponding mirror disk. Can function with the loss of one disk in each mirror but cannot lose two disks in the same mirror. Provides better performance and a higher safety rate than parity schemes, but at a loss of 50% of total drive capacity.
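To compare those capacity trade-offs concretely, the sketch below computes usable space for the common levels; the disk counts and sizes are examples you can swap for your own:

```python
def usable_capacity_tb(level: str, disks: int, disk_tb: float) -> float:
    """Usable capacity for common RAID levels: raw space minus redundancy overhead."""
    if level == "RAID-1":
        return disk_tb                  # classic two-disk mirror: one disk's worth usable
    if level == "RAID-5":
        return (disks - 1) * disk_tb    # one disk's worth of parity
    if level == "RAID-6":
        return (disks - 2) * disk_tb    # two disks' worth of parity
    if level == "RAID-10":
        return disks / 2 * disk_tb      # half of all capacity mirrors the other half
    raise ValueError(f"unknown RAID level: {level}")

for level, disks in [("RAID-1", 2), ("RAID-5", 6), ("RAID-6", 6), ("RAID-10", 6)]:
    print(f"{level}, {disks} x 4 TB disks: {usable_capacity_tb(level, disks, 4.0):.0f} TB usable")
```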

Due to the preponderance of drive failures and reduced performance of standardized redundancy schemes, many vendors have introduced proprietary solutions that seek to address particular shortcomings of RAID.

Whereas RAID works at the bit and block levels, most vendor-specific systems add on some type of metadata-level techniques to provide protection or performance enhancements.

You have an overwhelming number of choices when it comes to fault-tolerant disk storage, so keep a few anchor points in mind:

  • Storage vendors naturally want you to buy their highest-cost equipment. Use planning tools to predict your capacity and performance needs before you start the purchasing process. Businesses frequently overestimate their space and performance requirements.
  • You can almost always expand your storage after initial implementation. You do not need to limit yourself to the capacity of a single chassis as you do with system memory.
  • Solid-state disks have a substantially lower failure rate than spinning disks. You can leverage hybrid systems that incorporate both as a way to achieve an acceptable balance of performance, redundancy, and cost.

The most important point: downtime costs money. Storage redundancy directly reduces the odds of an unplanned outage.

Advanced storage fault tolerance

The advent of affordable, truly high-speed networking (ten gigabit and above) has brought exciting new options in storage protection. Today’s networking speeds exceed the throughput of even high-end storage equipment.

Once the sole purview of high-end (and very high-cost) storage area network (SAN) devices, you can now acquire chassis-level, and even datacenter-level, storage redundancy at commodity prices.

These technologies depend on real-time, or synchronous, replication of data. In the simplest design, two storage units mirror each other.

Systems that depend on them can either connect to a virtual endpoint that fails over as needed or connect to one unit at a time in an active/passive configuration. In more complex designs, control systems distribute data across multiple storage units and broker access dynamically.

We discuss real-time replication more completely in the article titled “How to Use Replication to Easily Achieve Business Continuity”.

The most advanced examples of these technologies appear in relatively new hyper-converged solutions. These use software to combine the compute layer with the storage layer on standard server-class computing hardware.

In most cases, they involve a hypervisor to control the software layer and proprietary software to control storage.

While costs for distributed storage and hyper-converged systems have declined dramatically, they remain on the higher end of the expense spectrum.

Unlike traditional discrete systems, you will need significant infrastructure and technical expertise to support them properly. You can consider the duplicated data in this fashion as a “hot” copy. It’s updated instantaneously and you can fail over to it quickly.

Some synchronous replication systems even allow for transparent failover or active/active use.

Application and operating system fault tolerance

At the highest layer, you have the ability to mirror an operating system instance to another physical system. To make that work, you must run the instance under a hypervisor capable of mirroring active processes.

It’s a complex configuration with many restrictions. Few hypervisors offer it, it won’t work universally, it won’t survive every problem, and the performance hit might make it unworkable for the applications that you want to protect most.

At a more achievable level, some applications allow a measure of fault tolerance through tiering. For instance, you can often run a web front-end for a database. You can use load balancers that instantly move client connections from one web server to another in the event of failure.

Some database servers also allow for multiple simultaneous instances that can instantly redirect connections to a functioning node. These technologies have greater functionality and feasibility than operating system fault tolerance.

In most cases, when an application offers its own built-in redundancy option (Exchange Server Database Availability Groups or SQL Server Always On availability groups, for example), prefer it over the generic operating system or hypervisor high-availability options discussed below.

Caveats of fault tolerance

As you explore options for fault tolerance, you’ll quickly notice that it comes at a substantial cost. Almost all the technologies will require you to purchase at least two of everything. Most of them will necessitate additional infrastructure.

All of them depend on expertise to install, configure, and maintain. Those costs always need to be scoped against the cost of equivalent downtime.

The primary purpose of fault tolerance is to rely on duplicates to continue functioning during a failure. That has a negative side effect: your fault-tolerant solution might duplicate something that you don’t want.

For example, if ransomware attacks your storage system, having RAID or a geographically redundant SAN will not help you in any way. Even in the absence of a malicious actor, redundant systems will happily copy accidental data corruption or delete all instances of a vital e-mail on command.

While fault tolerance will serve your organization positively, it cannot stand alone. You will always need to employ a backup solution for asynchronous data duplication. However, you have options between fault tolerance and backup. Those technologies reside in the high availability category.

Second Line of Defense: High Availability

You can’t use fault tolerance for everything. Some systems have no way to implement it. Some have a prohibitively high price tag. Instead, you can deploy high-availability solutions. High availability has a more nebulous definition than fault tolerance. It applies less to actual technologies and more to outcomes.

Where fault tolerance means working through a failure without interruption, high availability measures actual uptime against expected uptime.

As an example, your organization sets a target of 99.99% annual availability for a system that it wants always to work. To achieve that, you would need to ensure that the system does not experience more than a few minutes of total downtime in the course of a year.

365 days times 99.99% equals 364.9635 days of uptime, which allows a little less than 53 minutes of downtime per year. That’s an aggressive goal.

When you build high availability goals, ensure that you distinguish whether or not you include planned outages in the metric. If you include them, then you may substantially reduce your tolerance for failures.

If systems expected to achieve 99.99% uptime require five minutes per month to fail over from active systems to backup systems during patch cycles, and you include that in the metric, then planned maintenance alone consumes 60 minutes per year and exceeds the availability allowance by roughly seven minutes, even without unexpected outages.
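A quick sketch of that arithmetic, so you can test your own targets and maintenance windows:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability: float) -> float:
    """Annual downtime allowed by an availability target, in minutes."""
    return MINUTES_PER_YEAR * (1 - availability)

budget = downtime_budget_minutes(0.9999)  # roughly 52.6 minutes per year
planned = 5 * 12                          # five minutes of planned failover per monthly patch cycle
print(f"budget: {budget:.1f} min/year, remaining after planned maintenance: {budget - planned:.1f} min")
```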

Along with adjusting for planned maintenance, you can also set the scope of availability. As an example, you can keep the 99.99% goal, but indicate that it only applies from 6:00 AM to 6:00 PM on weekdays. You could exclude company holidays.

Take care to follow two critical steps:

  1. Clearly outline any non-obvious exceptions. If you set an expectation of 99.99% in large font and subtly list conditions below it, then you will eventually experience the wrath of someone who feels deceived and betrayed. Avoid that from the beginning.
  2. Define a precise standard for “uptime”. Favor the user experience in these results, but also have something that you can objectively measure. For instance, “customer can place a complete order on the website” works well as an abstract goal, but how do you measure that? If a system failure would have prevented a customer from ordering, but no customer tried, does that count as an outage? If a customer order fails, how do you know if the system was at fault?

From the technology angle, any tool that specifically helps to improve uptime falls under the high availability umbrella. All fault-tolerant technologies qualify. However, you also have some that allow a bit of downtime in exchange for reduced cost, wider application, and simpler operation. Among these, clustering is generally the most common.

High Availability with Clustering

Clustering involves using multiple computer or appliance nodes, usually in an active/passive configuration, to host a single-instance resource. Some examples that depend on Microsoft’s failover clustering technology:

Microsoft SQL

A clustered Microsoft SQL database runs on one of many nodes. In a planned failover, the database becomes unavailable for a few seconds while its active node stops and one of the passive nodes starts. In the event of active node failure, the database is offline for a few seconds while a passive node starts it. Active transactions might drop in an unplanned failover.

Hyper-V

A clustered virtual machine can quickly move online (Live Migration) or offline (Quick Migration) to another node in a planned failover. If its active node fails, the virtual machine crashes but another node can quickly restart it.

File server

The standard clustered Microsoft file server hosts through an active node, with planned and unplanned failovers occurring quickly. Microsoft also provides a scale-out file server, which operates in a more fault-tolerant mode.

Storage Spaces Direct

Commonly called “S2D”, Storage Spaces Direct is Microsoft’s distributed file system offering. It works on Windows Server for plain storage needs. Azure Stack HCI also implements it to provide a complete hyper-converged infrastructure solution.

You will find clustering technologies in other operating systems, hypervisors, and physical appliances. Remember that these differ from fault-tolerance in that they allow some downtime. However, they greatly reduce downtime risks when compared to standalone systems.


Caveats of clustering

Clustering provides a duplicate of the compute layer. It ensures that a clustered workload has somewhere to operate. It does not make any copies of data. Without additional technology, a critical storage failure can cause the entire cluster to fail.

Because of the necessity of hardware duplication, clustering costs at least twice as much as operating without a cluster. You might also need to purchase additional software features in order to enable a clustered configuration. Clustering requires staff that know how to install, configure, and maintain it.

You must also take care that the backup solution you choose can properly protect your clustered resources. Solutions such as Hornetsecurity’s VM Backup protect virtual machine clusters. You can sometimes successfully employ a backup solution that doesn’t interoperate with your high-availability solution, but it will require significantly more administrative effort.

High Availability With Asynchronous Replication

You can employ technologies that periodically copy data from one storage unit to another. Asynchronous replication can use a snapshotting technique to maintain complete file system consistency. Some replication applications use a simple file-copy mechanism, which works well enough for basic file shares but not for applications.

Some applications have their own asynchronous replication built in. Microsoft’s Active Directory will automatically send updates between domain controllers. Most SQL servers have a set of replication options. Microsoft Hyper-V can create, maintain, and control virtual machine replicas.

You can consider data created by asynchronous replication as a “warm” copy. It does require some sort of process to bring online after a failure, but you can place it in service quickly.

Caveats of asynchronous replication

Unlike clustering, asynchronous replication requires some human interaction to switch over to a copy after a failure. Clustering technologies use some sort of control technique to prevent split-brain situations in which two copies run actively and simultaneously. Most replication systems have no built-in way to do that. So, if you choose to implement replication, ensure that you plan accordingly.

Replication shares the main drawbacks of clustering: it requires duplicated hardware, special software, and expertise. It also does not protect against data corruption, including ransomware.

The Universal Fail-Safe – Backup

Out of all available disaster recovery and business continuity technologies, only backup is both sufficient on its own and necessary in all cases. You can safely operate an organization without any fault-tolerance or high-availability technologies, but you cannot responsibly omit a data backup and recovery service.

Please note that the following section contains many terms that you will need to know. The glossary contains all the definitions you’ll require.

Before you start shopping, ensure that you understand common backup terms:

  • Full backup – A complete, independent duplication of data that you can use to recover all data without any dependency on any other data.
  • Differential backup – An abbreviated backup that only captures data that changed since the most recent full backup. Usually operates at the file level.
  • Incremental backup – An abbreviated backup that only captures data that changed since the most recent backup of any kind. Usually operates at the file level (a sketch after this list illustrates how differential and incremental selections differ).
  • Media – Storage for backups. Intended as a catch-all word whether you save to solid state drives, magnetic disks, tapes, optical discs, or anything else.
  • Delta – In backup parlance, delta essentially means “difference”. Most backup vendors use it to mean a measurement of how a file or a block has changed since the last backup. You can reasonably expect the term “delta” to designate technology that operates below the file level.
  • Crash-consistent – A crash-consistent backup captures a system’s data at a precise point in time. It carries the name “crash-consistent” because, if you restore to such a backup, the system will act exactly as though it had crashed when the backup was taken. A crash-consistent backup does not protect any running processes, nor does it give them any opportunity to save active data. However, it captures all files exactly as they were at that moment.
  • Application-consistent – An application-consistent backup interacts with applications to give them an opportunity to save active data for the backup. All other data, including that of applications that the backup application cannot notify, will save in a crash-consistent state.
  • Restore – The act of retrieving data from a backup. Restoration can return data to a live system or to a test system. Most tools allow you to choose between complete and partial restores.
  • Rotation – Re-using backup media, usually by overwriting older backups. Some backup software has intricate rotation options.
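To make the full/differential/incremental distinction concrete, the following sketch selects files by modification time, one simplified way such jobs can decide what to capture (real products track changes more robustly); the path and timestamps are hypothetical:

```python
import time
from pathlib import Path

def files_changed_since(root: Path, since: float) -> list:
    """Files under root modified after the given epoch timestamp."""
    return [p for p in root.rglob("*") if p.is_file() and p.stat().st_mtime > since]

root = Path("/data")                 # hypothetical protected directory
last_full = time.time() - 7 * 86400  # the most recent full backup ran a week ago
last_any = time.time() - 86400       # the most recent backup of any kind ran yesterday

differential = files_changed_since(root, last_full)  # everything changed since the full
incremental = files_changed_since(root, last_any)    # only changes since the latest backup
print(f"differential captures {len(differential)} files, incremental {len(incremental)}")
```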

Not everyone agrees on the definitions of “crash-consistent” and “application-consistent”, and some vendors have introduced their own labels.

Ensure that you understand how any given vendor uses these terms when you study their products and talk to their representatives. Also have them explicitly define what they mean by “delta” in their solutions.

As you explore backup solution choices, you need to use the plan created by your business teams as a guideline. You want to try to satisfy all requirements for data protection and retention. Consider these critical components of data backup and recovery service technologies:

  • Backups must create a complete, standalone duplicate of data;
  • Backups must maintain multiple unique, non-interdependent copies of data;
  • Backups should complete within your allotted time frame;
  • Backups should provide application-consistent options;
  • Backups should work with the type of backup media that you want to use;
  • Backups should work with your cloud providers, both to protect your cloud resources and to back up to your cloud storage.

The above list only constitutes a bare minimum. Realistically, all backup vendors know that they need to hit these targets, so only a few will miss. Usually, those are the built-in free options or small hobbyist-style projects. You will find the greatest variances among the last two items.

Products will distinguish themselves greatly in operation and in optional features. You should avail yourself of trial software to experience these for yourself. Some things to look for:

Ease of operation (especially restores)

In a disaster, you cannot guarantee the availability of your most technically proficient staff, so your backup tool should not require them.

Speed of operations

Backup and restore operations need to complete in a reasonable amount of time. However, they cannot sacrifice vital functionality to achieve that. Most backup vendors utilize some sort of deduplication technology to reduce time and capacity needs, but you absolutely must have a sufficient number of non-interdependent copies of your data.

Retention lengths

Most backup applications allow an infinite number of backups – except in their free editions. If your organization won’t allow you to spend money on backup software, that might prevent you from meeting its retention requirements.

Support for the products that you use

As mentioned earlier in this article, very few backup applications know anything about line-of-business software. However, they should handle your operating systems and hypervisors. Some will have advanced capabilities that target common programs, such as mail and database servers.

If you choose a solution that does not natively handle your software, ensure that you know how to use it to perform a proper backup and restore.

Offsite support

Because you will use backup to protect against the loss of your primary business location, your backup tool needs to have some method that allows you to take backup data offsite. Traditionally, that meant some sort of portable media.

Today, that also means transmitting to an alternative location or a cloud provider.

Support for alternative hardware

After a disaster, you probably won’t have the luxury to restore data to the same physical hardware that it protected. Make sure that your backup application can target replacement equipment.

Technical support options

Hopefully, you’ll never need to call support for your backup product. However, you don’t know who might need to perform a restore. That task might fall to a person that will need help. You also need to consider future product updates and the possibility of bugs that need attention.

Ensure that you understand your backup provider’s support stance and process. Check public sites and forums for reviews by others, although remember that happy people rarely say anything, and angry people often exaggerate.

Look for complaints that highlight specific problems. If possible, try to talk to someone in support before purchase.

Consider data created by backup as a “cold” copy. You must take some action to transition the data from its backup location before you can use it in production. It usually has a much higher time distance from the failure point than replication.


Closing the Planning Phase

You have now seen all the basic concepts and have enough knowledge to tackle the planning phase of your disaster recovery strategy.

To properly protect your virtualization environment and all the data, use Hornetsecurity VM Backup to securely back up and replicate your virtual machine.

We ensure the security of your Microsoft 365 environment through our comprehensive 365 Total Protection Enterprise Backup and 365 Total Backup solutions.

For complete guidance, get our comprehensive Backup Bible, which serves as your indispensable resource containing invaluable information on backup and disaster recovery.

To keep up to date with the latest articles and practices, pay a visit to our Hornetsecurity blog now.

Conclusion

In conclusion, crafting an unassailable business case for backup is a multifaceted endeavor. By collaborating with business-focused experts, understanding available technology capabilities, and predicting costs, you pave the way for a robust business continuity strategy.

This process harmonizes the organization’s disaster recovery aspirations with its financial constraints, ensuring a comprehensive, cost-effective solution. With a well-founded plan, you can confidently safeguard your business against unforeseen disruptions.

FAQ

What are data backup and recovery services?

Data backup and recovery services ensure your data’s safety and availability in case of loss. They involve duplicating and archiving computer data to prevent data loss due to corruption or deletion.

What does a data recovery service do?

Data recovery service providers specialize in recovering lost data by understanding data storage and restoration techniques.

Is it safe to use a data recovery service?

Opting for data recovery software is a safer choice compared to physical recovery attempts. Hornetsecurity offers a comprehensive solution for your data backup and recovery needs.