To patch or not to patch, that is the question. No question about it: patch, and patch often. But wait, my application won’t work with that new .NET update! And so the struggle between security and operations continues. Many engineers will tell you that if it’s not broken, don’t patch it. This is a very dangerous practice, as more and more security breaches are designed to harvest data or resources without disrupting the network. Where attackers once set out to bring the network down, they now zero in on a company’s network resources and end users with the goal of data mining, and they can and will exploit security flaws if patching is not done in a timely manner. Patching is extremely important and, if neglected, puts both your systems and your clients’ systems at risk. This post explains the best practices so you can choose the right patch deployment method for your specific requirements.

Manual Deployment

The first method is the old-school way: manually deploying patches. Many small managed service providers continue to use this method today, mainly because of the low barrier to entry: it has very little upfront cost. Applying patches this way requires accessing each endpoint and conducting updates as needed, which makes it very labor-intensive and slow. That can work for a small managed services provider, but it is not a method that allows a larger provider to scale to a larger size or market. With that said, even large managed services providers often keep their “troublesome servers” within this patching strategy, primarily because custom applications and other dependencies make patching those systems a challenge. Another issue with this method is that it is pretty much “patch and pray”: you are relying on the vendor to release patches, and on those patches not bringing the network down. Testing patches yourself is rarely cost-effective, given the many different system configurations involved and the requirement of keeping resources available for that testing. Combine the labor spent on testing with the labor spent implementing the patches, and things become cost-prohibitive for any managed services provider with more than just a handful of clients. Most of your hard cost of goods will be in labor, and it will increase steadily as the endpoint base grows.


Pros:
  1. Low to no cost for system
  2. Very customizable to accommodate specific client needs


Cons:
  1. Labor-intensive and requires a larger workforce
  2. Time-consuming when deploying patches
  3. Less scalable, with limited potential for growth
  4. More risk of applying a “bad” patch
  5. Decentralized management requiring additional resources
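The cost curve behind these cons can be made concrete with a toy model. This is only a sketch: the minutes-per-endpoint and hourly-rate figures are illustrative assumptions, not industry benchmarks.

```python
# Toy labor-cost model for manual patch deployment. The figures
# (20 minutes per endpoint per cycle, $75/hour) are illustrative
# assumptions, not industry data.

def monthly_labor_cost(endpoints: int,
                       minutes_per_endpoint: float = 20.0,
                       hourly_rate: float = 75.0) -> float:
    """Estimate monthly labor cost when every endpoint is patched by hand."""
    return endpoints * minutes_per_endpoint * hourly_rate / 60.0

# Labor cost scales linearly with the endpoint base: ten times the
# endpoints means ten times the bill, with no economy of scale.
for n in (100, 500, 1000):
    print(f"{n:>4} endpoints: ${monthly_labor_cost(n):>9,.2f}/month")
```

Running the loop shows the linear growth that makes manual deployment cost-prohibitive beyond a handful of clients.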

Third-Party Patching Systems

The second method is to install a third-party patching system that automates the installation of patches. Several third-party patching systems exist, such as Microsoft SCCM, GFI LanGuard, and Kaseya. However, this is a decentralized management method, since the system is administered in each client’s environment without a single pane of glass to manage multiple clients. Testing is still required to ensure the proper patches are installed, but this method does eliminate the labor of applying patches individually.
Many vendors use a tiered pricing structure as additional third-party patching systems are deployed, so each new client can bring a significant increase in price. While the software itself is often low-cost, these systems require network resources at each client’s location, plus a labor force to implement, administer, and remediate.


Pros:
  1. Decreased patch deployment time through scripted patch delivery
  2. One-time cost of software and hardware


Cons:
  1. Decentralized management requiring additional resources
  2. No prior testing of patches before deployment
  3. Requires network resources at each client’s location
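The “scripted patch delivery” these systems provide can be sketched as a loop over one client site’s endpoints. This is a minimal illustration: the endpoint names, the example patch ID, and the `apply_patch` stub are hypothetical stand-ins for a real product’s deployment agent.

```python
# Minimal sketch of scripted patch delivery at a single client site.
# apply_patch is a hypothetical stub; a real third-party system such
# as SCCM would push the patch through its own deployment agent.

def apply_patch(endpoint: str, patch_id: str) -> bool:
    """Stub standing in for the vendor agent's deployment call."""
    print(f"Applying {patch_id} to {endpoint}")
    return True  # assume success in this sketch

def deploy_to_site(endpoints: list[str], patch_id: str) -> dict[str, bool]:
    """Push one patch to every endpoint at one client site.

    Note the decentralized model: this runs inside each client's own
    environment, with no single pane of glass across clients.
    """
    return {ep: apply_patch(ep, patch_id) for ep in endpoints}

# Example run against three hypothetical endpoints:
results = deploy_to_site(["srv-01", "ws-14", "ws-15"], "KB5012345")
failed = [ep for ep, ok in results.items() if not ok]
print(f"{len(results) - len(failed)} succeeded, {len(failed)} failed")
```

The per-site loop is exactly why this method stays decentralized: the script (or agent) must be installed and administered separately in every client environment.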

Cloud-Based Patch Deployment System

The last method is to deploy a cloud-based patch deployment system. Many current MSPs offer remote monitoring and management (RMM) platforms, such as Labtech, Continuum, and Kaseya, as part of their bundle of services. Some vendors, such as Kaseya, just provide centralized delivery of the patches, while other vendors, such as Continuum, add value by performing quality testing of the patches prior to implementation. This centralized method allows an MSP to manage patch deployment policies for multiple customers. An added benefit is maximized growth with less labor, since the function is driven from one pane of glass and is well automated. The burden of testing patches on a massive scale shifts to the cloud-based vendor and away from the managed services provider: patches are deployed to the endpoints only after they pass the servicer’s quality assurance testing, which reduces the labor force the provider needs to maintain this service offering.
Like third-party patching systems, many cloud-based patch deployment systems are priced in tiers, but here overall costs normally decrease as the managed services provider enters higher tiers. In short, as the managed service provider grows and scales, the cost of this service decreases, so this method generally rewards the provider with lower costs while maximizing the use of its system.
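Tiered pricing of this kind can be sketched as a simple lookup table. The tier breakpoints and per-endpoint prices below are made-up assumptions for illustration, not any vendor’s actual price list.

```python
# Hypothetical price tiers: (minimum endpoint count, price per endpoint
# per month). The numbers are illustrative, not real vendor pricing.
TIERS = [
    (0,    3.00),
    (500,  2.50),
    (2000, 2.00),
    (5000, 1.50),
]

def per_endpoint_price(endpoints: int) -> float:
    """Return the per-endpoint monthly price for the tier the count falls in."""
    price = TIERS[0][1]
    for minimum, tier_price in TIERS:
        if endpoints >= minimum:
            price = tier_price
    return price

def monthly_cost(endpoints: int) -> float:
    return endpoints * per_endpoint_price(endpoints)

# Unit cost drops as the MSP grows into higher tiers:
for n in (100, 1000, 6000):
    print(f"{n:>4} endpoints -> ${per_endpoint_price(n):.2f}/endpoint/month")
```

This is the inverse of the manual model: the per-endpoint price falls as the endpoint base grows, which is what makes the cloud-based method favor scale.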


Pros:
  1. Centralized management of patch deployment
  2. Many services have patch testing prior to deployment
  3. Low labor cost as a smaller workforce is needed
  4. Less time consumed in testing and deployment


Cons:
  1. Higher cost, normally as a recurring monthly expense
  2. Less customization for specific client needs

Which Patch Deployment Method Should You Choose?

Deciding on the right patch deployment method for managed service providers is a critical choice that hinges on various factors, including cost, scale, labor intensity, and specific client needs. Here’s a brief rundown to guide your decision:
Manual Deployment:
  • Best for: Small MSPs or those just starting out due to low upfront costs and high customizability.
  • Considerations: Labor-intensive and time-consuming, suitable for limited-scale operations or those with specific, troublesome servers needing careful patching.
Third-Party Patching Systems:
  • Best for: MSPs looking to automate patch installation while maintaining some control over the process.
  • Considerations: Requires network resources and a labor force for administration. Decentralized management might complicate things, and there’s an inherent risk of untested patches causing issues.
Cloud-Based Patch Deployment System:
  • Best for: Larger, growing MSPs aiming for centralized management and reduced labor costs.
  • Considerations: Generally carries higher costs, typically as a recurring monthly expense, but these are offset by the scalability and efficiency it offers. Pre-testing of patches by the vendor adds a layer of security and reliability.
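The rundown above can be condensed into a rough decision helper. This is a sketch only: the 200-endpoint threshold and the yes/no inputs are illustrative judgment calls, not hard rules.

```python
# Rough decision helper distilled from the rundown above. The endpoint
# threshold and the boolean inputs are illustrative assumptions.

def suggest_method(endpoints: int,
                   needs_heavy_customization: bool,
                   has_central_tooling_budget: bool) -> str:
    """Map the decision factors to one of the three deployment methods."""
    if endpoints < 200 or needs_heavy_customization:
        return "manual deployment"
    if not has_central_tooling_budget:
        return "third-party patching system"
    return "cloud-based patch deployment"

print(suggest_method(50, False, False))    # small MSP just starting out
print(suggest_method(1500, False, False))  # growing MSP, limited budget
print(suggest_method(5000, False, True))   # large MSP aiming to scale
```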
To keep up to date with the latest articles and practices, pay a visit to our Hornetsecurity blog now.


In summary, your choice should align with your MSP’s size, growth trajectory, and the specific needs of your clients. Small providers might lean towards manual deployment for its low cost and customization, but as the business grows, the scalability and reduced labor intensity of cloud-based systems might become more appealing. Consider the trade-off between upfront costs, ongoing expenses, and the labor required for each method against your ability to manage and scale your operations effectively. Ultimately, the right choice balances cost, efficiency, and risk management to support your business’s growth and service quality.


How can MSPs ensure minimal disruption when applying patches?

MSPs can ensure minimal disruption by scheduling patch deployment during off-peak hours, conducting thorough testing in a controlled environment before rolling out patches, and utilizing rollback plans in case of failures. Cloud-based systems that offer pre-tested patches also reduce the risk of disruption.
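Off-peak scheduling can be sketched as computing the next maintenance slot after a given moment. This is a minimal example; the choice of Sunday 02:00 as the window is an illustrative assumption, not a recommendation for every environment.

```python
# Sketch of off-peak scheduling: find the next Sunday 02:00 local time
# strictly after a given moment. Sunday 02:00 is an assumed window.
from datetime import datetime, timedelta

def next_maintenance_window(now: datetime) -> datetime:
    """Return the next Sunday at 02:00 after `now`."""
    candidate = now.replace(hour=2, minute=0, second=0, microsecond=0)
    days_ahead = (6 - candidate.weekday()) % 7  # Monday=0 ... Sunday=6
    candidate += timedelta(days=days_ahead)
    if candidate <= now:
        candidate += timedelta(days=7)  # already past this week's window
    return candidate

# A Wednesday afternoon maps to the coming Sunday's window:
print(next_maintenance_window(datetime(2024, 5, 15, 14, 30)))
```

A real scheduler would also honor per-client blackout dates and time zones; this only shows the core window calculation.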

What criteria should MSPs consider when choosing a patch deployment method?

MSPs should consider the size of their client base, the complexity of their clients’ environments, the scalability of the solution, labor and resource availability, cost implications, and the specific needs for customization and control. Additionally, evaluating the security and compliance requirements of their clients can guide the choice.

How do MSPs handle custom applications that may not be compatible with standard patches?

For custom applications, MSPs often use manual deployment methods to apply patches selectively and avoid compatibility issues. They may also work closely with software vendors to obtain custom patches or updates and perform extensive testing in isolated environments to ensure compatibility before deployment.