Article 2 of 5

Applied to Three Real Michigan SMB Profiles

From Starter Math to Professional Risk Intelligence

In our companion article, we introduced Laplace’s Rule of Succession as an accessible entry-level tool for quantifying cyber risk. We showed that even eighteen years of clean history produces a 5% annual breach probability for a law firm, that a single incident doubles it to 10%, and that two breaches in five years pushes the number to 42.9%.

Those numbers are honest. But they are incomplete.

Laplace looks only backward. It knows nothing about your industry, your financial exposure, or what your security controls are doing to your risk right now. For the business owners making real technology budget decisions, and for the companies whose operations genuinely cannot afford a breach, three more sophisticated tools produce far more actionable intelligence.

This article applies all three to the same company profiles introduced previously:

  • Company A: Law Firm: 18 years in operation, zero breaches on record.
  • Company B: Medical Practice: 18 years in operation, one breach five years ago.
  • Company C: Accounting Firm: 5 years in operation, two prior breaches. Currently running partial MFA on some systems with no endpoint detection (EDR).

All three companies operate at approximately $5 million in annual revenue. The simulations below were run across 100,000 modeled scenarios using industry-calibrated data.

This is Part 2 of our Cyber Risk Intelligence Series.
If you haven’t read Part 1, start here:
👉 Laplace’s Rule of Succession: An Introduction to Calculating Cyber Breach Probability

Model One: Beta Distribution, Adding Industry Context to Your Own History

What Beta Distribution Does Differently

Laplace’s formula only knows your company’s own history. Beta Distribution modeling asks a more complete question: given what we know about how often businesses like yours get breached, same size, same industry, same regulatory environment, what should we actually expect your annual risk to be?

It accomplishes this by combining two data sources into a single probability estimate:

  • Your company’s own breach history (the same s and n values used by Laplace).
  • An industry prior, the observed breach rate across hundreds of similar businesses, expressed as a starting belief before your specific data is considered.

The result is a posterior estimate, a probability that reflects both who you are as a company and what the world looks like for businesses operating in your space. It also produces a 95% credible interval, which tells you the realistic range your true annual risk likely falls within.

The Industry Priors Used in This Analysis

The following industry breach rates were used to construct the prior distributions. These figures are derived from published SMB breach data, including IBM’s annual Cost of a Data Breach Report, the Verizon DBIR, and sector-specific regulatory incident disclosures:

  • Legal (Law Firms): approximately 20% annual breach rate for SMB practices. The ABA’s annual legal technology survey consistently shows that roughly one in four small firms reports a security incident.
  • Healthcare (Medical Practices): approximately 30% annual breach rate. Healthcare has led all industries in breach frequency for fourteen consecutive years in published research.
  • Accounting / Financial Services: approximately 25% annual breach rate. The combination of tax data, client financial records, and IRS credentials makes accounting firms a premium target.

Each prior was modeled as equivalent to observing 200 comparable firm-years of data. This means your company’s 18 or 5 years of history is combined with that broader reference class to produce a single, unified estimate.
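
For readers who want to see the mechanics, the short Python sketch below reproduces this style of update. It assumes the prior is encoded as a Beta distribution carrying 200 pseudo firm-years (alpha = industry rate × 200, beta = the remainder) and that each company’s own record is then added as breach years and clean years; the exact implementation behind the published figures may differ slightly.

# Illustrative Beta-Binomial update. The 200 firm-year prior weight comes from the
# article; the specific encoding below is an assumption made for demonstration.
from scipy import stats

def beta_posterior(industry_rate, breaches, years, prior_weight=200):
    alpha = industry_rate * prior_weight + breaches                   # prior + observed breach years
    beta = (1 - industry_rate) * prior_weight + (years - breaches)    # prior + observed clean years
    posterior = stats.beta(alpha, beta)
    low, high = posterior.interval(0.95)                              # 95% credible interval
    return posterior.mean(), low, high

print(beta_posterior(0.20, 0, 18))   # Company A: ~18.3%, roughly 13.5% - 23.7%
print(beta_posterior(0.30, 1, 18))   # Company B: ~28.0%
print(beta_posterior(0.25, 2, 5))    # Company C: ~25.4%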

Beta Distribution Results

Company Profile | Industry Rate | Laplace | Beta Estimate | 95% Credible Interval
Company A: Law Firm | 20.0% | 5.0% | 18.3% | 13.5% – 23.7%
Company B: Medical Practice | 30.0% | 10.0% | 28.0% | 22.2% – 34.1%
Company C: Accounting Firm | 25.0% | 42.9% | 25.4% | 19.7% – 31.5%

The numbers above contain one of the most important insights in this entire series. Look at Company A.

Eighteen years of perfect performance produced a Laplace estimate of 5%. That felt reassuring. The Beta model, incorporating the legal industry’s 20% breach rate across hundreds of similar firms, produces an estimate of 18.3%. The 95% credible interval runs from 13.5% to 23.7%. None of those numbers are zero. None of them are even close to 5%.

CRITICAL INSIGHT: Company A’s clean record is real and meaningful. But it exists inside an industry where one in five similar firms is breached in a given year. The Beta model says: your individual record is evidence, but it does not override what we know about the environment you operate in.

Company B tells an equally important story from the opposite direction. One breach five years ago pushed the Laplace estimate to 10%. But in healthcare, the most persistently targeted industry in published breach research, the Beta model corrects that figure to 28%. A medical practice with a single historical incident is not a 10% risk profile. It is a 28% risk profile operating in an industry where nearly one in three similar organizations reports an incident each year.

Company C produces the most counterintuitive result. Two breaches in five years gave a Laplace estimate of 42.9%, alarming by any measure. The Beta model actually pulls that number down to 25.4%. This happens because five years is a short observation window, and the accounting industry’s 25% baseline is meaningfully lower than Company C’s personal incident rate. The model asks: is this company genuinely twice as dangerous as its industry peers, or does a short, unlucky history overstate the underlying risk? The Beta model hedges toward the industry mean until more data accumulates.

That does not mean Company C is safe. A 25.4% annual probability of a breach is serious. But it is a more precise, defensible estimate than 42.9%, and it is the kind of number that can anchor an honest conversation about investment priorities.

Model Two: Monte Carlo Simulation, What a Breach Actually Costs

Turning Probability into Financial Reality

Beta Distribution tells you how likely a breach is. Monte Carlo simulation answers the question that actually drives budget decisions: how much could it cost us?

A Monte Carlo simulation runs thousands of hypothetical one-year scenarios for each company; this analysis used 100,000. In each scenario, the model independently determines whether a breach occurs (based on the Beta posterior probability) and, if it does, draws a randomized breach cost from a log-normal distribution calibrated to the company’s industry and size.

The result is not a single scary number. It is a full picture of the range of outcomes a business might realistically face, from the most likely outcome to the worst-case tail scenarios that a prudent owner should plan for.

How Breach Costs Were Modeled

For $5 million revenue companies, breach cost distributions were calibrated separately by industry to reflect real differences in regulatory exposure and liability: 

  • Law Firms: Median breach cost of $135,000, with a heavy upper tail reflecting bar association investigations, client notification obligations under Michigan’s data breach notification statute, and potential malpractice exposure tied to client confidentiality failures.
  • Medical Practices: Median breach cost of $185,000, elevated by HIPAA civil penalty exposure, HHS Office for Civil Rights investigations, and the higher cost of notifying patients under federal breach rules. Healthcare consistently shows the highest per-record breach cost in published research.
  • Accounting Firms: Median breach cost of $155,000, reflecting IRS e-filing credential compromise, FTC Safeguards Rule requirements, and client financial record notification obligations.

All three distributions use a log-normal model, which reflects the reality that most breaches are expensive but manageable, while a small percentage become catastrophically costly due to litigation, regulatory action, or extended business interruption.
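
A stripped-down version of that simulation is sketched below in Python. The log-normal dispersion parameter (sigma) is an assumption chosen so the output lands near the published Company A figures; the actual analysis used industry-calibrated parameters for each profile, which may differ.

import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                    # simulated one-year scenarios

def simulate_year(breach_prob, median_cost, sigma=1.05):
    # Does a breach occur this year? (driven by the Beta posterior probability)
    breach = rng.random(N) < breach_prob
    # If so, draw a cost from a log-normal centered on the median breach cost.
    # sigma = 1.05 is an assumed dispersion, not the calibrated value.
    costs = rng.lognormal(np.log(median_cost), sigma, N)
    losses = np.where(breach, costs, 0.0)
    return {
        "expected_annual_loss": losses.mean(),
        "p90_breach_cost": np.percentile(costs, 90),
        "prob_loss_over_100k": (losses > 100_000).mean(),
    }

print(simulate_year(0.183, 135_000))   # Company A profile: roughly $43k / $519k / 11%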

Monte Carlo Results (100,000 Scenarios, $5M Revenue)

Company | Expected Annual Loss | Median Breach Cost | 90th Pct Breach Cost | Prob. of Loss > $100k | Expected 3-Year Loss
Company A: Law Firm | $43,344 | $137,122 | $518,488 | 11.3% | $128,462
Company B: Medical Practice | $100,538 | $185,118 | $814,101 | 19.7% | $295,457
Company C: Accounting Firm | $68,505 | $156,760 | $604,425 | 16.7% | $205,101

These numbers deserve to be read slowly.

Company A: Law Firm

On an expected value basis, the law firm faces $43,344 in average annual loss, a number that seems manageable against $5 million in revenue. But expected value is not the right frame for a risk that could threaten the business. The 90th-percentile scenario results in a breach costing $518,488. That is more than 10% of annual revenue, from a single incident. Over three years, the model projects a 10% chance of cumulative losses exceeding $379,322.

More telling: there is an 11.3% probability that this firm faces a loss exceeding $100,000 in the next 12 months. For a small practice without reserves or cyber insurance, that is not a theoretical risk. It is a material financial event.

Company B: Medical Practice

The medical practice carries the heaviest expected loss of the three: $100,538 per year, on average. With a 28% annual breach probability and elevated costs tied to HIPAA and federal notification requirements, this profile generates a nearly one-in-five chance (19.7%) of a loss exceeding $100,000 in any given year. The 90th percentile breach scenario reaches $814,101. Over three years, the expected cumulative loss is $295,457, and the 90th percentile three-year scenario approaches $800,000.

For a $5 million medical practice, a worst-case breach scenario approaching $800,000 over three years represents a genuine existential threat. These are not theoretical numbers. They are derived from the same kind of Monte Carlo framework that insurance actuaries use to set premiums.

Company C: Accounting Firm

Company C shows an expected annual loss of $68,505, but with a 90th percentile breach cost of $604,425 and a 16.7% chance of a six-figure loss this year, the exposure is acute. Over three years, a 10% probability of losses exceeding $567,477, against a backdrop of two prior incidents and only partial security controls in place, represents a risk posture that demands immediate attention.

THE BUDGET ARGUMENT: Cyber insurance for a $5M accounting firm typically costs between $3,500 and $8,000 per year. The Monte Carlo model shows an expected annual loss of $68,505 and a 90th percentile exposure of $604,425. The math for coverage is not complicated. 

It is also worth noting what Monte Carlo does that Laplace and Beta Distribution cannot: it translates probability into dollar ranges that a CFO or a board member can evaluate directly. “A 25.4% annual breach probability” requires translation. “A 10% chance of a $604,000 loss this year” does not.
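
The three-year figures follow the same logic: run the one-year simulation independently for each of the three years and sum the losses. A self-contained sketch, using the same assumed dispersion as above:

import numpy as np

rng = np.random.default_rng(7)
N, YEARS = 100_000, 3

def three_year_losses(breach_prob, median_cost, sigma=1.05):
    # Sum three independent simulated years; sigma is an assumed dispersion.
    total = np.zeros(N)
    for _ in range(YEARS):
        breach = rng.random(N) < breach_prob
        costs = rng.lognormal(np.log(median_cost), sigma, N)
        total += np.where(breach, costs, 0.0)
    return total

losses = three_year_losses(0.183, 135_000)    # Company A profile
print(losses.mean())                          # expected three-year loss, roughly $128k
print(np.percentile(losses, 90))              # 90th percentile three-year exposure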

Model Three: Bayesian Networks, Real-Time Risk Intelligence

The Problem All Other Models Share

Laplace, Beta Distribution, and even Monte Carlo share one fundamental limitation: they are all anchored to history. They incorporate past breach counts, industry prior data, and historical cost distributions. None of them knows what happened last Tuesday. 

Bayesian Networks address this directly. They treat risk as a living probability that updates in real time as new signals arrive, both reassuring signals (months of clean performance, newly deployed controls) and warning signals (detected phishing attempts, failed logins, near-miss incidents that were contained before causing damage). 

For this analysis, we apply the Bayesian framework specifically to Company C, the accounting firm with the most urgent risk profile: 5 years, 2 prior breaches, partial MFA deployment, and no endpoint detection (EDR) in place.

Decomposing Risk by Attack Vector

Rather than treating the breach probability as a single black-box number, Bayesian modeling first breaks risk down by how attacks actually occur. Industry data identifies three primary attack categories for SMB accounting firms:

  • Credential-based attacks (phishing, password spraying): responsible for approximately 35% of successful SMB breaches. These are the attacks that MFA is specifically designed to block.
  • Endpoint malware attacks: responsible for approximately 45% of successful SMB breaches. These are the attacks that EDR is specifically designed to detect and contain.
  • Other vectors (supply chain compromise, insider threat, social engineering): responsible for the remaining 20%. General security hardening provides some protection here, but no single control eliminates this category.

With Company C’s partial MFA (covering approximately 50% of accounts) and no EDR in place, the model calculates how each attack pathway contributes to the overall residual risk. Full MFA at 100% account coverage and EDR deployment are then modeled as a contrast scenario.

Attack Vector | Weight | Partial MFA / No EDR | Full MFA + EDR | Control Notes
Credential/Phishing | 35% | 9.61% | 4.20% | 72% eff. on covered accounts
Endpoint Malware | 45% | 19.30% | 6.76% | 65% eff. with EDR deployed
Other Vectors | 20% | 8.58% | 7.72% | 10% general hardening benefit
TOTAL RESIDUAL RISK | | 37.5% | 18.7%

The attack vector breakdown exposes where Company C’s security gap is largest. Without EDR, endpoint malware attacks contribute 19.30% of the 37.5% total residual risk; more than half of the company’s entire breach probability comes from a single, addressable gap. Full EDR deployment alone would reduce that contribution from 19.30% to 6.76%, cutting total residual risk by roughly a third.
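
The arithmetic behind that table is simple enough to check directly. The Python sketch below reproduces both scenarios from the weights and control-effectiveness figures stated above; treating the 10% general-hardening benefit as applying only in the full-control scenario is an assumption made for this illustration.

BASELINE = 0.429   # Company C's Laplace baseline annual breach probability

# Share of successful SMB breaches by attack vector (from the article)
WEIGHTS = {"credential": 0.35, "endpoint": 0.45, "other": 0.20}

def residual_risk(mfa_coverage, edr_deployed, general_hardening=0.0):
    # MFA: 72% effective on covered accounts; EDR: 65% effective against endpoint malware
    credential = BASELINE * WEIGHTS["credential"] * (1 - 0.72 * mfa_coverage)
    endpoint = BASELINE * WEIGHTS["endpoint"] * (1 - (0.65 if edr_deployed else 0.0))
    other = BASELINE * WEIGHTS["other"] * (1 - general_hardening)
    return credential + endpoint + other

print(residual_risk(0.5, False))         # partial MFA, no EDR          -> ~0.375
print(residual_risk(1.0, True, 0.10))    # full MFA + EDR + hardening   -> ~0.187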

How Bayesian Updating Works

Once the baseline residual risk is established, a Bayesian Network updates it based on signals, observable events that serve as evidence about the current threat environment. The mathematical rule governing this update is Bayes’ Theorem:

P(breach | signal) = P(signal | breach environment) × P(breach) / P(signal) 

In plain language: if a particular warning signal is much more likely to occur when you are in a high-threat environment than a low-threat one, then observing that signal should significantly raise your assessed probability of a breach.

For the near-miss signal modeled below, the parameters are:

  • Probability of a detected phishing attempt in a high-threat environment: 88%
  • Probability of a detected phishing attempt in a low-threat environment: 22%

These asymmetric likelihoods produce a substantial Bayesian update when a near-miss is detected.
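
Applied to Company C, the update can be worked through in a few lines. The sketch below treats the 37.5% residual risk as the prior probability of operating in a high-threat environment and applies the two likelihoods above; a production Bayesian Network propagates many such signals across many nodes, but the core arithmetic is the same.

def bayes_update(prior, p_signal_given_high, p_signal_given_low):
    # Posterior probability of a high-threat environment after observing the signal
    numerator = p_signal_given_high * prior
    evidence = numerator + p_signal_given_low * (1 - prior)
    return numerator / evidence

# Company C at 37.5% residual risk, then a phishing attempt is detected
print(bayes_update(0.375, 0.88, 0.22))   # ~0.706, i.e., 70.6%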

Bayesian Signal Table, Company C (Accounting Firm)

Signal State / Condition | Breach Probability | Change vs. Partial-Control State
Laplace baseline (no model adjustment) | 42.9% |
Adjusted for partial MFA / no EDR | 37.5% | -5.4 pts
+ Detected phishing attempt (near-miss signal) | 70.6% | +33.1 pts
If full MFA + EDR deployed (no near-miss) | 18.7% | -18.8 pts
Full controls + clean 12-month signal | 10.3% | -27.2 pts

The table above is where the Bayesian model delivers its most important message.

Company C’s current security controls, partial MFA, no EDR, have reduced their residual risk modestly from the 42.9% Laplace baseline to 37.5%. That improvement is real, even if the Laplace formula cannot see it yet. Controls are working. Just not enough of them.

WARNING SIGNAL: A single detected phishing attempt, the kind of event most businesses log and forget, updates Company C’s Bayesian breach probability from 37.5% to 70.6%. A near-miss is not just a close call. Statistically, it is evidence that you are actively being targeted.

This is the insight that separates reactive security from proactive security. A detected phishing attempt is not merely an inconvenience that was neutralized. It is a data point. A Bayesian system treats it as evidence that the threat environment has shifted, and it updates the risk estimate accordingly. Organizations that ignore near-misses are discarding the most valuable real-time signal available to them.

The final two rows of the table tell the other side of the story. Full MFA deployment across all accounts combined with EDR brings Company C’s residual risk down to 18.7%. That is still not zero, no model should ever promise zero, but it represents a 24.2 percentage point reduction from the Laplace baseline and a meaningful 18.8 point drop from where the company stands today.

And after twelve months of clean performance with those full controls active, the Bayesian model produces an estimate of 10.3%, within striking distance of the 5% where Company A, with its eighteen-year clean record, sits under Laplace alone. Strong controls, consistently maintained, can close a significant portion of the statistical gap between a troubled history and a clean one. They cannot do it overnight. But they can do it far faster than waiting for decades of clean data to slowly improve a Laplace score.

THE CREDIT SCORE PARALLEL: Under Laplace, Company C needs 53 years of perfect performance to return to a 5% risk level. Under the Bayesian model with full controls deployed, the same company reaches 10.3% after just 12 clean months. Think of it like rebuilding a credit score: you cannot erase the history, but the right moves start improving the number right away. Deploying the right security controls is making those right moves.

Summary: Three Models, One Conclusion

What the Numbers Say, Together

Each of the three models in this analysis answers a different question. Together, they form a complete picture of cyber risk that no single formula can provide.

  • Beta Distribution answers: what is your realistic annual breach probability when your own history is placed in the context of your industry? For the law firm, it corrects an overconfident 5% to 18.3%. For the medical practice, it corrects an underestimated 10% to 28.0%. For the accounting firm, it moderates an inflated 42.9% to a more precise 25.4%.
  • Monte Carlo Simulation answers: what does that probability mean in dollars? It converts probability into financial exposure ranges that board members, CFOs, and insurance buyers can act on. A 25.4% breach probability sounds abstract. A 16.7% chance of a $100,000-plus loss this year and a 90th percentile three-year exposure of $567,000 does not.
  • Bayesian Networks answer: what is my risk right now, today, given what my systems are detecting? They update in real time. They reward investment in controls immediately, before historical statistics can catch up. And they convert near-miss events from nuisances into actionable intelligence.

None of these tools requires a PhD to use. What they require is a partner who can apply them to your specific situation, your industry, your history, and your current security posture, and translate the output into decisions you can make next quarter.

 THE BOTTOM LINE: The companies that will avoid being a statistic are not the ones who got lucky. They are the ones who took the math seriously, invested before the breach, and built security programs that update faster than the threat landscape can evolve. 

CONSIDER THE STAKES: Company C entered this analysis at 42.9% under Laplace. Without intervention, a third breach within that same five-year window would push the Laplace estimate to 57.14%, (3+1)/(5+2), past the coin-flip threshold where a breach in any given year is more likely than not. The models in this article exist precisely to interrupt that trajectory before it reaches that point.

Frequently Asked Questions

Why is my Beta Distribution estimate so much higher than my Laplace number?

Laplace only knows your company’s own history. If you have had zero breaches, it produces a low number because it is only counting your clean years. Beta Distribution adds a second layer of information: how often businesses in your industry get breached, regardless of their individual track record. For a law firm operating in an industry with a 20% annual breach rate, the Beta model is essentially saying, your clean record is meaningful, but the world you operate in is riskier than your personal history suggests. The higher number is not a penalty. It is a more complete picture.

If Beta Distribution lowers Company C’s risk from 42.9% to 25.4%, does that mean they are actually safer than Laplace suggested?

In a narrow statistical sense, yes, the Beta model believes Company C’s true underlying risk is closer to the industry average than its short, unlucky history implies. A five-year window with two incidents is a small sample, and the model hedges toward what we know about the broader accounting industry rather than treating those five years as a definitive verdict. That said, 25.4% is still a serious annual breach probability. The correction does not make the risk comfortable. It makes it more accurate, which is what you need for honest budget planning.

What does “expected annual loss” actually mean? Am I guaranteed to lose that amount?

No. Expected annual loss is a probability-weighted average across all 100,000 simulated scenarios, including the majority of years where no breach occurs at all. Think of it like this: if a breach has a 25% chance of occurring and typically costs $155,000 when it does, the expected annual loss is approximately $38,750, even though in any given year you either lose nothing or lose a much larger amount. The expected value is most useful for multi-year planning and insurance decisions. For understanding worst-case exposure, focus on the 90th percentile figures, which reflect what a bad year actually looks like.

Our company has never had a breach. Why does the Monte Carlo model still show a significant chance of a six-figure loss?

Because probability does not care about your streak. A law firm with an 18.3% annual breach probability, even one with eighteen clean years on record, has roughly a one-in-five chance of experiencing a breach in any given year. When that breach occurs, the median cost runs over $135,000, and the 90th percentile scenario exceeds $500,000. The Monte Carlo model is not predicting that you will be breached. It is showing you what the financial consequences look like if you are, and asking whether your business is prepared to absorb them.

We just had a phishing attempt that our filters caught. Should we really be worried?

Yes, not because the attempt succeeded, but because of what it signals. A detected phishing attempt is evidence that someone is actively targeting your organization. In the Bayesian model applied to Company C, a single detected attempt updates the annual breach probability from 37.5% to 70.6%. That jump reflects a statistical reality: phishing attempts are far more common in high-threat environments than low-threat ones. When your filters catch one, they have done their job, but the alert should trigger a review of your current controls, not a collective sigh of relief.

What is the difference between deploying MFA and deploying EDR, and which should we prioritize first?

MFA (Multi-Factor Authentication) protects against credential-based attacks: phishing, password spraying, and stolen login credentials. EDR (Endpoint Detection and Response) protects against malware that lands on a device and attempts to spread, exfiltrate data, or deploy ransomware. For Company C, endpoint malware attacks represent 45% of the breach pathway, the single largest attack vector, and without EDR, those attacks face no meaningful detection layer. If you already have partial MFA in place, deploying EDR typically delivers a larger marginal risk reduction for most SMBs. The attack vector table in this article shows the exact numbers for Company C’s specific profile.

Can we use these models to negotiate our cyber insurance premiums?

Absolutely, and more insurers are moving in this direction. A documented Beta Distribution analysis demonstrating your risk profile relative to industry peers, combined with a Monte Carlo exposure range and evidence of deployed controls, gives an underwriter far more to work with than a standard questionnaire. Insurers price premiums based on expected loss, the same concept that the Monte Carlo model calculates. Coming to the table with your own modeled exposure and documentation showing how specific controls reduce that exposure puts you in a stronger negotiating position than the vast majority of SMB applicants.

How often should a business update its Bayesian risk model?

A true Bayesian model updates continuously as new signals arrive, that is its core advantage. At a practical level for most SMBs, a formal review should occur at four trigger points: after any security incident or near-miss, after deploying new controls (MFA, EDR, new firewall), after a significant change in business operations such as adding remote employees or a new software platform, and at minimum once per year as part of an annual security review. The goal is to ensure your risk estimate reflects your current reality, not a snapshot from twelve months ago.

Are these models only useful for companies that have already been breached?

Not at all, in fact, they are arguably most valuable for companies with clean records. Laplace already tells breach survivors that their risk is elevated. The Beta Distribution, Monte Carlo, and Bayesian models deliver their sharpest insights to the business owner who has never been hit and therefore believes their risk is low. Company A, the law firm with eighteen clean years, is the clearest example: a Laplace estimate of 5% felt reassuring right up until Beta Distribution placed it at 18.3% and Monte Carlo produced a 90th percentile single-incident cost of $518,000. Clean records are not a reason to avoid rigorous risk modeling. They are a reason to use it before the record changes.

How do I know if my industry’s breach rate actually applies to a company as small as mine?

It is a fair question, and the honest answer is that industry rates are averages that will not fit every company perfectly. Smaller firms often face lower absolute attack volumes than large enterprises but typically carry weaker defenses, making their per-attack success rate higher. The industry priors used in this analysis were calibrated specifically to SMB data, not enterprise breach statistics, which would overstate the financial exposure for a $5 million firm. If your business has characteristics that meaningfully differ from the industry average, a particularly strong security posture, a niche client base with lower data sensitivity, or conversely a role in critical infrastructure, those factors can and should be incorporated into a customized analysis. That is exactly the kind of work Cyber Protect LLC provides.

Advanced Risk Modeling Glossary

This article introduces technical vocabulary specific to Beta Distribution, Monte Carlo, and Bayesian modeling. The definitions below are written for business owners, not statisticians. A few foundational terms from the Laplace article are recapped for convenience alongside the new concepts introduced in this installment.

Statistical & Modeling Concepts

Prior Distribution (Industry Prior)

A starting belief about the probability of an event, established before any company-specific data is considered. In Beta Distribution modeling, the industry prior is built from observed breach rates across hundreds of similar businesses in your sector. It represents what we would expect your risk to be based solely on the environment you operate in, before your own history is factored in.

Posterior Estimate

The updated probability produced by combining the industry prior with your company’s own breach history. The posterior estimate is the Beta Distribution’s final answer, a single probability that reflects both what the industry tells us and what your specific record shows. It is always a blend of the two, weighted by how much data each source contributes.

Credible Interval (95%)

The range within which your true annual breach probability most plausibly falls, based on the available data. A 95% credible interval means there is a 95% probability that the actual underlying risk lies somewhere between the lower and upper bounds shown. For example, Company A’s interval of 13.5% to 23.7% means the model is highly confident that the true risk is within that range, even if the point estimate of 18.3% cannot be known with certainty. Credible intervals are a Bayesian concept; they are not the same as confidence intervals used in frequentist statistics. 

Log-Normal Distribution

The probability distribution used to model breach costs in this analysis. It reflects a real-world pattern: most breach events produce losses clustered around a typical cost, while a smaller number of incidents produce catastrophically higher costs due to litigation, extended downtime, or regulatory action. The log-normal model prevents the simulation from assuming breach costs are symmetrically distributed around an average, because in practice, the worst-case scenarios are far more extreme than the best-case savings.

Pseudocounts

Artificial data points added to a statistical model to prevent extreme conclusions. In Laplace’s Rule, the “+1” and “+2” serve this purpose. In Beta Distribution modeling, the industry prior plays a similar role: it prevents a company’s short personal history from producing overconfident estimates in either direction.

Reference Class

The group of comparable businesses whose historical data is used to establish the industry prior. A reference class for a Michigan law firm might draw on breach disclosures from hundreds of small legal practices nationwide. The quality of a Beta Distribution estimate depends heavily on how well the reference class actually matches the company being modeled: size, industry, regulatory environment, and data sensitivity all matter.

Bayesian Updating

The process of revising a probability estimate in response to new evidence. Each time a new signal arrives, a detected phishing attempt, a clean quarter, a newly deployed control, Bayes’ Theorem is applied to produce an updated probability. The update is proportional to how strongly the signal distinguishes between high-threat and low-threat environments. A signal that is equally likely in both environments carries no information and produces no update. 

Bayes’ Theorem

The mathematical rule that governs Bayesian updating: P(breach | signal) = P(signal | breach environment) × P(breach) / P(signal). In plain English, it asks: given that we observed this signal, how much more or less likely is it that we are in a breach-prone environment? The theorem produces a new probability that incorporates both the prior belief and the strength of the new evidence. 

Attack Vector Weighting

The process of breaking total breach probability into contributions from specific pathways, credential attacks, endpoint malware, and other vectors based on industry data about how breaches actually occur. Weighting allows the Bayesian model to calculate how much each security control (MFA, EDR, general hardening) reduces risk, rather than treating all controls as interchangeable. It is the mechanism that makes control-specific investment recommendations statistically defensible.

Foundational Terms from Part 1

Laplace’s Rule of Succession

The formula P = (s + 1) / (n + 2), used as an entry-level tool to estimate the probability of a future breach based on past history. It prevents the zero-probability fallacy by adding pseudocounts that acknowledge underlying uncertainty.

Zero-Probability Fallacy

The dangerous misconception that because a data breach has not occurred in the past, the probability of one in the future is zero. It is statistically invalid and practically harmful as a planning assumption.

Lagging Indicator

A metric that is purely backward-looking. Laplace’s Rule is a lagging indicator because it reflects only past events and does not register the new security tools or controls you implement today.

Frequentist Approach

Calculating probability strictly by dividing past incidents by total time periods (s/n). This approach is discouraged in cybersecurity because it assigns 0% risk to any business that has not yet been breached, which is statistically indefensible. 

Time-Unit Bias

A limitation of certain statistical models where the result changes significantly depending on whether you measure by day, month, or year. Laplace’s Rule is subject to this bias, which is one reason it should not be used as a standalone tool for formal risk management.

Financial Risk Terms

Expected Annual Loss

The probability-weighted average financial loss across all simulated scenarios in a given year, including the many scenarios where no breach occurs. It is calculated by multiplying the probability of a breach by the average cost of a breach. Expected annual loss is useful for multi-year budgeting and insurance pricing, but it should not be used as the sole planning metric because it smooths over the high-severity tail scenarios that pose the greatest business risk.

90th Percentile Scenario

The breach cost that would be exceeded in only 10% of simulated scenarios, in other words, a realistic worst-case benchmark rather than an average. In the Monte Carlo results, the 90th percentile figures represent what a bad year actually looks like for each company profile. Because cyber losses are log-normally distributed, the 90th percentile cost is typically three to six times the median cost, making it the more relevant figure for business continuity planning and insurance coverage decisions. 

Three-Year Cumulative Exposure

The projected total financial loss across a 36-month window, accounting for the possibility of multiple breach events. The Monte Carlo simulation runs independently for each of the three years and sums the results. The 90th percentile three-year figure represents the loss level a business should be able to withstand if it intends to remain operational through a worst-case three-year period. For most SMBs, this number is the most relevant input for setting cyber insurance coverage limits.

Security Controls & Signals

Multi-Factor Authentication (MFA)

A login security control that requires users to verify their identity through two or more methods, typically a password plus a code sent to a phone or generated by an app. MFA is the primary defense against credential-based attacks such as phishing and password spraying. In this analysis, full MFA deployment at 100% account coverage was modeled as reducing credential attack success by 72% for Company C. Partial MFA, covering only some accounts or systems, provides proportionally less protection and leaves uncovered accounts as viable entry points.

Endpoint Detection and Response (EDR)

A security tool that monitors devices (laptops, desktops, servers) for signs of malicious activity and can automatically isolate or block threats before they spread. EDR is the primary defense against endpoint malware attacks, including ransomware. In this analysis, EDR deployment was modeled as reducing endpoint malware success by 65% for Company C. The absence of EDR was identified as the single largest addressable risk driver in Company C’s attack vector profile, contributing more than half of the firm’s total residual breach probability. 

Near-Miss Signal

An observable security event that was contained before causing a successful breach, a blocked phishing attempt, a failed login from an unfamiliar geography, a quarantined malware file. In a Bayesian model, near-miss signals carry significant evidential weight because they are far more likely to occur in high-threat environments than low-threat ones. In the Company C analysis, a single detected phishing attempt updated the annual breach probability from 37.5% to 70.6%. Near-miss events are not close calls to be forgotten. They are the most timely and actionable risk intelligence a business can receive. 

Control Effectiveness

The percentage reduction in breach probability that a specific security control delivers when properly deployed and maintained. In this article, MFA effectiveness was modeled at 72% against credential attacks, and EDR effectiveness at 65% against endpoint malware. Control effectiveness is not additive across all attack types: MFA does nothing to stop an endpoint malware attack, and EDR does nothing to stop a credential compromise. This is why attack vector decomposition is essential to understanding which controls actually move the needle for a given company’s specific risk profile.

Residual Risk

The actual probability of a breach that remains after security controls are actively deployed and functioning. Residual risk is what your investments buy down immediately, even before historical statistics like Laplace reflect the improvement. In the Company C Bayesian model, partial MFA and no EDR produced a residual risk of 37.5%, down from the 42.9% Laplace baseline. Full MFA plus EDR drops it further to 18.7%. The gap between a company’s Laplace score and its true residual risk is the measure of how much their security program is actually working.

Your Risk Profile Deserves Real Numbers

Cyber Protect LLC works with Michigan’s small and mid-sized businesses to build cybersecurity programs grounded in real risk data, not assumptions.

We apply the same modeling frameworks described in this article to your specific company profile, your industry, and your current security posture. The result is a clear, plain-English picture of where you stand, what your financial exposure looks like, and which controls will move the needle most effectively for your budget.

We offer tailored pricing built around your risk profile and operational needs. Flat-rate options are available for businesses that prefer budget-line predictability.

Visit www.cyberprotectllc.com or call us at (586) 500-9300 to speak with a Michigan cybersecurity specialist.

“No Geek Speak. No Hassles. Just Real Protection.”

Editorial note: This article was drafted with the assistance of AI tools and reviewed by cybersecurity professionals at Cyber Protect LLC for accuracy, clarity, and relevance.

About the Author

Cheyenne Harden

CEO

Cheyenne Harden is the CEO of Cyber Protect LLC with 10+ years of experience in cybersecurity and IT consulting for Michigan businesses.

cyberprotectllc.com