7. Red flags that kill deals (or should)
A deal-team way to separate fixable issues from value killers, with evidence asks and decision triggers that change price, terms, or timing.
The deal had a clean vendor report, a strong management story, and a tight signing window.
Then the buyer’s security lead asked one question: “Show me your last 90 days of log coverage and MFA status for privileged accounts.”
The target couldn’t. Not because the answer was bad. Because there was no answer. Admin access was shared, endpoint tooling was inconsistent, and the “SIEM” was a checkbox in a renewal deck. The business wasn’t unsafe in a day-to-day sense. But it couldn’t be connected to the buyer’s environment on the timeline the synergy plan assumed.
The model had benefits starting in quarter two. Connectivity slipped by a quarter, then another. By the time controls were rebuilt, the value case still existed. The schedule didn’t.
Most “red flags” don’t kill deals. They kill deal math: timing, one-time cash, Day-1 stability, and the buyer’s ability to execute the thesis.
The primary decision is this: when you see a tech red flag, do you (1) walk, (2) reprice, or (3) restructure the deal so you can still win on a realistic clock? If you don’t make that decision explicitly, the default is “we’ll fix it after close,” and the asset becomes the speed limit.
A practical definition of a deal-killing red flag
In tech diligence, a red flag is “deal-killing” when it triggers one of three outcomes:
- Continuity risk: it can disrupt revenue or core operations (orders, billing, payroll, regulated reporting) inside the first 6–12 months.
- Clock risk: it gates the timeline for the value levers in the model (connectivity, TSA exit, integration milestones, data availability).
- Cash risk: it creates mandatory one-time spend or a run-rate step-up that the model can’t absorb.
You don’t need a long list. You need a short set of red flags that reliably hit continuity, clock, or cash.
The mistake: treating red flags as “workstreams”
Deal teams often mishandle red flags in four predictable ways:
Pattern 1: “We’ll fix it post-close” becomes a substitute for a plan
If a red flag is real, it has a mechanism: missing controls, fragile integrations, end-of-life platforms, undocumented operations. Without a sequence, owners, and dates, “fix it post-close” means “learn it post-close.”
Pattern 2: Verbal assurances replace artifacts
Management teams aren’t lying. They are optimistic inside their own constraints. If there is no evidence, assume you are buying uncertainty.
Pattern 3: The deal keeps its original timetable by default
The model is usually the last thing to change, because changing it is socially and politically expensive. That’s how a red flag turns into a value leak: you sign into a schedule you can’t run.
Pattern 4: Teams negotiate price but leave the clock exposed
Price helps with downside, but it doesn’t create time. If the red flag gates connectivity, TSA exit, or the ability to ship product, the main risk is often timing and execution, not purchase price.
A decision tree that keeps you honest
When a red flag appears, run it through three questions. This takes 30 minutes and forces a posture.
- Does it threaten continuity? If yes, can you reduce it to an acceptable level before Day 1 (or before you rely on it operationally)?
- Does it set the clock? If yes, what is the earliest realistic date the gate opens with the resources you will actually have?
- Is it mandatory cash or discretionary investment? If it is mandatory, can the model absorb it without breaking the return story?
If the answer to any of those follow-up questions is “no,” you no longer have a red flag. You have a deal structuring problem.
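If you want the posture call to be explicit rather than implied, the three questions collapse into a small classifier. The sketch below is illustrative, not a standard model: the field names and the quarter-based clock comparison are assumptions.

```python
from dataclasses import dataclass

@dataclass
class RedFlag:
    # Illustrative register entry; the field names are assumptions, not a standard.
    name: str
    threatens_continuity: bool     # could it disrupt revenue/core ops in 6-12 months?
    fixable_before_reliance: bool  # can it be reduced before Day 1 / operational reliance?
    sets_the_clock: bool           # does it gate connectivity, TSA exit, or milestones?
    earliest_gate_quarter: int     # earliest realistic quarter the gate opens
    model_gate_quarter: int        # quarter the model assumes the gate opens
    mandatory_cash: bool           # mandatory spend, not discretionary investment
    model_absorbs_cash: bool       # can the return story absorb that spend?

def posture(flag: RedFlag) -> str:
    """Map one red flag to a posture: walk, reprice, restructure, or backlog."""
    if flag.threatens_continuity and not flag.fixable_before_reliance:
        return "walk"          # continuity risk you cannot reduce in time
    if flag.mandatory_cash and not flag.model_absorbs_cash:
        return "reprice"       # mandatory cash the model cannot absorb
    if flag.sets_the_clock and flag.earliest_gate_quarter > flag.model_gate_quarter:
        return "restructure"   # clock-setter: resequence the value plan
    return "backlog item"      # it moves neither continuity, clock, nor cash

shared_admin = RedFlag(
    name="Shared admin accounts, no MFA on privileged access",
    threatens_continuity=False, fixable_before_reliance=True,
    sets_the_clock=True, earliest_gate_quarter=3, model_gate_quarter=2,
    mandatory_cash=True, model_absorbs_cash=True,
)
print(posture(shared_admin))  # -> restructure
```

A flag that lands in “backlog item” is the same test you will meet again in Step 2: if it changes nothing in the deal pack, it is not a red flag.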
Seven red flags worth treating as “walk / reprice / restructure”
Each red flag below includes: what it looks like, why it matters, what to ask for, and triggers that force a decision.
1) A security posture that cannot support the buyer’s Day-1 and connectivity plan
What it looks like
- inconsistent MFA (especially for privileged access)
- shared admin accounts and weak joiner/mover/leaver controls
- limited logging and no credible incident response motion
- endpoint tooling coverage unclear or partial
Why it matters
In many deals, the first value gate is not ERP or data. It’s whether you can safely connect environments (or share data) without taking on unacceptable incident risk. If you can’t connect, synergies slip and TSA exit gets harder.
Evidence to ask for (48-hour ask, not a workshop)
- MFA coverage by application (especially admin consoles)
- endpoint tooling deployment coverage (export, not a slide)
- log sources connected to the SIEM and the retention period
- last pen test summary and remediation status
- incident register for the last 24 months (with actions taken)
Decision triggers
- If privileged access is not individually attributable (shared accounts) and there is no plan to fix it inside 60 days, do not plan Day-1 connectivity.
- If logging covers less than ~80% of “crown jewel” systems (ERP/finance, customer data, production environments), assume you own breach risk during integration and budget remediation as mandatory one-time cash. A quick coverage check is sketched below.
Default action
Restructure the synergy plan around delayed connectivity (segmented approach, clean-room data, phased access). Price remediation, but treat timing as the primary risk.
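The coverage triggers above are computable from the evidence list, not a matter of opinion. A minimal sketch, assuming hypothetical export file names and CSV columns (is_privileged, mfa_enabled, crown_jewel, in_siem); the real fields depend on whatever the target’s tooling exports.

```python
import csv

def coverage(rows, flag_field, filter_field=None):
    """Share of rows (optionally filtered to filter_field == 'true') with the flag set."""
    scoped = [r for r in rows if filter_field is None or r[filter_field] == "true"]
    return sum(r[flag_field] == "true" for r in scoped) / len(scoped) if scoped else 0.0

# Hypothetical export files; column names are assumptions, adapt to the target.
with open("accounts_export.csv") as f:
    accounts = list(csv.DictReader(f))
with open("log_sources_export.csv") as f:
    log_sources = list(csv.DictReader(f))

priv_mfa = coverage(accounts, "mfa_enabled", filter_field="is_privileged")
cj_logging = coverage(log_sources, "in_siem", filter_field="crown_jewel")

print(f"Privileged-account MFA coverage: {priv_mfa:.0%}")
print(f"Crown-jewel log coverage: {cj_logging:.0%}")

if priv_mfa < 1.0:  # any unprotected privileged account undermines Day-1 connectivity
    print("-> revisit the Day-1 connectivity plan")
if cj_logging < 0.80:  # the ~80% trigger from the text
    print("-> budget logging remediation as mandatory one-time cash")
```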
2) Core systems at end-of-life inside the hold period or integration window
What it looks like
- ERP or billing platforms out of mainstream support
- customizations that block upgrades
- “we’ve been planning the upgrade” with no funded program
Why it matters
End-of-life platforms create a double bind: you must spend one-time cash to stay supported, and you often can’t integrate or standardize until the upgrade path is real. If the core system is the operational heartbeat, it also becomes a continuity risk.
Evidence to ask for
- vendor support timelines (official dates)
- current version, customization footprint, and upgrade blockers
- last two upgrade attempts (scope, cost, why they stalled)
Decision triggers
- If a core platform goes out of support in <12 months and the upgrade path requires a multi-quarter program, treat it as mandatory one-time cash and a clock-setter. A simple date check is sketched below.
- If the platform is heavily customized and the only “plan” is vendor hope, assume the timeline is longer than the deal model.
Default action
Reprice for the one-time program and move the value-lever timing. If the thesis depends on fast integration or cost-out in that domain, consider walking.
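The “<12 months” trigger is a date calculation, not a debate. A minimal sketch with placeholder dates; the official end-of-support dates come from the vendor’s lifecycle documentation.

```python
from datetime import date

# Placeholder dates and system names; use the vendor's official lifecycle dates.
close_date = date(2025, 10, 1)
end_of_support = {"ERP": date(2026, 6, 30), "billing": date(2027, 3, 31)}

for system, eos in end_of_support.items():
    months_left = (eos.year - close_date.year) * 12 + (eos.month - close_date.month)
    if months_left < 12:
        # Trigger from the text: mandatory one-time cash and a clock-setter.
        print(f"{system}: support ends in {months_left} months -> fund the upgrade, move the value-lever dates")
    else:
        print(f"{system}: {months_left} months of support runway")
```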
3) Operational fragility hidden behind “normal” uptime
What it looks like
- critical workflows depend on manual steps, scripts, or one person
- repeated “minor” incidents with no root-cause closure
- no clear recovery objectives or tested disaster recovery
Why it matters
Integration and separation add stress. Fragile operations that “work” today often fail under change: month-end close, peak seasons, new security controls, network segmentation, or vendor transitions. This is where Day 1 and the first 100 days get consumed by stabilization.
Evidence to ask for
- last 12 months of major incidents with root-cause and actions
- backup and restore evidence (not policy documents)
- DR test results and the last time they were executed
- “how we close” walkthrough with the top manual steps
Decision triggers
- If DR has not been tested in >12 months and the business is revenue-dependent on a small number of systems, assume continuity risk and budget a stabilization tranche.
- If month-end close depends on heroics (many manual reconciliations, heavy spreadsheet controls), assume reporting and synergy measurement will be slower than planned.
Default action
Restructure Day-1 expectations (what must work, what can wait), and fund stabilization as mandatory one-time cash with named owners.
4) Technology cost opacity (you can’t normalize run-rate)
What it looks like
- IT budget materially diverges from cash spend
- contractors booked outside IT doing “run” work
- cloud spend shows spikes and credits with no explanation
- shared services allocations that will disappear and be replaced with new costs
Why it matters
You can’t underwrite EBITDA if IT cost is a narrative. Cost opacity also hides operational dependencies (a contractor you didn’t know was running your ERP interfaces).
Evidence to ask for
- 6–12 months of cloud exports (not a single invoice)
- top 20 vendor invoices tied to IT
- contractor list with role and who they report to
- budget vs actual bridge for the last 12 months
Decision triggers
- If you can’t explain ~90% of IT cash spend with evidence, assume the run-rate will move post-close and protect the model (downside case or price/terms). The explained share is easy to measure, as sketched below.
- If cloud costs are “managed by the vendor” and the buyer can’t get raw usage data, assume cost control will be harder than planned.
Default action
Reprice or structure protections (escrow/holdback tied to cost true-ups). If the deal relies on near-term margin expansion, treat this as a thesis risk, not a diligence nuisance.
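The ~90% trigger is measurable: tag every cash line with the artifact that explains it, then compute the explained share. A minimal sketch with hypothetical records; in practice the lines come from the GL, the cloud exports, and the top vendor invoices.

```python
# Hypothetical spend records; the (amount, evidence) shape is an assumption.
spend = [
    {"vendor": "cloud provider", "amount": 410_000, "evidence": "usage export"},
    {"vendor": "ERP support",    "amount": 180_000, "evidence": "invoice"},
    {"vendor": "contractors",    "amount": 220_000, "evidence": None},  # booked outside IT
    {"vendor": "misc software",  "amount":  60_000, "evidence": None},
]

total = sum(r["amount"] for r in spend)
explained = sum(r["amount"] for r in spend if r["evidence"])
share = explained / total

print(f"Explained IT cash spend: {share:.0%}")
if share < 0.90:
    # Trigger from the text: protect the model (downside case or price/terms).
    unexplained = ", ".join(r["vendor"] for r in spend if not r["evidence"])
    print(f"Below the 90% threshold; unexplained: {unexplained}")
```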
5) Key-person dependency that makes the first 100 days unrealistic
What it looks like
- one person (or one contractor) “owns” ERP, integrations, or reporting
- tribal knowledge replaces documentation
- low bench in security, data, and core applications
Why it matters
Most tech programs fail due to capacity, not architecture. A thin team can keep the lights on; it can’t run Day-1 change plus value work plus integration/separation.
Evidence to ask for
- org chart including contractors, tenure, and turnover history
- “who can do what” map for core systems
- current change backlog and major programs already in-flight
Decision triggers
- If fewer than 3–4 people can explain order-to-cash and the close process end to end, retention and backfill are mandatory one-time costs.
- If a critical system has only one effective owner (employee or contractor), assume execution risk and require an explicit retention plan before you count on delivery.
Default action
Restructure the first 100 days around retaining and de-risking key knowledge (retention packages, shadowing, documentation sprint). Move aggressive value levers out until the team can run change safely.
6) Data exposure that can create customer loss or regulatory cost
What it looks like
- unclear data residency for regulated data
- inconsistent customer consent handling
- weak access controls to customer PII
- ad hoc data extracts and “everyone has a spreadsheet”
Why it matters
This is not a theoretical risk. A customer-triggered audit, a regulator inquiry, or a breach can become a direct valuation event: churn, contract renegotiations, and unplanned remediation.
Evidence to ask for
- data map for PII and regulated data (where it lives and who accesses it)
- customer security questionnaire history and exceptions granted
- last compliance audit findings (SOC 2, ISO, PCI, HIPAA, SOX controls as relevant)
Decision triggers
- If regulated data handling is undocumented and access controls are weak, assume remediation is mandatory and the go-to-market team will face more customer friction post-close.
- If the business depends on a handful of enterprise customers with strict security requirements, treat exceptions as a churn risk, not just “security debt.”
Default action
Reprice and structure customer-risk protections (specific indemnities/escrows where possible). More importantly, reset the first 100 days to build a minimum credible security and data control baseline.
7) “Owned technology” that is not actually owned (IP and licensing risk)
What it looks like
- unclear IP assignment for contractors or offshore teams
- customer contracts promise security controls the business can’t prove
- open-source use without policy or tracking (especially in software assets)
Why it matters
IP and licensing risk is one of the few tech topics that can become an immediate legal and valuation issue. It also resurfaces at exit, when the next buyer’s diligence asks the same questions.
Evidence to ask for
- contractor agreements and IP assignment clauses
- open-source policy and scanning approach (or proof none exists)
- the top 5 customer security/compliance commitments and evidence trail
Decision triggers
- If IP assignment is incomplete for core product code, treat it as a legal-close gating item.
- If customer commitments are materially ahead of controls (for example, “centralized logging” without coverage), assume commercial risk and plan remediation before you scale integration or sales.
Default action
Restructure close conditions and add legal protections. If the asset’s value is primarily IP, this can be a walk-away issue unless it is cleanly fixable.
How the best teams turn red flags into a deal posture
The best teams don’t “list risks.” They produce an explicit posture with owners and dates.
Step 1: Classify each red flag by its deal impact
- Walk: continuity risk you can’t reduce in time, or IP/legal exposure that can’t be cured
- Reprice: mandatory cash and run-rate increases that the model must absorb
- Restructure: clock-setters that require sequencing changes (delayed connectivity, phased integration, TSA redesign)
Step 2: Force three outputs into the deal pack
- A one-page “gates and dates” view: the 3–5 things that must be true for the model’s first value milestones
- Mandatory one-time cash (and what it buys): stabilize, remediate, separate, stand up
- A Day-1 posture statement: what you will connect on Day 1, what you will not connect, and why
If a red flag doesn’t change one of those three outputs, it’s probably not a red flag. It’s a backlog item.
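You can build that test into the register itself: each entry carries its posture, its mechanism, the artifact that proves it, and the outputs it moves. A sketch of a single entry, with assumed field names:

```python
# One register entry, tying a red flag to the outputs it must change.
# Field names and example content are illustrative assumptions.
entry = {
    "red_flag": "Shared admin accounts; privileged access not attributable",
    "posture": "restructure",  # walk / reprice / restructure
    "mechanism": "cannot safely connect environments on the model's timeline",
    "artifact": "MFA and admin-account export for core consoles",
    # The three outputs the entry must move, per Step 2:
    "gates_and_dates": "connectivity gate opens after IAM remediation (Q2)",
    "mandatory_cash": "IAM remediation and logging build-out (one-time)",
    "day1_posture": "no network connectivity on Day 1; clean-room data exchange only",
}

# The test from the text: if it moves none of the three outputs, it's backlog.
outputs = ("gates_and_dates", "mandatory_cash", "day1_posture")
print("red flag" if any(entry[k] for k in outputs) else "backlog item")
```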
Step 3: Negotiate terms that match the mechanism
Match protections to the actual failure mode:
- Clock risk → TSA terms, transition support, and explicit sequencing commitments
- Cash risk → price, specific escrows/holdbacks, or seller-funded remediation
- Continuity risk → close conditions, carve-outs to the scope, or walking away
What to do in the next two weeks (owners included)
If you want “red flags” to change outcomes, treat them like underwriting decisions.
- Create the red-flag register with classification (deal lead + tech DD lead). For each red flag: walk/reprice/restructure, mechanism, and the single artifact that proves it.
- Pull evidence in 48 hours for the top 3 flags (tech DD lead). If evidence is missing, treat that as a finding and tighten posture/terms.
- Write the Day-1 and connectivity posture statement (integration/separation lead + security lead). Decide what connects when; stop relying on implied connectivity.
- Force mandatory cash and timing shifts into the model by end of week one (finance + tech DD lead). If returns still work, you have a deal; if not, you have a decision.
- Translate the top 2–3 flags into contractual protections (deal counsel + deal lead). Align protections to the mechanism; avoid generic language that won’t hold in a dispute.
Red flags don’t kill deals. Unpriced continuity, clock, and cash risk does. The job of tech diligence is to name those risks early enough that the deal can still change.