Intent Is Not in the Feed
By Julian Brownlow Davies | Meaning In the Signal, Week 5
I have attended a considerable number of threat intelligence briefings over the course of my career, and I have noticed a pattern that I consider genuinely revealing about the state of the discipline. The briefings that I find most impressive - the ones that arrive in polished decks with well-sourced data and careful MITRE ATT&CK mapping - tend to answer the following questions with considerable precision: which malware families are active in our sector, which IP ranges are associated with known threat actor infrastructure, which techniques a given adversary group favours, and which indicators our detection tooling should be looking for. These are not small things to know. The analysis behind them is often genuinely sophisticated, the sourcing credible, the tradecraft evident.
What the briefings rarely answer - and what I have come to consider the more important question - is this: what does this adversary actually want, and does what they want align with what we have?
It sounds almost too simple to be worth stating. But the gap between “this threat actor is active in your sector” and “this threat actor would find your specific assets sufficiently valuable to target you deliberately” is a gap that most threat intelligence programmes never bridge - and the cost of that gap, measured in misallocated attention and misdirected investment, is rather significant.
IOCs Are Signal. Intent Is Meaning.
The threat intelligence market has matured considerably over the past decade, and the majority of that maturation has been in the direction of signal: more indicators, higher-fidelity data, faster feeds, broader coverage, better ATT&CK alignment. These are genuine improvements. The industry’s ability to characterise adversary capability and infrastructure - the tools used, the infrastructure operated, the specific techniques employed at each stage of an intrusion - has advanced to a point that would have seemed ambitious ten years ago.
What has advanced rather less is the industry’s ability to characterise adversary intent - the objectives that drive the capability, the targets that the capability is directed toward, and the logic that connects a specific organisation’s specific assets to a specific adversary’s specific goals.
The Diamond Model of Intrusion Analysis, developed by Caltagirone, Pendergast, and Betz in 2013, is instructive here. The model frames every intrusion event across four analytical dimensions: adversary, capability, infrastructure, and victim. These four vertices are interdependent; a complete understanding of any intrusion requires engagement with all four. What I observe in practice is that the industry has invested heavily in the capability and infrastructure vertices - the tools and the tradecraft - whilst treating the adversary and victim vertices, which together encode intent and targeting logic, as an afterthought. We can tell you what the adversary used. We are considerably less clear on why they chose you.
This is, in the language of this series, a comprehension failure. Not a detection failure. The signal is abundant. But signal without context is noise, and the most important piece of context for any threat intelligence finding is whether the adversary who generated it has any particular interest in what you have to offer.
Not Every Adversary Wants the Same Thing
The practical significance of this gap becomes apparent the moment you place two or three distinct adversary archetypes side by side and ask what each of them is actually after.
A commodity ransomware group - the kind that has dominated the incident response caseload for the better part of the last five years - is, in the most direct sense, financially motivated and operationally opportunistic. They are not, on the whole, interested in your crown jewels as I defined them in week three: the proprietary compound data, the trading algorithms, the strategic intellectual property that represents your organisation’s distinctive competitive value. What they want is access and leverage - specifically, access to systems that can be encrypted and leverage over an organisation sufficiently disrupted by that encryption to be willing to pay. Their targeting logic, to the extent that they have one, centres on your backup architecture, your recovery capability, your cyber insurance limit, and your tolerance for downtime. A CISO defending against this profile should be asking very different questions about their environment than a CISO defending against a geopolitical adversary.
A state-sponsored actor with financial objectives - Lazarus Group is perhaps the best-documented example, operating at the direction of the North Korean state and motivated primarily by hard currency generation - has a different target profile again: cryptocurrency infrastructure, SWIFT-connected financial institutions, exchanges and custodians. The sophistication is considerably higher than commodity ransomware, the patience considerably longer, and the targeting logic follows a strategic rationale specific to the adversary's geopolitical and economic position. An organisation that is not in their target set is, realistically, not in their target set - regardless of what the threat feed says about their recent activity.
A geopolitical APT - and here I am thinking of groups like APT40, APT29, and their equivalents across multiple attributed nation-states - operates with objectives that are qualitatively different again: strategic intelligence collection, intellectual property acquisition, long-term access and persistence, and in some cases pre-positioning for future disruptive operations. These adversaries are patient, selective, and genuinely interested in the specific crown jewels that give their intelligence services or their state-sponsored industries strategic advantage. They are, in many respects, the adversaries whose intent matters most to understand, because their targeting decisions are driven by a coherent strategic logic - and an organisation that understands that logic can reason about whether they are a plausible target before, rather than after, the breach.
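The distinction between these three archetypes can be made concrete with a small model. The sketch below is illustrative only - the field values are my own assumptions for the sake of the example, not a canonical taxonomy - but it captures the core move: asking whether an adversary's target set actually intersects what your organisation holds.

```python
from dataclasses import dataclass

@dataclass
class AdversaryArchetype:
    name: str
    motivation: str           # what drives the adversary
    target_assets: set[str]   # asset classes the archetype actually wants
    patience: str             # opportunistic vs. persistent

# Assumed characterisations of the three archetypes discussed above.
COMMODITY_RANSOMWARE = AdversaryArchetype(
    name="commodity ransomware",
    motivation="financial (extortion)",
    target_assets={"backups", "production availability", "insurance limit"},
    patience="opportunistic",
)

STATE_FINANCIAL = AdversaryArchetype(
    name="state-sponsored, financially motivated",
    motivation="hard currency generation",
    target_assets={"cryptocurrency custody", "SWIFT connectivity"},
    patience="persistent",
)

GEOPOLITICAL_APT = AdversaryArchetype(
    name="geopolitical APT",
    motivation="strategic intelligence / IP acquisition",
    target_assets={"strategic IP", "long-term access", "pre-positioning"},
    patience="persistent",
)

def plausible_targets(our_assets: set[str],
                      archetypes: list[AdversaryArchetype]) -> list[str]:
    """Return the archetypes whose target set intersects what we hold."""
    return [a.name for a in archetypes if a.target_assets & our_assets]
```

The point of the exercise is not the code but the question it forces: a resources company whose asset set contains "strategic IP" but no cryptocurrency custody intersects the geopolitical archetype and not the state-financial one, and its defensive posture should reflect that.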
I have had the opportunity to work across markets in Asia-Pacific, Europe, and North America over the course of my career, and one observation I consider underappreciated is that the threat actor profile varies meaningfully by geography and sector in ways that most generic threat feeds do not capture well. An Australian resources company faces a different adversarial environment than a UK financial institution, which differs again from a US defence contractor - and the threat intelligence that is most useful to each is intelligence that reflects the specific interests and capabilities of the adversaries most likely to target them, not a comprehensive index of everything anyone in the threat community is doing. The feeds that treat all sectors and all geographies as equivalent are, in effect, producing undifferentiated signal and asking the recipient to perform the analytical work that the intelligence should have already done.
The Adversary Is Not Irrational
There is a habit of thought in security circles that I find subtly counterproductive: the tendency to treat adversaries as forces of nature rather than as rational actors with comprehensible objectives. We speak of the threat landscape (a geological metaphor, implying something vast and impersonal), of attack waves, of threat exposure, as though the adversary were weather rather than an organisation or an individual with goals, constraints, resource limitations, and a strategic logic of their own.
This framing matters, because it shapes how we respond. If the adversary is weather, the only rational response is comprehensive coverage - you cannot negotiate with a storm, so you build as many shelters as possible. If the adversary is a rational actor with specific objectives, the rational response is to understand those objectives and reason about the conditions under which your organisation presents a worthwhile target.
Richards Heuer, whose work on analytical tradecraft at the CIA remains the most rigorous framework I have encountered for structured intelligence reasoning, argued in his 1999 monograph Psychology of Intelligence Analysis that the quality of analysis depends less on the quantity of information available than on the quality of the mental models analysts bring to that information. His key insight - that analysts systematically overweight information that confirms existing hypotheses and underweight information that challenges them - applies with uncomfortable precision to how most organisations consume threat intelligence. We take the feed, we look for what matches what we already believe about our threat profile, and we treat the output as validation rather than as analysis. What we rarely do is build and test a structured hypothesis about adversary intent: who specifically would want what we specifically have, and why.
Sun Tzu’s observation that knowing your enemy is inseparable from knowing yourself is, at this point, rather overused in security circles - but the frequency of the citation has not been matched by genuine practice. Most organisations know their enemy only at the level of capability: the tools, the techniques, the indicators. Intent - the why beneath the what - remains the analysis that the industry has not yet built the muscle to perform.
What Intent Intelligence Looks Like in Practice
If IOC-based threat intelligence is primarily signal - useful for detection, necessary as a foundation, but insufficient as a basis for prioritisation - then what does meaning-generating threat intelligence look like in practice?
I believe it starts with a question that most threat intelligence programmes do not ask explicitly: which adversary groups, given our sector, our geography, our strategic asset profile, and our operational exposure, have a plausible motive to target us deliberately? This is not a question that a threat feed can answer. It requires combining intelligence about adversary objectives - drawn from government advisories, vendor reporting, incident analysis, and academic research - with an honest assessment of what your organisation has that those adversaries might actually want.
This is where the work from the previous weeks in this series connects in a way I find genuinely satisfying. The crown jewel analysis from week three gives you a map of what you have that is valuable and irreplaceable. The attack path analysis from last week gives you a model of how an adversary could reach it. Intent analysis - what I am arguing for here - adds the third dimension: which of those paths is live, for which adversaries, motivated by which objectives? Without the intent layer, you are defending against a theoretical adversary that represents some average of every threat actor in the feed. With it, you are defending against the specific adversaries who have specific reasons to come for specific things. The difference, in terms of where you direct attention and investment, is not marginal.
In practice, this means using MITRE ATT&CK’s Groups database not just to map techniques but to understand the sectoral and geographic targeting patterns of named adversary groups. It means reading government advisories - CISA, the NCSC, the ASD - with attention not only to the indicators and mitigations they recommend, but to the adversary objectives those advisories are implicitly responding to. It means developing, and revisiting, a written hypothesis about which threat actors have the capability, the motivation, and the opportunity to target your specific organisation - and using that hypothesis to filter and prioritise the signal your threat intelligence programme generates.
It means, in short, treating threat intelligence as an analytic discipline rather than a data subscription.
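The hypothesis-driven triage described above can be sketched in a few lines. Everything here is an assumption for illustration - the field names, the feed schema, and the triage rule are hypothetical, not any vendor's actual format - but the shape of the discipline is the point: the hypothesis is written down first, and the feed is filtered through it.

```python
from dataclasses import dataclass

@dataclass
class TargetingHypothesis:
    """Our written, periodically revisited position on who has motive to target us."""
    our_sector: str
    our_region: str
    priority_groups: set[str]  # named groups we assess as having capability + motive

@dataclass
class FeedEntry:
    """A single item of adversary reporting (hypothetical schema)."""
    group: str
    sectors_targeted: set[str]
    regions_targeted: set[str]

def triage(entries: list[FeedEntry],
           hypothesis: TargetingHypothesis) -> tuple[list[FeedEntry], list[FeedEntry]]:
    """Split the feed into 'analyse now' and 'background signal'."""
    relevant, background = [], []
    for e in entries:
        named = e.group in hypothesis.priority_groups
        fits = (hypothesis.our_sector in e.sectors_targeted
                and hypothesis.our_region in e.regions_targeted)
        (relevant if named or fits else background).append(e)
    return relevant, background
```

A feed entry earns attention either because the group is already in the written hypothesis, or because its observed targeting intersects our sector and geography - in which case it is a prompt to revisit the hypothesis, not merely a detection input.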
The Meaning Layer We Have Been Missing
There is a version of this problem that I include myself in. For much of my career, I consumed threat intelligence the way most practitioners do: as an input to detection and response, a source of indicators and techniques to operationalise. The question of adversary intent - not as an abstract concept, but as a specific, reasoned hypothesis about who wants what we have and why - was something I treated as a specialist concern, the domain of national intelligence agencies and very large enterprises with dedicated analytic capacity.
I no longer believe that. I believe it is the core act of meaning-making in a threat intelligence programme, accessible to organisations of meaningful size, and neglected primarily because the industry’s commercial incentives have been organised around signal volume rather than analytic depth. The vendors who sell feeds are measured on coverage and freshness. The analytic work that transforms coverage into comprehension - that is the work that falls to the practitioner, and it is the work that the practitioner rarely has the time, the frameworks, or the organisational mandate to perform well.
The constraint, as Goldratt would note, is not the input. The feed is abundant. The constraint is the throughput of meaning - the rate at which raw intelligence is converted into a reasoned position on which adversaries, with which objectives, represent a live and material threat to specific assets. That is the bottleneck. That is where the investment is missing. And that is the work that no feed, however comprehensive, can do for you.
The signal is in the feed. The meaning is in the question you bring to it.
If you stripped away every indicator of compromise, every hash, every IP address from your current threat intelligence programme and kept only what it tells you about adversary objectives and targeting logic - what would remain? And if the honest answer is very little, it is worth asking whose interests that actually serves.