Asset Discovery, Attribution, and the Shift Toward Exposure and Risk Management in Modern Security Programs

A conversation with Jeff Foley, OWASP Amass project leader and Senior Advisor for External Exposure Research, Internet Abuse Signal Collective

We are pleased to welcome Jeff Foley, creator of OWASP Amass and a long-time contributor to attack surface research, to WhoisXML API’s Internet Abuse Signal Collective (IASC) as a Senior Advisor for External Exposure Research.

In this interview, Jeff shares his perspective on how asset discovery is evolving into exposure management, why attribution is more critical than ever, and how better data and collaboration can augment security programs.

Let’s begin with your work on OWASP Amass, which has played a significant role in shaping modern asset discovery. With version 5 being a major release, what problems were you trying to solve with this update?

Jeff Foley:
The main focus of that release was the Open Asset Model: making sure everything in the framework revolves around a single, consistent way of representing data.

When you’re pulling information from hundreds of different sources, normalization becomes critical. You need everything to come into a unified view so that correlation is actually possible across the data being collected.

We also expanded how that model can be implemented, supporting different backends like PostgreSQL for enterprise-scale deployments, Neo4j for graph-based analysis, and SQLite for lighter use cases like penetration testing. That gives users flexibility depending on how they want to operate at scale.
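
As a rough illustration of that design, here is a minimal Go sketch. All type and method names are invented for this example rather than taken from the actual Amass codebase: every source maps into one normalized asset shape, and storage hides behind an interface so different backends can be swapped in.

```go
// Hypothetical sketch of a unified asset model with pluggable storage.
// Names are illustrative; the real project lives at
// github.com/owasp-amass/open-asset-model and differs in detail.
package main

import "fmt"

// Asset is the single, normalized shape every data source maps into,
// so correlation works regardless of where a record came from.
type Asset struct {
	Type    string   // e.g., "FQDN", "IPAddress", "Organization"
	Key     string   // canonical identifier, e.g., "app.example.com"
	Sources []string // which collectors observed this asset
}

// Repository abstracts the backend so the same pipeline can target
// PostgreSQL at enterprise scale, Neo4j for graph analysis, or SQLite
// for lightweight engagements such as penetration tests.
type Repository interface {
	Save(a Asset) error
	Relate(from, to Asset, relation string) error
}

// memoryRepo is a stand-in backend for the sketch.
type memoryRepo struct{ assets []Asset }

func (m *memoryRepo) Save(a Asset) error {
	m.assets = append(m.assets, a)
	return nil
}

func (m *memoryRepo) Relate(from, to Asset, relation string) error {
	fmt.Printf("%s -[%s]-> %s\n", from.Key, relation, to.Key)
	return nil
}

func main() {
	var repo Repository = &memoryRepo{}
	fqdn := Asset{Type: "FQDN", Key: "app.example.com", Sources: []string{"cert-transparency"}}
	org := Asset{Type: "Organization", Key: "Example, Inc.", Sources: []string{"WHOIS"}}
	repo.Save(fqdn)
	repo.Save(org)
	repo.Relate(fqdn, org, "registered_by") // the attribution edge
}
```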

As we started using this more in real-world environments, it pushed us to improve performance and scalability, with more data being collected, stored in a single place, and made accessible in real time.

A big part of that is attribution. In many organizations, there isn’t a single person who knows where everything is or who owns it. So it’s not just about collecting data quickly; it’s about being able to use that data to answer questions like “Who does this asset belong to, and where does it fit?”
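
A hedged sketch of what that attribution question can look like in code, assuming assets and owners are connected by labeled relationship edges. The edge names and graph shape here are illustrative assumptions, not Amass’s actual schema.

```go
// Answering "who does this asset belong to?" as a walk over edges.
package main

import "fmt"

// edge links one asset to another with a labeled relation,
// e.g., app.example.com -[registered_by]-> Example Cloud LLC.
type edge struct{ from, relation, to string }

// attribution follows ownership-style edges from an asset as far as
// they go, returning the chain that answers the ownership question.
func attribution(start string, edges []edge) []string {
	chain := []string{start}
	visited := map[string]bool{start: true} // guard against cycles
	cur := start
	for {
		next := ""
		for _, e := range edges {
			if e.from == cur && !visited[e.to] &&
				(e.relation == "registered_by" || e.relation == "managed_by") {
				next = e.to
				break
			}
		}
		if next == "" {
			return chain
		}
		visited[next] = true
		chain = append(chain, next)
		cur = next
	}
}

func main() {
	edges := []edge{
		{"app.example.com", "registered_by", "Example Cloud LLC"},
		{"Example Cloud LLC", "managed_by", "Example, Inc."},
	}
	// Prints the chain from technical asset to responsible entity:
	// [app.example.com Example Cloud LLC Example, Inc.]
	fmt.Println(attribution("app.example.com", edges))
}
```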

That’s really been the focus: improving the data model, enabling correlation, and making the system fast and scalable enough to support that kind of analysis.

Building on that, and drawing on your experience across academia and real-world tooling, what do you see as the biggest gap in asset discovery today?

Jeff Foley:
I think the biggest gap in traditional asset discovery is attribution.

Most approaches focus on identifying whether an asset exists. But far less attention is given to linking that asset to a real-world entity: who owns it, who operates it, and who is responsible for it.

That requires stepping outside purely technical data like DNS and protocols, and into the world of legal entities, ownership structures, and organizational context.

Research frameworks often don’t bridge that gap. They stay at the infrastructure level and don’t connect assets to the organizations behind them.

But in practice, that’s exactly what security teams need. Without that connection, discovery alone often doesn’t translate into action.

How have the expectations around asset discovery changed over time, and why has attribution become more critical as a result?

Jeff Foley:
When asset discovery first became mainstream, the focus was mostly on visibility: understanding where your infrastructure was exposed and getting a handle on things like shadow IT.

Now, organizations are expected to have much more control over their exposure on the internet, including the vendors and suppliers hosting their assets, the partners they work with, and whether those environments meet their security and regulatory requirements.

To answer those questions, it’s no longer enough to just know what’s out there. You also need to understand who has control over it: essentially, who is responsible for these assets.

With asset attribution becoming more critical, how has it changed the way security programs operate in practice? 

Jeff Foley:
If you go back to before cloud computing was mainstream, vulnerability management was much simpler. Organizations mainly asked, “Does this asset belong to us?” and then applied similar processes across all assets.

Now, that’s changed. How you engage with an asset can vary depending on where it’s hosted and which vendor is involved. Different vendors can impose different policies on what you’re allowed to do with those assets.

That alone introduces a lot of complexity for security teams.

On top of that, organizations are increasingly being held accountable for their partners, suppliers, and vendors. So it’s no longer just about knowing where assets are; it’s also about whether those vendors meet the same requirements you’re expected to meet.

If they don’t, that responsibility falls on you, not just the vendor.

Given those growing responsibilities, what’s driving the move beyond EASM toward exposure management? 

Jeff Foley:
When EASM first became mainstream, it was really focused on visibility: showing where you were present or exposed on the internet, with “exposed” simply meaning that an asset was reachable. It didn’t go much further than that.

Now, with increasing regulatory and compliance expectations, it’s not enough to just know that something is there. You need to understand the risk it brings to the organization: what vulnerabilities it has, who owns it, and how well it’s being managed.

That’s where things start to move into risk management. 

If you simplify it, exposure management is really EASM plus risk management, with a bit of vulnerability management mixed in.
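
As a toy illustration of that “EASM plus risk management” framing, the sketch below scores an asset by folding vulnerability severity and business criticality into the basic reachability check. The weights, field names, and scale are invented for the example.

```go
// Turning a binary reachable/not-reachable view into a risk score.
package main

import "fmt"

type Exposure struct {
	Reachable   bool    // classic EASM visibility: is it on the internet?
	MaxCVSS     float64 // worst known vulnerability, 0-10
	Criticality float64 // business importance, 0-1
}

// score turns raw exposure facts into a prioritization number.
func score(e Exposure) float64 {
	if !e.Reachable {
		return 0 // unreachable assets drop out of external exposure triage
	}
	return (e.MaxCVSS / 10) * e.Criticality * 100
}

func main() {
	fmt.Println(score(Exposure{true, 9.8, 0.9}))  // critical asset: ~88
	fmt.Println(score(Exposure{true, 9.8, 0.1}))  // same vuln, low-value asset: ~10
	fmt.Println(score(Exposure{false, 9.8, 0.9})) // not reachable: 0
}
```

The point of the sketch: two assets with the same vulnerability can land far apart in the triage queue once ownership and business importance enter the picture.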

And that shift comes from what organizations are being held responsible for: they’re expected to understand and manage that risk, including across the vendors they rely on.

As organizations rely more on partners and suppliers, how should the approach to discovery and risk evolve?

Jeff Foley:
As soon as you start bringing third-party or nth-party risk into what we used to think of as EASM, discovery becomes more complicated. You’re no longer just discovering your own digital footprint; you’re also being asked to understand the footprint of your suppliers and partners.

At the same time, there’s an expectation that those third parties are meeting best practices and the same regulatory requirements you are. To meet those requirements, you have to go beyond simply knowing who an asset belongs to.

You need a clearer understanding of what those assets are actually being used for. Attribution is the first step, but it’s no longer enough on its own. If you want to treat this as a risk management problem, you have to be able to ask what the asset is, how important it is, and what the impact would be if something happened to it.

That requires a deeper level of understanding. And looking ahead, I think those expectations will continue to increase. To meet them, we’re going to need more data and a better understanding of what these assets are doing.

Let’s shift to the Internet Abuse Signal Collective. It aims to bring together registrars, ISPs, CERTs, and enterprises. What value could each contribute?

Jeff Foley:
They’re definitely important players, and each one brings a different angle on what’s happening on the internet and how assets are actually being used.

ISPs, for example, have strong insight into network activity, so they can show what’s happening in real time and potentially share that as part of a broader intelligence effort. CERTs have a clear view of who is being impacted through the incident response work they do with organizations. Registrars, on the other hand, have a more preemptive perspective: they can see how actors are positioning themselves through domain registrations, DNS queries, and WHOIS activity, which can give early signals of intent.
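
To illustrate how those vantage points might be fused, here is a small hypothetical Go sketch that groups signals from registrars, ISPs, and CERTs by a shared indicator. The signal shape, field names, and example data are assumptions for the illustration.

```go
// Fusing three vantage points into one picture, keyed by indicator.
package main

import "fmt"

type Signal struct {
	Indicator string // e.g., a domain or IP
	Source    string // "registrar", "isp", or "cert"
	Note      string
}

// correlate groups signals by indicator; an indicator seen from
// multiple vantage points is a stronger, earlier lead than any one alone.
func correlate(signals []Signal) map[string][]Signal {
	out := make(map[string][]Signal)
	for _, s := range signals {
		out[s.Indicator] = append(out[s.Indicator], s)
	}
	return out
}

func main() {
	merged := correlate([]Signal{
		{"login-examp1e.com", "registrar", "registered 2h ago, lookalike of example.com"},
		{"login-examp1e.com", "isp", "burst of outbound SMTP from hosting IP"},
		{"login-examp1e.com", "cert", "reported in an active phishing incident"},
	})
	for indicator, hits := range merged {
		fmt.Printf("%s: %d independent vantage points\n", indicator, len(hits))
	}
}
```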

What would that kind of collaboration between these players enable? 

Jeff Foley:
If they’re working together and bringing those different angles to the table, you can start to piece things together.

It becomes more of a full-circle view of what’s taking place. Bringing those perspectives together can build a much better understanding of overall behavior across the internet, giving you the context needed to make better decisions about the assets within your attack surface and the risk they introduce.

There could be a range of use cases for this, but even just from a defensive information security perspective, that level of visibility would make exposure management more accurate and give us more preemptive security capabilities.

To make that level of context and visibility actionable, how important is high-quality data as AI and automation come into play?

Jeff Foley:
It’s essential.

AI and automation are only as good as the data they operate on. The more context you provide, the better the outcomes.

Traffic data is especially important here because it allows you to understand what assets are doing, who they’re communicating with, and whether that behavior is normal or abnormal. That’s what enables automation systems to make decisions without constantly relying on humans.
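
A minimal sketch of that “normal versus abnormal” idea: build a baseline from an asset’s historical traffic volumes and flag an observation that deviates too far. The three-standard-deviation threshold and sample data are invented for the example.

```go
// Flagging abnormal behavior against a per-asset traffic baseline.
package main

import (
	"fmt"
	"math"
)

// baseline returns the mean and standard deviation of historical samples.
func baseline(samples []float64) (mean, std float64) {
	for _, s := range samples {
		mean += s
	}
	mean /= float64(len(samples))
	for _, s := range samples {
		std += (s - mean) * (s - mean)
	}
	std = math.Sqrt(std / float64(len(samples)))
	return
}

// abnormal flags an observation more than 3 standard deviations from
// the baseline: the kind of check automation can run without a human.
func abnormal(history []float64, observed float64) bool {
	mean, std := baseline(history)
	return math.Abs(observed-mean) > 3*std
}

func main() {
	history := []float64{120, 130, 110, 125, 118} // daily outbound connections
	fmt.Println(abnormal(history, 126))  // false: within normal variation
	fmt.Println(abnormal(history, 2400)) // true: the asset is doing something new
}
```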

If we equip AI systems with that level of context—not just asset existence, but actual behavior—we can build far more effective, end-to-end automation.

But in a dynamic environment like the internet, it’s not just about having data. The data itself needs to be accurate, fresh, and up to date. Without that, these systems will make poor decisions, regardless of how advanced they are.

Even with better data and visibility, is the industry still missing the bigger picture when it comes to asset discovery?

Jeff Foley:
That’s a good question, and I think where the industry is headed is moving beyond looking at individual assets in isolation.

Today, data is often presented around a single asset, like an IP address, and what you can say about it. That can be useful, but it often turns into more of a yes-or-no, red-light, green-light view.

When you step back and look at behavior, the roles these assets play, and how they relate to the organization behind them, you start to get a more holistic picture of what’s happening.

That’s where it becomes more valuable. You can begin to see patterns and trends across the environment and better understand what that activity represents, whether it suggests targeting or a vulnerability that’s attracting attention.

But getting there is contingent on having the right data. You need visibility into behavior and activity, not just confirmation that something exists or is reachable on the internet.

If organizations could combine visibility and signals across registrars, ISPs, and CERTs, how would that level of coordination improve threat prevention and risk management?

Jeff Foley:
If you had that level of coordination and visibility available—being able to bring together those different perspectives into one place, supported by the right automation and AI—you could make much stronger security decisions.

In that scenario, you would be able to build a significantly improved security program. You’d have better predictability and more preemptive capabilities, because you’d have a clearer understanding of what’s likely to happen and what is actually at risk. That would allow you to prioritize your efforts and resources much more effectively, and focus on addressing the right gaps in your program.

Now, I wouldn’t say that this is something organizations can easily achieve today. It’s not something you can expect to have at your fingertips yet.

But if there’s an opportunity to encourage this kind of collaboration, it’s definitely worth pursuing. Improving visibility across the internet, the environment we all depend on, is the right goal if we want stronger security outcomes.

We’ve touched on the Internet Abuse Signal Collective, where you’re joining as a Senior Advisor for External Exposure Research. Given your background in attack surface research, what made this initiative compelling to you, and how do you see yourself contributing to the Collective?

Jeff Foley:
I think the Collective is aiming to bring the kind of visibility and coordination that could take internet security to the next level. That aligns with what I’ve been trying to support through OWASP and the Amass project: raising the bar for the broader security community.

I’ve spent years working in information security and helping organizations move from attack surface management toward exposure management, and I’d like to bring that experience into the Collective to help make this next level of insight more accessible.

For me, that’s about enabling preemptive security, helping organizations understand what’s likely coming, prioritize effectively, and address the most important risks so they can operate with greater confidence.

What key message do you want readers to take away from this interview?

Jeff Foley:
For me, the core message is about improving internet security through better visibility and situational awareness around internet-exposed assets.

You need to understand what you have out there, but also what’s likely coming your way: who might be targeting you, and why. We’ve made progress on the first part, but it’s no longer enough. The next step is understanding what the other side is doing.

That’s where better data comes in. It helps you not only understand your own assets, but also see adversary behavior and make more informed decisions. Without strong, up-to-date data, even AI and automation won’t be very effective.

That’s also why the Collective matters. It’s about helping the community better understand the importance of this data and improving how we use it to strengthen security programs.
