A recorded panel session, roughly forty-five minutes including audience questions, on what changes inside a brand protection program once detection, triage, and enforcement are run by AI rather than around it. Designed for in-house IP, brand, and trust-and-safety leaders making 2025 plans.
What the panel covered
The session opened with a survey of the current threat landscape: generative counterfeits that no longer leak obvious model artefacts, impersonation at scale across paid social, and the steady migration of organised abuse onto closed platforms where third-party visibility is limited. Each of those shifts changes what a detection program needs to look for and how success should be measured.
The middle third focused on build-versus-buy economics. Where does an internal model genuinely pay off, and where is a commercial platform the right call? The panel's working answer, simplified: build where your data and your enforcement signal are unique to your brand, and buy where the underlying problem is shared with everyone else in your category.
The final third was operational. How do you structure a team around a model-first detection pipeline without turning analysts into rubber-stampers? How do you measure precision and recall honestly when the ground truth is itself moving? And how do you handle the inevitable false positive that ends up in front of a customer or a senior stakeholder?
Who's speaking
The panel features Maya Linden, Director of Detection at 2GeeksinaLab, who leads our marketplace and social abuse models, and Tomas Beck, Head of Enforcement Operations, who runs the analyst and escalations side of the program. Both have spent the last several years working with global rights holders across consumer goods, entertainment, and software.
They are joined by Priya Ravel, an in-house brand protection lead at a multinational consumer brand who has built and rebuilt detection programs through several waves of platform change. Her perspective grounds the discussion in the real constraints of running a program inside a large rights holder rather than from a vendor's vantage point.
Why it matters now
Two things are happening in parallel that make this the right conversation for early 2025. First, generative tools have lowered the cost of producing convincing counterfeit listings, fake support pages, and impersonation content to roughly zero, pushing abuse volumes past what any analyst-led program can keep up with on its own.
Second, the platforms that brands have historically relied on for enforcement are themselves under cost pressure, and trust-and-safety teams are leaner than they were two years ago. That combination — more abuse, less platform-side capacity — pushes more of the detection and triage burden onto the rights holder.
AI-led pipelines are not a complete answer to that pressure, but they are the only realistic way to keep human attention on the cases that genuinely need human judgement. The panel is honest about the limits as well as the gains.
Watch on demand
The recording, slides, and a written Q&A transcript are available on request. Use the contact form on this site, mention the AI and Brand Protection webinar, and we will send the link directly.
If you are evaluating a detection or enforcement program for the year ahead, we are happy to follow the recording with a working session against your specific surfaces. Mention that in the same request and we will route it to the right team.