If you’ve been managing PMax campaigns for any amount of time, you probably have a complicated relationship with them. The performance can be genuinely great. The transparency? Historically terrible. For a long time, “trust the algorithm” was less a strategy and more a coping mechanism. You could see what was converting, but you had almost no idea which asset groups were pulling weight, which channels were eating your budget, or whether Google’s AI was doing something smart or just confidently wrong.
That’s changed pretty meaningfully over the past year. The updates that have rolled out since early 2025 are worth talking through seriously. Not because PMax is suddenly perfect, but because the tools now exist to be actually strategic about it, if you know what to do with them.
Asset Group Segmentation: What It Actually Unlocks
For most of PMax’s existence, performance data was essentially stuck at the campaign level. You could see clicks, conversions, and ROAS, but none of it was broken out by asset group, which made it genuinely hard to make good structural decisions. Our team has been running third-party scripts to get around this; they calculate channel distribution by pulling display, video, and shopping asset data separately and deriving “search” as whatever spend is left over. Useful, but it’s an approximation, not a direct data pull. And even with that workaround, asset group-level breakdowns, conversion windows, and device splits by group still weren’t accessible. Without something like that in place, you were essentially flying blind on which parts of the campaign were actually working.
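The remainder-based estimate those scripts rely on is simple arithmetic, and it’s worth seeing why it can only ever be an approximation. This is an illustrative sketch, not any vendor’s actual script: the spend figures are made up, and real workarounds pull these numbers from asset-level reports.

```python
# Sketch of the "search as remainder" channel estimate (hypothetical numbers).
# Real workaround scripts pull display/video/shopping spend from asset-level
# reports; here we hard-code values just to show the arithmetic.

def estimate_channel_split(total_spend, known_channel_spend):
    """Derive Search spend as whatever the known channels don't account for."""
    accounted = sum(known_channel_spend.values())
    remainder = max(total_spend - accounted, 0.0)  # guard against rounding drift
    split = dict(known_channel_spend)
    split["search (derived)"] = remainder
    # Return each channel's share of total spend.
    return {ch: round(s / total_spend, 3) for ch, s in split.items()}

campaign_total = 10_000.0  # hypothetical campaign spend
known = {"display": 1_200.0, "video": 800.0, "shopping": 4_500.0}

print(estimate_channel_split(campaign_total, known))
# "search (derived)" absorbs every measurement gap, which is exactly why
# this method is an estimate rather than a direct data pull.
```

Anything unattributed (tracking gaps, rounding, channels the script doesn’t pull) lands in the “search” bucket, which is the core weakness of the approach.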
In early 2025, Google rolled out asset group segmentation across all PMax campaigns. From the asset group table view, you can now segment results by device, time, conversion action, and a “Top vs. Others” ranking that shows relative asset performance. You can also see days to conversion broken out at the asset group level, which is something we hadn’t been able to access before, even with scripts.
That last one is more useful than it sounds, and it’s genuinely become one of our favorite things to show clients. On accounts with longer purchase cycles (luxury goods, high-consideration DTC, anything with a meaningful research phase), days to conversion tells you something real about how an asset group fits into the customer journey. If one group consistently converts at a 10–14 day lag while another converts same-day, those groups need to be evaluated differently. Applying the same efficiency benchmark to both is going to cause you to make bad decisions. We see this constantly on accounts we audit, and it’s usually the thing that explains why a “low-performing” group was actually doing important work.
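One practical way to avoid that trap is to only evaluate spend that is old enough for the group’s typical conversion lag to have played out. The sketch below is our own illustration, not a Google feature: the group names and lag values are hypothetical.

```python
from datetime import date, timedelta

# Idea: when judging recent performance, only count spend whose conversion
# window has had time to close, so a slow-converting asset group isn't
# penalized for conversions that simply haven't landed yet.

def mature_spend_cutoff(today, typical_lag_days):
    """Latest date whose spend has had a full conversion lag to mature."""
    return today - timedelta(days=typical_lag_days)

today = date(2025, 11, 15)
groups = {
    "same_day_offers": 1,      # converts almost immediately
    "high_consideration": 14,  # typical 10-14 day lag
}

for name, lag in groups.items():
    cutoff = mature_spend_cutoff(today, lag)
    print(f"{name}: evaluate spend up to {cutoff.isoformat()}")
```

The point isn’t the code, it’s the discipline: two groups with different lags should never be compared over the same raw trailing window.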
One other thing worth mentioning: all of this data is now downloadable. Small quality-of-life thing, but if you’re building client reporting outside of the Google Ads UI (which we always are), it matters.
How We’re Thinking About Asset Group Structure Now
The biggest misconception about PMax structure is that it’s mainly an organizational exercise. It’s not. Asset groups are how you communicate business logic to an algorithm that has no idea what your margins look like, which products are strategically important this quarter, or which customers you actually want to acquire vs. retain. If you set it up loosely, it will optimize loosely.
The most common thing we see in audits is asset groups built around product categories without any thought to the underlying business logic. That’s a starting point, not a strategy. The more useful question is: what do you actually need to be true for this campaign to succeed, and does your structure reflect that?
For ecommerce accounts, margin is usually the right first lens. Your high-margin products and your clearance items should not be competing for the same budget against the same ROAS target. The algorithm will happily spend aggressively on low-margin bestsellers because they’re easy to convert, while your high-margin items that need more reach don’t get enough budget to generate data. Separating them with different targets gives Google clearer guidance about what “good” actually means for each group.
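A concrete way to set those per-group targets is to start from breakeven ROAS, which is just the inverse of gross margin. The margins below are hypothetical, but the formula is standard.

```python
# Breakeven ROAS: the revenue you need per dollar of ad spend to break even.
# For a product with gross margin m, that's simply 1 / m.

def breakeven_roas(gross_margin):
    return 1.0 / gross_margin

# Hypothetical: a 60%-margin hero product vs. a 15%-margin clearance item.
for label, margin in [("high-margin", 0.60), ("clearance", 0.15)]:
    print(f"{label}: breakeven ROAS is roughly {breakeven_roas(margin):.2f}")
```

A 15%-margin clearance item needs roughly four times the ROAS of a 60%-margin product just to break even, which is exactly why those groups shouldn’t share one target.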
Audience intent is the other axis we think about a lot. A group built around Customer Match lists of your existing buyers needs different creative, different messaging, and probably a different conversion goal than a group targeting custom segments based on competitor search behavior. Bundling those signals together limits what the algorithm can learn from each of them. We’ve repeatedly seen performance lifts from splitting audience signal types into their own groups and giving each one tailored creative. With the new segmentation data, you can actually validate whether that structure is working, which changes the client conversation significantly.
A few other structural things worth calling out:
Search themes are still one of the most underused levers in PMax. Google quietly doubled the limit from 25 to 50 per asset group in August 2025. Think of them less like keywords and more like intent signals you’re feeding Google to help it find the right traffic faster. Check the usefulness indicator regularly and swap out low-scoring themes. You want every slot working for you.
Brand exclusions got a meaningful update in 2025: you can now apply them specifically to Search text ads while leaving Shopping ads open to run on branded queries. For retail clients, this matters a lot. You want to protect brand terms in a dedicated Search campaign while still capturing branded Shopping impressions in PMax. Before this change, it was all-or-nothing. Now you can be more surgical about it.
Asset group count is not a metric to optimize for. A campaign with 15 asset groups and a $100/day budget is going to have groups that never gather enough data to exit the learning phase. Consolidation with good structural logic beats fragmentation with granular categories. We generally aim for 5–10 when budgets allow, but tighter accounts can perform well with 3–5 well-funded, well-structured groups.
Channel Performance Reporting: The Thing That Actually Changes the Conversation
If there’s one 2025 update that shifted how we talk about PMax with clients, it’s channel performance reporting. It launched in beta at Google Marketing Live in May and has been rolling out broadly since. As of November 2025, it’s available across all PMax campaigns. Genuinely, we were relieved when it started rolling out.
What it gives you is a breakdown of results across Search, Shopping, YouTube, Display, Discovery, Gmail, Maps, and Search Partners. Clicks, impressions, conversions, conversion value, and cost, all broken out by channel, with format-level detail and a downloadable distribution table. This is the data people have been asking Google for since PMax launched.
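If you’re pulling that downloadable table into your own reporting, a quick per-channel efficiency pass looks something like the following. The field names here are placeholders, not the export’s actual headers, and the rows are hypothetical.

```python
# Compute spend share and cost-per-conversion by channel from rows shaped
# like the downloadable distribution table. The keys ("channel", "cost",
# "conversions") stand in for whatever the real export uses.

def channel_summary(rows):
    total_cost = sum(r["cost"] for r in rows)
    out = {}
    for r in rows:
        cpa = r["cost"] / r["conversions"] if r["conversions"] else None
        out[r["channel"]] = {
            "spend_share": round(r["cost"] / total_cost, 3),
            "cpa": round(cpa, 2) if cpa is not None else None,
        }
    return out

rows = [  # hypothetical export rows
    {"channel": "Search",   "cost": 4000.0, "conversions": 80},
    {"channel": "Shopping", "cost": 3500.0, "conversions": 100},
    {"channel": "Display",  "cost": 2500.0, "conversions": 10},
]
print(channel_summary(rows))
```

Even this two-metric view surfaces the pattern described below: a channel can hold a quarter of the budget while contributing almost nothing to conversions.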
What it does most immediately is confirm or challenge your assumptions about where your budget is actually going. We’ve pulled channel reports on accounts that looked great at the campaign level and found real imbalances: significant spend going to channels that weren’t contributing meaningfully to conversions, or channels that were punching well above their weight but getting no creative investment. Neither of those shows up in campaign-level reporting alone.
The thing to understand about channel allocation in PMax is that you can’t control it directly. Google decides where to bid based on predicted conversion probability in real time. But you can influence it:
More search themes = more Search exposure
More video assets = more YouTube and Display
Campaign-level negative keywords (now available for all advertisers, up to 10,000 per campaign) reduce wasted spend on Search and Shopping, which effectively shifts budget elsewhere
These aren’t perfect controls, but they’re real levers, and the channel report is what tells you which ones to pull.
The other thing channel data is good for is distinguishing creative problems from channel problems. If Display is generating a lot of impressions but terrible conversion rates, that’s not necessarily a reason to deprioritize Display. It might mean your image assets aren’t strong enough for that format. The channel performance report now includes creative recommendations linked to an AI-powered image editor in Google Ads, which is a genuinely useful workflow for accounts where creative resources are stretched.
PMax and Your Other Campaigns: The Cannibalization Question
This is something we get asked about constantly, and it’s genuinely underaddressed in most PMax content, so we want to spend some time on it. How PMax interacts with your existing Search and Shopping campaigns depends a lot on how your account is set up, and getting it wrong is expensive.
Google’s documented priority system is straightforward in theory. If a user’s query is an identical match to an exact match keyword in one of your Search campaigns, that Search campaign takes priority over PMax. But in practice, there’s meaningful overlap that doesn’t get caught by that rule. A large-scale study by Adalysis across more than 3,300 non-retail PMax campaigns found that Search campaigns had higher conversion rates for overlapping search terms 84% of the time. When PMax wins the auction for terms your Search campaign is also targeting, you’re usually getting a worse outcome than if Search had shown instead.
The fix isn’t complicated, but it requires deliberate account structure. Run a dedicated brand Search campaign with a well-funded budget and exact match coverage for your brand terms, and apply brand exclusions to PMax to keep it from stepping in on those queries. If your brand Search campaign is budget-capped or has gaps in match type coverage, PMax will fill that vacuum. Not because it’s greedy, but because the system is designed to find conversion opportunities wherever it can. We monitor brand Search impression share closely after any PMax launch or restructure, and watch for volume drops that might signal PMax is absorbing traffic it shouldn’t be.
One structural thing we’ve started building into accounts more explicitly: aligning bid strategies between PMax and Search so they’re not inadvertently outbidding each other for the same queries. When PMax is set to a higher effective CPA target than your Search campaign for overlapping terms, PMax will win the auction more often, even when Search would have performed better. That’s a self-inflicted problem worth auditing in any account where PMax and Search are running simultaneously.
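A first-pass audit for this can be as simple as flagging any overlap where PMax’s effective CPA target is looser (higher) than the Search campaign’s. This is our own illustration with made-up targets, not a Google Ads API call.

```python
# Audit sketch: flag overlapping query sets where PMax's effective CPA
# target is higher (looser) than the Search campaign's, since that lets
# PMax win auctions Search would handle better. Targets are hypothetical.

def flag_bid_misalignment(pairs):
    """pairs: iterable of (label, pmax_target_cpa, search_target_cpa)."""
    return [label for label, pmax_cpa, search_cpa in pairs
            if pmax_cpa > search_cpa]

pairs = [
    ("brand terms",   60.0, 40.0),  # PMax looser than Search -> flag
    ("generic terms", 35.0, 45.0),  # Search looser -> fine
]
print(flag_bid_misalignment(pairs))
```

In a real audit these targets come from your bid strategy settings; the comparison logic is the part that matters.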
What the New Reporting Doesn’t Tell You
We want to be upfront about the limits here, because there’s a real risk of over-reading the new data.
Channel reporting shows you where budget went. It doesn’t explain why performance differed across channels. That interpretation is still on you. Asset group performance metrics can also be tricky because groups within the same campaign share traffic and audiences. A group that looks weak may be benefiting from or contributing to other groups in ways that aren’t visible in the data. Attribution within PMax is still messy, and the new reporting doesn’t fix that.
The “low performance” labels Google applies to assets are relative. They’re comparing your creative against other creative in the account, not against any external benchmark. Don’t pull an asset just because Google flags it. Ask whether it’s serving a purpose (awareness, a longer consideration cycle, a specific audience) before making that call.
And the learning phase is still real. If you restructure your asset groups or make significant budget changes, give the campaign at least two weeks before drawing conclusions. The data you’re looking at during that window isn’t stable.
Where to Start if You Haven’t Revisited Your PMax Setup Lately
Pull your channel performance report first. Even just knowing where your budget is actually going changes the conversation, both internally and with clients. From there, work through this list:
Check your asset group structure against your actual business goals. Are the groups organized around what matters, or around what was convenient to set up?
Separate your audience signal types if you haven’t already. Loyalty lists, prospecting segments, and competitor-based signals should each have their own group.
Max out your search themes (you now have 50 per group) and review usefulness scores regularly.
Add campaign-level negative keywords if you haven’t yet. Start with your search terms report. There’s almost always quick waste to cut.
Audit brand Search impression share to make sure PMax isn’t absorbing traffic your dedicated brand campaign should be capturing.
Check bid strategy alignment between PMax and Search campaigns to avoid inadvertent self-competition.
None of this is complicated in isolation. The hard part is doing it with enough consistency and patience to let the algorithm actually learn from the structure you’ve built. That’s always been true of PMax. What’s different now is that you can actually see whether it’s working.
Every account teaches us something new about how PMax actually behaves in the wild, and we’d genuinely love to hear how other teams are handling it. Are you segmenting by margin, audience intent, both? Have you pulled your channel report yet and found something surprising? Drop a comment or reach out to the RMP team. We’d love to compare notes.
SOURCES
Google Ads Blog: Channel Performance Reporting Coming to Performance Max
Google Ads Blog: Kick Off 2025 with New Performance Max Features
Google Ads Help: About the Channel Performance Report for Performance Max
Google Ads Help: Unlock More Visibility and Control in Performance Max
Search Engine Land: Top Performance Max Optimization Tips for 2026
Google Ads Help: Apply Brand Exclusions to Performance Max or Search Campaigns