# Simo Ahava (Co-founder: Simmer, Partner: 8-bit-sheep, Co-host: Standard Deviation Podcast) on Teaching Technical Marketers When AI Removes the Incentive to Learn, Why Critical Thinking Is the Skill AI Can’t Replace, and What Agentic Commerce Means for Data Layer Architecture

Published:

Knowledge Distillation Podcast episode 14 cover featuring host Katrin Ribant interviewing Simo Ahava, digital analytics expert and Google Tag Manager specialist, about AI in education, data layers, server-side tracking, and the challenges of agentic commerce and measuring AI-driven traffic.

Key takeaways:

  • AI lowers learning effort but weakens deep understanding
  • Analysts must understand fundamentals to evaluate AI outputs
  • Agentic commerce requires structured data by design
  • Legacy tracking breaks when AI agents interact with systems
  • Measuring agentic traffic remains an unsolved problem

In this episode of Knowledge Distillation, Katrin Ribant speaks with Simo Ahava – quite simply the person the entire digital analytics and technical marketing community turns to when they need to understand how things actually work. Simo has been writing about web analytics, tag management, and the Google marketing stack since 2010, and his blog at simoahava.com has become the definitive technical reference for anyone implementing Google Analytics or Google Tag Manager. A Google Developer Expert in both platforms from 2014 to 2025, a multiple Digital Analytics Association award finalist, and one of the most generous knowledge sharers the industry has ever seen – if you’ve ever asked a question on Measure Slack, there’s a good chance Simo answered it, thoughtfully, for free. He co-founded Simmer with his wife Mari Ahava, an online learning platform for technical marketers that has become the gold standard for courses on server-side tagging and BigQuery. He is partner and co-founder at 8-bit-sheep, a Helsinki-based digital services consultancy, and co-hosts the Standard Deviation Podcast with Juliana Jackson.

The conversation opens with what Simo calls the educator’s dilemma: AI makes it trivially easy to get answers, which removes the incentive for deep learning. His students take course content to an LLM, get a conflicting answer, and bring the contradiction back – without the baseline knowledge to judge which is correct. Katrin pushes back: practitioners doing real analytics work need to understand fundamentals like context windows and attention mechanisms. They land on a distinction – Simo’s concern applies to learners seeking quick answers, Katrin’s to practitioners maintaining context continuity across complex workflows.

The episode then pivots to agentic commerce. Simo draws a direct line from his data layer and server-side tracking expertise to the challenge of designing websites for AI agent access. Tag management systems have let organizations survive with poorly structured data for years. Agentic commerce breaks that – agents need structured data by design, not retroactive patches. Simo warns against over-optimizing for agents at the expense of human UX, and raises the unsolved measurement problem: how do you track agentic traffic when AI agents have no reason to identify themselves?
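The "structured data by design" point can be made concrete with a small sketch. The following is a hypothetical example (all product fields and values are invented for illustration) of exposing product data as schema.org JSON-LD – the kind of machine-readable markup an AI agent could consume directly, instead of scraping the rendered page the way legacy tracking assumes humans do:

```javascript
// Hypothetical sketch: expose product data as schema.org JSON-LD so an
// AI agent can read it directly instead of parsing rendered HTML.
// The product object and its values are invented for illustration.
const product = {
  name: "Trail Running Shoe",
  sku: "TRS-001",
  price: 129.0,
  currency: "EUR",
  inStock: true,
};

// Map an internal product record to a schema.org Product / Offer graph.
function toJsonLd(p) {
  return {
    "@context": "https://schema.org",
    "@type": "Product",
    name: p.name,
    sku: p.sku,
    offers: {
      "@type": "Offer",
      price: p.price.toFixed(2),
      priceCurrency: p.currency,
      availability: p.inStock
        ? "https://schema.org/InStock"
        : "https://schema.org/OutOfStock",
    },
  };
}

const jsonLd = toJsonLd(product);
// On a real page this object would be serialized into a
// <script type="application/ld+json"> tag in the document head.
console.log(JSON.stringify(jsonLd, null, 2));
```

The design choice this illustrates: the structured representation is generated from the same source of truth as the page itself, rather than patched on retroactively by a tag manager.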

All episodes on our website: www.ask-y.ai/knowledge-distillation-podcast

Learn more about ASK-Y: www.ask-y.ai

Chapters:

  1. 00:00 The Journey of an Educator and Consultant
  2. 07:10 Navigating the AI Landscape in Education
  3. 09:40 The Challenges of Teaching in the Age of AI
  4. 18:05 Understanding AI: The Need for Deep Learning
  5. 24:31 Innovative Approaches to AI Education
  6. 27:54 Understanding Different Learning Populations
  7. 29:57 The Role of UI/UX in Modern Education
  8. 32:32 The Importance of Building Knowledge
  9. 34:52 Incentivizing Deep Learning in Education
  10. 37:13 Reading vs. Writing Code
  11. 39:11 The Challenges of Software Development
  12. 42:51 Shifting Focus from Code Production to Application
  13. 49:16 Agentic E-commerce: A New Paradigm
  14. 54:33 Balancing Human and Agentic Experiences
  15. 58:17 Measuring Agentic Commerce

Katrin (00:00)
Welcome to Knowledge Distillation, where we explore the rise of the AI analyst. I'm your host, Katrin Ribant, CEO and founder of Ask-Y. This is episode 14, and my guest today is none other than Simo Ahava. Simo, you need no introduction, but I'm going to do a little bit of one anyway. So I think, you know, if anybody's ever implemented a Google tag,

if they're on the Measure Slack, they probably ended up somewhere reading your comments on some very technical, very specific issue – some very thoughtful comment. I think you're probably one of the most active contributors. For how long? How long have you been doing this?

Simo (00:41)
Well, first of all, thank you. Thank you for having me. I guess you could say that I've been writing about these topics online since 2010, maybe. So that's 16 years. But certainly I've been a – let's call me a power user of the Google marketing stack ever since Google Analytics became a free tool in 2005, 2006. And before that with old-school analytics tools, but I didn't blog about them actively

back in the day. But yeah, certainly since 2010.

Katrin (01:12)
I kind of didn't want to define what you do, because you do many things. You're, I suppose, maybe mostly defined as an educator these days, but you're also a consultant. How does that all work together?

Simo (01:26)
It doesn't, really. So I have to shift the balance every now and then. For the longest time, I was actively a consultant, working for various employers and then with my own business. This was, let's say, from 2013 to 2020, when I was the most active – also as a blogger back then. Blogged a lot, wrote a lot of just...

tools, apps, did a lot of consulting. And then when the pandemic hit, I wanted to slow things down. I had become quite frustrated with the – let's call it the consultancy paradigm – and I wanted to do something else.

Katrin (02:12)
Whenever I talk about consultancy with anybody, I like to bring up that somebody defined this to me one day – when I asked them to consult for Ask-Y, actually – and they told me: no, consultancy is not good for the soul. And I think for most people, that's true.

Simo (02:27)
Yeah.

It is draining. Some people find it immensely rewarding to help others with their problems. And if it were about helping others with their problems, then consultancy would be very rewarding. But I find that, especially when you enter the enterprise world, it's less about sharing expertise and more about just learning to play the game. And I don't like playing the game. I think it's time-consuming. It's kind of just...

Katrin (02:37)
Yes.

Simo (02:59)
It's not respectful of people's time and effort. So that's a whole other discussion, but, you know, I got frustrated with it, and I was in a lucky enough position to scale down the consultancy. Because then my wife and I thought – because I had been doing online trainings for a long time for different companies, like CXL and 121Watt in the EU – I thought, why not, you know. The

pandemic had hit, and companies were turning off their in-person training setups because they didn't want that in-person contact, and they were moving to online training. So we thought, you know, let's try something ourselves and just build a quick proof of concept to see if we could do an online course. We wanted it to be packaged and self-taught, so that I don't have to be present.

So this is not like an in-person training held online – it's actually pre-recorded and packaged, and then you buy the course. Anyway, we did a course on server-side tagging with Google Tag Manager, and it was very successful. And so now we're still on that path. We founded a company called Simmer with my wife, Mari, and we've been working on it ever since. And I am still doing consultancy, because as much as I dislike it,

if I do very little of it, I can bear it, and I find that I can focus well.

Katrin (04:30)
When you can choose your engagement,

I suppose, there's also that aspect, right? When you can choose the engagement where you think you're going to be effective, you get gratification out of it.

Simo (04:38)
Yeah, to a certain degree. I mean, I am still lucky enough to be able to do that. But I think that the less consultancy I do, the less I have the freedom to do so. Because obviously, when people get used to me saying no, they won't approach me again. So it is a difficult game to play. Luckily, I'm actually working as part of a group of consultants, so it's not just me – I have a kind of safety net with a company called 8-bit-sheep.

But with Simmer, you know, it's worked fine. So I definitely identify now mostly as an educator, which is a very, very lovely role to take, because I don't have to pretend, I don't have to be a vendor mouthpiece, and I don't have to think about how my customers or clients will react to what I say, because I don't really have them anymore. So I can really be just myself

and try to figure out how to take these kinds of technical concepts and package them in a way that makes sense. And now, obviously, with the rise of generative AI, it's become a whole different thing. You know, Simmer has been around for five years, actually almost to the day. And somewhere in the midpoint of that, we started seeing the rise of GenAI, and it certainly has...

changed a lot in terms of how people consume education and learning materials. So that's something that we're still contending with – how to pivot to that new reality.

Katrin (06:12)
And so thank you for that. So, well, mostly we're going to talk to you in your educator persona here, because that's really what we do: we explore the rise of the AI analyst, and we look at how people... I'm thinking mostly of people who are beginning or mid-career, not necessarily people of my generation, because when you have 20, 25 years of career behind you, it's a little bit different, right? But if you have those years in front of you,

it's a real question what you need to upskill in – how you should invest the time that you have to better yourself, and in which direction to educate yourself. But before we go into that, first I want to say something I've wanted to say to you forever: if I were in need of consultancy in Google, Azure, which I will be at some point, I will hire 8-bit-sheep just for the name. Not literally just for the name.

Simo (07:08)
Thanks

Hehehehe

Katrin (07:11)
No,

because it's actually a really, really cool concept. I think that this collective of experts is a really, really good way to go about engagements in our industry. Functionally, I think it's a really smart move. The other thing I wanted to tell you is, you know, you didn't mention that you are actually also a podcast co-host – you do Standard Deviation with Juliana Jackson – and, you know, it's...

Simo (07:33)
True.

Katrin (07:39)
I listen to every episode of it. I think it's great for the audience. If you don't listen, you really should. It's great. It's fun. It's entertaining. It's really, really engaging and full of actual content. But mostly, I would say it is industry defining in the level of preparation and structure you put into it. That really, really is a thing. And I hope you...

Simo (08:05)
I'm going to have to.

That's... thank you so much. A lovely shout-out. Juliana has obviously put in a lot of work. I kind of joined the bandwagon, because, you know, Juliana is a brilliant, brilliant individual in our industry and deserves all the praise she gets. And I'm just tagging along for the ride. We have a lovely chemistry, and our podcast is mostly about kind of

Katrin (08:25)
to do.

Simo (08:28)
talking about current phenomena in a lighthearted manner. I do enjoy that somebody calls us structured and organized, and I'm very happy that that's what it seems like, but the reality is anything but. We are the most poorly organized – but I have to say, I think that's why it's so successful. Yeah, it's terrible. Like, we have zero prep. Yeah. No, no, no. It's horrible.

Katrin (08:40)
It was very funny, but it's really part of the problem. No, it's one...

No, it's absolutely one...

Simo (08:53)
But I gotta say,

maybe that's the reason why it's so fun to listen to. And obviously, we know what we're talking about, so that's where the structure comes from. And Juliana is a great editor. But yeah, sometimes we laugh, like, we have nothing prepared for today.

Katrin (09:06)
That's really – very seriously,

now, that really is, I think, where the power of the podcast comes from: your chemistry, and the fact that you're both really, really deep specialists in what you're talking about. So you can just riff off each other, and it's really, really engaging and entertaining and informative. So, you know, I'm kind of cheating a little bit here, because I listened to the two last episodes, obviously. So I have some idea of...

what the answer to this is going to be, but I'd love to hear it from you. I've been thinking...

Obviously as an educator, you put out these really, really in-depth courses about deeply technical subjects. And with AI today, I'm sure you're rethinking profoundly what it means to code, what it means to architect, what it means to learn how to do all these things. How are you thinking about your content these days?

Simo (10:05)
Yeah, that's kind of the million-dollar question. Let me say that it's really frustrating, actually, to be an educator in this space right now, because we are in this transitional stage where people are kind of on the AI trail without necessarily thinking about what they're doing. For people who already have the expertise, I think it's a very logical next step in their careers

to take that expertise and formulate it into a prompting approach instead of coming up with the answers yourself. And as long as you have the expertise, you are able to validate on the go – the human-in-the-loop approach – which I think is great. And that's how I approach using AI as well. I don't use it to replace

my expertise, I don't use it to replace my knowledge; I use it to enhance it, I use it as a sounding board, I use it to validate my approaches. But I always have the last say, because I know what I'm asking, and I know what I'm reading, and I know how to interpret it. I think the biggest problem is with the entry-level people and juniors who are entering this industry, or entering these technologies, without prior understanding of how they work, and their first

contact with this technology comes through the distillation of that AI response. And all of my empirical findings and all of my personal anecdotes about this have shown that most of the time it's not successful. People get an answer to a problem very fast, and the answers are often very good, but they lack the ability to discern whether it actually was good or whether it's leading them down a wrong trail,

which then very easily becomes a feedback loop, where they use that incorrect output to fuel their next approach, and then the AI compounds on that, and it becomes this whole mess based on an incorrect premise to begin with. We see this among many of our students at Simmer, where they take something that we teach in the course and they go to the AI with it – which I think is a great approach:

take what you learn and then enhance it by asking the AI for more context or for further resources. But then it becomes an incredibly time-consuming thing, where they might get a conflicting answer from the LLM, and then they bring that question back to us, saying, hey, you said this, but ChatGPT said that, and now I'm trying to reconcile which is correct. And then it becomes our job to do the student's job.

So it becomes our job to tell them why the agent's response was actually incorrect – or, in the rarest of cases, why our approach might have had a flaw. But typically it's the former, because we do know what we are teaching. So it's very frustrating that the entry-level, kind of a priori knowledge doesn't exist, and you lose the ability to think critically because you are externalizing all of your processes to that AI. And I'm not saying that people do this regularly –

it's very extreme to externalize everything – but it does remove some of the incentive for deep learning about certain topics, because you can just externalize that. And technical marketing just happens to be one of those topics that is very AI-friendly, because it's structured information. JavaScript, SQL – it's code. You can get those AI tools to give you very, very good code these days.

So why should you even learn the deeper concepts when you can just go to the AI and prompt it for an answer? And if your only job is to get an answer, then of course you don't need the learning. But if you want to be a self-sufficient practitioner who can confidently say, "I used the AI to get this answer, I'm coming to you with the response, and I validated that it actually works" – then I think you do need deeper learning. So I am kind of frustrated

about how it's going, but I also know my audience well enough to know that it's almost useless to fight against it, because this is kind of a tsunami at the moment. So what we are trying to do right now – and this is a very elaborate answer to your question, I'm sorry about that – is figure out how we proactively enter that game, recognizing that our students are most likely using AI assistants.

Katrin (14:34)
So thank you for that.

Simo (14:48)
We want our content to still be there to give them that deep learning, but we want to teach them how to use the AI responsibly if they want to enhance their learning with our current content. So we are trying to figure out how we become kind of AI-assistant assistants. We're not there yet – we haven't put this into practice yet – but I think that the very next course that we do, whatever it will be, will be restructured in a way that takes into consideration that

most of our students will be using AI assistants. And then we'll just try to figure out how we pitch ourselves. We want to be the people who teach the experts, the validators, so that if somebody takes our course, they can be that expert in their organization, validating the AI output with a deep understanding of those concepts. That's where we are right now, but we haven't put anything into practice yet.

Katrin (15:42)
That's really fascinating to hear, because obviously we are all sort of in that area. Ask-Y builds software, but to the same degree, Ask-Y builds software that has AI as an engine at the center, because why wouldn't you these days, right? Like, it would not be reasonable not to do that.

Simo (16:04)
Mm.

Katrin (16:08)
And so because it's AI-native, we have to think about this whole different experience of users with generated code, and with using AI to generate code. Analytics is at this weird sort of place where a lot of it – most of it – can ultimately be reduced to code, but not the architecture around it: the project planning, the task planning, the "what am I, why am I actually doing this?" Like, literally

the name of the company. Like, why am I actually doing all of these things? What am I trying to achieve? How am I realistically going to achieve this within the context that these tasks exist in, in the world, right? Whatever organization, whoever it is for, whoever needs to make a decision with this. There are, I think, all of these aspects that are related to the craft of being an analyst. And there's a bunch of aspects that are related to

the logical understanding of context in digital analytics. Where does the data come from? How is it generated? What are the inherent flaws in the generation of the data? Ultimately, what we are creating is a reduced model of the world. We need to understand where we are compressing the information, because otherwise, as we are manipulating the data and getting answers, we are not aware of how much information we have compressed in the process.

Simo (17:23)
Mm.

Katrin (17:36)
And then when we go back to the insights, it's like, whatever – because we have lost too much information in that process of compressing into the data model and then decompressing into insights about reality. So I think that's really something that needs to be there. But also, I think there's... I'm really wondering what you think about this one. To me, I was thinking: technically, if I were in my late 20s or early 30s today,

What would I think about really learning deeply to use AI as a tool for the rest of my career? First thing that came to my mind was I would want to really deeply know about how LLMs work, like really the internal mechanisms of it, because it's my tool for the rest of my career. It's going to be my central tool. I need to really know and understand the tool.

And so I'm wondering whether you've been thinking about doing a course specifically about how LLMs work – and how LLMs work specifically in the remit of creating analytics workflows, the code and the logic around it. Sorry, that was not in the outline that I sent you. That's the course as I would like it to be.

Simo (18:50)
No, no, no – no worries.

No, this is exactly how it's supposed to work. Like this is exactly how it's supposed to work.

No. Yeah, we have been considering courses around AI. I think the problem is that we are a very small company with limited resources, and trying to create a course about something that changes so fast and is constantly evolving is risky. The other problem is that how LLMs work is kind of a theoretical question, and we are trying to create very task-based, actionable content. That's kind of our niche.

Katrin (19:23)
Mm-hmm.

Simo (19:25)
We have been thinking about maybe a course on how to create your own LLM – like a company brain, a marketing brain. That might be something worth pursuing, but it kind of goes against... I don't know how to frame this; I'm very skeptical. I do think that you are absolutely right that, in a perfect world, if somebody's entering the digital industry – it doesn't matter what you do: developer, engineer,

HR, PR, marketing, advertising – if you're entering this industry, then obviously it would be great if you understood how LLMs work. I think that used to be just a de facto rule: if you wanted to do something, you knew how it works, right? That's like the engineering principle. And especially in technical disciplines, it's absolutely required – or used to be. But I think

that AI and LLMs are so abstracted – like, the underlying model is so difficult for a lay person to understand – that people are not even attempting to do so. And one of the reasons why they're not even attempting to understand how these things work is that there's no incentive for it. Like, why should you know how an LLM works, other than the old-timers saying you should, because that's how the world used to work? The barrier to entry is so low.

Katrin (20:47)
disagree.

Simo (20:52)
10 years ago, if you wanted to get into tagging, you had to learn how tag managers work. You had to. No matter how easy the Google Tag Manager interface is, there was no way to just open it and start doing. You had to know how it works. Right now, you can go to ChatGPT and type a prompt asking it to do whatever – you know, build you an app, write you documentation, give you ideas for a marketing campaign, give you ideas for how to set up Google Ads if you've never used it before.

And you'll get a pretty nice step-by-step on how to do that. And you have no clue where that information comes from. You have no idea if it's actually responsibly sourced. You have no idea of the energy consumption it took to give you that answer. But it gives you the answer. So why should you go beyond that and learn how it works? Because that would most likely be very time-consuming, which is time away from prompting and doing your job. I would say that there's a very small

group of people in the world who actually need to know how an LLM works intimately, and the rest can just enjoy the fruits of what these AI engineers build. So I think that's the problem. The incentives just aren't there.

Katrin (22:05)
with that.

I'm going to disagree with that because

I'm here at Experimentation Island, which is a conference about experimentation, CRO, et cetera. It's actually a really great conference. If you're in CRO, or interested in CRO in any way, you really, really should come. It's really, really cool. It's got a MeasureCamp vibe, but also with planned content, lots of experts, incredible knowledge. Yes. Yeah, exactly. It's really fantastic.

Simo (22:33)
And there's an EU version with Conversion Hotel, right? So that's the kind of EU version.

Katrin (22:40)
So anyway, people here are obviously maybe a little bit more advanced than people who are just at the beginning of their career. And they're in positions where, inside their organizations, they start working with AI and they start working on workflows. Experimentation is pretty intricate – working on a number of workflows to make some of the steps of their

workflows more efficient. And I spent basically my entire time talking to people – with people literally coming to ask me: so how do you do context management? What is context continuity? How do you actually maintain that context in a complex workflow? Because for as much as you say you can go to ChatGPT

and ask a question and get an answer – it's going to be average, it's going to be like the compression of averages, really good enough these days for that. If you are actually working in analytics, actually doing what are ultimately quite complex workflows – even relatively simple things are really quite complex workflows – you are accumulating so much context, so many tokens, that I don't really see

how you can possibly use the tool well – and I mean well as in with answers that you can trust – if you don't at least understand the concepts of tokenization, the context window, the attention mechanism, and how those interact. This is, in my opinion, the strict minimum that you should really understand, because – and people...

Simo (24:28)
Mm.

Katrin (24:31)
Most don't – most people really don't. Most people confuse having a long context window with having attention across that context window, and what that means for the answer they're going to get; and then they run into a whole bunch of issues. And I feel like – so, I'm just going to plug my own education efforts here. I don't think you've actually seen this, but I have created these little guys, the floofies.

So these are little creatures. Because I agree with you, right? Most people don't necessarily have the bandwidth to actually go into reading papers, and it changes quickly, and all of that. But I felt like, if you make it so simple that you can do it with cartoons, you can actually explain quite complex concepts. So the floofies are essentially the parameters in the LLMs.

And the metaphor really holds, in the sense that their ears are the context window, their eyes are the attention mechanism, and their fur represents the parameters: it's spiky when they're not trained, and when they're trained it becomes smooth, et cetera. And I started creating – because, I mean, you know, we are in the era of AI and you can do just absolutely amazing things these days – I started creating these bite-sized

videos generated with AI. These videos teach concepts like context poisoning, attention dilution,

and how LLMs get trained – in bite-sized videos of two to six minutes, with takeaways on how you should prompt afterwards: how you should actually modify your prompts in order to alleviate some of these issues. So that's my modest non-educator effort at this. Thank you, because I have really encountered

Simo (26:11)
Mm.

No, that looks great.

Katrin (26:35)
as we work with design partners, that if people don't have these basic notions, it's very difficult to use AI practically, in a real analytics workflow, in an efficient way. I don't know if that in any way sort of, you know, modifies your view, but I'm just saying I would really like a Simmer course about that, actually.
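The concepts Katrin lists – tokenization, the context window, and what falls out of it – can be illustrated with a deliberately toy sketch. Everything here is invented for illustration: a naive whitespace "tokenizer" and a tiny fixed window; real LLM tokenizers and context windows are far more sophisticated, but the failure mode is the same:

```javascript
// Toy illustration (not a real LLM): a naive whitespace "tokenizer" and a
// fixed context window, showing how early turns scroll out of scope once
// the token budget is exceeded. Window size and messages are invented.
const CONTEXT_WINDOW = 12; // tokens the "model" can see at once

const tokenize = (text) => text.split(/\s+/).filter(Boolean);

// Append turns to the conversation; once over budget, the oldest tokens
// are dropped and can no longer be attended to at all.
function visibleContext(turns, windowSize) {
  const tokens = turns.flatMap(tokenize);
  return tokens.slice(-windowSize); // only the most recent tokens survive
}

const turns = [
  "define the KPI as weekly active users",
  "segment by acquisition channel",
  "now chart the result by week",
];

const ctx = visibleContext(turns, CONTEXT_WINDOW);
console.log(ctx.join(" "));
// The first instruction ("define the KPI...") has partly scrolled out:
// no amount of attention helps with tokens the model never receives.
```

This is the distinction Katrin draws: a long context window bounds what *can* be seen, while attention determines what, within that bound, actually influences the answer – and a workflow that silently overflows the window loses context continuity.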

Simo (27:00)
You've already

got the educational context. What do you need another course for? That looks great – that's a really great approach. So you're absolutely right. But I do think you are talking about a small population of AI users who are advanced enough to think in those terms – who are moving from one prompt to the next and trying to make sure, or at least hoping, with the kind of...

effort, that the context window stays alive and stays the same, and that they have enough tokens to spare for that. I'm thinking of this from a purely practical point of view – as an educator again – of people using AI to enhance their learning, or to produce a single-context answer for their code, which I think, anecdotally, is the way that most people would use these tools.

They need a piece of code. They need an answer to a question. They need a solution to a problem. I'm not talking about creators, and I'm not talking about designers – they have completely different workflows. I'm talking about people who are taking courses in order to learn something: how to use a tool, or how to solve problems. And for those, I think...

my paradigm applies; and for the creators and the designers and the developers and the experimenters and the marketers who need that larger context, your paradigm applies. And I don't think they are mutually exclusive.

Katrin (28:32)
That makes sense, actually – you're right, it's a different population. Your audience is people who are trying to learn a technique, a tool, something specific. We're talking to people who are trying to actually do something in the real world, assembling the knowledge of all of these different pieces that go into a workflow, and they're trying to accomplish something that is valuable for the company. Which is a

Simo (28:56)
Yeah.

Yeah, and I think that, at the same time – again, in a perfect world – we would take the time to learn these things. I think it's a shame that we are taking technology for granted. It's always been a shame. It's the abstraction problem, to a large degree, where, understandably, complex mechanisms are abstracted to make it easier for users to approach them. This is why we have user interfaces,

why we have user experience studies. At the same time – and I think we're going to talk about this later in the podcast – the importance of that UI and UX is actually becoming smaller and smaller these days, because we have more and more agentic AI, where it does the tasks for you, where you no longer have to click around a user interface anymore. So the need to learn how tools work is becoming less and less. And again,

this kind of revolves around this new sort of model of education, where it's less important to understand how things work and more important to understand how to get answers quickly and how to get solutions quickly. So I think we're just skipping over the actual architecture of knowledge, in a way, because we don't have to

hold on to that knowledge anymore – we are externalizing it. And of course, I am exaggerating a great deal. Luckily, we are not in a place where people are just externalizing everything. But I do see, just following along on LinkedIn – which is kind of a terrible thing to do these days, because it's such an insufferable place – just looking at what kinds of things people are talking about these days: you know, "I built an app,

It gave me 10,000 rows of code and I didn't write a single line myself, which sounds cool on the surface. But once you go below the surface, it's actually kind of, it's sad in a way because you produced a great amount of information. You distilled information from so many different sources and you don't really know how the thing you built works. So we're kind of in the Star Trek replicator phase where you just ask a machine for a couple of Earl Grey hot.

and it gives you a cup of tea and you don't actually have to know what's in that anymore. And I'm a Luddite for saying this, but I really think, ⁓ I have no pedagogical evidence to back this up, but I'm sure somebody could find it. But I really think that the fact of actually taking the time to understand how something is constructed building block by building block, which usually means that you build those blocks yourself, you write that code yourself, just wires your brain in a different way.

And you become more patient, you become more industrious, you become more curious, you become more critical because you've built them yourself. And again, I'm not saying you shouldn't use AI for that. think there's an incredible amount of things you can do with AI assistance on every step of the way building those blocks. But I do think that ⁓ having that human in the loop helps keep your brain wired in the correct kind of way and makes you more patient.

in a world that seems to reward speed and just expedience over everything else, which I think is a very, very sad state of things. But this is the Luddite Simo talking. I think that I'm trying to be hip and edge on the edge, but it's very difficult.

Katrin (32:31)
I think that on top of what you said, which I agree with wholeheartedly, with Luddite Simo on that aspect, I also think that understanding the building blocks, really understanding the fundamentals and how they connect and integrate, allows you to manipulate them, to abstract them,

Simo (32:48)
Mm.

Katrin (32:55)
to manipulate the concepts while keeping an anchorage in reality, because you know how these things work together. And that is where effective creativity comes into play, because that's how you can actually create new solutions, new thinking, which obviously is not something that we get very much of from AI. And it's also, I think, one of the things that you're going to want to develop as an individual

if you want to have a career where you add value on top of the tools that you use. Ultimately, your ability to do things that the tools can't do in your place is going to be paramount. And I think that, you know, what I've kind of called in my head grounded creativity is really paramount, and it's predicated on understanding the building blocks really deeply,

to be able to manipulate them without losing contact with reality.

Simo (33:58)
Yeah, true. But it is an educator's dilemma. How do we cater content to satisfy those deep learners who want to understand how the mechanisms work, and also, at the same time, those who are just coming for answers? So for an educator, it's very, very difficult, because we used to have a single, kind of, not homogenous, but single approach

Katrin (34:00)
So.

Simo (34:24)
to learning, which is kind of teaching things one by one, teaching the building blocks, teaching a skill progression. So if you wanted to learn to use server-side tagging, you had to go through a certain progression. You had to understand how the browser works. You had to understand how JavaScript works, how tagging works, how the server-client infrastructure works. And I think that there are just so many new shortcuts introduced that it makes it difficult as an educator to incentivize learning. This is all about incentives, really. Like, how do we incentivize learning?

And I think that, like, I have two kids, one is in school and the other is in preschool. And they don't use AI, obviously. Like, we've been playing around with it, but they don't use it in school yet. I know that it's coming in subsequent grades. But I am kind of concerned about the long-term effects on education. So much of this is pre-AI, of course; we've had the Internet and Wikipedia as

trustworthy sources in academic research for a long time. So I'm still kind of curious to see, you know, in 10 or 20 years, when we have data, how has human behavior changed with this thing that's going on right now, assuming it continues, assuming it's not just going to stop, which is very unlikely; this tsunami of AI progression isn't stopping any time soon. But it does,

as a parent and as an educator, give me pause, trying to figure out, like, how do we incentivize deep learning? And how do we make people curious beyond just a single prompt? Like, how do we make them question what they see, tear things apart just to understand how they work, when that doesn't seem like a necessary skill anymore? So this comes from a place of concern as well.

Katrin (36:12)
In my experience, that comes when people hit that wall, right? It's all about incentives, and people hit that wall when they start actually working with it, and inevitably a hundred percent of people hit that wall. And so, going back to my point, you know, learn about LLMs, and then what I would also invest in is this: natural language is now my interface to, you know, the world, to the tools.

Because through natural language, I can talk to the LLM, and then the LLM can talk to my pipelines, my database, everything, right? So that's literally how we constructed Ask-Y. Conceptually, Ask-Y is context continuity across all the steps in your analytics workflow, with natural language as an interface to everything you work with. So what I now really need to be good at

is, one, prompt engineering and context management, practically prompt engineering, and it's a skill, obviously, right? And it's a very, very fast-evolving skill. So I understand that in terms of your courses, that's a challenge. But I think the other challenge is that I need to learn how to read code and understand what the code does, which is different, I think, from writing code. Because reading

code that you haven't written is different. And I understand that you don't hit that wall immediately, right? But you do hit that wall at some point. At some point you hit the wall where, and obviously anybody who does software engineering and builds anything complex knows this, you can generate excellent code today with AI. Maybe not super efficient, but you know, the thing is, you don't need code that efficient anymore.

Simo (38:00)
Yeah.

Katrin (38:06)
Not as efficient as you would have needed 20 years ago, because compute is cheaper and, like, everything, all of the constraints, are looser and cheaper and give you more freedom, more degrees of freedom. However, you have moved the complexity elsewhere. And in the case of software engineering, you move it into the merging layer. Merging is really hard. And in order to merge, you need to really understand what's happening. It's the same in analytics. Ultimately, when you have

Simo (38:27)
Mm.

Katrin (38:35)
complex steps of logic layering, you do need to know how it works, because how are you going to trust the result otherwise? How are you going to get in front of a stakeholder, show, I don't know, a chart, whatever, right? And have them say, "Ah, revenue, interesting, where does that come from?" and be able to answer, right? Which is ultimately the goal. So I do think, and I'm really curious about your opinion on that:

Simo (38:54)
Yeah. Yeah.

Katrin (39:04)
Writing versus reading code. Do you think about, like, have you thought about... I'm sure you've thought about it.

Simo (39:11)
Well, I mean, one doesn't really exist without the other. So writing is a manual task. It's an activity. It's generating something out of nothing. Reading is validation. Reading is exploration. Reading is kind of looking at what has been written, in the passive voice, because we don't know who wrote it. I think that in terms of coding, you can get...

You can kind of BS your way through many, many, many stakeholder discussions without having to understand a single line of code, just generating it. The thing I have to say in favor of LLMs is that the interfaces do a great job of explaining how the code works when they produce it. And you can actually ask them to explain it even further, which is great.

I think the problem is more in terms of how the code works. Of course, if you have an intimate understanding of code, you can spot those inefficiencies you mentioned, which is very important. I think that when you start building apps for production, efficiency becomes more important than just having dynamite code. You have to build efficient code. You have to build secure code.

You have to build code that is well documented, that follows a style guide. You have to build code that has access management in place. You have to build code that is correctly scaffolded. You have to build code that understands the changing context of where the code is running. Those are skills that I think LLMs are going to be struggling with, because it's not their job to give you that kind of structure unless you very, very specifically prompt for it.

It's not their job to teach you how to read their code unless you very specifically prompt for it. And then, you mentioned context windows many times; I think that's an absolutely important concept here, because if you don't consider the context window and you ask an AI to generate 100 lines of additional code into an app that already has millions of lines of code, it doesn't necessarily understand how that entire app works; it just generates the code, and it can be completely intrusive.

And it can actually be breaking. So I think that the bigger problem is not people understanding how code works; that is a skill that I have to grudgingly admit is becoming less important. I think the bigger problem is not understanding how software projects work, what the code is actually being generated for. You don't understand how the different levels of the project work. You don't understand how it's tested. You don't understand how it's protected, how its database connections work, how efficient it is,

how it's consuming scalable cloud resources, how it's building a credit card bill, how it's doing SQL queries, how it's doing multi-threading. If you don't understand those things, I think it's very, very dangerous to build anything for production, if it's anything that requires any of the aforementioned things, like users or protections or third-party logins or database connections,

if there's not somebody in the organization who can validate it as the human in the loop. I'm ready to admit that right now, if I were tasked to build a course that's about a programming language, like, I've been thinking about a Python course for the longest time, and we just released a course focused on the R language, I would be very hesitant to do it as a programming course, because I have to face the facts: most people will not be turning to an online course to

produce code; they'll be turning to an LLM. But instead, I'd pivot it in a way that teaches people how a typical Python project would be constructed, how different virtual environments would be used, how different modules would be introduced, what the most efficient way to work with Python is, how you would build Python for Google Colab, for example, or Jupyter notebooks. So application

over code production is where I think, as an educator, we're right now trying to take things. Because as good as an LLM is at giving a very specific snippet of code for a very specific question, and as bad as it is at generating something very extensive for a very bad question, one thing I've found it still struggles with, and I don't think this is a solvable problem for the LLM, is application: trying to read your mind, what are you actually thinking about

when you are asking for that Python query? What is the actual end game? What are you building with that Python? What kinds of problems are you trying to solve with that code? I think that's where the expertise lies: trying to teach people. It's on another abstraction layer, but trying to teach people what that code is actually being produced for and how software projects work. Because that is a whole other thing that is not as well structured as you might think. I think that's my approach.

Katrin (44:17)
Are you at all planning on restructuring the BigQuery course with that in mind?

Simo (44:23)
Not at the moment. I think, when we built the new version of the BigQuery program, LLMs were already a thing, of course, and so we already reduced the amount of plain SQL content that we taught. I think that made a lot of sense. It just doesn't make sense to teach you how a SELECT query works, because you can get that answer very easily. So we went beyond that. We started thinking: what are LLMs still struggling with? And one of them is, for example, how to build an attribution model. So we talked about attribution.

And then we have an entire section of the course dedicated to UI and UX stuff that is, of course, not something an LLM covers at all: how to use Dataform to build data pipelines, how to orchestrate those things. Of course, those tools use AI, so you're kind of using a secondhand version of AI in that case. But yeah, we're already pivoting any content that we create right now. We think about

where people are using an LLM right now for technical assistance. And we don't steer clear of that content, but we pivot that content to help people use AI more efficiently. And then for the stuff that we know LLMs can't really help you with, which is digging into your mind and trying to tap into the creativity there, that's where I think our focus will be. And then for those who want to be deep learners, we focus on the expertise. But it is, like...

We haven't tested this. We don't have a solution for this. I think that every educator in the world is struggling with this. Not just those who are online, but those who are in schools, who are in universities, professors in universities. They are struggling with this thing, because they know that there's this tool that students are using, and they are fighting this lack of incentive for critical thinking.

Katrin (45:51)
I mean, it's true. You know?

Simo (46:13)
Again, I'm exaggerating a little, but it is out there. It's just incentives, incentives, incentives. Why should you learn to think critically when you can just get answers right off the bat? That's the debate.

Katrin (46:27)
It's really a huge question. I don't personally understand how you would ever let go of your critical thinking. It seems to me like that's a survival skill. Like, I would never, never, ever let go of that, except if I'm unaware of it, obviously. But voluntarily? Absolutely never. Are you crazy? No. But then it

Simo (46:36)
Mm.

Yeah.

Katrin (46:51)
seems like a lot of people are very happy to let go of critical thinking. I don't really quite get it. I don't think that's actually necessarily related to AI or not; it's just a phenomenon that I just don't really understand. Yeah.

Simo (46:58)
Yeah.

Yeah, it's attention spans. Like, attention spans

are so short. Why spend your time thinking critically when you can just move to the next topic? But I do think that, not just critical thinking, but your sensitivity to detect flaws in a response or in an approach is just becoming weaker. It could be due to AI; it could be due to the fact that answer generation is just so much easier these days.

But just this idea that it's becoming more and more difficult to detect problems, or to have that kind of spider sense in the back of your mind saying that something's wrong here. And you only get that through experience. You only get that if you've struggled with this before. Like, if you've stumbled into something before, that's where you get the sensitivity that prevents you from stumbling there again. And I think that the more we have these abstractions, like layers of abstraction over each other, stumbling somewhere in that abstraction stack

doesn't transfer to upper levels. So if you stumbled long ago with a tag management problem, and if you're now working solely with MCPs, you lose that sensitivity. You no longer have that thing which would have alerted you or fired your synapses before. You no longer have it, because it's so far removed from where you had that initial experience, even if the problem remains the same.

Katrin (48:24)
It's also to a certain degree specific. It's like calluses, right? Calluses are specific to the activity that you do, and it translates to a certain degree, generalizes to a certain degree, but only to a certain degree. But, you know, I'm going to jump on something you said, which is jumping to the next topic, which I've been terrible at doing in this podcast. We are supposed to talk about agentic commerce. So let me ask you the question. You know, you're the

foremost specialist on data layers, server-side tracking, data collection infrastructure, all of these aspects of how we deal with understanding traffic to websites. Are you thinking about agentic commerce? What are you thinking about agentic commerce? Let's just open the question that way.

Simo (49:16)
Well, for the longest time, we've been able to get by with a poorly structured data layer, because we've always had the tag management system to cover our back. So we've been able to make transformations where necessary. I think that with any kind of agentic access to our website, so we're talking about not a user or a human being browsing the content, but

an AI agent doing it for them, just the importance of having a structured data layer, having structured data in the first place, having a structured website that is specifically designed for that agentic access, becomes more and more important. And I think that you can't really rely on having a patching system take care of that for you anymore. You have to really, I don't know if this is a thing, but it's kind of agentic by design.

So you have to design your content to be consumed by agents. And it could actually lead to a very inefficient kind of dual approach, where you have content that's designed for humans and content that's designed for agents. And then you have to struggle with: do we have the resources to manage both? And if not, which one should we gravitate towards? So I think that it does increase

the importance of having that logical chain throughout the back end, all the way to the front end, and through all the analytics systems in place. And it does require consistency. And it's one of those things that really requires deliberation, where you can't just get by with a single ChatGPT prompt. You have to think about it across the entire journey. But definitely, structured data is back in fashion after the longest time when it just wasn't.
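As a minimal sketch of what "structured by design" could look like in practice: one canonical product record driving both a GA4-style data layer push (for the tag management system) and schema.org JSON-LD (for crawlers and agents). The product record and helper names here are illustrative assumptions, not any vendor's standard.

```javascript
// Sketch: derive both a GA4-style data layer push and schema.org JSON-LD
// from one canonical product record, so human-facing tags and AI agents
// read the same structured source of truth. Field names on `product` are
// illustrative; the output shapes follow GA4 ecommerce and schema.org.
const product = {
  id: "SKU-1042",
  name: "Trail Running Shoe",
  price: 129.0,
  currency: "EUR",
  brand: "ExampleBrand",
};

// GA4 ecommerce data layer event, consumed by a tag management system.
function toDataLayerPush(p) {
  return {
    event: "view_item",
    ecommerce: {
      currency: p.currency,
      value: p.price,
      items: [{ item_id: p.id, item_name: p.name, item_brand: p.brand, price: p.price }],
    },
  };
}

// schema.org Product JSON-LD, consumed by crawlers and AI agents.
function toJsonLd(p) {
  return {
    "@context": "https://schema.org",
    "@type": "Product",
    sku: p.id,
    name: p.name,
    brand: { "@type": "Brand", name: p.brand },
    offers: { "@type": "Offer", price: p.price.toFixed(2), priceCurrency: p.currency },
  };
}

const push = toDataLayerPush(product);
const jsonLd = toJsonLd(product);
```

The push would then go through `dataLayer.push(...)` and the JSON-LD into a `<script type="application/ld+json">` block, so both consumers see the same facts derived from one place.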

Katrin (51:01)
Yes.

So if we were to think about a theoretical process like this for a theoretical brand, step by step, what would you say are the steps to think through? And as we go through the steps, what would be the main questions, the processes? Like, how would you think about that workstream?

Simo (51:24)
Well, that I don't think has changed at all. I think that building a data layer is a very deliberate thing. I think that building any kind of semantic data structure to be consumed by a machine, and previously that machine was a tag manager, now it's maybe an MCP or an AI agent, I don't think that has actually changed at all. We've just become lazy with it.

So now we have to think about, for example, when working on an e-commerce site, how our data is structured for the agent to consume. And it means that we have to start thinking: what is the backend that we're using? If we're using Shopify, for example, it's obviously well ahead of this curve. They already have everything in place; they've built their own protocols for agentic access.

And then you have to make sure that your entire process follows that logic. If you're building a semantic model that focuses on clearly distinguished, namespaced entities, some for humans, some for agents, or some for regular tags and some for agents, it has to be well documented along the way. You have to build tests against it. It becomes a whole thing. And it's those interconnected systems: what is your sales engine? What is your backend? What is your CRM?

What kind of information are they producing? So structured data has always been problematic from the point of view that, if you're working with structured data, you have to communicate the structure to all the systems that are producing the data. It's not enough to just have a rule-based system for your data layer; your actual backends have to agree with those rules, and they have to produce content that follows those rules. And like I said, we've become complacent with it, because tag management systems have been so efficient at doing those translations on the fly,

which has led to some friction, and server-side tagging even more so, because now we can externalize it completely out of our organization to a server-based system. But with agentic commerce, at least in my understanding of the current approach, we can't get by with that complacency anymore. And we have to start thinking about things in terms of structure by design, from the very first thing that is produced by our e-commerce system.
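The "tests against that structure" can start as simply as a rule-based check that every event carries the agreed fields. A hedged sketch, where the schema format is an illustrative assumption and the event shape loosely follows the GA4 purchase event, not a formal spec:

```javascript
// Sketch: a minimal rule-based validator for data layer events, the kind
// of test a release pipeline could run against the agreed structure.
// The schema object format here is an assumption, not a standard.
const purchaseSchema = {
  event: "purchase",
  required: ["ecommerce.transaction_id", "ecommerce.value", "ecommerce.currency"],
};

// Look up a dot-separated path in a nested object.
function getPath(obj, path) {
  return path.split(".").reduce((o, key) => (o == null ? undefined : o[key]), obj);
}

// Return a list of problems; an empty list means the event is valid.
function validate(event, schema) {
  if (event.event !== schema.event) return [`unexpected event name: ${event.event}`];
  return schema.required
    .filter((path) => getPath(event, path) === undefined)
    .map((path) => `missing: ${path}`);
}

const good = { event: "purchase", ecommerce: { transaction_id: "T-1", value: 59.9, currency: "EUR" } };
const bad = { event: "purchase", ecommerce: { value: 59.9 } };
```

Here `validate(good, purchaseSchema)` returns `[]`, while the incomplete event returns the missing paths, which is exactly the agreement between backends and the data layer that the conversation describes.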

Now I have to add a sidebar: I again think that this is a bit of a shame. I think it's problematic if e-commerce stores start solely optimizing for agentic access, which I think many of them are doing, because they can see how useful and how beneficial it can be. Just based on the output, again, that I'm seeing, and based on some projects I've been working on where the entire purview has been how to generate the structure for the agent, and not consider

human-based access anymore: I think those companies have the incorrect approach. I think they've exaggerated or over-interpreted how agentic commerce works, because it's not mutually exclusive with human access. But it does become a prioritization problem as well, especially if you're thinking about how to structure content for two very different types of approaches or accesses.

Katrin (54:33)
Concretely, so people can have a picture in their head: let's say I'm company X, I over-optimize for agents, and I don't take humans into account in that over-optimization. What types of experiences for humans might I make underwhelming or suboptimal? What are the dangers of that, practically?

Simo (54:56)
Well, taken to the extreme: the lack of focus on UX anymore, the lack of focus on UI, the lack of focus on customer journeys on the website itself. Like, why worry about that if the agent does the shopping for the user at this point? And just a lack of prioritization. Like, you've been working in experimentation for a long time, and people around you at Experimentation Island have certainly done so. And you know the frustration of an organization that doesn't put appropriate resources into it,

into thinking about UX, into thinking about customer journeys.

Katrin (55:30)
We

all know that as users of the web.

Simo (55:33)
Exactly.

Exactly. And we are frustrated. So if we take this at face value, I think it does throw another problem in our faces when it comes to prioritizing resources. And we are working with stakeholders who are bandwagoning AI like crazy and who want to put more and more resources into it. So it becomes another struggle internally in an organization, trying to get them to understand that we still need to focus on UX, because we'll still have human visitors.

And I do fear, or maybe fear is the wrong word, but I am a bit worried that there's going to be a backlash when enough bad experiences emerge from the current state of agentic commerce, where the agent just doesn't understand our demands, or doesn't...

No, maybe I'm just being overly cautious here. But I do kind of worry that at this point, running full steam ahead with building a website for agentic access becomes almost a zero-sum game when you're allocating company resources for this, where the company thinks, OK, let's put 90% into agentic development work.

Katrin (56:48)
Not much. That

sounds unreasonable. I mean, I-

Simo (56:51)
It does. I'm

just thinking this through to the extreme, and thinking about companies who are really, really considering this when they've had maybe bad experiences. They're just lacking results; they're lacking conversions. I'm actually coming from a very personal place here, where they're trying to figure out: how do we make our commerce store work? So should we maybe try focusing 100% on this? But it's...

Just this idea of potentially having to manage two different workflows could become a resource allocation problem. I don't know, are you seeing something like that?

Katrin (57:27)
So what I'm seeing is people really thinking about it, but not necessarily a lot of action yet, because ultimately, when you think about it, you do need to first change the website, to a certain degree. You have to restructure. Most people have legacy code on their websites. At least some of the pages need to be restructured,

Simo (57:51)
You have to de-noise

it, like remove noise and remove distraction and remove intrusion.

Katrin (57:55)
to a certain degree. And if you don't do that first, you can't really build a new data layer on top of it. I can also see a lot of questions about: what should we measure? What are the KPIs? How do you actually measure agentic commerce at all? How do you capture this? Do you have an answer?

Simo (58:16)
No, no, that's a very good question.

I think there's a lack of a standard for it. It's kind of like ad blockers, in a way: ad blockers themselves don't want to announce that they exist, because then they would be easier to track and block. In the same way, I don't think an AI wants to announce that it is an AI, because most people will be blocking those crawlers. Or not most people, but many people will,

because they don't want that content to be scraped, for whatever reasons. So that's a very good question, and it's completely unsolved at the moment. It probably requires a compromise somewhere down the road, where maybe the browser will step in as the user agent again, communicating some kind of signal that can't be used for manipulation or for misusing that information, but could be used to reliably detect when an AI is actually...

Katrin (58:59)
Yeah, makes sense, yeah.

Simo (59:12)
That's a very interesting problem to solve, and it doesn't have a solution right now.
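For what it's worth, the partial signal that does exist today is self-declared user agents: some AI vendors publish crawler tokens such as GPTBot, ClaudeBot, and PerplexityBot. A naive sketch of classifying requests on those tokens; as noted above, agents that don't announce themselves pass straight through, so this yields only a lower bound on agentic traffic, never a full measurement.

```javascript
// Sketch: naive server-side classification of self-declared AI crawlers
// by user-agent substring. The token list is a small sample of published
// crawler identifiers and goes stale quickly; undeclared agents are
// invisible to this check, so treat the result as a lower bound.
const AI_AGENT_TOKENS = ["GPTBot", "ChatGPT-User", "ClaudeBot", "PerplexityBot"];

function classifyRequest(userAgent) {
  const ua = userAgent || "";
  const match = AI_AGENT_TOKENS.find((token) => ua.includes(token));
  return match ? { kind: "ai-agent", agent: match } : { kind: "human-or-undeclared" };
}
```

A server-side tagging endpoint could attach the result as an event parameter, which at least lets you segment the announced portion of agentic traffic in reporting.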

Katrin (59:15)
Cool. Well, Simo, thank you. It's really been amazing, and I hope to have you back to talk about all of this more, because with the speed of development, I'm sure we're going to have at least another hour's worth of this conversation before long. So thank you for doing this. If people want to

Simo (59:27)
Yep.

Katrin (59:35)
hire you as a consultant, or find your courses, or talk to you, where do people find Simo?

Simo (59:42)
Don't hire me as a consultant. Don't even ask, because I'm just going to say no at this time; my availability is so poor. But simoahava.com is my blog. I still blog, more rarely now, but I still do. teamsimmer.com is the website for our company. We have a blog there as well, which goes in depth into the things that we teach about. And the two places where I'm most active online: LinkedIn, unfortunately.

Katrin (59:47)
Okay.

Simo (1:00:09)
So feel free to connect, but be patient, because I might be a very grumpy old man screaming at the cloud on LinkedIn. But Measure Slack, for sure, is my environment of choice. Measure Slack is a free online community, on Slack, surprisingly, for anybody in the digital marketing industry. We have over 20,000 users now, and it's just a great place to have these discussions. So please do join and please do discuss, because we've also seen

a decline in participation in Measure Slack, which I do think is also attributable to the fact that people go to an AI for their answers. But we have a community of verifiable experts who are truly at the top of their game and can help you with your questions in a way that no AI, at least at this time, will be able to reproduce, because they are the people the AI is actually scraping to get those answers ready. So Measure Slack is my

recommendation if you want to find me and engage with me and other people who are similarly deeply invested in this industry.

Katrin (1:01:12)
It's a

very special place. We'll put all of the links in the show notes. And so thank you, Simo. That's episode 14 of Knowledge Distillation. If today's conversation made you think about the foundations underneath your analytics stack and what happens when those foundations shift, visit us at ask-y.ai and try Prism, the platform helping analysts navigate complexity with context. Thanks for listening. And remember: bots won't win. AI analysts will.

Resources Mentioned:

Companies & Organizations
  • Simmer – online learning platform for technical marketers
  • 8-bit-sheep – Helsinki-based digital services consultancy
Analytics & Measurement Platforms
  • Google Analytics 4 (GA4) – modern analytics platform
  • Google Tag Manager (GTM) – tagging and data layer implementation
  • Data Layer – foundation for tracking and agentic commerce
AI & Commerce
  • LLMs (Large Language Models) – discussed in context of skill shifts
  • Agentic Commerce – AI-driven interactions and transactions

Connect with Our Guest:

Host name:

Katrin Ribant

Episode Credits:

Host: Katrin Ribant
Guest: Simo Ahava
Podcast: Knowledge Distillation
Episode: 14
Runtime: ~61 minutes
Release Date: 03/25/2026