Katrin (00:00)
Welcome to Knowledge Distillation, where we explore the rise of the AI analyst. I'm your host, Katrin Ribant, CEO and founder of Ask-Y. This is episode 15, and we continue our exploration
of how agentic e-commerce is changing the online market as well. Today, I have a guest who has quite literally seen every chapter of digital analytics from the inside. I generally try to classify guests (yeah, I'm a data person, you too, John).
You don't have to laugh at this. Actually, do laugh at this, it is really funny. But it's basically just to clarify what point of view we are taking on the rise of the AI analyst. That's kind of the idea. So I classify guests into four categories: market analyst, tool builder, investor, or practitioner.
But John, you have been a market analyst, you have been a consultant, you've been president of the Digital Analytics Association. So it's kind of hard to put you into one of those boxes. We'll just go with whatever perspective seems appropriate on the fly, question by question. And so, John, you're currently VP of Analytics and Insights at Seer Interactive. You've been in digital for...
over 20 years, and we stopped counting after 20. I've decided after 20, it's just 20-plus. Most of us, I guess, in that 20-plus club are okay with this. So you moved through analyst roles at Aberdeen and JupiterResearch, then you joined Forrester as a senior analyst covering the web analytics space.
John Lovett (01:20)
Sounds good.
Katrin (01:42)
You then co-founded Web Analytics Demystified with Eric Peterson, where you did strategic consulting for quite a few years. You served as president of the Digital Analytics Association. That was during the period when the industry was really renaming itself, right, from web analytics to digital analytics. We'll talk about that; that was quite significant. And then you're also an author.
You're the author of Social Media Metrics Secrets, and you just released The New Big Book of KPIs, which reimagines how we define and use KPIs in the age of AI, which is basically our subject today. And at Seer, you've been building AI-assisted analytics, moving your team from static reporting to strategic conversations with data. And this is really
the core of what we are looking at in this podcast: the rise of the AI analyst. So I could not be more excited than to be talking to you. But first of all, could you give us sort of your color on all of these amazing credentials that I just sort of rattled through?
John Lovett (02:49)
Well, first of all, thank you for the accolades and thank you for listing out my accomplishments there. It's a pleasure to be here with you today. Yeah, it's been a fun journey. I very much started my analytics exploration from a marketer's perspective. Before I even got into the data side of things, I was a marketer and just curious about how do we get our messages out there? How do we get our brand awareness? How do we grow that and how do we bring people in?
And then when the internet came along, just seeing how, wow, this is a whole new channel that has exploded, it opened my eyes to so many opportunities. And it's funny, the parallels. I was actually explaining to my son at dinner last night, talking about one of my first jobs, how I was responsible for getting people to go online and to use forums and to start using the internet. And he said to me, he's like, Dad, that's kind of what you're doing today with AI, isn't it?
And I laughed because he was right. It was a good observation that we've kind of gone through this cycle of: something new is coming out, it's emerging, here's how it can benefit you. And I still remember conversations with aunts and uncles who were like, I'm never going to buy anything on e-commerce. That sounds dangerous. I wouldn't click that. I wouldn't press that button. And people are skeptical about AI in those same ways today. And so it's about helping them understand. But it's been a funny journey where I've learned a lot. I've seen the tools rise, and
we had burgeoning analytics, different tools, and then the market shrunk down again. And now we've gone back to this new realm of AI explosion. It's probably one of the most exciting times in my career, because there is so much to learn and so many new things to be able to show clients and help people understand. It's a really fun time to be in this industry.
Katrin (04:36)
It's amazing. I agree, it's an absolutely amazing time. And what you described, this sort of journey, I think it's a journey a lot of people of our generation share. When you've stuck around in this area of work for this long, generally it's because you are somebody who's interested in innovation, in
things happening that change the way we interact with the world. And we've been through a lot, right? We've been through the web and then mobile, social, and all of these things. There were so many aspects where, yeah, people around us would just go, no, I don't really want this. This looks frightening. This looks dodgy. I don't know that I really want to do all of that stuff. And
at least me, and I'm sure you too, I was always like, this is amazing. How does it work? It's so fascinating. And this one is really one of those, probably the most profound since the rise of the internet, I would say. And probably, in my opinion, of a larger scale than that even, because it has a distribution platform that the internet didn't really have.
In episode two, I was talking to Mike Driscoll about this, and I was comparing it to: it's kind of like if you had had the internet, but the fiber optic grid was already laid, because you have the distribution platform for it already, right? So it catches on like fire. And so, to sort of get us started, I kind of want to go back to this, you know,
John Lovett (06:06)
Yeah.
Hmm.
Katrin (06:20)
the renaming of the web analyst into the digital analyst, because that was really about broadening the scope as the web was actually broadening, as what people do on the web was actually broadening. You have really extensive experience with implementing AI in an actual team. What would you say makes an analyst an AI analyst?
John Lovett (06:51)
Yeah, it's a great question. You know, maybe I'll just touch first on when we made the transition from the Web Analytics Association to the Digital Analytics Association. It seems like a very trivial name change, but for us it was a monumental movement, because we were trying to say, this isn't just about websites anymore. Mobile was very much emerging. We were seeing cross-channel digital activities going on everywhere,
and the role of the analyst wasn't just confined to the web. That was selling ourselves short, to say that we were the web analysts. We were the analysts, and we were looking at the digital data. And I think now, obviously, that has expanded to all data: offline data, online data, digital data, wherever you may have it. So it certainly broadened in that way. When I think about the role of analysts versus AI analysts, I actually see
more similarities than differences. And I'll tell you what I mean by that. As I've gotten more and more involved with AI, I keep going back to: the things that make someone a good analyst are what make somebody a good AI analyst. The ability to be curious, to ask questions, to see anomalies in the data, or, when presented with a number that just looks funny or looks wrong, to have the instinct to question that and to dive in and dig deeper.
Unfortunately, it's not so common, that common sense, but it goes a long way. And I'm finding that as I'm building with the most sophisticated tools that are out there, all these AI agents, I'm building in practical guardrails for data quality, for understanding what I want and what I don't want. And I think those are some of the foundational things. So,
Katrin (08:19)
good old common sense, yes.
True.
John Lovett (08:46)
to me, thinking about that AI analyst, it's more: are they willing to bring AI into their process? Not do they change their process, but how do they expedite their workflows? How do they work more fluently? How do they work more quickly and produce more, rather than changing the way they work? It's inviting AI to be a partner in that work. So to me, that's kind of the distinction there. It's not necessarily
a big pivot; it's the welcoming and understanding of how it can truly be used to your benefit as an analyst.
Katrin (09:20)
Yeah, that's very true. As you were talking, I was thinking: actually, when we started, we were at the forefront of digital marketing, and digital marketing was tiny, right? A really tiny part of marketing, and certainly a tiny part of ad spend. Today, it's the vast majority.
And you're not really a digital marketer anymore; you're just a marketer. You're not a marketer if you don't market on digital. Or, I mean, I'm sure some people are, but it's very rare. Conversely, I suppose in a couple of years, you won't be an analyst if you don't use AI. So we won't really think about calling people AI analysts, right? However,
John Lovett (09:54)
Mm-hmm.
Katrin (10:10)
today, it's not really the case yet. And the future is distributed unevenly amongst people and organizations from that perspective. Maybe a little bit less now, but until a few months ago, I would say I felt a lot of fear about it amongst analysts. Fear of, you know...
Is AI going to replace me? How do I use it? Is it actually reliable at all? And there's not really a sort of obvious path to do that. So I just had Scott Brinker on the show. The episode is not released yet, but it will be released, obviously, before we have this episode. And Scott drew a direct parallel between what happened with the rise of the marketing technologist
in 2010 and what he sees happening with the AI analyst now. So, you know, there's tool explosion, obviously, there's hybrid role emergence, but there are also professionals caught in the middle of all of these changes in organizations, where there is a lot of top-down pressure for effectiveness, efficiency, moving, trying, experimenting, without really having much of a
path to that that is particularly obvious. Literally everybody's kind of trying to figure it out. Do you see that around you, in organizations you work with or hear about? What is your experience from that perspective?
John Lovett (11:40)
Yeah, yeah. First of all, shout out to Scott Brinker. I love his work. I've been watching the Martech landscape. His chart grew from this little thing to this gigantic, what are we, 9,000 or I don't even know how many technologies.
Katrin (11:49)
God, it's amazing. I don't want to know. And we're one of them, so I definitely don't want to know.
John Lovett (11:55)
No, I love his stuff. And it's an interesting thing. My observation on that is, you know, early days in analytics, it was, hey, we got this tool, we need to tag our websites and start collecting data. And we dabbled in, you've got some other data over here that we can look at as well. Our data sets, for all intents and purposes, were small. And I think what's happening more and more today, and it
kind of follows Scott's technology growth, is that clients are coming to us and saying, here's our analytics data, we keep it over here. Here's our CRM data, it's here. Here's our SEM data, it's over here. Here's our AI data, it's over here. And so we become data pipeline creators and aggregators, to be able to tell the complete story, because that fragmentation of the tech stack by itself
has just created silos, right? You've got data silos all over the place. And part of our job is to be able to say, okay, you can look at those independently, but they're never going to tell you the whole story. So the analyst has thus become a data aggregator and a data cleanser and a data organizer, to make sure that we can tell the full story. For me, that's one of the biggest changes in the growth of the landscape: the sheer multitude of data sources we have.
My approach to this, and in fact I was working on it this week, building something for a client, is I'll say: let me just start with one data source. This is all of our, for example, geo data. We're collecting the data, this is what we see, this is how it's affecting your customer journeys, this is how it's affecting your revenue. And that's one path. But then when we layer in web, and we layer in mobile, we layer in offline sales, we layer in physical locations, you get this complexity that adds up.
And that becomes the true picture. You can do it as individual channels, even in these silos, but it's only when you really bring your data together that you get the full picture. And I think that makes a big difference, and it's very relevant to Scott's technology landscape.
Katrin (14:08)
Yeah, definitely. And one of the things you talk about is this AI learning and adoption loop. Do you want to tell us a little bit about that, and what determines where somebody lands on that spectrum?
John Lovett (14:22)
Yeah, yeah. So this is something I wrote about a while back. Satya Nadella from Microsoft talked about: you're either in the loop, you're on the loop, or you're out of the loop. The idea being that you're either looking at AI technology and wanting to get on the inside, wanting to know what it's about; or if you're on the loop, you're there, trying the tools, testing things out, playing with things; and then out of the loop is simply,
nope, I don't have time for that, or I'm afraid of that, or I don't understand it. And so one of the things I explored was: how do you start to get into the cycle of experimenting with tools and trying to learn things? For me, it was, what's the right word, is it centrifugal force? It pulled me in. The more I learned and the more I saw these things, the more I wanted to go deeper, the more I wanted to understand the next thing, look at the next technology, play with the next thing. So,
really, as I think about that, there are going to be people all over the continuum. For those that are interested in this, rolling up your sleeves, getting involved, starting to use the tools: if you see moderate success, and again, the numbers are out there, there are a lot of people who have not yet seen success, and I would argue they're probably starting in the wrong places. Once you start to see a little bit of success and start to see
how your workflow changes, or how your day changes when you can introduce AI, that just changes everything. So without getting into the big build of how you get there, the way it's changed my day: I come in in the morning and I've got my personal Claude or my enterprise Claude hooked up to my Gmail, to my calendar, to my Slack notifications, and to our project management system.
And every morning I come in and I say, what is my morning briefing? I've trained it to know morning briefing means: tell me what I have today or this week, what is most important, and what I need to prioritize. So it'll tell me, you've got a management meeting here, you've got a finance meeting here, you've got a podcast tonight, and I see you haven't prepped for that yet, you'd better get on it. That just shifted my day. It actually changes the way I think, because I can talk to my AI and it tells me this. And then, more often than not, I'll pull up Cowork and I'll say, okay,
now I want you to do this task, something bigger that I'm working on: a methodology, a research project, maybe some analysis I'm doing for a client. And I'll say, here are all the things I want you to do, now go off and do it. And I let that run on one of my screens over here, and I just go about my day. But then I'll pop back in, and I'm like, ooh, that was good progress, and I'll keep it going. That has really changed the way I work. And this is hard to do if you're not a multitasker, because these tools run,
they process, and they take a while to process. So I jump over to another screen and do something else. But the joy of standing up to get a cup of coffee and coming back and seeing a beautiful HTML artifact that was produced while you stepped away from your desk is one of the most gratifying things. And it's like, I can totally use this, I'm going to share this with the client. And it's always editing. I can't, or I don't, take the outputs and just say, okay, let's run with this. I have to.
Katrin (17:18)
That's true.
John Lovett (17:42)
I'll say, change this, let's iterate on that, that's not what I wanted, here's how we're going to modify it. But those types of behaviors have really shifted the way that I work. And it's transformed things for me: to be able to produce more, to produce more insightfully, to be able to show clients things they've never seen before. And there's a lot of joy in that.
Katrin (18:03)
And how does it manifest in the way you manage the team, the way you manage the work? Do you produce code? I imagine you produce code differently. Do you have processes in place for this? Like, in the practical analytics work, how does this manifest?
John Lovett (18:24)
Yeah, so we did something as a company at Seer Interactive, probably 18 months ago. We were asked by our AI council to think about every single deliverable that we offer, every single analysis that we conduct on a regular basis, really everything we do in any given week, and to look at all of those things and determine if AI could help us do them.
So we narrowed our list down to 15 core deliverables that we're regularly delivering to our clients, and we said, which of these can we disrupt with AI? And so we started building. I'm lucky that Seer gave us a mandate to say: experiment, we want you guys building, we want you experimenting, we want you trying out new things and showing them to clients. When you get good feedback, keep going, and then we'll productionalize them once you know you have something.
And so we've taken this notion and just started to figure out: here are the things we're going to do every day. If we're going to produce this type of deliverable, how can AI help us through the process? It doesn't take the process over; it just expedites it. It helps us move faster. It does some of the tedious work for us. And then we're able to step in and say, oh, that's really good, I'm taking it to the next level by bringing in this idea and this idea, and let's expand on this one, and no, I don't like that one. That's really helped us operate differently.
And now, since we started this about 18 months ago, for so many of the regimented things we do every day, you pull up an agent and the agent starts the work and does the manual, or the automated, part of the process. And then the analyst will come in, look at that, make observations, and build upon it. Before, you always started from scratch: okay, pull up the template deck and start thinking and writing. Now it's like, boom, I've got something I can really work with here.
I can mold this into exactly what I want it to be. So it's just leveled us up across the board in terms of how we work and the way that we do things. And that's really made a difference for the team.
Katrin (20:27)
And just for the audience, has it taken anybody's job?
John Lovett (20:33)
No, definitely not. It's a funny thing. Someday, I think, some of the lowest-level jobs will be gone because of AI. And I think for knowledge workers like us, there are going to be things that can be fully outsourced and automated. But with AI, we have a saying here: the human needs to be in the loop, because you cannot just let it go and do everything.
Katrin (20:36)
Yes.
John Lovett (21:03)
You're gonna get slop. You're gonna get that generic output, and it all sounds the same. The more you work with your tools, put them in your voice, put them in your style, make sure they're saying the things that you would say, that's when it becomes better. So no one has lost their job. They haven't been displaced by AI. They've gotten smarter. They've gotten better. They've become collaborators, able to use it in the best ways that they can.
Katrin (21:30)
And one of the things that I'm trying to get at is: for somebody who's an analyst, in an agency or inside an organization, who is thinking, well, I need to get somewhere on that spectrum, that continuum of AI adoption, that is not out of the loop. But I have limited time, I have limited ability to focus on different things.
What should I learn? So I'll give you my theory about that, and then we can argue if you want. We've got to create some drama and argue a little bit. We really don't, but I've been told I should maybe do some more, you know, sort of, not adversarial... Tom, yeah, yeah, exactly, polarizing. I'm not polarizing. Tom, cut this. I don't want to be polarizing.
John Lovett (22:02)
Ha
polarizing ideas, right?
Katrin (22:28)
So here's my take on the skills that I would recommend an analyst focuses on. One: I think it's important to really understand how LLMs work,
because it's a tool, it's your tool, and you have to understand your tool. In fact, to help people with that, I've created a series, I don't know if you've seen it on our website or on LinkedIn, an animated AI series that explains things like context windows and the attention mechanism, et cetera, to help with prompting. So I think that's one of the things that is really important.
The other thing is, ultimately, when it comes to AI, your interface is natural language,
and specifically prompt engineering and context engineering. You have to learn how to do correct prompt engineering. It really is an art and a science. If you have simple prompts, it's fine; these days, you can really be sloppy about it with simple prompts. But if you need something long, precise, and specific to be done, you have to be very structured. You need to know how to utilize the context window and the attention mechanism to make it work.
So that's number two, prompt and context engineering. The third one: I think it's really important to learn how to read code, because it's one thing to write code, but it's a completely different thing to read code. And as you said, you have to read the outputs. You can't just take the outputs and say, oh yeah, that's probably correct, or make it explain itself and believe the explanation. That's kind of not
necessarily a great idea, even today, right? What are your ideas about what people should focus on to get on that adoption loop?
John Lovett (24:21)
Yeah, yeah. And unfortunately, I'm probably not going to disagree too much with you here, because a lot of what you said is very, very spot on. Prompt engineering is something where there are a lot of methodologies, created by people smarter than I am, behind giving it context, giving it a role, giving it the ability to solve a problem. One of the best things you can do in a prompt is
John Lovett (24:49)
basically give it the role, lay out your scenario, and then tell it to ask you questions. A lot of times I'll end my prompts with: do you understand? And then it comes back and tells me, hey, this is what I understand I'm doing, which gives me an opportunity to correct it. If you just enter a prompt and say go, you don't quite know what the LLM is thinking. So asking if it understands, asking it to ask you clarifying questions to give you the opportunity to do more, is
absolutely a way to get the answers that you need. Otherwise, you'll get generic stuff that doesn't really work so well. So I very much agree with that. I think another way to get in the loop, and I talked about this recently as well: it is hard for the average person to say, now I'm just going to add AI to my workflow, if you don't know how to do it.
My personal journey, and this goes back a couple of years now: I just had ChatGPT on my phone and used it for my personal life. I would take pictures of ingredients I had in my refrigerator and say, what should I make for dinner tonight? And it would give me great answers. I planned a trip to Ireland with it, and it helped me figure out an itinerary and where to go and what to do. And that really opened my mind to what was possible.
And more often than not, it'll be a Saturday or Sunday morning, I'm sitting at home on my phone, just playing with the tools I have there. A client asked what the ROI of analytics was, and we know that's a really hard question to answer. On my phone, I just vibe coded. I was like, let's build a calculator that shows the ROI of analytics. It was a simple, fun little thing, and I built this whole interactive calculator on my phone
using Claude, which I then showed to the client on Monday, and they were like, you just solved it for me, this really helps answer my questions. So obviously the interest and the appetite have to be there, but pick your use cases, whether they're personal ones that you can bring into your regular life to see how this works, or simple ones: identify a problem you're trying to solve in your role and think about how AI can help you solve that problem, or ask it how it can help you solve it. And
I will tell you that generally the first answer is not going to be the end. That's almost why they're called conversations, right? You start somewhere, you build on it, you keep building, and you ask it more and you ask it more. You know, for me, it's funny that you talk about context windows, because, and this could be a consequence of the day we live in, I get context windows exceeded. My chats break. Whether I'm using Claude or Gemini or ChatGPT or NinjaCat or whatever, it'll say,
boom, your conversation's over. So my defense mechanism against that has been: remember this, the output that we just got to? I love it, I really like this. Give me instructions to give to another agent so I can replicate this and build it again. I always try to say: hey, if we stopped our conversation right now, how could I pick this up again in another conversation? And again, this probably goes back to the earlier part of our conversation, the days before
reliable internet, when you didn't know that you were always going to have a connection. That's kind of what a lot of these tools feel like today: you'll start a conversation, the connection breaks, or something goes wrong. So I've been trained to say, okay, how do I save this? How do I do this again? How do I repeat this at scale? And that's a little bit of my leadership role too: to be able to say, I built something, I think it's cool, I want my team to use this. How do I get an output from the AI tools that will help me replicate this, or help people down a similar path?
I don't want to pre-assume their observations or their findings, but I want them to follow the process. And that is something AI is great for. It can say: boom, here's how you do it, here are the steps, I'll give you the SQL or the markdown file that you can feed into any agent you want, and we'll go through this process together. That's been a breakthrough for me: how do I scale things, how do we do things again and again, and how do I continue to up-level my team to be able to do these things as well.
Katrin (29:02)
And I think what's really great is these examples. A simple example like taking a picture of your ingredients and asking, what can I do with this? Examples like this are great for getting started, because there are use cases you understand: you will very easily see whether the AI is telling you things that make sense or don't make sense. You'll actually be interested in reading the output. You'll start noticing
how the LLM is responding, what is missing and what is not, how to get it to do things, running yourself into all of these issues, like what you said about exceeding the context of one conversation. And then you go: but I don't want to lose all of this history, how do I take all of this into the next conversation? How do I utilize all the mechanisms that exist in the LLM? How do I make it invent? And then you become
more creative about your usage and more sophisticated. And that probably makes you more comfortable pulling it into your job, where the stakes are obviously higher than making dinner.
John Lovett (30:07)
Yeah, yeah, absolutely. And I think that's very true. You see how it works, you see the types of responses you get, and you get a feel for it. In my personal experience, that just helped me use it more effectively and learn things, so I would highly recommend it for anybody who's looking to get started. And again, we have to be careful. With free ChatGPT, you're going to get ads pretty soon. So it may be worth that $20 to get the paid account.
I use it for things like a personal budget. I put my budget in and uploaded spreadsheets into ChatGPT so I can just look at things and understand my finances, but I don't want that going out there; I want that to stay inside an account. So that's one thing: I would advocate for folks to spend the short money to get a secure account, so that you're not training the models and your data is secure.
Katrin (30:59)
Well, know that even so, whichever system you use, whichever lab, whatever your license, the data gets copied in multiple places that are not necessarily controllable, and that the lab doesn't control either. There's an excellent floofy video about this, by the way. It's called the floofy factor. It explains what happens between when you're typing your request and when you get your answer,
before the token generation actually happens. It goes through a number of systems that are creating copies for legal and compliance, for security, for different reasons. And it's a very, very complicated architecture. The attack surface is enormous. And it's so complex that there is no way the companies themselves would be able to tell you where the copies are,
should they need to delete them, right? So there is still a certain degree of caution that I would encourage people to have when it comes to sharing personal data and personal information. There is really no guarantee it's not going to find its way out there. Not necessarily even because of a malicious attack, right? Simply because it is there and it isn't controlled.
So, on that word of caution, let's talk about the subject of today, which is agentic e-commerce. I'm really excited to talk to you about this, because I think you are probably the person I know who has the most experience with it, who has done the most research on it. So expectations are high, just saying.
Pressure is on. And so, you know, we've been spending a lot of time on this podcast talking about agentic e-commerce: agents that shop, compare, and buy on behalf of the consumer. First of all, can you just tell us a little bit about your perspective on this, what you've done, your research, what you've looked at?
John Lovett (33:05)
Yeah, yeah. So it is definitely something that my colleagues at Seer Interactive and I are actively looking at. A year ago, we published a blog post about Opera's agentic browser, where you could go and it could help you surface things. So we've been researching this for a while. And the perspective we're coming at it with is: you've got your bots, you've got your agents that are going out and
getting training data, you've got your retrieval agents that are going out and hitting websites to get other information. And now there's this new slew of agentic browsers, or agentic-commerce-enabled browsers, where you're in Claude and you tell it, hey, go out and shop for airline tickets for me, find me the best ticket, and eventually help me purchase that ticket. So we're looking at this across the board in terms of:
How do the bots act? How are those different from the agents? How do you distinguish the agent from the human? And it's getting very difficult, because if these agents are performing tasks on behalf of the human, there are similar characteristics you could see across both. I'll pause there for a second, but those are some of the things that we are exploring.
Katrin (34:26)
So, specifically on recognizing humans versus bots: we've been trained to have a certain way of filtering out bots and optimizing for certain bots. These are totally new types of bots. How do we even recognize that those bots have visited the website, and what they've done?
What's your experience purely on the measurement side of things?
John Lovett (34:59)
Yeah, yeah. So it is increasingly difficult. It's very hard to do this. At this time, we've got a couple of ways that we operate. It's going back to basics: log files. Looking at your log files, trying to get user agent IDs, and being able to understand things from that perspective. It's not like we have a list that says, here are the 20 AI agents that are going to hit your site, look for these agents, and if you see them,
you're good to go. Because maybe it started at 20, but now we're at 20,000 and it keeps growing. And you can't just say every visit to your site with a .ai suffix is going to be an agent, either. It is very complicated. So we've worked with our friends over at CHEQ, a vendor that does bot detection, and talking to them, they basically had an eight-point fingerprinting model to be able to understand,
when traffic comes across that is agentic, what that looks like, how we pick up on their user agent strings and make sense of them. So from the data collection point, it is increasingly difficult. You know: log file analysis, sophisticated tools like CHEQ that are doing that for you, and a lot of digging, because we don't know what these things look like. They're manifesting in all different ways. Sometimes they reveal themselves; most times they don't.
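To make the log-file tactic described here concrete, a minimal sketch follows. Everything in it is illustrative: the signature list is a small sample of publicly documented AI crawler names, the log lines are invented, and, as noted above, any real list changes constantly and many agents never identify themselves.

```python
import re

# Illustrative sample of AI crawler user-agent substrings.
# Treat this as a starting point, not a registry: new agents
# appear constantly and many do not announce themselves.
AI_AGENT_SIGNATURES = ["GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot"]

# In the common Apache/Nginx "combined" log format, the user agent
# is the last double-quoted field on each line.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def flag_ai_hits(log_lines):
    """Yield (user_agent, matched_signature) for lines that look like AI agents."""
    for line in log_lines:
        m = UA_PATTERN.search(line)
        if not m:
            continue
        ua = m.group(1)
        for sig in AI_AGENT_SIGNATURES:
            if sig.lower() in ua.lower():
                yield ua, sig
                break

sample = [
    '1.2.3.4 - - [10/Feb/2026:12:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (X11; Linux x86_64)"',
    '5.6.7.8 - - [10/Feb/2026:12:00:01 +0000] "GET /pricing HTTP/1.1" 200 512 "-" "GPTBot/1.0"',
]
print(list(flag_ai_hits(sample)))  # only the second line matches, via "GPTBot"
```

This catches only the self-identifying agents, which is exactly why the conversation moves on to behavioral signals next.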
And so we've reverted to behavioral characteristics. If the data collection side is very fuzzy and very tough to identify, on the behavioral side of things, we know what humans do. In fact, we had a conversation with one of my dev leads, and she said, they do clicky clicky scrolly scrolly things. It's like, they move around and they...
Katrin (36:52)
That's true. That's true.
And those are all technical terms. Absolutely. Yes. Yes.
John Lovett (36:57)
Very technical term, but that's what we do. We scroll up and down, we click on things, we
move our mouses around. Agents and bots are going to be very deliberate. It's a surgical strike. They come in, they look for the button, they click the button, sub-second types of behaviors; they get what they need and then they're out. So even in agentic commerce, when an agent is operating on your behalf and it opens the browser and you're watching the browser move, it is a surgical strike. It is not
clicky clicky scrolly scrolly. It's more: there's the button, I see it, I recognize it, I'm clicking it, and boom, on to the next thing. And form fills will be like, whoop, all the information is right there, instead of us hunting and pecking on our keyboards to get the numbers typed. So we think that is increasingly going to be a big differentiator: being able to, and not in a session-ID type of way, or a session-replay type of way,
but more in just understanding what happened in that session: what was the total length, what events were triggered, how did it engage? And being able to look at those journeys and understand the differences between humans and machines so that we can discern what's happening. But the whole thing is fuzzy. It's going to get blurred, because we won't know the difference between a machine going out and doing something, per the description I gave earlier, where
I set my machine off on a task in the morning and come back in the afternoon to see what it did, versus me watching a machine do something and knowing that I want it done. These are all very different shades of similar experiences, so discerning what they look like is going to be really hard for us. And increasingly so, because we live in a zero-click world now. What I mean by zero click is: people look for answers, they look for your brand, they find it in AI Overviews or
ChatGPT or wherever, and they see your brand there. Maybe, if there's a citation, they'll click through and you'll get that referral that goes to your website and eventually leads to a transaction. But more often than not, they abandon that session. The next day they'll say, I remember that brand, I got the recommendation, I'm going to type it in direct. So we get all this new direct traffic that we never had before, but it's influenced by AI. All of these things further muddy the waters about where people are coming from, what they are doing, and how they got there.
So it's gotten very complex. I wish I could say there was a simple answer, but today there is not.
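The "surgical strike versus clicky clicky scrolly scrolly" distinction amounts to a behavioral heuristic over session events. Here is a toy sketch of that idea; the thresholds, event names, and rule are invented for illustration and are not Seer's or CHEQ's actual model.

```python
from statistics import median

def likely_agent(events):
    """
    events: list of (timestamp_seconds, event_name) for one session.
    Crude heuristic: agents tend to fire events nearly instantly and
    rarely produce scroll/mousemove noise, while humans wander.
    The 0.5s threshold is made up, not tuned on real traffic.
    """
    if len(events) < 2:
        return True  # single-hit sessions are suspicious
    times = sorted(t for t, _ in events)
    gaps = [b - a for a, b in zip(times, times[1:])]
    has_scroll = any(name in ("scroll", "mousemove") for _, name in events)
    return median(gaps) < 0.5 and not has_scroll

human = [(0.0, "pageview"), (2.1, "scroll"), (5.8, "click"), (9.0, "form_submit")]
bot   = [(0.0, "pageview"), (0.2, "click"), (0.3, "form_submit")]
print(likely_agent(human), likely_agent(bot))  # False True
```

In practice this would be probabilistic scoring over many such signals, not a single rule, which is exactly the fuzziness being described.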
Katrin (39:26)
There is definitely not. And it's fascinating, because what you said about the clicky clicky scrolly scrolly is basically the conclusion that every conversation I've had about this ended up with: ultimately, if you want to really identify the traffic, your only real option today is to look at your log files,
look at your GA4 data, do some behavioral analytics, and probabilistically isolate traffic that looks non-human and is probably AI. And that might give you a notion of the impact that agent e-commerce has on your traffic
today. But you just mentioned rising direct traffic, which is interesting, right? It also makes a lot of sense. Do you have a notion, from experience with clients, projects, research, whatever, of what the actual impact is today, and how fast that impact is accelerating? Of agent e-commerce on
share of traffic, of transactions, of anything significant, quite frankly, to marketers.
John Lovett (40:54)
Yeah. So, to be totally honest, agentic commerce is so small right now that it's hard to say where those trends are going. What I can tell you, and again the volumes aren't tremendous: as we look at traffic to websites, if you look at direct versus organic versus social and all your other channels, and you look at AI-referred traffic as a percentage, we see it growing, but it's still single
digits percentage-wise. So the numbers are very small, but they're going to accelerate faster and faster. What did we just have? I believe it was Claude that went from something like 133 in the app store to number one in a single weekend. So people are adopting this rapidly. I don't have numbers around agent e-commerce, just because it is difficult for us to know, and I think it will still be difficult for us to know when the
agent pushes the button, or when the person gives permission for the agent to push the button, versus the human doing it. I think that's going to be difficult to see. But we are learning exponentially, and the learnings will come quickly.
Katrin (42:11)
So, yeah, I wasn't expecting you to have numbers on this, but more anecdotally, whether you've seen cases where it has grown in any way. It's like, we've seen
phenomena being very, very small and then becoming large over the course of our careers quite a few times, right? Especially around areas where commerce is involved, there is generally a reluctance in the beginning, very understandably so: credit cards are involved, there are trust issues about handing over parts of the process to a bot, et cetera. My theory about this is that
this is going to end up kicking off a work stream for marketers that is comparable in magnitude to a mix of things. First, the mobile replatforming, because it really is a very, very different UX, right? Very different.
John Lovett (43:06)
Yeah. Yeah.
Katrin (43:09)
Combined with some of the shift to Amazon becoming the dominant e-commerce platform, because there's a disintermediation aspect to it that is going to need to be addressed. That's kind of the last one that was really, really important, I think. And also, to a certain degree, the GA4 migration, because it is going to force people to
look at their website, restructure the website for agents, rethink the data layer, and think about what KPIs they should be measuring that they're not measuring today. So what's your notion of the magnitude, regardless of the numbers today? Do you see that as well? Do you see that preoccupation, and the kicking off of these work streams?
John Lovett (44:01)
Yeah, absolutely, I do. I think, more than anything else, it begs the question, and we get asked this a lot: do I develop one site for the humans and another site for the robots? It's a legitimate question. And generally our belief at Seer is: build for the humans, the robots will figure it out. Because that's really who you want to love you, the humans. You want to get them brand loyal. But
in terms of the scale, if I were to hypothesize: I think that with agentic commerce, what we're going to see first is things like recurring services. If today you are somebody who signs up with Petco or Chewy to get your dog food sent to you on a regular basis, and you've got a cadence and you know that it's happening; or if you're a loyal Prime subscriber and you know you go through
shampoo and toothpaste at a certain rate, you're more likely to say, sure, automate those things for me. Every three and a half weeks, you can buy me a new tube of toothpaste, and I'll let that happen. I think those are the types of agentic commerce transactions that we'll start to see first. I know I need this; if I end up with two tubes of toothpaste, it's not going to kill me, it's not going to break the budget. Versus going out and shopping for luxury items or cars or what have you. I do think the
counterpoint to that is, for business travelers and businesses themselves, that just grows at scale. We know we need paper. We know we need office supplies. We know we need these things. How can we automate that, so that we have our agents going out and doing it on our behalf? And who knows, maybe they'll even do the inventory first, and you've got a robot that goes around and takes inventory and then places an order. But getting back to the human side of things: for business travelers who fly a lot and have
their specifications, like, I always fly Delta, here's my frequent flyer number, I've got these three trips coming up and I need to do multi-leg journeys. If they don't have an admin doing that, or even if they do, that's a tedious process that someone has to spend time thinking about and planning, and it could easily be outsourced. So again, my hypothesis would be the repeatable goods, call them durables if you will, that we know we need, and then things like
travel, where you know it's going to happen and you have to buy something. I think those become the purchases that we start seeing first. When it comes to luxury items or sporting goods or clothing, where there's style and taste involved, I think that's slower to come along. But who knows? Maybe something similar happens where, if you define your affinities and
can establish your style and your taste, maybe that becomes automated too. I'm not quite sure.
Katrin (47:01)
That makes a lot of sense. The procurement-slash-utilitarian types of purchases are probably the ones that are going to go to this first. And those are probably also the ones where optimization for bots matters most anyway, because they tend to be commoditized, low-interest, low-differentiation categories, where it really is about sending the right information and getting into the context window,
John Lovett (47:07)
Mm-hmm.
Katrin (47:30)
and being visible, getting the right information seen, and then having the conversion process be frictionless. You guys have done a lot of work around GEO, right? So can you talk to us a little bit about that: the relationship between SEO and GEO, getting into the context window, optimizing to be in that feed?
John Lovett (47:44)
Thanks.
Yeah, yeah. And just for the audience: GEO goes by a lot of names; it's usually referred to as generative engine optimization. How do you optimize your content, your web pages, your marketing to be found by LLMs? For the work we've been doing, we use a tool called Scrunch. And when brands come to us, we help them by
developing prompts. I have a whole prompt methodology that we use for understanding who their customers are, what personas they go by, what topics are relevant to their brands in their industry, who their competitors are. We input all this information and then we run these prompts. They're synthetic prompts, so they run continuously, and we're trying to discern when the brands show up in LLM responses.
And we can do this across ChatGPT, Claude, Perplexity, Copilot, you name it; we have basically all the platforms. The idea behind GEO is: let's try to understand, when consumers are asking questions of their LLMs, do brands show up? And what would cause brands to show up? And this is a brand-new field. Before, in the SEO world, if you ranked at the top of the page on Google or Bing,
you were doing great. You always wanted to be at the top. Now, first the LLMs have to see you or find you out there. Then they have to know who you are. Then they have to trust you. So there's a lot of build that goes into this, and we are learning fascinating things every day. One of the things I did, and this study will be out probably around the same time as this podcast: during the Olympics, we did a study, using the Olympics as my case study.
I ran three waves, pre-Olympics, during the Olympics, and post-Olympics, with queries like: who's going to win the women's downhill? Who are the best athletes in the 2026 Winter Olympics? And what we learned was fascinating. I had five hypotheses that I established to try to work out how the LLMs think. First was citation analysis. We learned that there are three different types of citations: there's a narrative type,
there's a predictive type, and then there's a conversational type. And depending on the way you ask your questions, different models respond in different ways. So it's very interesting: if you want your brand to show up when there's a consideration process, should I buy this or should I buy that, the way in which people ask and the way in which you show up in LLM responses is very different. Another one of the hypotheses was around temporal velocity. February 10th happened to be the day
when the most Olympic medals were being awarded, so we increased the frequency of our tests on that day. What I wanted to understand was: how quickly after a medal is awarded does each model pick up on it? Do they recognize when it happened? And the results were mind-blowing, because at first some of these models were saying, yeah, this happened, when they were just flat-out speculating. They were probabilistic answers saying what the model thought was happening.
Then they would show something that was wrong, and then they would catch up a few hours later. But model by model: we learned that AI Overviews and the Google models were faster. Perplexity was particularly fast. Meta and Claude are, and I learned a new term here, on the binary cliff: they rely on training data only. They do not go out and search the web on normal queries. They will if you force them to and say, hey, I want you to go search the web for me. But on normal queries, like when we were testing
who won the medal today, they had no idea. They had no idea the Olympics was even happening. So we had this big divide in the way each of these models operates. And by testing at scale, I was running 5,000 prompts multiple times a day, we now have this great data set where we can understand a lot more about how these LLMs think. And to me, that's the GEO research that's going to push us forward and help us test with our clients and learn things.
And that's what it's all about. It's hypothesis creation: let's validate that hypothesis and see if it worked. Because the person who tells you they've got GEO figured out today is lying; none of us know how it works just yet.
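The core measurement behind this kind of synthetic-prompt testing can be sketched as a simple mention-rate calculation over collected responses. The model names, brands, and response texts below are placeholders invented for the example, not output from Scrunch or any real platform.

```python
from collections import defaultdict

def brand_mention_rate(responses, brand):
    """
    responses: list of (model, response_text) pairs collected from
    running the same synthetic prompts across models.
    Returns {model: share of that model's responses mentioning brand}.
    """
    seen, hits = defaultdict(int), defaultdict(int)
    for model, text in responses:
        seen[model] += 1
        if brand.lower() in text.lower():
            hits[model] += 1
    return {m: hits[m] / seen[m] for m in seen}

responses = [
    ("chatgpt", "Top picks include Acme and Globex."),
    ("chatgpt", "Consider Globex for this use case."),
    ("perplexity", "Acme is frequently recommended."),
]
print(brand_mention_rate(responses, "Acme"))  # {'chatgpt': 0.5, 'perplexity': 1.0}
```

Real GEO tooling layers a lot on top of this, citation types, prompt personas, temporal tracking, but per-model visibility share is the basic unit being measured.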
Katrin (52:21)
No, we really don't, but that's really fascinating. I'm looking forward to seeing that study. Also, I have not read your book yet. So can you tell us a little bit about The New Big Book of KPIs?
John Lovett (52:34)
Yeah,
I had a copy around here somewhere; I don't know where it is. Yeah, so, The New Big Book of KPIs, right here. I wrote this almost a year ago and published it in March of 2025. My former partner, Eric Peterson, wrote the original Big Book of KPIs, and I still rely on it; I use it and I pick it up. But I was trying to formulate a KPI for one of my clients, and in the preface of that book,
Eric said this book is meant to be rewritten. I looked, and it had been 18 years since that first Big Book of KPIs. So I called Eric up on the phone and said, hey, I'm thinking about rewriting this to refocus it. Because so frequently people would come to me and say, hey, we're going to use bounce rate, and I'm like, no, don't use bounce rate. How about time on site? No, please don't use time on site. The notion of KPIs carries a lot of baggage and legacy.
This was my way of thinking about how to create a new paradigm, a new framework, for developing KPIs in an AI world. And for me, one of the fun parts about writing the book was that I used AI to write it. I developed a formula; I built an agent that helped me formulate these KPIs, and I told it what I wanted the outputs to look like. Then I organized it so that I've got different categories of KPIs. It was a very fun experiment. My original goal was to write it in three weeks.
It took me three months, which was a little longer, but that is way faster than the first book I wrote, that's for sure. But yeah, and I can share this for your show notes, there's an accompanying GPT that goes along with the book. The idea of KPIs, key performance indicators, is that they need to be metrics that force you to take action, that tell you to do something. Even LLM visibility isn't a great
KPI, because you're like, oh, my visibility went up; what should I do? And you're not really sure. So think about KPIs in such a way that they force you to take action, that really make you ask: if this number moves, what am I going to do about it? That is the way I tend to teach people to think about KPIs. Make them such that when a number moves or shifts, it is something you want to take action on,
and you can really see what you should do about it in order to change your market.
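The point that a KPI should force action can be illustrated as pairing each metric with an explicit threshold and a predefined response. The metric names, thresholds, and actions below are invented for the example; they are not taken from the book.

```python
def check_kpis(metrics, rules):
    """
    rules: {kpi_name: (threshold, direction, action)} where direction
    is "below" or "above". Returns the list of actions triggered by
    the current metric values, i.e. the KPI tells you what to do.
    """
    actions = []
    for name, (threshold, direction, action) in rules.items():
        value = metrics.get(name)
        if value is None:
            continue
        if (direction == "below" and value < threshold) or \
           (direction == "above" and value > threshold):
            actions.append(f"{name}={value}: {action}")
    return actions

# Hypothetical KPI definitions: each number comes with its action.
rules = {
    "checkout_completion_rate": (0.60, "below", "audit the checkout flow for new friction"),
    "ai_referred_traffic_share": (0.05, "above", "review how agents experience key pages"),
}
print(check_kpis({"checkout_completion_rate": 0.48, "ai_referred_traffic_share": 0.07}, rules))
```

A metric with no rule attached is just a number to watch; the rule is what turns it into a key performance indicator in the sense described here.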
Katrin (55:04)
Well, we'll definitely put that in the show notes, especially the GPT. I'm looking forward to playing around with it. One last question, though. You've been in the industry for 20-plus years. For people who are just starting, or who are building their career and going through this transformation very early in their journey:
any advice? Anything you know now that you would like to have known at that age, and that you would like to pass on?
John Lovett (55:45)
Well, I wish someone 20 years ago had told me to buy Nvidia stock. I probably would have. No, I guess I thought about this question in the context of: 20 years ago, I probably would have said, start now in AI. And again, it wasn't even a thing 20 years ago, but machine learning has been around for a long time. I would tell new analysts:
Katrin (55:51)
That would have been nice.
John Lovett (56:14)
Get started today. The best time to start would have been yesterday, but the next best time is today, or the saying goes something like that. Don't be afraid of it. Take bite-sized chunks, roll up your sleeves, get yourself a... If your company doesn't offer you enterprise tools, make the purchase yourself and just start using them. Figure out: what is it about my workflow, the way that I operate,
Katrin (56:21)
Mm-hmm.
John Lovett (56:42)
where I can introduce AI to this process to help me do it smarter, faster, better? That's really what I would encourage people to do. And again, I tell my kids this too. I've got a college-age son and two boys in high school, and when they ask me about AI, I say, don't let it write your paper. That's not what it's for. You can ask it to help you organize it. You can use it to take your thoughts further than you would have been able to by yourself.
And for me, that's really what it is. It's a partner that helps me think analytically, advance my ideas, show me things, give me feedback on what's working and what's not, and explore. Going into it believing it's going to be the answer engine and produce everything for you is the wrong belief. You need to go into it with the mindset that I have good ideas, or ideas where I want to start something, and this can help me take it to the next level.
And it takes a lot of work. It takes a lot of training. Every one of the agents and tools that I build, I have to teach it my voice. I have to teach it what I want it to say, and what I don't want it to say. I have to train the models and say: I know you're designed to be probabilistic and you're going to tell me the next best thing; I want you to be deterministic. Use SQL, build SQL queries to do calculations. Show me your math. Don't just give me an answer, because it'll make stuff up.
Until you start to see that, and have an analytical eye to say, that number looks funny, that looks off, I'm not sure I trust that. You can ask those things, but then build them in as part of the instructions and the training you give to these tools. That is really how you start to get the maximum value. You don't have to teach it every time you go to do something; it knows where you want it to start. And that's where you start seeing bigger and bigger dividends from using AI tools.
Katrin (58:36)
I think that's amazing advice. The best time would have been yesterday, but the next best time is today. And let's not forget it's a lot of work, so you'd better actually get started. John, this was amazing. Thank you so much for talking to me. For people who want to find you, talk to you, read what you write, just tell us; we'll put everything in the show notes.
John Lovett (58:44)
It is a lot.
Yeah, yeah, the best place to find me is on
LinkedIn: John Lovett on LinkedIn. I'm very active on that channel. You can also visit seerinteractive.com, which is our company website; I publish a lot on the blog there. And I write a lot on LinkedIn. Those are the best places to find me.
Katrin (59:17)
Thank you. So that's it for episode 15 of Knowledge Distillation. If today's conversation made you want to experiment with AI for analytics, visit us at ask-y.ai and try Prism. Thanks for listening. And remember, bots won't win. AI analysts will.