Chapters
00:00
The Journey of a Modern Analyst
07:45
Building Tools for Efficiency
17:03
Leveraging AI in Analytics Workflows
22:24
Evolving Workflows with AI Tools
27:23
The Changing Landscape of Marketing Analytics
32:20
Contextual Understanding in Marketing and AI
37:16
Adapting to Change in Analytics
41:59
Future of Analytics: Orchestrators vs. Operators
Katrin (00:00)
Welcome to Knowledge Distillation, where we explore the rise of the AI analyst. I'm your host, Katrin Ribant, CEO and founder of Ask-Y. This is episode six. And today we are talking to Jim Gianoglio about the dual life of the modern analyst, both building AI tools and using AI tools to transform their own work. So Jim, you've been in marketing in various roles for a couple of decades.
Your skill set spans from graphic design to creative direction to SEO to analytics and now entrepreneurship and software development. So quite, quite a spread there. You are also the co-host of Measure Up, a podcast about marketing measurement, as it says in the name. I'm, by the way, an avid listener, and I highly recommend it. So
We serve together on the organizing committee of MeasureCamp New York, the MeasureCamp with the best pizza. I always have to say that. And also the best coffee, since the company that you founded in 2023, Causal, sponsors it. The coffee, I mean. Ask-Y sponsors the breakfast, which allows us to make these stupid jokes on the boards like, ask why you should have breakfast. Because the coffee is great.
Jim Gianoglio (01:00)
Oh yeah.
That's right. It's right up there, right up there with dad jokes.
Katrin (01:25)
Analyst humor, what can I say? So please tell us about your journey. Oh yeah, no,
totally, totally. We're an exciting crowd. So please tell us about your journey and what made you decide to leave the agency world and start your own consultancy focused specifically on marketing measurement.
Jim Gianoglio (01:46)
Yeah, so I guess like anyone with enough gray in their beard like me, we didn't go to school for this. It wasn't like, oh, I went to school to be a web analyst or a data scientist. Well, I guess data science, yeah. We kind of got into this by a long and circuitous route. But my path was photographer to
Katrin (01:57)
There was no school. There weren't even books about it.
Jim Gianoglio (02:13)
TV and radio commercials, print ads, web design. I kind of did all that with a small agency, and then because of the web design I got into SEO, and then from SEO to analytics, from analytics to data science, and then on to what I do now with Causal Analytics, which is, you know, my own consultancy doing marketing analytics and measurement. And in my last role I ended up being the bridge between the marketing team and the data science team,
because I kind of speak both languages. I have that sort of creative background and marketing background, but then I also have the data science chops. And so I ended up being kind of the bridge, the translator, between both sides, which is a pretty good skill set to have. And when the opportunity came to start my own company, I jumped on it. So, by opportunity, I mean I was laid off, which, by the way,
Best way to start your company is to have your previous company pay you for the first four months while you start your company and get your first couple of clients. Also, it helps to have an amazing, wonderful, supportive wife that happens to have our health insurance. So all of those things kind of combined, and yeah, Causal Analytics was born. That was a couple of years ago, two and a half.
Katrin (03:24)
That does help too, yes.
And so just to expand a little bit on your data scientist skill set, you've really gotten quite deep into it, right? And you're an R aficionado. Can you tell us a little bit about that journey, how you got into it, why you chose R, for example, what you studied, where you are there?
Jim Gianoglio (03:54)
Yeah, yeah, I mean, this will be a common thread, but I ended up going back to school in 2016, graduate school for data science in essence, because I had a client who was kind of pushing me on marketing
questions, and I didn't have the skill set or the knowledge to be able to answer their questions. And I didn't like not knowing. So I said, well, maybe I'll go back to school. So I did. And yeah, R was just kind of what's used in academia; in statistics programs, not so much computer science programs, but statistics and the data science side of things tend to be in R, or at least they were at the time. I'm sure that's
roughly equal parts Python now. That's just what I happened to pick up at grad school and am the most familiar with.
Katrin (04:48)
So you do have some schooling in the end. Yeah, yeah, yeah. So you went back to grad school to be a data scientist. You continued working for the same company. And when the opportunity came, you started Causal. And can you tell us about Causal? What is Causal?
Jim Gianoglio (04:51)
yeah.
Yeah, Causal Analytics is a boutique consultancy. We work with clients on marketing analytics broadly, so helping clients measure and understand how their marketing is working. Specifically, we do that with marketing mix modeling and incrementality testing. Those are the two core areas that we focus on: helping clients understand
which channels are working, which channels are maybe not as efficient, where they should shift budget from this channel to that channel to get the most conversions or sales or revenue or leads or whatever it might be.
Katrin (05:41)
And so because you started this company and you obviously have the data scientist skillset, you end up working really across the entire analytics life cycle, right? From getting the requirement, from meeting the clients, getting the clients to work with you, getting the requirements from the clients, understanding the problem, sort of aligning on a scope of work.
to then all of the hands-on analysis, the presentation of the results and all the follow-ups, right? All of that is you. And so I imagine for that, I mean, I don't imagine, because you told me, so I know that you are very keen on building efficiency tools for yourself, right? It's quite important, because in what you do in marketing mix modeling, you have to sort of work with
Jim Gianoglio (06:13)
Exactly.
Katrin (06:37)
Do you actually, I don't know this, do you always choose which model, which marketing mix model you work with, or is it something that your clients can have some strong preferences for?
Jim Gianoglio (06:48)
Yeah, that's a good question. It depends. I've had clients where their Facebook rep has told them they should be doing marketing mix modeling and, oh hey, we have this thing called Robyn that you can use. And so they've been kind of pushed to use Robyn, and
Katrin (06:57)
they listened.
Jim Gianoglio (07:02)
I get pulled in to help them build a Robyn model. I've had another client where it was Meridian; their Google rep said, hey, you should build marketing mix modeling with Meridian. And so they needed help, and I came in and helped with Meridian. I don't have a strong preference. It's usually whatever makes the most sense for the client, whether they want to own it themselves or not, whether they have the in-house expertise or not.
Katrin (07:05)
Mm-hmm.
Jim Gianoglio (07:29)
Sometimes you have a smaller team that just needs a platform, something that they can log into and see and play around with, but not a fully custom-built model. So there's a lot of different ways that you can approach that.
Katrin (07:45)
So given the breadth of your skills and your practice, I thought you'd be an ideal person to talk to about this dual role we see appearing with the rise
of the AI analyst, because LLMs are powering a shift of the analyst skill set, both to the left, toward coding, and to the right, toward more of a business aspect. And so we see AI analysts both using AI to power their analytics lifecycle and using AI to build tools
to help themselves or to help their process. And this is for the audience: if you've ever been to a MeasureCamp in New York, or any MeasureCamp in the US, I was in Chicago and in Washington and both used Jim's app to digitize the session board. By the way, if you've never been to a MeasureCamp, go to a MeasureCamp. It's really worth it.
Jim Gianoglio (08:37)
Yes.
Katrin (08:40)
So it's an unconference; it's the audience that actually makes the conference. And what happens is all of us put our paper notes, literally our post-it notes with our session names, on a physical board on a wall. And then after that, some people come and digitize that with Jim's app, and then you can just have your program online. And so I had the chance to sit in on your session in Washington,
where you broke down your process and what you learned, basically, when you were building the app. So please walk us through your journey. What made you start? How did you select a tool? What were your learnings? How did that whole thing happen?
Jim Gianoglio (09:24)
Yeah, yeah. So, I mean, the reason why I built that, in one word, is curiosity. You know, with LLMs and AI over the past three, four, five years taking up all the oxygen in the room, and everyone talking about what they're doing and how fast they're progressing and what you can do,
there's this feeling of being left behind. I don't know if you've ever felt that, especially lately as fast as things are moving. It's like, oh, I need to build an MCP server now. What is an MCP? I don't know. It's just moving so fast. And so really, about, I don't know, maybe a year ago or so, it just finally got to a point where like, okay, I have to dive into this. And sometimes like,
Katrin (09:51)
yes.
And really for
this, you want a project, right? I mean, I just recommend that to everyone. Like don't just try the theory. It works for mostly nobody. You want a project and you want a project where you have stakeholders, where somebody is going to tell you, ⁓ thank you. This was so great. You want gratification.
Jim Gianoglio (10:12)
Exactly.
Exactly. Yeah. And that's where the idea of the session board, the virtual session board, came in. Cause we kind of experienced this firsthand at MeasureCamp New York, where we had the physical board with all of the session cards up on it, but there wasn't a way for people to just look on their phone to see what the upcoming sessions were. Right? After every session, there would be a big
Katrin (10:47)
And we tried to do an online Google
Sheet, but it really wasn't great.
Jim Gianoglio (10:50)
Yeah, there were some options out there, but
everything required someone there with a computer or with their phone to manually type in all of the sessions, and that's just crazy. So that was the perfect idea: can I build an app that lets a volunteer go to the session board, take a picture of the session card, handwritten or printed out, however it might be, take a picture of it,
Katrin (10:59)
Yeah.
Jim Gianoglio (11:15)
hit a magic button to have it auto-translate the image into text, and then just upload it automatically. So that felt like a chunky project that wasn't beyond me, wasn't too much to build. So that's what got me started. And I just chose Replit. That was the tool that I ended up going with, for no other reason than I was the most familiar with it, because I listen to a lot of podcasts and
I'd heard two or three different podcasts with the founder of Replit. And so, you know, that's all it took, right? If I'd listened to podcasts with the founder of Lovable, I probably would have gone with them instead. But as it was, I went with Replit. And really, whether it was Replit or Lovable or Base44 or one of these other tools, there are dozens out there now, what I was looking for was something that would handle all the hard things, like hosting and deploying and dealing with
Katrin (11:52)
It's a good reason.
Jim Gianoglio (12:13)
databases and security scans. Something where there's basically just a web interface where I could type in what I wanted to build and have it build it for me and do all the hard stuff behind the scenes. You know, I didn't want to have to set up an environment locally on my computer or anything like that. So I went with Replit. Since then I've actually moved on to using Cursor. The last thing I built and launched, actually earlier this week, was a fairly simple
MMM pricing calculator using Cursor, which was another fun experiment, kind of having to do some of the hard things myself, like the local environment and, you know, pushing to GitHub and all that kind of fun stuff. So, I mean, that was kind of where I started. And then, you know, the things that you learn, the things that I learned: the number one thing I would say would be patience,
because when I first started off, I went from zero to 80% in about an hour. And it was like that magic moment. If you can remember back to being a kid and the first time you saw a magic trick, it's like, my God, where did it go? It just disappeared. It was truly like that; that's the closest feeling I can use to describe what it was like. Like, I just described what I wanted and it built it.
Katrin (13:24)
Amazing.
Jim Gianoglio (13:39)
It got it pretty close, like one shot. It did really good; the design was clean, I liked the design of it. And from there it was like, wow, that was crazy. But it's funny how quickly our mindsets can shift from, oh my God, this is sorcery, this is magic, to, oh God, why can't this stupid machine figure out how to fix this simple bug? It's like,
we kind of forget how amazing it is that it even got to where it was. And now we're just like, ugh, it can't even fix this thing, come on. Because there were plenty of times in this. Like, one of the learnings I talked about in Washington was learning how to work with the LLM to get certain results. And sometimes it'll do something that you think is really complex, and you're like, my gosh, there's no way it's going to be able to get this right on the first try. And like,
you'll say what you want it to do and it just gets it right, and you're like, my gosh. And then other times there's this tiny little bug, or this element is positioned too far to the right, and you'll be like, can you reposition this to the left? And then it completely breaks the whole app, or something horrible goes wrong. And then you're like, okay, you didn't fix it, it's still broken. And you show it screenshots, and in its sycophantic way it says, oh, you're absolutely correct, you're so smart, let me go check and see what the problem is.
And it says, I found the issue: in this file here, line 27 had this and this, I've now fixed it, try it again and it should work perfectly for you now. And then you try it again and it didn't do anything. It didn't fix it. It didn't even make the change that it said it made. It just kind of lied to you. And then you go back and forth, like 12 rounds of this, and you spend an hour and a half, and it just keeps lying, just keeps making stuff up. Like, I fixed it. No, you didn't. I fixed it now. No, you didn't.
So it's learning to deal with those situations, those loops that you get into, and trying to figure out, what's a side door I can go through to try to fix this? Or maybe I just revert back to the previous version and start over. So there's a lot of that that goes into it. But it's like with anything: the more you do it, the more you figure out the tricks to make it work.
Katrin (15:52)
I just have to make a shameless plug here, because what you just described, about context management, attention, and context poisoning, is exactly what the two latest floofy videos are about.
Jim Gianoglio (16:07)
Okay.
Katrin (16:09)
I explained those concepts because I really do think that what you're talking about is so fundamental to evolving our skill sets as analysts.
We need to gain a really deep and broad understanding of how these tools work, what they can do, what they cannot do, how their actual internal mechanics work.
so that we get the best out of them and we don't get stuck for an hour and a half in a loop of trying to fix something, because we don't understand why it can't get it. It's so simple, I'm showing you a screenshot, you can do all of these complicated things, why can you not do this one? Right? And also it's infuriating; at some point it's really, really annoying. But I think that that is really part of the upskilling that we need to do as analysts.
Jim Gianoglio (16:53)
Yeah.
Katrin (17:03)
On that note, let's talk about the other side of things. How are you using LLMs to power your analytics workflows? So you're a user of Prism, Ask-Y's platform, and we actually built quite a nifty solution together to match DMA values across platforms, which is something that one has to do quite a bit in marketing mix modeling, in the US at least. But you're obviously using a large variety of solutions in your day to day, because
Jim Gianoglio (17:14)
Mm-hmm. Yeah.
Katrin (17:32)
you're curious and you're a tinkerer and you don't like not knowing. So let's start with your first aha moment with AI in analytics. What was that?
Jim Gianoglio (17:44)
I'm going to disappoint you. I couldn't think of an aha moment. I had that aha moment building the session app. That was the aha moment. And the day-to-day usage that I use LLMs in, it's been more of a gradual layering on of additional things that I'll hand off to the LLMs to help with. And at first, and this is still the majority of how I use
Katrin (17:52)
Yes?
Jim Gianoglio (18:12)
you know, LLMs like ChatGPT, Gemini. I don't really use Claude a whole lot; it's either ChatGPT or Gemini at this point. But I just use it as a pair programmer, right? So I've been doing a lot more in Python lately, but in R, I would get lazy and I didn't want to look up the documentation for how a certain function worked or what the syntax was. So I'd just go to ChatGPT and say, how do I do this? And it would give me some code and I'm like, yeah, that looks right, and copy and paste and it would work, or whatever. Or it would give me some code and I'd be like,
No, I think that's wrong, it's got this in the wrong place. And I would just make the change and paste it and it would work. And in Python, same story, right? I'm less familiar with Python, less capable in Python. So it's just using, oftentimes, Gemini, because it's built right into Colab, saying, hey, I need to transform this data frame from this to this, how can I do that? Oh, here's the code. Thanks. Apply. It works. Great. So that's been kind of the initial thing
that it's been really good at for a long time, and it continues to get even better, which is just helping out with coding. And then again, I'll try to layer on additional things as I go. And sometimes it works and sometimes it doesn't. And I think this is the critical thing, right? Case in point: now that I use Prism for the DMA matching, that's great. Before Prism existed,
I tried doing this with ChatGPT, with Claude and with Gemini. Basically, long story short, the context of this is that in each ad platform, you can get sort of impressions and clicks and spend at the DMA level, right, out of the platforms. You can export the CSV and get that. Unfortunately, every ad platform has a slightly different way of naming the DMA. I'm in Pittsburgh, so it's the Pittsburgh DMA. Some platforms will be Pittsburgh, comma, PA. Some will be Pittsburgh, comma, Pennsylvania.
Some will be Pittsburgh PA without the comma. Some will be US colon colon Pennsylvania colon colon Pittsburgh. There are like seven different variations of the DMA name for Pittsburgh, and seven different variations for all 209 other DMAs. And so getting them all to match up was a huge pain in the butt. And now, fortunately...
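[Editor's note: the normalization Jim is describing can be sketched in a few lines of Python. Everything below is illustrative: the variant spellings come from Jim's examples, but the canonical lookup table and state-name map are stubs, not any platform's actual export format.]

```python
import re

# Stub canonical table: normalized DMA name -> Nielsen DMA code.
# A real version would carry all 210 DMAs.
CANONICAL = {
    "pittsburgh pa": 508,
    "philadelphia pa": 504,
}

# Stub map from full state names to postal abbreviations.
STATES = {"pennsylvania": "pa", "ohio": "oh"}

def normalize_dma(raw: str) -> str:
    """Collapse platform-specific DMA spellings to a 'city st' key."""
    s = raw.strip().lower()
    # Handle 'US::Pennsylvania::Pittsburgh' style exports:
    # drop the country, then reorder to city-first.
    if "::" in s:
        parts = [p for p in s.split("::") if p and p != "us"]
        s = " ".join(reversed(parts))
    # Drop commas: 'pittsburgh, pa' -> 'pittsburgh  pa'.
    s = s.replace(",", " ")
    # Swap full state names for postal codes, token by token.
    tokens = [STATES.get(t, t) for t in s.split()]
    # Squeeze repeated whitespace into single spaces.
    return re.sub(r"\s+", " ", " ".join(tokens)).strip()

def dma_code(raw: str):
    """Look up the DMA code for any of the platform spellings."""
    return CANONICAL.get(normalize_dma(raw))
```

With this sketch, "Pittsburgh, PA", "Pittsburgh, Pennsylvania", "Pittsburgh PA" and "US::Pennsylvania::Pittsburgh" all map to the same key, which is the whole point of the matching step.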
Katrin (20:29)
We have an explicit rating, by the way.
Jim Gianoglio (20:35)
We can do that in Prism very easily now, but before, I tried doing that in ChatGPT, I tried doing that in Gemini, and they just failed, right? And the critical thing here is, if I went back today and tried that again, I'll bet they would get it. I'll bet they would be able to do it. And so I'm constantly going back to say, this thing that it wasn't able to do a month ago, can it do it now? If it's something that I have to keep doing, I'm going to keep trying it periodically, because eventually it's going to be able to do it.
And I think that's a different mindset than what we're used to, which is, I tried this thing, it didn't work, I've forgotten about it now. We can't have that mindset with LLMs, because they're getting so much better so quickly that eventually they will be able to do it. So we have to keep trying.
Katrin (21:24)
You know, the capability of the models is evolving really remarkably, including their ability to handle context and other aspects. But something that I think we need to focus on as analysts is how we integrate them into our workflows.
Because, you know, one question that I had when you were talking about code generation is: you don't use anything like Cursor or Windsurf for that? You go directly to the model?
Jim Gianoglio (21:54)
So when I'm building an application, like when I built the MMM pricing calculator, I did that in Cursor. But that's sort of an IDE environment; when I look at that, I think of it as, I am building something, I'm creating an application of sorts. When I'm doing modeling work or data science work,
Katrin (22:14)
You're building an application, yes, yes.
Jim Gianoglio (22:24)
then I tend to use either ChatGPT or Gemini. So I'll have ChatGPT open on a separate screen; I have three monitors, so I'll always have ChatGPT open in one. And if I'm working in Google Colab, it has Gemini built right in there, so oftentimes I'll use Gemini right within Colab. That's generally the workflow that I use.
Katrin (22:52)
And when we see this workflow, these tools, and the impact they have on your clients' workflows, I can imagine that at this point you can observe some organizational changes, expectation changes, in your conversations with your clients when you're talking about how to set up a project.
Jim Gianoglio (23:17)
Yeah, I think
organizationally, and this is very anecdotal, I don't have a ton of examples of this, I've kind of seen it here and there, but it seems like people are taking on more than they previously used to. Or maybe it's that the companies are expecting more of the people than they used to expect of them. So for example, there's a client I worked with where
they're trying to basically build a wrapper around Meridian, kind of a front-end interface for Meridian, so you don't have to run it through a Python notebook. And initially they had a software developer, you know, I'm not a software developer, an engineer, software engineer, developer, whatever you call them, someone who is trained in computer science and development. And he was building things out in what I would say is kind of the right way.
He was considering the architecture of it. He was concerned with the modularity of it and how things fit together, the right way that, like, I have no idea how to go about. And then, you know, fast forward to today, and he's no longer involved in the project, and they have another person working on it who is a data scientist and understands data science, but not necessarily software engineering and development. And in the past, that wouldn't have worked.
Right? Like, a data scientist knows how to code for data science things, but not software engineering, like developing a product. And yeah, in the past, you just couldn't do that. And so now it's like, well, he's a smart data science person, he knows how to code some things, and now we have Cursor and Windsurf and ChatGPT and Claude Code and all these other things, so he could probably just figure it out.
Katrin (24:53)
Yes, not the same thing.
Jim Gianoglio (25:18)
Again, I don't know how much of this is me looking too deeply into one specific scenario, but I think that's probably happening.
Katrin (25:26)
I think it's very illustrative,
very, very illustrative of a trend that we see now, though, because I really see this confusion around an expansion of skill set. So, having a tool at your disposal that can allow you to expand your skill set into areas that you wouldn't have been able to touch before. But expanding your skill set doesn't mean that you are
that role, that you're trained to do that role, right? I can expand my skill set into data engineering. I am not a data engineer. And until I train and decide to do that job for a while, I will not be a data engineer. I will be somebody who can, with the help of a tool, produce code that's good enough to do a data engineering task as a one-off, and badly if done repeatedly, but probably not in production. I should not
Jim Gianoglio (25:54)
Hmm.
Katrin (26:24)
take that and put it in production most probably, right? I mean, if it works, you're very lucky. That's purely a question of luck at that point.
Jim Gianoglio (26:26)
Exactly, right? Right.
Yeah. And it's great for, like, a marketer who wants to prototype a solution and can then hand it off to the engineering team to say, okay, here's what I built in Lovable or Bolt or Replit or whatever. And then the engineering team can take that and, professionalize is a bad word, but, you know, make it robust, make it secure, and work out all the bugs and things like that. Yeah.
Katrin (26:36)
Fantastic.
But what I do see, specifically when I look at job descriptions, but also from what people tell me: we're in this immature, sort of beginning phase of the rise of the AI analyst, where when you look at job descriptions that say AI analyst, you have everything, everything bundled together. You know, build infrastructure,
fine-tune LLMs, maybe even do, you know, pre-training for LLMs, and also create dashboards. Basically the whole thing. Yeah, that too, that too.
Jim Gianoglio (27:32)
Yeah, and you need 10 years of prompt engineering experience, right?
Yeah. Job titles.
Katrin (27:43)
Yeah, yeah. So you run MMM Hub, a great newsletter, highly recommend, and also a community around marketing mix modeling. One thing that you talk about a lot is incrementality and actually proving marketing impact. It's fundamentally a context problem, really, right? Because, like in every analysis in marketing modeling, you have a certain modeling of reality, you have a reduction of information, and then you have to extrapolate what your
analysis says about what the recommendations are. So you're trying to isolate signal from noise, obviously, across channels, geographies, time periods, external factors, et cetera. How do you think about context engineering in an MMM, in parallel with context engineering when working with LLMs? Lots of acronyms here.
Jim Gianoglio (28:36)
Yeah, that's interesting. So I've never heard it framed as a context problem before in the marketing effectiveness or MMM space. But ultimately, you're right, we're trying to show or prove that there's a causal relationship between A and B, right? If I increase A by 20%, B increases by 10%. Or if I turn off Facebook ads, revenue decreases by 5%, right? So anytime
you can set up a test that isolates a single change, right? And this is what you're talking about with the context, right? We're trying to isolate the signal from the noise, the thing that we changed from everything else that was happening at the same time. And so really, there are a lot of different ways that we can try to get at that actual causal relationship between the thing that we changed and the effect that we're seeing it has, or doesn't have.
And it kind of comes down to, you could do incrementality testing. You could do a geo-lift experiment, or a geo holdout experiment, where you turn off your ad budget in certain geographies, different DMAs or states, and then compare it to the ones where you left it on. So you can see: as much as possible, we left everything else the same, the only change we made was turning the ads off, and here's the difference between our test and control regions. And so that's really the best we can do from a marketing
standpoint. It's not a perfect randomized controlled trial, but it's about as good as we can get, right? In terms of taking that into the MMM side of things, you know, MMM is definitely more high-level, right? It's not as much that kind of ground truth of, we made this change and then we saw this result,
we had test, we had control. Like, MMM is just looking at patterns in data, right? It's looking at how much you spent over time and what your revenue was over time across these different channels. And, you know, there are causal MMMs and there are frequentist MMMs and different ways of going about it, but generally it's more of a, you know, we're pattern matching, right?
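[Editor's note: the geo holdout readout Jim describes can be sketched as a simple difference of means. All numbers below are invented, and real analyses use proper geo-lift methods (synthetic controls, GeoLift-style models) rather than this raw comparison; the sketch only shows the shape of the test-versus-control logic.]

```python
# Invented pre/post revenue per DMA. Ads were turned off in the
# holdout DMAs and left running everywhere else.
pre  = {"dma_a": 100.0, "dma_b": 110.0, "dma_c": 95.0, "dma_d": 105.0}
post = {"dma_a": 104.0, "dma_b": 115.0, "dma_c": 88.0, "dma_d": 97.0}
holdout = {"dma_c", "dma_d"}  # ad budget turned off here

def mean_change(dmas):
    """Average revenue change across a set of DMAs."""
    return sum(post[d] - pre[d] for d in dmas) / len(dmas)

control_change = mean_change(set(pre) - holdout)  # what "normal" looked like
holdout_change = mean_change(holdout)             # what happened without ads

# Incrementality attributed to the ads: the gap between the two groups.
lift = control_change - holdout_change
```

Here control regions grew by 4.5 on average while holdout regions dropped by 7.5, so the estimated incremental effect of the ads is 12.0 revenue units per DMA over the period.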
Katrin (30:38)
Mm-hmm.
But ultimately, you do an analysis, a marketing mix model, and you have the information you have in there, right? You know that this exists in a broader context. And you know that whatever the result of your analysis is, ultimately you're presenting it to a person, and it will have to pass a gut check, basically,
pass the check of this person thinking: I understand, I can see how this translates to something that I understand in reality, and my intuition tells me, yes, I can see how, let's say, a certain type of product is more effective or sells more in a certain geography at a certain period of time, anything along those lines. There has to be some translation of that into a contextual reality
Jim Gianoglio (31:51)
Hmm.
Katrin (31:51)
that can be extrapolated by the person who's going to make the decision, because they have a model of the world and they have a model of their business, and they put everything into that. My feeling, my point with this, is that when we work with AI, if we don't have that model of how the AI thinks or works, then we lack that context. We don't know what we can
tell the AI, what the AI will take into consideration, what it will not take into consideration. And at that point, you're very much taking the results at face value, without necessarily being able to deconstruct why this is the result you're getting and how much of your reality's context is part of it.
Jim Gianoglio (32:47)
Yeah, that's really interesting, right? So if I'm following your frame of thought, we have different measurement outputs, right? Maybe it's platform metrics, it might be the results of a test, it might be whatever Google Analytics is telling us, or our MMM vendor that sent us a slide deck that shows the output of channel ROAS and all that stuff, right? And there are all these different things coming at us that are telling us
how certain channels are performing, right? And then that all feeds into our brains that have all the other context, right? So I'm kind of like thinking about it from like a context window, right? Where it is...
Katrin (33:25)
Yes.
Mm-hmm.
Jim Gianoglio (33:33)
if we just feed the results of a marketing mix model into an AI, but that AI doesn't also have, let's say that AI doesn't have a large enough context window to also hold all of the other details about that business, about the fact that their strategy changed in Q1 and they shifted to more awareness advertising, that they...
Katrin (33:57)
and not only have
the context but pay attention to the right aspects of it. And that's a skill to feed that in.
Jim Gianoglio (34:02)
Exactly, right?
Yeah, and in Q2, they had a marketing manager who was really poor at his job and did some stupid stuff in Facebook, and that's why Facebook was so bad that month, and they fired him that month, and all these little details that you can't feed into a model. With a marketing mix model, there's a limited number of
Katrin (34:23)
Yeah.
Jim Gianoglio (34:31)
input variables that you can put into it, right? Because you're limited in the amount of data that you have to observe. But how that fits in, in kind of a broader sense, from an AI perspective, is really interesting. I think this is the way things are moving, which is to say that there are companies building out AI as sort of the interpreters of
Katrin (34:34)
Yes.
Jim Gianoglio (34:59)
marketing mix modeling, and probably other things like incrementality testing too, right? Like they're building the AI layer on top of the measurement platforms so that people can talk to them and say, hey, why did my revenue go down in December? And it can just respond, right? And now if that AI model has all that context, it'll be interesting to see.
A, does that improve the recommendations, right? Because just because it knows everything that happened in the company doesn't necessarily mean it's going to understand the causal relationship, or the fact that this happened is why that happened. So it'll be interesting to see.
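The AI layer Jim describes, sitting on top of a measurement platform and answering questions with business context attached, could be sketched as a retrieval step before the question ever reaches a model: pick the few business-context notes most relevant to the question, then bundle them with the model output into a prompt. Everything below (the notes, the keyword-overlap scoring, the prompt template) is an invented illustration, not any vendor's actual product or API.

```python
# Illustrative sketch: attach relevant business context to an MMM result
# before handing it to an LLM. All data and scoring here are made up.

def score(note: str, question: str) -> int:
    """Crude relevance: count shared lowercase words (a stand-in for real retrieval)."""
    return len(set(note.lower().split()) & set(question.lower().split()))

def build_prompt(question, mmm_output, notes, k=2):
    # Keep only the k most relevant notes so the prompt stays inside the context window.
    relevant = sorted(notes, key=lambda n: score(n, question), reverse=True)[:k]
    context = "\n".join(f"- {n}" for n in relevant)
    return (
        f"MMM output: {mmm_output}\n"
        f"Business context:\n{context}\n"
        f"Question: {question}"
    )

notes = [
    "Strategy changed in Q1: shifted budget to awareness advertising.",
    "Facebook manager fired in Q2 after poor campaign setup.",
    "Warehouse moved in Q3 with no marketing impact.",
]
prompt = build_prompt(
    "Why did Facebook ROAS drop in Q2?",
    {"facebook_roas": {"Q1": 3.1, "Q2": 1.4}},
    notes,
)
print(prompt)
```

The point of the sketch is Katrin's "attention to the right parts" problem: the warehouse note never makes it into the prompt, while the fired-manager note does.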
Katrin (35:31)
Mm-hmm.
Well, I'm just going to have to shamelessly plug Prism here, because that's exactly what we're building. This, you know, we call it JAM, joint associative memory: a context engine that not only has the context, but has the mechanism to pick the relevant parts out of the context at the right moment. That's actually the hard part. Large context windows exist; paying attention to the right parts of them
at the right moment, that's a really difficult problem to solve. But I think that is truly what makes the difference in having an analysis that ultimately helps you bridge that gap between the part of reality that exists in your data model and the part of reality that you have to interpret towards in order for your stakeholder to actually make a decision with your output, which is the
Jim Gianoglio (36:13)
Mm-hmm.
you
Katrin (36:36)
ultimate goal, right? The ultimate goal is for somebody to make a decision with the output of your analysis. That's success.
Jim Gianoglio (36:41)
Exactly. Yeah.
Katrin (36:44)
When thinking about all of this, and, you know, thinking about, like, you've got Measure Up, your podcast, you interview a lot of people, you talk to a lot of clients, obviously, you go to all the measure camps, you talk to a lot of analysts, so you have a
broad view on what is happening in the space. What are you seeing in how analysts are adapting, or not adapting, to this change? Because we've had changes before, but maybe never something this fast.
Jim Gianoglio (37:16)
Yeah, I think there's a spectrum, right? At one end, you're gonna have some people that are burying their heads in the sand; they're unwilling to change, they just wanna do what they've been doing for the past however many years, and they're gonna get weeded out. Like, they're either gonna retire or they're gonna move on to some other profession. I think at the other end of the spectrum, you have people who are excited about this new technology and trying it out and exploring and curious and, like,
Katrin (37:18)
Mm-hmm.
Jim Gianoglio (37:45)
building their MCP servers and figuring out agents and agentic workflows and things like that. They're kind of cutting edge and trying to stay on top of things that way. And I think the majority is right somewhere in the middle. They're using ChatGPT and playing around with it a little bit, trying to see how it can improve their work. There are various levels of exploring some of the other LLMs, but I think that's probably where most people are.
I think, if I had to guess, I don't see too many people who are in the heads-in-the-sand space. I mean, part of that is because if you're going to come out to a measure camp and spend all Saturday there, you've already self-selected into a certain group, right? But, you know, I'm sure there are those people out there who are just like, I'm so tired of change, I just want things to stay the same. I feel that way a lot of times too. Sometimes I just need a break.
Katrin (38:31)
That's true, yes.
Jim Gianoglio (38:45)
Can things not change for like five minutes? But yeah, just let me catch up. Yeah.
Katrin (38:49)
That would be nice sometimes. I actually really like the change. I really do
like it. I think it's what's kept me in this domain. If you think about the past two-plus decades, the amount of fundamental change we've gone through in basically the same profession, really very, very large changes. It's been really, really interesting.
Jim Gianoglio (39:11)
Mm.
Katrin (39:18)
If you were advising a traditional marketing analyst today, like somebody who's like, you know, this person
Jim Gianoglio (39:23)
you
Katrin (39:24)
who's like, I don't know if I want that much change, and, you know, I've been running campaigns and building reports for a while, and I kind of like it. Could that stay the same? But probably it won't, so I have to do something. So, you know, with that idea of a limited attention span and time
to learn new skills: what would you, with your breadth of skills and your breadth of experience, tell that person to focus on? It can be one, two, three, whatever you want, whatever you think is relevant.
Jim Gianoglio (40:06)
Yeah, no, I've thought about this. Here's my advice, here's what I would tell people. Let's see, most people are back to work on Monday. It's Thanksgiving week now, so a lot of people are off, but starting on Monday when you get back to work, take an hour and write down everything you do in a week, task by task. Not broad things, but: I do this thing
for this many minutes. What are the tasks that you do, right? So, tasks in one column; next column, how long each takes you. And then prioritize them: at the top of the list are the things that you truly believe need your human involvement, or that you enjoy doing, like the thing that you actually like about your job and want to keep doing. Yeah, that's at the top of the list.
Katrin (40:58)
Yeah, you should keep that.
Yeah.
Jim Gianoglio (41:01)
And then
down at the bottom of the list, you have the boring, mundane things that you hate to do, that take up your time, that you wish you didn't have to do. And then I would just start at the bottom of the list and work my way up, trying to figure out how I can automate this thing or use LLMs or agents or MCPs or whatever new technology is out there. How can I use
ChatGPT or Gemini or Replit or some other tool to maybe build my own tool to do this thing for me, to either completely automate it and get it off my plate, or at least make it so that I can do it in five minutes instead of 30, right? I would do that for every single task, starting at the bottom and working my way up, so that hopefully you get to a point where in a week you can do in
20 hours what used to take you 40 hours. So now you have an entire 20 hours, half of your week, left. You could take that a couple of different ways. You could say, I have more time to invest in higher-value activities for my company, things that will provide more value and show that I'm providing value, like answering difficult questions that the company has. Maybe it's something about
churn, like how do we reduce churn, or how do we increase customer lifetime value, or what is the CLV of our different customers? There are hundreds of different types of analysis that you could do from a marketing side or an analytics side that never get answered, because we don't have time, because we're doing all this garbage crap work that could be automated, right? So give yourself half of your week back, right? To provide more value.
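Jim's exercise (list your tasks, time them, sort them, automate from the bottom up) can be sketched as a tiny script. The task names, minutes, and keep/automate flags below are all hypothetical, just to show the shape of the audit.

```python
# A toy version of the weekly task audit: sort tasks so the ones needing
# human judgment stay on top and the automation candidates sink to the bottom,
# then total the minutes you could win back. All entries are hypothetical.

tasks = [
    # (task, minutes per week, needs human judgment or is enjoyable)
    ("Weekly report formatting", 180, False),
    ("Stakeholder strategy call", 60, True),
    ("Copy-pasting platform metrics", 120, False),
    ("Designing a new test", 90, True),
]

# Keep-doing tasks first; automation candidates below, biggest time sink last.
audit = sorted(tasks, key=lambda t: (not t[2], t[1]))

automatable_minutes = sum(m for _, m, human in tasks if not human)
for name, minutes, human in audit:
    label = "keep" if human else "automate"
    print(f"{label:8} {minutes:4} min  {name}")
print(f"Potentially reclaimable: {automatable_minutes} minutes/week")
```

Working up from the bottom row of the printed audit is exactly the "start at the bottom of the list" loop Jim describes.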
Katrin (42:41)
There is never a shortage of questions.
Jim Gianoglio (42:57)
That's gonna help out when companies are downsizing and saying, who do we need to lay off? Oh no, not Jim, he's doing some really good stuff. That'd be one way to do it. And the great benefit, too, is that if you're spending time learning how to automate the bottom-of-the-list crap, you're learning as you go, right? You're learning new skills and new things that are, again, going to make you more valuable and keep you on top of things. You know the cliched saying now, which is:
AI is not gonna take your job; someone who knows how to use AI is gonna take your job. Right? Be the person who knows AI. Don't just, like we said before, stick your head in the sand and hope things don't change, because they're gonna change. But yeah, I mean, you're learning as you go. You could also take that other 20 hours a week you have now, and if you wanna be more selfish, just have fun and build tools and learn more things and satiate your curiosity and build your own company.
Katrin (43:31)
Mm-hmm.
And so, yeah, absolutely. So, you know, we started talking about you both as a builder and as a user. In your ideal future, and that's a future where, at least in my picture of it, we have agents that manage more and more, and longer and longer, workflows for us.
And we become more orchestrators than operators of platforms. So our jobs move more toward architecting, planning, and intervening in the workflows when there's a decision point, basically. What does your day-to-day as an analyst look like? As an analyst and an entrepreneur, what does that look like for you?
Jim Gianoglio (44:48)
You mean in an ideal future or right now? Ideal, ideal. So here's what I would love, right? Kind of a throwback to Star Trek: The Next Generation. Again, a cliche, but there's a computer just ever present, right? And if you have a question, you say, computer... Jarvis, we'll go Marvel instead of Star Trek: Jarvis, how did that campaign do that we launched last week, right?
Katrin (44:51)
an ideal future. Let's go for ideal.
Mm-hmm.
Yes.
I like Jarvis.
Jim Gianoglio (45:18)
Vague question, but, you know, it's a super smart AI computer and it says (I'm gonna have to do Jarvis in a British voice here, so pardon my bad accent): So far, your new campaign drove an additional 124,000 in sales that you wouldn't have otherwise gotten, but expect the effects of that campaign to continue to drive an additional 58,000 over the next three weeks. Would you like to continue the campaign? Right, like it has all the brains and it knows how to do MMMs and...
Katrin (45:21)
Mm-hmm.
Absolutely.
one job is.
Jim Gianoglio (45:48)
Yeah, like it knows, like it has all the data, it knows how to do all the analysis, and it can answer those kinds of questions. Like, that'd be great, right? Cause what that lets us do, of course, of course. But that would let us focus on the fun, creative things, right? Which would be like, I don't know, let's come up with a wild video campaign, or let's come up with some
Katrin (45:58)
That would be amazing. And all the data is clean. Of course it is.
Jim Gianoglio (46:16)
other way of reaching our audience or our customers that might be new and exciting. How can we drive additional value without spending hours and hours and hours just figuring out what the optimal amount of money to spend in each channel is? That gets me excited and it pays my bills, but that's something a computer should answer.
Katrin (46:37)
I subscribe to that future. Jim, this has been fantastic. I could do this forever, especially if you start doing accents; I could listen to that for hours. So for people who want to check out MMM Hub, listen to Measure Up, or work with you on their marketing measurement challenges, where can they find you?
Jim Gianoglio (46:48)
Hahaha
Sure, yeah, if people want to join the MMM Hub Slack group or the newsletter, they can go to MMMHub.org. Otherwise, you can find me on LinkedIn pretty easily; I'm there pretty often, posting and lurking.
Katrin (47:20)
Well, thank you for distilling some knowledge with us. We'll make sure all of this is in the show notes. That's it for episode six of Knowledge Distillation. If today's conversation made you want to experiment with AI for analytics, visit us at Ask-Y and try Prism. Thanks for listening, and remember: bots implement, AI analysts architect. Thank you.