Michael (00:12)
It's just that it's a lot harder path than people are letting on. Cause today people are sort of like, you know, drop an LLM on top of your data warehouse and just let the insights flow. It's complete baloney. It's bullshit. It's just bullshit. It doesn't work like that.
Kat (00:31)
And I think, Michael, karma might have caught up with you because I believe that that makes you the quintessential AI analyst.
Michael (00:48)
Don't you dare. Okay, revenue is down for paid search and it's actually related to this. Otherwise, it's the classic: the guy searching for his keys under the streetlight, and they ask him, where did you drop them? I think it's over there. Why are you searching here? Cause that's where the streetlight is. Well, the AI has no choice but to search under the streetlight. So if you only give it light in one area, it cannot search anywhere else. It has no knowledge.
Kat (01:17)
Welcome to Knowledge Distillation, where we explore the rise of the AI analyst. I'm your host Katrin Ribant, CEO and founder of Ask-Y. After talking with Martin Kihn about the strategic aspects of the rise of the AI analyst, with Mike Driscoll about the technical evolution of data tooling, and with Shomik Ghosh about the investor's point of view on tech hype cycles and how they affect role creation in general, but in analytics in particular,
today we're going to the front lines. I'm very excited about this because at heart I'm an analyst. So this is about the practitioner, because while executives strategize and engineers build and VCs make bets, there's a group of people who actually work with this stuff in their organizations, for their clients, and just generally in their day-to-day work. So
there could be no one better, in my opinion, to open the practitioner track of Knowledge Distillation than Michael Helbling. Michael, you need no introduction. I'm still going to try though. So here it is. You've been in analytics for over 20 years. You started in the trenches with the first web analytics tools, when there were no user guides and we were all on Yahoo groups trying to help each other understand what a tag was.
And you've built analytics programs both as a consultant and as a practitioner. You are a co-founding co-host of the number one explicit analytics podcast, the Analytics Power Hour. Although as we discussed, you guys need to work a little harder on keeping that explicit rating these days. So it's been running, as you just told me, 11 years, not 10 years. So tell us a bit about the Analytics Power Hour.
Michael (03:09)
Yeah, I'm as shocked as you are to realize that it's been 11 years, but it's been an amazing journey. And it's very interesting to see how what we've talked about on the show, what we cover on the show, has transitioned over the years into different topics. And actually I kind of look at that year over year. And of course this year, you know, it's just topically
dominated by AI. So this conversation, I think, is also very timely as well.
Kat (03:43)
That is really fascinating because, you know, I told you this, but I'm going to say this again. The Analytics Power Hour, and you in particular, are partially responsible for me going back into building a company and going back to analytics, because I found myself during the pandemic just wanting to reconnect to analytics and just to listen, and found the Analytics Power Hour, which I wasn't listening
to before. I literally discovered it because it was the number one, actually the number one podcast on analytics, explicit or not. And so I started listening to it and I found myself, I mean, laughing by myself like an idiot at your jokes and thinking, my God, these are my people. I need this back in my life. And then I proceeded on to listen to every episode from the beginning.
Cause I was like, this is actually an incredible historical document. And it really is, absolutely. I really highly recommend it to anybody who's got, you know, the time and the willingness. It's a fascinating exercise to go through all of these years of evolution and to see how the industry really has changed and has evolved. And yes, I mean, lately, obviously AI and the thinking around AI.
Michael (05:04)
Yeah, for sure. I mean, in our first few episodes, we were covering topics like what is an analyst job and what is a dashboard and, you know, all those kinds of things. And now we're revisiting a lot of those things, but in a new context, right? So it's kind of fun to sort of see the journey and the evolution. And of course we're all facing sort of an AI evolution of our industries and fields right now. So I think it's always fresh there, right?
There's not a lack of things to talk about, which is crazy to me, because you'd think after 10, 11 years of doing a show every couple of weeks, you'd run out of topics, but we don't.
Kat (05:43)
You don't. You definitely don't. And Tim will never run out of rants. That's not going to happen.
So you're a managing partner and the founder of Stacked Analytics, a consultancy. You help companies solve most of their data challenges. What type of companies do you usually work with? How does that translate into analytics workflows? What is it specifically, within the vast domain of analytics, that you actually focus on?
Michael (06:21)
Yeah, no, great question. So at Stacked Analytics, we primarily work with C-level executives, primarily in B2B e-commerce and B2B SaaS verticals. And we work first at the data strategy level and then down into execution, primarily in digital and product analytics use cases. You know, my background is in digital analytics, and so that's kind of where most of my expertise
and work has been over the last 20 years. So I try to keep it focused there. And I find that there's an amazing sweet spot of challenges as well as needs for that particular group of companies. And so we have a good partnership with our clients on that level. And then, more and more, we're getting into the AI strategy and execution piece of that, right? Just because that's what the industry needs.
Kat (07:19)
And what does that mean, AI strategy and execution?
Michael (07:24)
Well, so I think, I mean, it's what we're going to talk about for the whole rest of this episode, right? Because basically the first thing I noticed was that as soon as AI started to kind of come to the forefront, right, in 2023 into 2024, I started noticing all of these LinkedIn job descriptions that used to be sort of a head of data and analytics, or director of analytics and governance, or whatever, and all of a sudden
AI just started getting tacked on to the end of it. So now it's head of analytics and AI, head of data strategy and AI, head of data governance and AI. AI just sort of got tagged on to the data roles that leaders were being asked to fulfill.
Kat (08:08)
The roles got AI augmented.
Michael (08:10)
Yeah, exactly. Now,
what does that mean? Well, drill down into every single company and you'll find a different answer everywhere. However, it just shows that, okay, well, I do analytics. I work with executives on analytics. That is the question at the forefront of their minds. Well, then we need to be in a place to help advise where to go. So first,
you know, you've got data, you want to use that data effectively, and at Stacked Analytics, kind of our purpose is to drive the closure of the data-to-value gap, right? That's always been what we've been about. Companies have believed in and invested in data for years and years and years. There's a big gap between that investment and the value derivation that typically occurs, for a lot of companies, for lots and lots of different reasons. Well, AI potentially could change that, but there's a lot of hurdles
to overcome and, you know, I'm sure we'll talk about it.
Kat (09:09)
So right now you're at the epicenter of the rise of the AI analyst, right, from all fronts: working with clients who are trying to figure out, you know, if LLMs are magic or just expensive autocompletes. You're also one of the early adopters of Ask-Y Prism, but also of a whole bunch of other tools. This is really something you and I have in common:
we're sort of both tinkerers, right? Curious minds, and we like to try everything that's cool. We just like to play with new stuff. So what was your first aha moment with AI? Was it in the context of analytics? Well, actually, generally with AI first, and then specifically in the context of analytics.
Like how did that happen for you?
Michael (10:12)
Yeah, no, it's good. I mean, honestly, when ChatGPT came out, it was such a shockwave for everybody. And then, you know, like you said, I'm always kind of tinkering. It's my favorite way to procrastinate: try things out, play with new tools or whatever. So of course I'm in there day one, just sort of checking it out. And the first thing was trying out different content or
text generation types of ideas, like have it write a poem or have it do this. Back in the early days of ChatGPT, we were all just blown away that it could write a rhyming limerick, for instance. Like, wow, my gosh. And so that was probably the first aha moment: okay, this is different than anything else I've seen that called itself AI before. There's something to this that is actually a shift
in what we've seen technologically in terms of LLMs or, you know, computer reasoning. And that was kind of the first one. I would say for analytics, that aha was probably a little earlier this year. I had a client where we needed to connect a somewhat obscure CRM to a larger data project. And so
typically in that environment, you've got to go digging through documentation, figuring out how their API works, all these different things. Someone has to go do the research, spend the time. It's time consuming, but it's doable. It's just a thing you've got to do. Since I'm not super technical, historically I would have to take that task, package it up, and hand it off to a more technical
data engineer slash analytics engineer who has more facility with, you know, tools like Postman and API calls and those kinds of things. But instead I sat down with an LLM and typed in, here's what I'm trying to do. And within probably less than an hour, I was writing API queries against this CRM I'd never even heard of before, gathering data, figuring out how their data structure worked, and integrating it with the other data. And so just
my ability to do that task, which is actually more of a technical task if you think about it, was sort of indicative of, okay, this takes a thing that I would have historically had to hand off to someone else on the team, and I can just sit down and knock it out, a lot faster than I thought. And that was kind of eye opening.
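[Editor's note: a minimal sketch of the kind of API exploration Michael describes. The endpoint, auth scheme, and response shape below are invented for illustration, not any real CRM's API.]

```python
import requests

# Hypothetical REST-style CRM -- base URL, auth, and field names
# are assumptions for illustration only.
BASE_URL = "https://api.example-crm.com/v2"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def fetch_contacts(page_size: int = 100) -> list[dict]:
    """Page through the CRM's contacts endpoint and return raw records."""
    records, page = [], 1
    while True:
        resp = requests.get(
            f"{BASE_URL}/contacts",
            headers=HEADERS,
            params={"per_page": page_size, "page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

contacts = fetch_contacts()
print(f"Pulled {len(contacts)} contacts")
```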
Kat (12:53)
And that's really fascinating, right? Because I think that's an amazing illustration of the notion that I have around the rise of the AI analyst powering a full-stack expansion of the analyst skill set. And you just illustrated the expansion to the left, towards the technical aspect, where, you know, I'm like you. I'm not an engineer. My technical skills are limited. I find myself being able to do things that I
Michael (13:13)
Yeah, yeah, yeah.
Kat (13:23)
would not have been able to do technically. It doesn't make me a data engineer.
Michael (13:27)
Yeah, yeah, no, no, exactly.
Kat (13:29)
It makes me a data analyst that can, you know, solve some of their own issues occasionally. And that's very useful.
Michael (13:36)
Yeah. So I think I see where you're headed here, which is the other direction, which is the analysis. Which, I would say, I think I'm still waiting for. I will say that historically, or up until probably a couple of months ago, I was not sure I believed AI would be capable of delivering actual analysis,
and there's a lot of reasons for that. But actually, at this point, I'm starting to see a path, or I think I see a path, that, okay, if you set things up a certain way, you can actually do it. And there's two parts to that. There's the progression of the AI industry itself, where the models are getting better and better. And the concept of leveraging multiple LLM agents for different parts of the problem
is becoming widespread in terms of its use, which is allowing for better, or less hallucinatory, or more fully featured interactions, which I think helps with analysis. And then, you know, you layer in the fact that we're all starting to understand some of the things required to deliver analysis: semantic layering, knowledge graphs, things like that. Now I think, okay, I do see a path for an AI to deliver analysis, but I would say the lift to do it
is a lot heavier than what most people think it is. And of course we'll talk more about it. So that's why I would say my analysis aha is still coming. But I do think it's going to happen.
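[Editor's note: a rough illustration of the multi-agent split Michael mentions: one agent plans, one executes, and a separate one checks for hallucinated tables or leaps of logic. This is a hedged sketch, not any particular product's architecture; call_llm is a placeholder for whatever model API you use.]

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    # Placeholder: wire this to your model provider of choice.
    raise NotImplementedError

def plan(question: str) -> str:
    return call_llm(
        "You are a planning agent. Break the business question into "
        "concrete data-retrieval steps. Do not answer it yourself.",
        question,
    )

def execute(plan_text: str, schema: str) -> str:
    return call_llm(
        f"You are a SQL-writing agent. Schema:\n{schema}\n"
        "Write one query per step in the plan.",
        plan_text,
    )

def verify(question: str, queries: str) -> str:
    # A second pass whose only job is to catch made-up tables,
    # columns, or assumptions before anything runs.
    return call_llm(
        "You are a reviewer. Flag any table, column, or assumption in "
        "these queries that is not justified by the question or schema.",
        f"Question: {question}\n\nQueries:\n{queries}",
    )
```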
Kat (15:16)
For analysis, I agree, it takes much more. I think that where it does help is for people who have maybe less of an easy way with storytelling. People who are able to align the bullet points, but have a hard time finding the way to wrap it up. For me, the way it really happened is with the floofies, my little characters
that helped with teaching LLM concepts and prompting concepts. So when I came up with the characters, I decided sort of intuitively that they were going to represent the parameters in a latent space. That just seemed very obvious to me, but then I needed to create a very coherent framework that would map all the floofies' features
to AI/ML concepts. Now, obviously, I could have talked to my CTO for six hours about that and verified, you know, that everything made sense, but that was not a very good use of his time. I think he probably would have quit at some point after a few hours of questions. But I was able to spend an entire weekend with, you know, three different models and
cross-check everything, every single way, to build a really complete mapping: the eyes are the attention mechanism, here's the context window, the fur spiky from pre-training, then smoothed out by post-training. Everything actually works so that I can build stories on it. That would have been really impossible otherwise.
Michael (17:05)
Yeah. So we're going to talk about it, but I mean, that's probably one of my primary use cases around AI today for myself: building knowledge or education for myself, or collapsing knowledge into more useful chunks. So conceptually there's stuff that I sort of know about, ish, right? Data warehouses or data lakes. But it's sort of like, okay,
the beginning of this year, I was like, okay, I keep seeing this term RAG, right? What is that? So of course I just went into ChatGPT and I said, okay, teach me about RAG. What is it? How does it work? And it just walks you through, and it's like, okay, I don't understand this concept, what does this mean? So now I build a set of knowledge so I can be conversational about it. And now, today, I take that concept and say, okay, well,
that works pretty well for content, but RAG doesn't work well for data, for instance. So understanding, okay, now take that concept a step further into the next step in the evolution, and then using AI to create a more efficient learning curve for myself on stuff that I didn't know about before.
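[Editor's note: to make the "RAG works for content, not for data" point concrete: retrieval hands the model the most similar text chunks, which is fine for documentation, but an aggregate question depends on every row, so it needs a real query. A toy contrast; data and names are invented.]

```python
import re

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: orders over $50 ship free in the continental US.",
    "Q3 commentary: paid search softened late in the quarter.",
]

orders = [  # pretend warehouse table
    {"channel": "paid_search", "revenue": 120.0},
    {"channel": "email", "revenue": 80.0},
    {"channel": "paid_search", "revenue": 45.5},
]

def tokens(s: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def naive_retrieve(query: str, k: int = 1) -> list[str]:
    # Stand-in for embedding similarity: rank docs by word overlap.
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

# RAG-style retrieval: great for "what is our refund policy?"
print(naive_retrieve("what is the refund policy"))

# But "what was paid search revenue?" has one exact answer,
# equivalent to SELECT SUM(revenue) ... WHERE channel = 'paid_search'.
print(sum(o["revenue"] for o in orders if o["channel"] == "paid_search"))  # 165.5
```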
Kat (18:20)
And I think, Michael, karma might have caught up with you, because I believe that makes you the quintessential AI analyst.
Michael (18:32)
Hahaha!
Don't you dare. No, I'm just kidding.
Well, I know Tim will be pleased if that's...
Kat (18:48)
I'm sure he will. Tim, this one's for you.
Michael (18:52)
But to your point, though, I think about the structure of performing analysis. When we think about AI, we think, okay, can I let AI run an autonomous analysis from beginning to end? So from business question to insight or result or recommendation. That's what I mean by an analysis: can I set it to a task? Can it do that?
Whereas most AI data tools are what I would call data retrieval tools. They'll go in, get a piece of data, and hand it back to you, which then assists you in your analysis, but doesn't actually conduct an analysis on your behalf. But AI is very helpful at the component parts, little chunks along the way. Like you talked about storytelling. One part of delivering an analysis is constructing the narrative around how it will be understood or
accepted. So say you're super strong with your statistical skills and you've done an amazing analysis with great detail. An LLM is an awesome place to go validate how you want to present that information to a business user. You can even give it personas, to say, here's kind of how these people are. You dump your analysis in, and it'll come back and say, here's some bullets or ideas, how to make it more succinct,
how to make it more clear, this might be ambiguous to people. There's all kinds of ways AI will help with that. And on the flip side, you can have an AI do some of the structural thinking around the data as well. I mean, it can do things like write queries for you, write code for you. So I think AI is actually probably really good at things like initial exploratory data analysis, just sort of
getting an understanding of a set of data that you haven't really interacted with before. It can do a lot of that work upfront, because there's a standard approach to it and you can set it up to run through those steps effectively. And that just fast-tracks your own ability to work with that data. So there are use cases within analysis that LLMs are really great at. What is not quite there yet is AI running a whole analysis from beginning to end for you.
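[Editor's note: the "standard approach" to a first exploratory pass is exactly the kind of checklist that can be handed to a tool. A minimal pandas version of those steps, run here against a toy frame standing in for a real export.]

```python
import pandas as pd

def first_pass_eda(df: pd.DataFrame) -> None:
    """The boilerplate first look at an unfamiliar dataset."""
    print(df.shape)                      # how much data is there?
    print(df.dtypes)                     # what types am I working with?
    print(df.isna().mean().round(3))     # share of missing values per column
    print(df.describe(include="all").T)  # distributions and cardinality
    print(df.head())                     # eyeball a few rows

first_pass_eda(pd.DataFrame({
    "channel": ["paid_search", "email", "paid_search"],
    "revenue": [120.0, 80.0, None],
}))
```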
Kat (21:18)
And on that point, I don't know that that is necessarily structurally possible, because I remember this very well. It was in April at MeasureCamp New York. You and I were talking. Sorry, shameless plug: MeasureCamp New York, the MeasureCamp with the best pizza. Got to do that as a New Yorker.
Michael (21:42)
Yeah, won't talk.
Kat (21:45)
Also, MeasureCamp New York is actually in New Jersey. So it's really New Jersey pizza. But never mind, still the best. So, you and I were talking about coding and analytics and how software engineering was disrupted by AI. And you were thinking a lot about the difference between coding and analytics. I was just in the process of writing my articles about it, and I know you read those. So
I'm wondering where you are landing on where those two processes differ, and why maybe software engineers can run agents autonomously for more processes than we can in analytics, or at least currently.
Michael (22:29)
Yeah. Well, it goes back to what an LLM really is under the covers, right? It's a probabilistic engine that's trying to predict the right next thing. What's so great about text or coding is that it's discrete: what comes out the other end can be interacted with and understood very quickly. In terms of software, it's sort of like, okay,
all the software exists in a corpus, right? The LLM is training on all of the instruction sets of software that it has access to. It's gaining lots of knowledge and ability to predict what should happen next in a piece of code. It can then go write code and very quickly get feedback on, does it work or does it not work? That's an it-does-or-it-doesn't kind of outcome, which is amazing for a computer,
because the computer gets that instant feedback and then can reinforcement-learn against that to get better and better and better. But in analytics, you have a more deterministic need, which is, there are not multiple answers to the question. There's usually one answer to the question, or one direction that the question needs to go or not go. Because what we're trying to do with data is interpret
the real world, not write something that does or does not run. So it's a difference between probabilistic and deterministic kinds of reasoning. And an LLM itself is not good at that kind of deterministic outcome. So it hallucinates, it makes stuff up. And that's not going to fly in a business context if you don't have a good way to
understand and correct that as the AI tries to navigate through that setting. So if it goes and looks at your data and it decides it wants to just make up a number about revenue for last month, if you're not watching closely, you just might end up with something you didn't intend. So that's what I mean: when you ask it to do analysis, it's just not good at that. You know,
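[Editor's note: the feedback-loop difference Michael describes, made concrete. A code agent gets an instant pass/fail signal it can iterate against; a data question has one right number and no built-in oracle to check it. Toy numbers, invented for illustration.]

```python
# Code: the environment itself tells the agent whether it succeeded.
def add(a: int, b: int) -> int:
    return a + b

assert add(2, 3) == 5  # binary feedback arrives instantly; an agent can retry on failure

# Data: there is one correct answer, but nothing in the loop confirms it.
asserted_revenue = 182_000  # what an unguarded model might state
actual_revenue = 174_350    # what the warehouse actually says

# Without a human or a traced query in the loop, nothing raises here --
# the made-up number just flows into the narrative.
print(f"Gap between asserted and actual: {asserted_revenue - actual_revenue}")
```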
Kat (24:46)
Yeah, nobody needs approximate insights on data that's maybe complete, maybe not, maybe partially made up.
Michael (24:53)
You know, sometimes data is okay directionally, but sometimes you actually need to really know. Again, analytics, and especially insights from data, comes down to trust. It really does. If I'm a business owner or business decision maker, and someone comes to me and says, I think we should do this because the data indicates it,
every layer of that interaction has to do with trust. Do I trust the data at the base level? Do I trust the analysis? And do I trust the analyst or the person delivering it to me? So.
Kat (25:37)
Can I trace it?
Michael (25:39)
Yeah, in essence, it's sort of a human-intuition level of traceability, that trust level at every level, which is traceability, right? And traceability, in a technical sense, delivers back some of that trust, because if I can deliver observability and traceability to that process cleanly, then I can prove that I didn't take missteps along that process, which makes a more trustworthy result. So
from a business decision-making standpoint, that's a big deal. Because the more complex your question, the more nuanced, the more you're going to run into things like, I don't know if that's true. You know, there's a famous Jeff Bezos thing: if your intuition about the business conflicts with the data, more often than not you need to probe that data or that analysis more, because your intuition is probably right. And so if a business user or business owner or business leader really understands their business and how it works,
more often than not their intuition is probably more directionally good than whatever the analyst is coming up with. And that's what I mean by trust. Is my intuition guiding me in a different direction, or is there alignment, or do I see something? And sometimes the intuition is off and you've got to learn and grow, so there's an opportunity for that. But again, that's all very esoteric for an AI to really get in the middle of. And I think that's where the biggest difference between
software and analysis really exists. So people have written lots of articles, you know, like, vibe coding is sort of what people call AI-driven development. People have written articles about vibe analytics. Those people are completely wrong and terrible, and they should be ashamed of writing those articles. And
at the same time, I want to leave the door open, like I said earlier: I do believe there's a path to doing analysis with AI. It's just that it's a lot harder path than people are letting on. Cause today people are sort of like, you know, drop an LLM on top of your data warehouse and just let the insights flow. It's complete baloney. I don't know if you want an explicit tag on this podcast, but that's where you'll get it from.
Kat (27:57)
Yes,
we can get a... Please give us an explicit tag.
Michael (28:00)
It's bullshit. It's just bullshit. It doesn't work like that.
Kat (28:07)
It really doesn't. So clearly, I mean, the only way to have trust in your data is to run code on data you actually host somewhere, right? This is the only way you can actually trust your analysis. The other thing I think is very different between analytics and coding, you touched on it: in coding, you have unit testing, you have a yes or no, you have an acceptance, you have all of that. In analytics, we get an answer, we get three more questions. And we really are,
as analysts, bringing elements of an answer to a person who will make a decision. That person has, as you mentioned, vast knowledge and intuition that is not in the data and cannot be captured in the data because as you said, the data is by definition a model of reality. So it's a reduction of information. We only have a certain amount of information in that data.
And within the domain of that information, we are able to say certain things that are an element of decision-making by a person who runs a business. And our acceptance test is: is the person accepting the product of our analysis as something useful? Now, they can make a decision based on it or not.
It can be that decision. It can be another one. That's a different question in my opinion. But do they at least accept the product of the analytics work as being a valuable piece of information? And that is a very subjective sort of unit test or acceptance test. So I don't really see how you can then run that into a process that will run something of any type of complexity.
without human checkpoints.
Michael (30:00)
Well, I mean, theoretically, what I think people think about right now is that there are sort of layers to it. So the base level is my underlying data, whatever that is. Then on top of that, a semantic layer that helps the AI understand what that data means and how it's mapped. And then on top of that, maybe a knowledge graph
that is allowing the AI to make connections between this data and other important data, or contextually relevant data that might apply. And then a reasoning engine that allows it to work through it. So let's take an example. Say, hey, this past quarter, conversions or revenue is down for paid search.
Right. So an e-commerce example: people come to your website, we acquire them through various means, paid search being one of them, and revenue is now down a little bit quarter over quarter. As a human, you would do an analysis against that, where you would first try to understand what is happening. Is that drop real? So you'd go in and do analysis to say, what is the difference level? Is it statistically,
is the variation enough that it's not just normal, or is it actually lower than what we expected? Then you would drive down into the semantic layer, which we sometimes cognitively hold in our heads, basically down into, what's driving that? Let me drill into the different campaigns, the different things that are happening. And so you would then start to look at it across maybe three different
perspectives. The first perspective being the channel itself, right? The marketing channel that's driving the acquisition. What is the composition? What changes happened at that layer, in the potential customers I was acquiring through that channel and bringing to the website to potentially purchase? What differences happened there? Is there anything to that? The second layer would be the website itself.
Are there technical things with the website that may be affecting people's ability to make transactions from that specific channel? And then lastly, you'd probably also look at the products themselves, the merchandise, because maybe there are specific products that are bought on that channel that were low inventory and not available, and so we did not drive the revenue we thought we would because people could not purchase. All those three things are what you would look at as an analyst. Well, just what I've described:
a lot of AI tools do not have access to all three of those pieces of information at a given time, because typically we're driving AI in on some vertical piece of information. Maybe we're driving it in on the marketing analytics tool. Maybe we're driving it in on the product analytics tool or the website tool. Maybe we're driving it in on the merchandising or inventory side. But we're usually not
very good at driving it in across all three simultaneously. And that's what I mean by a knowledge graph: you need to be able to connect those for the AI, so the AI can actually go in and ask those three questions about those three components, to be able to come back to you with, okay, revenue is down for paid search and it's actually related to this. Otherwise, it's the classic: the guy searching for his keys under the streetlight, and
they ask him, well, where'd you drop them? I think it's over there. Well, why are you searching here? Because that's where the streetlight is. Well, the AI has no choice but to search under the streetlight. So if you only give it light in one area, it cannot search anywhere else. It has no knowledge. And so that's the thing: we have to really conceptualize what you do when you do an analysis. You go and you bring all this other contextually relevant information about all these other things and other systems
the AI may or may not have any visibility into. And so when I say that I believe AI can perform analysis in the future, it's only when you've done the work to think through that process, and think about the knowledge required, the context required, to basically run through that question and other questions of that sort, that you can then expose the AI to it in a way that will allow it to give you those answers. So that's kind of where
the tipping point will come.
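[Editor's note: the first step in that walkthrough, "is that drop real?", is mechanical enough to hand to a tool. A hedged sketch using a two-proportion z-test on conversion counts; all numbers are invented.]

```python
from math import sqrt

# Invented quarter-over-quarter paid-search numbers.
prev_conversions, prev_sessions = 1_840, 61_000
curr_conversions, curr_sessions = 1_655, 60_200

p1 = prev_conversions / prev_sessions
p2 = curr_conversions / curr_sessions
pooled = (prev_conversions + curr_conversions) / (prev_sessions + curr_sessions)

# Two-proportion z-test: is the conversion-rate drop more than noise?
se = sqrt(pooled * (1 - pooled) * (1 / prev_sessions + 1 / curr_sessions))
z = (p1 - p2) / se
print(f"rates: {p1:.4f} -> {p2:.4f}, z = {z:.2f}")
# |z| > 1.96 suggests the drop is unlikely to be normal variation --
# only then is it worth drilling into channel, site, and inventory.
```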
Kat (34:36)
So if you ever want a side gig as a product architect, you literally just described the architecture of Prism, with the semantic layer, et cetera. And I absolutely believe that that's the way to go, right? Creating these connections between the different tools that we use, because the tools are fine. We have good databases. We have good BI tools. We have good vertical products that can be AI augmented, and that's all fine.
You do need to connect the dots with knowledge and context across the different tools that you use, and use natural language as an interface to communicate with them. However, I do still believe that even by doing that, there is a limit. I'll give you an example with one of our design partners in e-commerce, actually. They do functional snacks.
It so happens that when you do functional snacks, the holidays are actually a down period for you, because nobody wants functional snacks during the holidays. So, you know, we get all of these contextual explanations about what the dips et cetera mean, so that when we do analysis, we actually take that into account, because we have the memory and context engine to do that. And all of those things are fantastic and great. However,
they run out of stock at relatively unpredictable periods in time. And that information is not in any tool. It just is not in the data, any data. So we can't know from the data whether they're out of stock or not. My point is, I just really think that business is so complex. The world is so complex.
And the data analyst is a person who brings support for a decision, for a business decision, with a data model that is as representative as possible of the world, however still limited. And realistically, you need to take that reality into account. You do need the human checkpoint in there.
Michael (36:49)
Yeah. So I completely agree. I don't think we're going to just have AI autonomously running all analysis. There's always going to be a human in the loop on that, for lots of reasons, but primarily for that trust reason. In the same way that you still need somebody behind the wheel of a car today, even if it can drive itself, you need a human in the loop on
analysis as well, because the AI can't do all the things a human can do through that whole process. Because the thing is, what is the point of doing an analysis? Well, the point is to inspire or influence a change, a reaction or action against that analysis, that is either continue, stop, or start. Well,
what do we know about how people do change? Right? It's never going to be an AI that convinces someone to change. That is even more complex than analysis. So of course a human has to be involved, for all those reasons. Influence is a thing that AI is probably not going to be able to really grapple with for a long time. I don't even know. That's way beyond my pay grade.
Kat (38:11)
And so when you work with clients now, what type of requests do they have when it comes to AI and analytics? Is it about functionality? Is it about efficiency? Everything has to go faster because you use AI and it does everything, and so I only want to pay a tenth of the price because you've 10x'ed all of your analysts, or whatever. Along which lines does this manifest in your client engagements?
Michael (38:42)
Yeah, it's interesting. I think, first off, it started coming up just in casual conversations with clients: we're interested in this, we're working on this, or we've got a company initiative around this, to try to drive efficiency. Because of our work and our specialization, we're not always the first person our clients talk to about AI, but certainly those conversations are happening.
On the side where we work, because we're using AI day to day in our work more and more, we're very open with our clients about our use of AI and where we're using it and why. And we're also very open with them about where we should not be using it, and why we don't use it in some places. Because not everybody has the same perspective; they read the article someone wrote about vibe analytics being a thing you can actually do. And it's just sort of like,
well, you can't, and let me walk you through kind of the thinking here, and then show you where we can gain efficiency and where we can't. And ideally, we've got a good partnership with our clients and we've got good trust with our clients, and so as an advisor, we're able to help guide that conversation. So I would say our clients are either pushing internal initiatives or they're planning to. I would say, personally, we don't have any clients today that are on the sideline.
No one is sort of like, we'll wait and see what happens with this whole AI thing, maybe it's a flash in the pan. I don't think any of our clients are there. Everyone is at least trying something, some more than others. Some have integrated AI already into specific functions. And it's kind of fun to watch the AI grapple with business questions that come in via a Slack channel, with an AI bot that tries to go in and answer questions for the clients. I read through those intently,
both for my clients' sake, but also for my own, to understand, yeah, how are we doing? And this is what's driving a lot of my perspective: watching some of these things happen in the real world. What does the AI struggle with, versus what doesn't it? And that's why I say AI is amazing at data retrieval. If you need to know, how many units of that did we sell, or how many leads came from this region? AI can go in and grab that piece of information out of the database for you super easily. And it's so great.
And that's, I think, a great use case. And it's a nice compression of a challenge that we've had for many years with the quote-unquote modern data stack, because SQL was a barrier to entry to the data itself for a lot of questions that didn't exist in a preexisting dashboard or report. A business user was locked out of that data up until now, but now those business users are let back into that data.
So they can conduct some of their own analysis on the data, because they're able to ask those questions. Now, that exposes again some of the challenges around that, which is people can ask questions in different ways, and there's semantic ambiguity, right? The way you ask a question sometimes will determine what you get in response. Now, a good AI agent assistant
will go clarify the question effectively and be like, now, are you talking about this? Are you talking about that? And so it'll help the business user drive the question down, but you do need to be aware that this can happen. Because someone might say, well, give me a list of our best customers. All right. That's a question that gets asked in a business probably all the time. What does best customer mean?
And so now we've driven right down into semantic ambiguity, where the AI is going to either make a set of assumptions itself, or it's going to say, I don't know what you mean, and drive that back to the user, or we have a semantic layer that defines it, and the AI will pick up that definition and drive forward with what the answer to that question is from that perspective. However, semantic layers are not universal, because
certain metrics and things like that differ depending on the department. So finance versus marketing versus operations. Hopefully there's some cohesion at some sort of functional level within the business, but certainly, from the perspective of different departments or functions of a business, you're going to have different definitions. And so, hat tip to Cindy Howson from ThoughtSpot, who we had on our podcast recently, who
kind of identified that you actually might need multiple semantic layers for a business to actually effectively work with your data.
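[Editor's note: a hedged sketch of what that can look like in practice: the same business term bound to an explicit, department-specific definition, so the AI resolves "best customers" against an agreed definition instead of guessing. Names, SQL, and thresholds are invented for illustration.]

```python
# A toy semantic layer: one term, multiple legitimate definitions.
SEMANTIC_LAYER = {
    "best_customers": {
        "marketing": {
            "definition": "most orders in the trailing 90 days",
            "sql": "SELECT customer_id FROM orders "
                   "WHERE order_date >= CURRENT_DATE - 90 "
                   "GROUP BY customer_id ORDER BY COUNT(*) DESC LIMIT 100",
        },
        "finance": {
            "definition": "highest lifetime margin, net of refunds",
            "sql": "SELECT customer_id FROM ledger "
                   "GROUP BY customer_id ORDER BY SUM(margin) DESC LIMIT 100",
        },
    },
}

def resolve(term: str, department: str) -> dict:
    """Hand the AI an explicit definition instead of letting it guess."""
    try:
        return SEMANTIC_LAYER[term][department]
    except KeyError:
        raise ValueError(
            f"No agreed definition of {term!r} for {department!r} -- "
            "clarify with the user rather than assume."
        )

print(resolve("best_customers", "finance")["definition"])
```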
Kat (43:33)
I think you need a semantic layer that can hold the different definitions of the different business functions, because all those definitions are legitimate. They're all legitimate for their own purpose. It's not like somebody's right and somebody's wrong. It's not something that needs to be fixed. This is structural.
Michael (43:53)
Yes, exactly. Correct. And I think that's the oversimplification. Usually when we walk in, my perspective is much more on the marketing and e-commerce side of the world. I'm not a finance guy. So when finance people walk in the room, they have a perspective that's slightly different. And of course, over the years I've learned where those might cross each other, so I'm aware and cognizant and try to work to make sure there's good alignment. But
if you don't know, you're just going to use your number, and then finance is going to be like, I don't know what you're talking about, that's not a good number. And then you fight over it, and then, you know, you lose trust.
Kat (44:29)
And so, you talked about the technical and tool side of things. How do you see clients' organizations change as they consider more AI, from a people perspective? Are they expecting people to upskill? Are they looking for certain types of skills in people they hire? What do you see happening?
Michael (44:58)
Yeah, I think this one is a bigger hurdle, because again, change, right? It's all change that people need to go through. I think probably most of your listeners are familiar with that MIT report from this past summer that indicated how a large number of AI initiatives were not proceeding to plan, or failing, within large organizations, and there were challenges with implementing them.
And I think that speaks to the process side of this, and that's where process and training is crucial. With any AI initiative, you can't just throw it out there and be like, we've got an AI tool, go nuts. It takes a bit more than that, because it's like, well, what should you use it for? How should you use it? What is it good at? Especially when it comes to trustworthy data and insights, you still have to have a human in the loop to guide that. But I think that's where,
for our clients, they're starting to grapple with that, or thinking about, okay, how do we need to upskill, or how do we think about rolling things out? But to be really blunt, it's no different than any other rollout of any other initiative or process change. It's just AI now. That's all it is. We know how to think about adoption or
execution against any new major shift in how you do things.
Change management's been a thing for a long, long time. So it should just be addressed the exact same way. Yeah, it's challenging, because it is crazy how fast it's moving. And I think more than anything, that just leads people to feel anxiety about how quickly they should be moving. Personally, I think people should take the stress off.
Michael (46:59)
But I do think part of the intensity is the expectation level, right? Because, like we talked about at the very beginning, the ground shifted with ChatGPT and has just kept shifting faster and faster. Gemini 3 just came out and it's a phenomenal LLM, and the way it works is amazing. And ChatGPT 5, and
the pace of change is rapid and advancing so quickly. So the pressure is intense in two directions. From the top, CEOs are famous for writing these memos: every initiative is now an AI initiative, and if you didn't try to solve it with AI first, it's not approved. Everyone's saying that from the very top: we're going to be an AI-first company now. It's like, well, okay, great.
On one level that is leadership, and that's pushing down to say, everybody really think about how you're going to push this forward, drive efficiency, drive gains. Here's the problem: the feedback up from the trenches is, this stuff doesn't work, because when you start to put AI in place to try to do these things we think it can do, it falls short. And so most data leaders are somewhere crushed in the middle: leadership says we've got to do AI initiatives, and the analysts are like,
yeah, but this is bull crap, it doesn't work like this, there's still lots of problems. So in the middle, you're still having to figure out, okay, how do I make inroads on something that's a mandate, while at the same time being realistic about what's actually possible?
Kat (48:41)
That's really what I feel. The expectations that organizations currently have for the head of analytics, whatever the title is, just aren't really quite realistic. And I think for several reasons, and one of the reasons is that software engineering has been really disrupted by AI. Like, that profession has been absolutely disrupted.
Michael (49:06)
Yeah.
Kat (49:10)
You know, we talked about this a lot. We've seen how difficult it was to hire anybody who had any experience with AI a year ago in software engineering. Today, we don't have that problem at all anymore. Every software engineer uses AI, and uses AI a lot. And there is always this notion: yes, but analysts do code. Which is not untrue, right? We write code. However, the
process, the workflow, is entirely different. That's not an easy thing to explain, I think. Explaining that software engineering has a linear workflow with a deterministic acceptance test, and that analytics has an iterative, circular workflow with a not very deterministic acceptance test.
Michael (50:05)
Well, and context is governed strictly within the requirements and the code itself. So you don't have external context you have to maintain, except within the window of the work you're doing right then. I'll also say this: you're absolutely right, software engineering has massively shifted over to AI, but also LLMs and the way they work in software engineering have shifted a great deal at the same time.
If you go and work with, you know, Codex or Claude or even now Gemini: at first, when you sat down with those and said, all right, I want to write a piece of software, and you drilled that into whatever CLI you're using, the AI would jump in and be like, all right, let me take a whack at that. And it would just start going, and it would try to write what it thought you wanted.
Now those tools go through a development process, because they need to. The AI takes your prompt and says, okay, let me go write up a plan. Now let's review the plan together, make sure that looks good. And then, okay, now I'm going to fire up agents to start writing code against this plan, with checkpoints along the way. So software development also has a process
that AI has started to use. So when we start working with multi-agent structures, software development was one of those use cases that was really easy for AI to latch onto. But in the first go-around, where the AI agent just took its first swing of the bat and hoped for the best, you heard all kinds of horror stories about how it just decided to delete the database, because that's what it thought it should do.
And you're like, I just destroyed half my code base, or the thing it just wrote, because it didn't think about what it was doing, so now I've got to start over again. So driving efficiency through that process took a bunch more work. But the iterative cycle is very quick. And so software development today, and again, I'm not a software developer or engineer by any means, but I've watched how that process has shifted: it used to be that the user had to think through all the structure and planning.
Now the AI can participate in a lot of that structure and planning, and in fact does it, because it produces better outcomes. So all those things are happening. Now, what I'd like to think is there's a set of discrete things there. Personally, I believe that most analysis follows patterns. Now, we've never written them down
like we did for software, like design patterns in software, but most things follow a set of steps that you actually could theoretically follow. We just never have. We may at some point in the future, but that means that if we can define those patterns effectively, then we can get AIs to help us run through those patterns effectively. So that's why I believe analysis can happen with AI at some point.
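[Editor's note: one way to read the "design patterns for analysis" idea: write the implicit steps down so an agent can be walked through them. A hedged sketch encoding the metric-drop diagnosis from earlier as data; step names and wording are illustrative, not an established standard.]

```python
# An analysis pattern made explicit -- the structure a human analyst
# follows implicitly, written down so a tool can follow it too.
METRIC_DROP_PATTERN = [
    {"step": "validate",   "question": "Is the drop distinguishable from normal variation?"},
    {"step": "decompose",  "question": "Which campaigns or segments drive the change?"},
    {"step": "channel",    "question": "Did acquisition mix or volume shift in this channel?"},
    {"step": "site",       "question": "Any technical issues affecting conversion from this channel?"},
    {"step": "inventory",  "question": "Were the products this channel sells actually in stock?"},
    {"step": "synthesize", "question": "Which explanation fits all the evidence, and how confident are we?"},
]

def run_pattern(pattern: list[dict], metric: str) -> None:
    # In a real system each step would dispatch to tools (SQL, stats,
    # monitoring APIs); here we just print the scaffold an agent would follow.
    for i, s in enumerate(pattern, 1):
        print(f"{i}. [{s['step']}] {metric}: {s['question']}")

run_pattern(METRIC_DROP_PATTERN, "paid-search revenue")
```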
Kat (52:58)
That is true
Michael (53:24)
And again, we're not there yet.
Kat (53:29)
No, but I believe that's true. That's what we do when we fine-tune agents. We mimic the thought process of an analyst in a certain context, and the decision points. What are your decision branches? Where would you go? What does that tree look like? And that's obviously very specific training for an LLM. But regarding context, I remember this quote from the Analytics Power Hour
that really struck me as interesting, and I've wanted to ask you about it ever since. You said about an organization: they now have people full time whose job is to ensure that the AI is getting fed the right information. So basically the right context, right? I don't know if you can talk about this organization without giving any names or whatever, but what was the thought process? What is this?
Michael (54:24)
Yeah. I mean, it goes right into the semantic ambiguity thing that we talked about before, which is, how do we supply the AI with the appropriate context so it can be effective at applying data to a business question, so that it can deliver a high quality result, and shepherd it along that path for various functions. So people's jobs are shifting into that coach slash mentor role for the AI,
to guide it through the thinking: okay, look at this, this, and this. And that's the stuff we're seeing people do, kind of in real time, because the human needs to be able to monitor and tweak it. Because at the very beginning of this, it was sort of whack-a-mole with hallucinations, right? It was like, yeah, here's what you need to know, and you're like, wow, that is made up completely. No,
you can't do that. And that's why human in the loop is always going to be a part of the process for analytics, for probably anything AI does that drives an outcome on the business side. A human's got to be the last gate to say, yep, this is right, this is not right. And, you know, I have these thoughts about how that might play out over the long term, but in my mind, I kind of use the industrial revolution
as a way of thinking about it. Everyone used to make things artisanally and then sell them, and then we made factories where you made one part of it, or managed a machine that made it. So if you think about knowledge work, things like data analysis, whatever the things we all do in business today in a service economy,
AI will start to do parts of it, and the job becomes managing the machine that's stamping out whatever part of the process it's running. I don't know how people feel about that, but that's one way to conceptualize what might happen with AI over the long term. So if you think about that from an analysis perspective, you think, okay, then my job isn't necessarily to be the artisan creating the SQL and
writing the instructions into the code and aligning the data or setting up beautiful charts and graphs. That will be the AI going forward. My job will be one layer above, managing the orchestration of all those things to make sure we actually get the right quality level at the end of the process. So in a certain sense, our role is going to shift towards a more supervisory versus detailed role, if you will.
Kat (57:18)
I would agree with that, with one nuance. I don't think that the fact that the role is supervisory means that it is less detailed, because, yeah, I'm just going to voice it: to correctly supervise, you do need to know how to do the details yourself and understand the details.
Michael (57:28)
That's fair.
Yeah, yeah, no, that's good.
Yeah, this is sort of an existential question that I hear raised: okay, if AI is doing all this stuff, how will the next generation of analysts become knowledgeable at that level of detail, if they're never personally putting their hands in it and doing that analysis from soup to nuts themselves? It's quite easy for me to envision handing off parts of an analysis to an AI, but it's because I've been doing analysis for 20 years.
So I have a pretty good set of instincts for what's good and what's bad on the other end of it. And this is also why I say outcomes in AI are largely driven by the expertise of the prompter at this point. And I'm not sure how that will change over time.
Kat (58:31)
And so from your point of view as a business owner, how is AI changing the way you think about Stacked Analytics, from a positioning point of view, an offering standpoint, a staffing standpoint, all the different aspects that you have to think about as a business person, of how you position yourself in this extremely fast-shifting environment?
Michael (58:58)
Yeah. I mean, like I said before, we were created on the principle of helping companies close this data-to-value gap that I've just observed over and over and over again. As I talk to leaders, I hear it again and again and again. So that was the thing we started Stacked Analytics to address. That hasn't changed at all. It's still true, and it'll still be true into the future.
Because the nature of tech is that there's a never-ending stream of vendors who convince you that if you buy this tool, it will deliver value, as if it's the magical step one, skip step two, step three: profits, right? So we're a step-two company. We help figure out what that middle piece is to drive those outcomes. So
the closure of that gap, as it relates to talent and technology: the technology people are going to help do that, they're doing a great job at that, and there's a huge flywheel of companies that are driving AI really effectively. I think the talent and people and the process and culture parts of that are where we'll continue to find opportunities to work with companies.
I don't think AI changes any of that, except that, in context, we're starting to engage clients on the AI part of data and analytics more and more.
Kat (1:00:27)
And I imagine those clients expect you to have expertise in AI. And I remember, about a year ago, you told me that a team member and yourself coded a tool in like an afternoon that would audit, I think it was auditing clients' websites, I honestly don't remember the details. I imagine that you've done more of this, that you have a sort of concerted effort to build that expertise.
Michael (1:00:54)
Yeah. So it's in our DNA, because it's my company, and we're going to run it the way I act, which is: I can't not learn new things.
Kat (1:01:04)
So yeah, we're good with that. You should run your company the way you want.
Michael (1:01:09)
Well, if you're the leader, it's going to fit your personality one way or the other. So yeah, we of course continue to build tools using AI, and we use AI to develop those tools first and foremost. But we're always looking for ways to be more efficient: hey, how can we streamline this testing? Because, again, efficiency drives down to value opportunity creation. There's things I'd like to be doing, value-wise, that we can't do today because we're stuck at
this level with this company. My perspective on analytics is, you walk into a company and there's an entry point. There's a point at which there is a need and a gap that the client needs your help with, and so you apply your expertise there. That's not the end, that's the beginning. There's a process we want to follow to drive to value.
And that isn't in that first project. It may be five projects later. It may be a year and a half later. But I'm trying to go there from day one, whether the client knows that or not. I have a perspective, and that's part of what makes Stacked Analytics what it is. We're trying to go there. AI is just going to be a tool we use to get there, to collapse that timeline as quickly as possible. So we are going to use those tools, both internally and with clients, wherever possible to do that. In fact,
just in a couple of weeks, we're going to have our first Stacked Analytics AI hackathon, where we're having a company offsite where we're going to spend three days just doing AI use cases together and having fun. Cause it's what we need to do. We're going to invest behind it: not just all of us one-offing it, but coming together to collaborate and see what sparks, and what tools and ideas
Kat (1:02:45)
That's super cool.
Michael (1:02:59)
can be built from that. And I think that's important for us to do, and, you know, any company probably would have some fun doing it. And it's kind of neat, too, because I always find those kinds of things really fulfilling, from the perspective of interacting with all of our people, and also just the ideas that spark from it and the direction that comes from it as well.
Kat (1:03:22)
And so for the analysts that are listening to us, can we look at this from your point of view as an employer? What skillsets are you looking for in team members? How has that changed from, say, a year ago? And what would you say to somebody today about what they should consider upskilling in?
Michael (1:03:48)
I mean, at the most foundational level, I don't think about skills. I think about personality and aptitude, in that a skill can be learned; a perspective, or traits, cannot. A great analyst is insatiably curious, always learning, and lazy enough, in a good way, to automate repeatable processes. I don't think any of that changes in an AI world.
I think that just comes to the forefront. In fact, I think those are ideal qualities for AI also. I would say, like I talked about, I do think in the data world we're going to be challenged to raise our perspective out of some of the day-to-day minutiae. You know, people have made careers just implementing Google Analytics or Adobe Analytics for companies. That's great work.
But what we're going to find is that there are going to be companies that emerge, and there already are companies doing this, by the way, like Moonbird or Tag Assistant, that are creating AI solutions for these lower-level use cases that used to be the bread and butter of people in our industry. Now, the people that are really amazing at that,
they're still going to be high value, for the same reason you just described: their expertise will help drive amazing outcomes, even leveraging AI. But our role in the process is going to go up a level, and we'll need to embrace that. We need to embrace the, okay, I need to govern the process and the results, not just, did this tag get deployed on this page and is it firing correctly? That used to be a big part of our job. Now it's going to be a teeny part, because AI is going to do most of it.
Or, you know, whether it's building out a dashboard or report, maybe that was a big part of somebody's job. Now it's just going to be a little part, and they're going to spend the rest of their time thinking: is this the right information for this dashboard? Is this leading people on the right narrative journey to think about what this dashboard is talking about? So it's going up a level, because if you change your perspective, you know, this dashboard is good, but I think it needs to be changed this way, then okay: write the prompt, and the AI will deliver a new dashboard in
five minutes, you review it, and you push it out to the rest of the team. Those are the kinds of things that would have been a two-week process of meetings and approvals. Well, now it's a five-minute updated prompt and we roll it out. And if somebody doesn't understand what you did, they don't have to ask you. They can be like, okay, let me open this up in AI and ask: all right, I'm seeing some new metrics here, help me understand what these metrics are. Sure:
these metrics mean this, this, and this, and here's what they mean in context, the relationships in this dashboard, and how you can read it and think about it. We used to do that manually.
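To make that workflow concrete, here's a minimal sketch of the dashboard-as-prompt loop, assuming the dashboard lives as a reviewable JSON spec; the `ask_llm` helper, the spec fields, and the change request are all hypothetical stand-ins, not any particular product's API.

```python
import json

# Hypothetical helper standing in for whatever model API you use
# (Gemini, Claude, an internal gateway). Not a real library call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

# The dashboard kept as a reviewable spec rather than hand-built widgets.
dashboard_spec = {
    "title": "Paid Search Overview",
    "tiles": [
        {"metric": "revenue", "viz": "line", "window": "last_90_days"},
        {"metric": "cpc", "viz": "line", "window": "last_90_days"},
    ],
}

# The two-week change process collapses to an updated prompt.
change_request = "Add a conversion-rate tile and segment revenue by device."

revised = ask_llm(
    "Revise this dashboard spec and return JSON only.\n"
    f"Spec: {json.dumps(dashboard_spec)}\n"
    f"Change: {change_request}"
)

# The analyst's job shifts to this review step, not the rebuild step.
new_spec = json.loads(revised)  # assumes the model returned clean JSON
print(json.dumps(new_spec, indent=2))
```

The point of the sketch is the division of labor: the model does the rebuild, the analyst does the review.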
Kat (1:06:48)
Yes, yes we did. And what would you say are things for people to look out for in job interviews and job descriptions? I'll tell you what I see, and then tell me if you see something similar. So for one, when you look for "AI analyst," I see a lot of job descriptions describing this be-all person who will set up AI infrastructure,
work with Bedrock, maybe train some AI models on the side, and also do dashboards. Yeah. Sorry, and be a data scientist. Like, you know, all of that, right?
Michael (1:07:24)
Yeah.
So this, this reflects a lack of sophistication, right, at the market level.
Kat (1:07:41)
I agree. It's just a lack of maturity.
Michael (1:07:43)
Yeah, that's really all it's reflecting. Because, you know, someone sits down and says, okay, we've got problems we need to solve: go hire somebody who can set up a data warehouse, build out ETLs, and also deliver insights to the business.
Okay. That exists in one person maybe one out of a hundred times, maybe one out of a thousand times. You know, I've developed talent and thought about this for years and years. And even way, way back in the day, I interviewed for job roles where there was a list of things, and I would be like, hey, listen, I want to be very transparent with you: this thing you've put on here, that's not a strength of mine. If that's a big concern,
I don't want us to move forward in the process and waste everyone's time. And I literally got told, yeah, don't worry about it, we threw a lot of stuff against the wall just to see what people would come back with. Like, okay, come on.
Kat (1:08:43)
Yeah,
Michael (1:08:45)
It's not good or bad, but obviously that has downstream effects. A, it creates this sense in people that they have to be able to do everything, which is just never going to happen. B, for AI specifically, it creates a sense in people that still buys into this magical solution concept. And again, that's what I mean by lack of sophistication: in the data and analytics space, and in AI, we're simply not very sophisticated yet.
As that sophistication increases, you will see job descriptions get tighter, more distinct, and stronger. I hope. But I would say in the data space we still have some magical thinking about what one person can fulfill, or what roles we expect someone to do, within the context of data and analytics. That's just going to have to slowly change over time. And
what's interesting is I don't see a lot of job descriptions where AI is at the center of it yet, at least for analysts and data analyst types of roles, but I do think that's changing. And certainly I do think every analyst worth their salt is using AI in some context, at least individually at this point, just based on my anecdotal understanding of what people are out there doing.
When I talk to people, they're leveraging AI day to day, even without necessarily a company-wide mandate. Individually, they're thinking about how to do that, what they should be doing, how it helps with different parts of their job, those kinds of things. And I think that's really important, because eventually their company will be using it, or attempting to use it, more broadly. So individually, you should be trying to skill up there, even if your job
isn't requiring it of you. But yeah, as time goes on, my hope is that we'll get more proficient as an industry at being specific about what's reasonable for one person to take on. Usually you're not going to be an amazing data engineer and also an amazing communicator of insights, or a great data scientist and a narrative data storyteller. You can be a great analyst and not be a great
software engineer. I don't know, there's lots of different ways this could kind of go.
Kat (1:11:16)
And the other thing that I see, because there's a lot of push to drive efficiency everywhere, across the board, is a lot of shadow, I wouldn't say shadow IT, shadow data team sort of roles popping up, you know, like in-business-unit shadow analytics roles.
Michael (1:11:47)
Yeah, I think this is driven primarily by asymmetry in progression, departmentally, a lot of times, within companies. One department needs to move forward, but it's sitting in a priority queue that doesn't evaluate its needs against the other needs of the business. So it crops up where, in the short term, they've got to go find someone to drive this forward, even when it's not part of, let's say, the more broadly structured data and analytics org.
It's not super ideal from a governance perspective or a process perspective for that to happen, but it's totally something that's been going on pretty much forever, and certainly more and more in the last 10, 15 years. There's a cyclical nature to a lot of this stuff, but I think the fact that there's a speed and a demand is driving that asymmetry harder.
Kat (1:12:40)
There is, yes.
Michael (1:12:47)
Right. "I need to have done this yesterday." Every department feels that pressure because of AI. And at the same time, our processes are kind of stuck in a prior era, which means: we can get to you in maybe six months to a year. So then, as a business leader, I'm like, well, what are my choices? Okay, you know what? Let me go find a contractor or consultant or an employee that I can bring in under my team and have them start running this stuff.
Kat (1:13:14)
And so if you were a hands-on practitioner in digital analytics today, faced with the changes that we just talked about and the speed of those changes, the need to upskill, you know, the complexity of the job landscape, et cetera, what would today-Michael say to operational-analyst Michael? You know, your best advice.
Michael (1:13:38)
Well, I mean, to a certain extent, I am still hands-on with data today. I still get into the weeds a little bit. You know, whenever I do an analysis that works well, I'm like, yeah, the old guy's still got it.
Kat (1:13:53)
It's great, right? I've got exactly the same thing!
Michael (1:13:57)
But with AI, what I think about a lot is that I use it to upskill all the time. I talked earlier about how there were concepts I was hearing about that I wasn't super educated on and needed to get more educated on, and I use AI to actually drive that upskilling. So first and foremost, I think it's a great tool: use AI to teach yourself AI. That's
the thing you ought to be doing, because there are all kinds of things you can potentially do. Just use it every day to drill into a concept. I mean, with Gemini you can do all these amazing things, but one of the cool things you can do is just create flashcards on any topic. You can be like, okay, today I want to learn about this statistical concept, or this thing, and say: okay, write me a research paper and then build me some flashcards.
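As a sketch of that flashcard idea: assuming the google-generativeai Python package and a model name you'd swap for whatever is current, it can be as little as two calls.

```python
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_API_KEY")  # better: read from an env var
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

topic = "statistical power and sample size"  # hypothetical example topic

# Step 1: the "research paper" — a grounded explainer to learn from.
paper = model.generate_content(
    f"Write a short, rigorous explainer on {topic} for a working data analyst."
).text

# Step 2: turn that explainer into drillable flashcards.
cards = model.generate_content(
    "Turn this explainer into 10 flashcards, one per line, "
    f"formatted as 'Q: ... | A: ...':\n\n{paper}"
).text

print(cards)
```

Grounding the cards in the longer explainer, rather than asking for flashcards cold, tends to keep them from being one-liners.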
Kat (1:14:52)
That's very cool. I have done a lot of research papers, never thought about doing flashcards.
Michael (1:14:58)
Yeah, I mean, it's just a way to drill, or bring knowledge to you in a different format that might be more consumable. But again, there are all kinds of ways to do that. With NotebookLM, you can dump documents in there and turn them into a podcast, you know, whatever way you learn. Yeah, it's not like the best, but if you're a more auditory learner, well, take those papers and, there you go. So there are some ways to do that. And then,
Kat (1:15:15)
A podcast!
Michael (1:15:27)
even in practical experience, there are so many tools out there that are ramping up. Go build a data set and get into those tools. A lot of them have free versions you can use, so go start using them, so that you get a taste of what's happening, to drive and build perspective. That's kind of what I would say. I mean, that's what I'm doing. That's what I would say to myself.
Kat (1:15:53)
And what you can't do today in an interview is not have an opinion on AI and not have experience with AI. That is just not really an option.
Michael (1:16:02)
Yeah, I think you absolutely have to say, here's what I'm doing with AI, here's how I'm leveraging it, here's my perspective on where it's going right or wrong. I think you've got to have a take on it, for sure. I don't think sitting on the sidelines is an option.
Kat (1:16:19)
And so in a world where analytics agents can handle longer and longer workflows, with limits, but still longer and longer, more complex workflows, I see an evolution of the analyst role from an operator of platforms to more of an orchestrator and architect of workflows, where the AI agents come back to the analyst at checkpoints, and the analyst's role is to architect the work and the steps and verify the results, you know,
more than it is to design and execute every step of the analysis. What is your take on the future of the analyst role from that perspective?
Michael (1:16:56)
Yeah, I think it's very reasonable to assume that we'll continue to make progress much like we talked about earlier with software development, where initially it was: write a prompt and the AI runs off and starts trying to write code right away. That's sort of what we're doing with a lot of data with AI right now: we write a prompt and the AI runs off and writes SQL against the database and tries to pull something back for you. Right? It's a one-step process
that has data retrieval capability, but not much else. As we start to build out the process and the understanding, we'll be able to run layers of agents, or AIs, against each step of the process, in a good order, and that will drive the ability for this to happen, which is exactly what happened with software development. You know, the process
started getting incorporated into the agent. So today we might ask a data AI, you know, what's going on with this? And the AI will just go grab the number, bring it back, and say, I think this is what's going on. But tomorrow it might say: all right, let me do some research into all these different areas that contextually I know are relevant to the thing you just asked. You mentioned seasonality earlier. It could be
product, it could be acquisition channels, it could be any of these different things that might affect how this data shifts. And it's going to go out and think: first, let me plan out all the different ways I should look at this data, then bring it back to you and give you the ability to say, okay, go further here, okay, dig into that. And it goes in and starts doing that. So as time goes on, we'll start to see the same things emerge. It'll take time.
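A minimal sketch of that plan-then-checkpoint loop might look like the following; `ask_llm` and `run_sql` are hypothetical stand-ins for your model provider and warehouse client, not any specific product's API.

```python
# Minimal sketch of the plan -> investigate -> checkpoint loop described
# above. Both helpers below are hypothetical stubs.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def run_sql(query: str) -> list:
    raise NotImplementedError("wire this to your warehouse client")

def investigate(question: str, context: str) -> None:
    # Step 1: instead of firing off one SQL query, have the model plan
    # the angles worth checking (seasonality, product, channels, ...).
    plan = ask_llm(
        f"Question: {question}\nContext: {context}\n"
        "List the distinct angles worth investigating, one per line."
    ).splitlines()

    for angle in plan:
        sql = ask_llm(f"Write a single SQL query to check: {angle}")
        rows = run_sql(sql)
        finding = ask_llm(
            f"Angle: {angle}\nRows: {rows}\nSummarize what this shows."
        )

        # Step 2: the checkpoint. The analyst reviews each finding and
        # decides whether the agent should dig further on that thread.
        print(f"\n== {angle}\n{finding}")
        if input("Dig further here? [y/N] ").strip().lower() == "y":
            investigate(f"Dig deeper into: {angle}", context)

# Example: investigate("Why is paid search revenue down?", "retail, US, Q4")
```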
And the challenge is different than software, in that with software, again, the answer is one answer: it works or it doesn't work, right? You got the right result, the test ran successfully, and we're ready to push that code into production. In analytics, it's more nuanced than that. It's: I like the result, or I don't like the result, or, as a business user, my intuition tells me something different, so I don't trust the result. So I
have a more complex approval loop. So that will continue to fight us. And then the other thing that is frankly not appreciated enough: the level of quality in the data is what will stop most companies from making good progress here. Conceptually, this isn't hard to envision, but realistically,
a lot of companies just struggle to maintain good-quality data and traceability, observability. And so, you know, there are great use cases for AI to even help with that. I actually think a great use case for AI right now is, like I said earlier, exploratory data analysis. Just set up the AI to run through those steps.
Michael (1:20:19)
It's a good set of steps, and it's discrete. You can just say: okay, what is this data? What do I see in the data? What is the structure of the data? What patterns are in the data? What's a potential first set of hypotheses about this data? Anytime I work with a new data set, there's a set of things I do to try to understand what I'm seeing, and that allows me to build a contextual baseline that I can then lift
that data set into other analyses, or business questions that get posed. So, what if an AI could just run that process for you? That'd be a nice time saver for anybody who's onboarding to a data set they've never worked with before. Just have that available to people: okay, here's how the data looks, here are some of the averages, here's how the ranges work, here are some of the error rates, here are some of the outlier structures, all the things we do with EDA.
Hey, AI could do all of that today. So that would be something anybody could set up and start doing right now.
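For instance, a minimal pandas sketch of that repeatable EDA baseline could look like the following, with the file name and the hand-off to a model left as assumptions.

```python
import pandas as pd

def eda_baseline(df: pd.DataFrame) -> dict:
    """The repeatable first-look steps: structure, averages, ranges,
    missing-data rates, and simple outlier counts."""
    numeric = df.select_dtypes("number")
    q1, q3 = numeric.quantile(0.25), numeric.quantile(0.75)
    iqr = q3 - q1
    outliers = ((numeric < q1 - 1.5 * iqr) | (numeric > q3 + 1.5 * iqr)).sum()
    return {
        "shape": df.shape,                           # rows x columns
        "dtypes": df.dtypes.astype(str).to_dict(),   # what is the structure?
        "summary": numeric.describe().to_dict(),     # averages and ranges
        "missing_rate": df.isna().mean().to_dict(),  # a rough quality signal
        "outlier_counts": outliers.to_dict(),        # simple IQR outlier check
    }

# Hypothetical usage on a dataset you've never seen before; the natural
# next step is handing this baseline to your model of choice and asking
# for patterns and a first set of hypotheses.
df = pd.read_csv("new_dataset.csv")  # hypothetical file
print(eda_baseline(df))
```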
Kat (1:21:27)
Yeah, that is an excellent use case for AI. And this has been an absolutely amazing conversation, Michael. Thank you. I'd love to sit here with you for another three hours and hash this out. You know, I basically started this podcast on, like the Analytics Power Hour is basically, the concept of the lobby bar, right? The conference lobby bar, having a drink at the bar and hashing things out. But we're too old to drink, and I don't want to travel that much.
So I was like, okay, how about I just invite all of my friends to one-on-one conversations on a podcast and we can hash this stuff out? So thank you for doing this, it was really great. Why don't you tell people where they can find you, plug anything you want, shamelessly.
Michael (1:22:06)
I love it.
Yeah. I mean, I'm on the internet, on LinkedIn a little bit here and there. I write some articles on Medium and I've got a website, stackedanalytics.com, so you can always reach out to me there. But I will also say, let me just turn it back on you a little bit: when we first started talking about this a while back, your company Ask-Y was the one that I first
pointed to, saying, okay, they're the ones thinking about this the right way, because of the way you thought about how to construct the performance of analysis with the analyst driving the AI through the process. And I just want to give a little bit of a shout-out to you too, because a lot of my thinking developed kind of in parallel, but also driven by some of the things that we've talked about before. Like,
I feel like Ask-Y was getting it right before a lot of other people were. So you should be proud of that.
Kat (1:23:21)
That's a big compliment. Thank you, Michael.
Michael (1:23:23)
Well,
you know, "skate to where the puck is going" is the hockey phrase, which I think is pretty relevant here. It's sort of like: okay, don't just build what everyone else is trying to build, build the thing where it's actually got to end up. And I feel like that was what made Ask-Y stand out to me initially.
Kat (1:23:43)
I definitely have opinions about that. Obviously, having been a hands-on analyst for decades, you just don't forget that. You don't forget the hours, days, months spent fiddling with problems, in the details, fixing things, talking to clients, and working through the hard problems. You don't forget that.
Michael (1:24:08)
It changes you, and hopefully for the better, but sometimes for the bitter, you know.
Kat (1:24:10)
It really does.
Look, I came back to it. What can I say? I think it's part of your identity. Data is really very special, right? It's a passion. For people who love data, it's something very specific. And if it's part of your identity, you might as well embrace it.
Michael (1:24:38)
Absolutely. Absolutely.
Kat (1:24:41)
So for everybody listening, you can hear more from Michael and his co-hosts, Tim, Moe, Val and Julie on the Analytics Power Hour podcast, which is basically the gold standard for real talk about what's actually happening in analytics. If you're not listening to the Analytics Power Hour, you should, and that's all I'm going to say about that. And if you want to work with Michael and the team at Stacked Analytics, well, you know...
They're actually the ones making this AI analyst thing work in production; we'll have their details in the show notes. And Michael, I have to steal your Analytics Power Hour sign-off here, because it's just too perfect. The bots can analyze a lot these days. They can crunch numbers, spot patterns, maybe even write baby songs for your kids, but they can't replace what we do. So whatever challenges the rise of the AI analyst throws your way,
we'll keep analyzing.
Michael (1:25:40)
Absolutely.
Kat (1:25:43)
And so that's it for episode number four of Knowledge Distillation. If today's conversation about context engineering and why the fundamentals matter hit home, we're working on similar problems at Ask-Y. Visit Ask-Y, try Prism. Thanks for listening, and remember: bots process data, AI analysts create understanding.