Katrin (00:09)
Welcome to Knowledge Distillation, where we explore how AI is reshaping the role of the data analyst. I'm your host, Katrin Ribant, CEO and founder of Ask-Y. Today I have the perfect first guest to help us unpack this transformation: someone who literally wrote the book on AI agents. Marty Kihn is an AI strategist at Salesforce, former VP at Gartner, and the author of five books,
including his latest, Agentforce. But before we were both evangelizing AI, Marty and I were both in the trenches of marketing technology, me at Datorama and Marty analyzing the space at Gartner and talking a lot about Taylor Swift. Before it was really that much of a thing, actually.
Martin (00:58)
I was early with Taylor. I tell people this 'cause now there's actually a book that came out. Someone at, I think it was HBR, or at MIT, wrote a book, The Marketing Secrets of Taylor Swift. And I was there probably eight years ago talking about what marketers could learn from Taylor Swift. I actually did research around it as well. And basically what I said was that Taylor Swift was infinitely personalizable. People projected themselves onto her.
And that's what you should do to be a modern brand.
Katrin (01:29)
Yeah, I remember going to one of those presentations. It was really ahead of its time and very, very interesting. So yeah, thank you for that. Right around when Salesforce acquired Datorama, Marty joined the strategy team at Salesforce, where we were colleagues for a couple of years. We've known each other for over a decade, actually way over a decade. And honestly,
few people see the data analyst evolution as clearly as Marty does. So let's dig in.
Martin (02:02)
By the way, I used to be a data analyst. Before Gartner, I worked at Digitas. It was basically an ad agency, digital advertising, and I did measurement and that was my job. I used a lot of Excel, to be honest, in those days. But that was my job: making dashboards, you know, things like that.
Katrin (02:22)
Okay, everybody does, yes.
So last time we spoke was on your podcast Paleo Ad Tech, which I highly recommend to anybody who wants to learn about the world of ad tech and how all of that came about. We talked about my Havas days and my Datorama days, and that was great, but now it's my turn. So first, how does it feel to go from writing House of Lies, which was adapted on Showtime, and consulting,
to now writing about AI agents? That's quite a leap for a writer.
Martin (03:02)
It's different 'cause the House of Lies you're referring to, that was my first job after business school. I was a management consultant and it was irreverent. I wrote it when I realized I was not gonna be a management consultant, that I was gonna leave the business. So it was okay for me to say whatever I wanted to say, you know, within bounds. So it was very irreverent and a bit of a satire. And
what I'm doing now is much more straightforward, but I think the writing process is pretty much the same, and also the tone, the kind of the flow and the tone is sort of the same. So it doesn't feel that different for me. It's just I guess I'm holding back on the humor a bit. I mean, agents can be funny, but they're not that funny.
Katrin (03:48)
A little bit, maybe. And if I remember well, you told me you wrote the entire book without AI.
Martin (03:56)
Yeah. I mean, that's always gonna be true for me. I'm actually one of the few people who enjoy writing. A lot of people don't like it, and I think they embrace AI for that reason. But I like it. I don't see why I would give it up to anybody else.
Katrin (04:10)
So when Salesforce acquired Datorama in 2018, what did you think from your vantage point? You were at Gartner then and you were watching the martech consolidation. Did you see what was coming with the data and AI convergence? And, most importantly I would say, did you have a notion of how it would affect the digital analyst role?
Martin (04:35)
Yeah, I think something was definitely happening. This was sort of the beginning of the big data era, or a few years into it, with people talking about Hadoop. And this is like massive amounts of information, and for an analyst, it's a different approach. It was before that. People can't remember the before times. It wasn't really that long ago. But the problem was the scarcity of data. There really wasn't enough. So you had to make statistical inferences, and it was all aggregate data.
And what happened in the big data era is that there actually came to be a lot more data. It was coming, you know, real time off of social networks. The Twitter feed could be a source of data. Your website, et cetera, every event on your website could be a source of information. And so the scale exploded. I remember a meeting, I don't know if I ever told you this. I went to Land O'Lakes when I was at Gartner. Land O'Lakes is a Midwestern dairy company.
Very, you know, very... Yes, I know. It was the analytics team, and they showed their process, and they had a very impressive command center. Their command center basically aggregated their media data and some of their owned channels as well. But it was aggregated using Datorama. They organized it using Datorama, then they had it put into a data lake, and they had dashboards.
Katrin (05:36)
Were they a client?
Martin (06:04)
And so they had a kind of an overview of all of their marketing efforts that was very impressive. And it was a small team. And I thought, this is interesting. The key technology in there was Datorama sitting in the middle. It was the convergence of the data, the data inputs, with Datorama doing the organization at the campaign level, as you well know, and then making it available to the team. So it was making data available, in a way. And I thought, that is new. That was
kind of net new. And that's sort of the space where the customer data platform came later.
Katrin (06:38)
Yeah, that was really Datorama's mission at the time, right? It was all about, I mean, the main problem was really bringing all that data together and organizing it. And I feel like today we've moved on from that. Not that that isn't a problem; it is a problem, it's a structural problem, right? There's complexity in it. But today we've really moved sort of to the cognitive era, right? Where
it's really about using the data that you have in a way that is much more semantic. That is about what does that data actually mean, and how do you make your analysis meaningful for business decision making.
Martin (07:16)
Yeah, every part of the analyst value chain, the process of analytics, has improved with technology. And I think the one good thing about the data analyst, that particular role, is that it was always extremely heterogeneous. Data analysts were people who always used multiple tools. You know, you never use just one. And also data analysts
were people who adopted open source, you know, early on, like R and Python. Data analysts were the people who kind of got to know those, and that was open source technology. So they're people who use tools as they appear. And that's definitely true now as well with LLMs. It's like an evolution.
Katrin (08:02)
Yes, as you said, basically the tools have improved immensely, right? Whereas at some point it was really a question of, do you have good enough tools to wrangle this amount of data, this type of queries, et cetera. The problem was really getting tools that were good enough. I feel like the problem has now moved on; those tools are good enough, right? Your
connection tools and your databases and your BI tools, they really work well. But the problem is that all of those steps in between, connecting to data, hosting the data, transforming the data for pipelines, transforming the data for analytics use cases, building your outputs, getting to your stakeholders with your outputs, all of these steps are handled by different roles, different people generally, definitely in different tools.
And the memory and the context between those steps get lost. So you basically get in front of your stakeholder with your report, and the first question you're gonna get is, okay, ROI, great. What are you using for revenue? What is your source, and how do you get to that revenue number? And now you have to go back to where does this revenue number come from, retrace all of these steps, which is across people and tools. And there's really no way to keep
track of what happens at all of these steps, because nobody can do documentation at that level. That's just not something that happens. And I feel like that's really where one of the benefits of AI comes in: using natural language to talk to all of these tools, to generate the code, and to keep the trace of what you're doing. What do you think about that?
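The step-level trace Katrin describes can be sketched in a few lines of code. This is purely an illustration, not any real Ask-Y feature: the pipeline class, step names, and numbers are all made up.

```python
# A minimal sketch of keeping memory and context between analytics
# steps: every step records a provenance note as it runs, so the
# "where does this revenue number come from?" question can be answered
# later by retracing the steps. All names and numbers are illustrative.

class TracedPipeline:
    def __init__(self):
        self.trace = []  # one provenance record per step

    def step(self, name, fn, data, note):
        result = fn(data)
        self.trace.append({"step": name, "note": note})
        return result

    def lineage(self):
        # Retrace the steps newest-first, as you would for a stakeholder.
        return list(reversed(self.trace))

pipe = TracedPipeline()
orders = pipe.step("connect", lambda path: [120, 80, 95],
                   "crm_export.csv",
                   "Pulled order totals from the CRM export")
revenue = pipe.step("transform", sum, orders,
                    "Revenue = sum of order totals, refunds not excluded")
print(revenue)  # 295
for record in pipe.lineage():
    print(record["step"], "-", record["note"])
```

When the stakeholder asks where the revenue number came from, the answer is already in the trace instead of scattered across people and tools.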
Martin (09:51)
I like to look at it as, we're in a situation now where the tools, as we said, are getting easier to use. A concrete example would be, it wasn't that long ago when you really had to know SQL and be a SQL power user to query data from databases. That is still true, I mean, people of course still use SQL, but now you can go into something like, you know, a CDP and just use natural language and describe a segment.
And then the SQL will be written for you by the machine. It'll take that nasty step of putting together these very convoluted nested queries. So you can just describe it. So you as the analyst are focused on, what am I trying to learn here? And you have to be very precise in your instructions. So you still, as an analyst, need to know what you want to know. And you also need to know the data sources even better than ever. But the actual techniques, like SQL writing, code writing, are less important.
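To make that concrete, here is a sketch of the kind of nested query the machine might hand back for a plain-English segment like "customers who bought in the last 30 days but not in the 30 days before that". The schema and data are invented for the example, not taken from any particular CDP.

```python
import sqlite3

# Toy orders table: customer name and how many days ago they bought.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, days_ago INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("ana", 5), ("ana", 45), ("bo", 12), ("cy", 40)])

# The sort of convoluted nested SELECT an LLM might generate from the
# plain-English segment description above.
generated_sql = """
SELECT DISTINCT customer FROM orders
WHERE days_ago <= 30
  AND customer NOT IN (
      SELECT customer FROM orders
      WHERE days_ago > 30 AND days_ago <= 60
  )
ORDER BY customer
"""
# The analyst's job shifts to reading this query and checking it
# against the intent before trusting the segment it returns.
segment = [row[0] for row in con.execute(generated_sql)]
print(segment)  # ['bo']  (ana bought in both windows, cy only before)
```

The precision lives in the description: change "but not" to "and also" and an entirely different query should come back, which is why the analyst still has to know exactly what they want to know.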
So the tools are easy to use, but they're also getting harder to understand, simultaneously. And the reason is because they're built on neural networks. Large language models, you know, are just really big neural networks, extremely large now. You can have trillions of parameters, billions and billions, and all of them have a weight and a bias, and they kind of feed into one another. And as human beings,
we literally cannot know, if we have an input over here, a question, and an output over here, exactly how that output happened. That's why it's unpredictable. So we maybe change the prompt a little bit to try to change the output if it doesn't fit, but what's happening in the middle is quite literally a black box. Now we can inform ourselves on the method, we can try to get smart about what's going on, and we can have an intuition if the output fits, but
the process in the middle, as you said, the context setting, all of that needs to be carefully monitored. I think the danger here is that we trust the output too much. We shouldn't trust the output too much. We should always be interrogating the data. You can ask models where they came up with an answer. You can just say, what are the three drivers of this trend that you're seeing? And they will tell you. You can ask them, you know, how did you come up with this recommendation? And they will tell you. They work for you; you don't work for them.
And also using common sense and things like that. So I think it's really an important part of the data analyst's job now to be constantly checking and monitoring the output of, you know, what their tools are giving them.
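For intuition, here is a toy network in the spirit Marty describes: every unit has weights and a bias feeding into the next layer. Even at this tiny scale it is hard to say which weight "caused" a given output; at billions or trillions of parameters, the middle is effectively a black box. The numbers are random and purely illustrative.

```python
import math
import random

# A toy two-layer network: 3 inputs -> 4 hidden units -> 1 output.
# Each hidden unit has its own weights and bias feeding into the next
# layer, exactly the structure that makes attribution hard at scale.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [random.uniform(-1, 1) for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = random.uniform(-1, 1)

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

# A single scalar comes out; which of the 21 parameters "caused" it?
out = forward([0.5, -0.2, 0.9])
print(out)
# A slightly different input (a slightly different "prompt") shifts it.
print(forward([0.5, -0.2, 0.91]))
```

Nudging the input is the only lever shown here, which mirrors the prompt-tweaking loop: you adjust what goes in and watch what comes out, without tracing the middle.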
Katrin (12:25)
Yeah, I mean, that's why we consider our core mission really to empower the AI analyst. We're focused on digital analysts who want to wrap their head around using AI across the entire span of the analytics process: from data collection to transformation, analysis, and, you know, the final step. The step that the analyst would like to be the final step, or hopes is the final step, which is presenting your stakeholder with your output so that they can make data-informed decisions. And I think the
three main points that I see an analyst needs to upskill on in order to become an AI analyst are: one, and you mentioned it, a deep knowledge of how LLMs work, but also a deep knowledge of how different LLMs work differently, because they all have their personality, so to say. The second one is prompt engineering. Prompt engineering is how you talk to LLMs, I think.
As an AI analyst, you really need to understand prompt engineering very, very deeply. And that means you need to understand how LLMs take your prompt, analyze it, and pay attention to it. And actually, to that effect, you know, I've created these little characters, the Fluffies, that help explain the mechanics of LLMs.
Martin (13:48)
And
Katrin (13:50)
Terrible.
It's a terribly dry subject, right? So I was thinking, how can I make it something where it's not just me talking about it, because people will just fall asleep. And it's incredible what you can do today with generative video. I mean, I've done all of those videos for almost no money, all of them generated with Veo 3, mostly. And I've gotten tremendous feedback from people who are not
even remotely in our field about how much better they understand what LLMs are and how they work, which I think is a really important thing in the world in general. So I have point one, LLMs; point two, prompt engineering. Point three, I think, is really: if you're going to do analysis with AI, you're going to generate a lot of code.
Reading code that has been generated by an LLM is not like writing code. It's a completely different skill to read the code and understand what the code really does. And if you're going to run that code on your database, you need to understand how the code is generated, how good or bad it is, and you need to get good at it. So getting good at code generation, as opposed to code writing, I think is a separate skill.
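One small example of that reading-before-running habit: a crude guard that refuses generated SQL unless it looks like a single read-only SELECT. A real review goes much further (checking joins, filters, the grain of the data), and the keyword list here is deliberately simplistic.

```python
# A crude pre-flight check on LLM-generated SQL before running it
# against a real database: accept only a single, read-only SELECT.
# The keyword list is illustrative, not a complete safety mechanism
# (e.g. a column named "created_at" would trip it).
FORBIDDEN = ("INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "CREATE")

def safe_to_run(sql: str) -> bool:
    upper = sql.strip().upper()
    return (upper.startswith("SELECT")
            and ";" not in upper.rstrip(";")  # no stacked statements
            and not any(word in upper for word in FORBIDDEN))

print(safe_to_run("SELECT revenue FROM sales WHERE year = 2024"))  # True
print(safe_to_run("DROP TABLE sales"))                             # False
print(safe_to_run("SELECT 1; DROP TABLE sales"))                   # False
```

The point is the reflex, not this particular filter: generated code gets inspected and gated before it touches production data.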
And most importantly, really keep your critical thinking. Like, keep your wits about you. It's not magic, it's a tool, and you need to become really good at using the tool. And as you said, trusting the output blindly is generally a recipe for disaster. In your book, you talk about giving agents clear roles, data sources, actions, and guardrails.
That's basically what we're doing with analytical context, right? So what's your take on specialized analytical agents versus generalized AI, specifically when it comes to digital analytics workflows?
Martin (15:52)
Well, I think there's a lot of kind of loose terminology in this field. AI in general, what does that mean? It's become an umbrella term. It used to be that companies actually avoided using the word AI, because it had a bad reputation ten years ago. And now people are, you know, falling over themselves to put AI in the name of their company. So that's changed; the PR has really improved recently.
But in general, at Salesforce, where I work, as you said, we have this phrase: don't DIY your AI. Don't do it yourself. Large language models are very generic. As a consumer, I could go to ChatGPT and ask it for a workout routine or, you know, a recipe or whatever, and whatever it gives me is okay. I can tweak it, maybe it makes some mistakes, no big deal. But if you're a company, in an enterprise context, it's completely different.
Your entire brand reputation is at stake with every single interaction with your customer. So you have to kind of harden it. And if you're gonna be doing personalization at scale, which is the promise of these agents, it has to be relevant. So it has to be grounded. And LLMs themselves aren't applications. LLMs are infrastructure. We basically think of the LLM as, you know, a platform.
And you've got to build on top of it to make it relevant. A lot of what you've got to build into it, in specialized models, would be, you know, levels of trust and governance, compliance. There needs to be auditability for what's going back and forth. There's focus on your particular industry. There would be some kind of an intelligence engine, like a reasoning engine, sitting on top of multiple LLMs.
And you mentioned earlier that LLMs themselves have different strengths, so you should be able to select which one you're using. Or, you know, there are small language models now, so you can train your own on your first-party data, and that can be very effective in certain more focused areas. And you also need support for structured and unstructured data, retrieval-augmented generation, all those kinds of processes. So I think the more specialized the model can be,
As long as it's making the output more personalized, and I come from a marketing context, so it's all about making marketing more relevant to the ultimate consumer on the end, then that's the way you need to go as an enterprise. But it's a long way from just a generic LLM to something that can be really adding value to the business. And there are steps along the way that I think people don't appreciate. A lot of companies now are
surprised at the amount of work. It just seems so easy as a consumer to, you know, get a recipe or whatever. But as a company, they're like, why can't I just productize and, you know, automate all my workflows and all this stuff overnight? And it doesn't really work like that. The strength and the weakness of this new generation of AI is that it is not deterministic. It's not just this-then-that.
You don't know what the output is. It has its own kind of agency. And that's why it's good. That's why we use it, you know. But on the other hand, that makes it a little bit harder to control.
Katrin (19:16)
Yeah, I think that's true, right? We're sort of in this magical era, right? This era of magical thinking, where there's a confusion between getting a recipe for, you know, a Thanksgiving casserole and having an application that will actually do work for you in an enterprise context. And that applicative layer is not really well understood,
including the issues of security around it, which are really not well understood at all. And reliability, you know, how to build, especially for analytics, it's really a challenge, how to build something deterministic. Because in analytics, no one needs random insights on approximately correct data. In my entire career, I have never been asked for that.
Martin (20:12)
Occasionally you'll hit something brilliant, but not all the time.
Katrin (20:16)
Yeah, but who knows if it's correct. I mean, really, you have to run code on data, right? Otherwise you're never going to trust the output. And from that perspective, there really is a little bit of this magical thinking, like it will solve everything with no effort, and not really understanding that, no, you have to
Martin (20:18)
yeah, all right.
Katrin (20:41)
You have to still have a use case. You have to know what use case you want to solve, and you have to find a solution for that particular use case and make sure that that solution is compliant with the different aspects of your business. And from that perspective, I sort of really don't want to talk about this, but let's talk about it anyway: the big "AI is replacing analysts"
scare. You mentioned in Agentforce that AI agents work as reflectors, co-workers and advisors. Where, in your opinion, does the AI analyst role fit in this new world?
Martin (21:21)
Well, on that topic, there's a chapter in my Agentforce book, Agentforce is Salesforce's AI agent platform, and there's a chapter in there on the future of work. And when I set about writing that chapter, in the beginning I really didn't know. My opinion, to be honest, was, you know, these models are pretty good, and I feel like maybe there's gonna be a lot of jobs at risk here. But I did research and I read everything I could find on this topic. And there have been a bunch of studies, not just,
you know, McKinsey and Deloitte, but the International Monetary Fund, and the World Economic Forum at Davos talked about it. And the consensus, and it's really a consensus across all of these big studies by people who must be smarter than me, because they've written big studies, is that in fact every technology revolution brings with it two things. One is a fear of job displacement. So computers appear, the internet appears, there'll be massive job displacement.
But then what ultimately happens is that there are more jobs created. First of all, it helps the economy, so companies get bigger, so they need to hire. But the difficulty is in trying to predict what the roles are. And the example I would give is my entire career over the last twenty years: I was at Digitas with digital advertising, and now I'm at Salesforce, like, cloud software. And none of this existed when I was in college. You know, I was in college before the internet. So
yeah, literally these are new jobs, jobs that I at the time could not have, you know, thought of. You're asking what the role of the digital analyst will be. I think it's gonna change, I really do, and I think in good ways. I'm very bullish on the future of the digital and data analyst role. It's always been, as I said earlier, a very flexible role. Data analysts, good ones, are people who
understand the business, but also understand all the tools, and are open to using all kinds of different tools to get answers to questions. And the real skill there is in trying to define the problem and determine the data sources, and also knowing if the answers that you're getting are directionally right or not. There's a lot of intuition and common sense. So I think that higher-level reasoning, orchestrating, choosing, selecting the tools among those that are available,
knowing the business well enough that you can ask the right questions, all of that is still relevant, and it's gonna be even more relevant in the future. I think what will go away is the parts of my job that I used to hate when I was doing this, like building dashboards. You know, it was a very manual process, going and cutting and pasting data into Excel. My god.
And even just getting the data, it was a headache. Like, you know, you had to get the files sent over, and sometimes it was too big or in the wrong format. Anyway, I won't get into that, but
Katrin (24:17)
Our audience, and us, know enough about that. But I think that one of the ways we as analysts can sort of see a little bit into the future is that we're, in a way, lucky enough that this disruption happened to software engineers before it's happening now to analysts, right? The AI engineer is a role that has now existed for the whole of two long years, which in AI time is
two million. And obviously Ask-Y is a startup, so we have an engineering team, and we've seen very clearly from the very beginning, when we started hiring, how difficult it was to hire software engineers who had any experience with code generation and understood how LLMs work, et cetera. And so we had to have a selection process that would
Martin (24:48)
Yeah.
Katrin (25:15)
look for people who had the potential to switch, not necessarily people who had already switched. Today, and this is only one year later, we don't have to do that anymore. Today it's a given for software engineers. And you know, I've written a lot about the software engineering process versus the analytics process, how they're different, and how you
can't simply take Cursor and apply it to analytics, because it just doesn't work for the workflow of an analyst. I really, really do think that something like this will happen to analysts, that the same thing will happen, in the sense that, yes, you still need to be a good software engineer in order to understand how to use the code that you've generated and create an application that has all the different bits and pieces, security, et cetera.
You still need to be a good analyst to understand what it is your stakeholder wants or needs, what it is your business does, what are the steps involved in getting to that answer, given your setup, given your data layer, given your data structure, et cetera. And then you have to orchestrate those steps. And this is, I think, where a lot of the difference really is. So we're focused on giving analysts
repeatable workflows with AI. We call those skills. I think that once you package your workflows and you have an AI doing a lot of the steps for you, your role as an analyst really changes from being an operator of a platform, or, you know, in the case of a BI tool, you know, I was joking about so many clicks.
Setting up a dashboard used to be a lot of clicks, right? To select the variables and the charts, et cetera, a lot of clicks. To being an orchestrator of code that controls what is being done in the different pieces of the analytics process, so that you get to the output that you need. Do you have a view of how that switch from operator to orchestrator
will work in practice? Do you see that around you at all?
Martin (27:32)
Well, I think there's always been a difference. I don't know how to say this. There's always been a difference between good data analysts and the other ones.
Katrin (27:47)
Yes, that's true, and that is very well said. Thank you.
Martin (27:50)
Yeah, I mean, I don't wanna, you know, if anybody out there is one of the other ones, which I doubt, 'cause you wouldn't be listening to this if you were. But this is a role, you know, data analyst, where people are like, that's a low-level role. It's underestimated. I think you can do more with a single person who's very good in that role than you can anywhere else in the entire enterprise, I would say. When I realized that, I was at Gartner and I went to a company on the West Coast, a big computer company.
They sold hardware. And they hired one guy there, a data analyst, and he came up with a new way to segment their market so they could change their go-to-market practice, and it worked really well. It was just him. He took it upon himself to ask the right questions, and he's like, maybe we try this different segmentation method. And he didn't get enough credit, in my opinion, because he actually turned that business around.
And that's one data analyst. So I think that kind of a person, who can come in and ask super smart questions and apply the right tools and, you know, know what's going on, will always be very valuable. There's no existing AI process that could replace such a broad orchestrator, as you said, that kind of a thinker. But it's the lower-level analysts, the ones who are content to focus on a single channel, a single task that can be automated,
who are content to do a single kind of a job and not ask deeper questions; those people are in danger, they're in trouble. I am optimistic about not just data analysts, but people in general, because human beings are very adaptable. I think that's how we've survived. So whenever anyone asks me, you know, the ad agency is doomed, and you worked at an agency, my thought is,
the ad agency is really just a bunch of people, and they can see what's going on even better than we can, and they can change what they do, change what they offer. So the ad agency's not going anywhere, you know. It'll be around. It's just this adaptability, this constant change. I was at a dinner in Chicago this week, and there were these parents talking about their kids, like, I'm so worried, I don't know what to tell my kids. Should they learn how to code? And I'm thinking, don't tell them anything. You know, they'll figure it out.
Whatever they do, two years from now, if there's something they need to know how to do, they'll learn how to do it, and then they'll apply for those jobs and so on. So I think adaptability is
Katrin (30:21)
Learning
at our age, as we all are, both of us are a testimony to that.
Martin (30:28)
Yeah, that's right. You gotta keep learning. I mean, you and I, we've studied LLMs these past couple of years, and it's not easy to understand how those work. I mean, I challenge you, whoever's out there: try to figure out exactly how they work. It's not easy.
Katrin (30:45)
It's not that simple, and then try to explain it simply. But it's very interesting. So if you think about this evolution, right, of tasks that can be automated and that disappear, that is not new. What is probably new is the pace at which it's happening with LLMs. It is certainly faster than anything else. I still don't think that somehow this is the
be-all and end-all of, you know, transforming everything into the bots are gonna do everything and the humans are going to be obsolete. I just can't
Martin (31:22)
See that. Either way, you know, there's rising unemployment apparently now. I was just reading that, and every story about this said, it's AI, it's all caused by AI. And I thought, well, throughout my working life there's been unemployment, and there have been periods when people were laid off, and we didn't have any AI to blame. There was something else; the interest rates were being blamed, or whatever. So I think AI is gonna be a scapegoat now for any kind of bad news.
Katrin (31:45)
And obviously it's a better story than, you know, we overhired and now we have
Martin (31:51)
Or our companies are badly managed. Yes.
Katrin (31:55)
Of course they're not. No, no, no, absolutely not. Given that example that you gave about this data analyst, it's a reality of the data analyst role in most organizations, except if your organization's product really is analytics, right? But apart from that exception, the data analyst doesn't make decisions.
The data analyst supports a business stakeholder who makes decisions. And it is true that it is very often the business stakeholder who gets the credit, and not necessarily the data analyst who had the brilliance, in the case of your data analyst there, to find another way to segment the business. Which, if it's actually working commercially, is huge, right? It's absolutely huge. I
feel that for a good AI analyst, who's able to understand how to use these tools well in order to explore opportunities in data, opportunities in analysis, it will just make this something that is easier to get to, because you'll be able to cycle through scenarios faster. Because I've always felt that this was the
one thing hindering the creative process in analytics: it is very costly to test a hypothesis. Trying something takes a long time. And that's where I think this orchestration aspect is very helpful. Because imagine you have one analysis process, you have your data, it's organized the right way, et cetera, and now you want to test six hypotheses
with small variations. You can run them in parallel in six tabs, and they will call you when they need decisions from you. And you can now be a lot more effective at exploring and finding those gems.
Martin (33:59)
Yeah. Also underestimated, in my opinion, is the role of a presenter agent, the UX component. Data analysis is numerical and quantitative, but the way data-driven decisions spread through an organization, particularly on the business side, is through visuals. It's always through powerful, useful visuals. Somehow a visual will convince people where numbers don't, even if it carries exactly the same information. Creating those visuals is something I think AI is going to be very good at, and it'll be instantaneous. That was not always true. Optimizing the visual display of information, if you will, is something that's going to be increasingly automated, but I think it will be very powerful for this role going forward.

I know that the part of my job, when I did it, that I liked the least was trying to come up with the right charts and graphs. I'm not very visual, you know. A lot of quantitative people really aren't, so our UX skills are not so great.
Katrin (35:09)
I think nobody is equally strong across the entire process. I also think this is where LLMs really power the AI analyst. I think of it as a full-stack extension of the skill set. I'm never going to be a data engineer; I'm not good enough at SQL. But given a good LLM, I can generate code that I would not have written by myself, because it would simply have taken too much time. So I can move left across the stack, towards the more technical aspects.

But also, nobody is equally strong in analysis, visualization, storytelling, or translating insights into different forms for different audiences: people who respond to different metaphors, or people who hate metaphors, whatever it is, right? Ultimately you end up presenting to somebody, and you need to present in a way you know will break through so they actually understand what they're looking at. Or, quite frankly, just business domains. Ask me to do an analysis for supply chain optimization and I have no idea what the key metrics are. None. But with a good LLM, I'll do something decent. I'll have a notion of what we're talking about, and I'm an analyst, so I'll be able to make it happen. So there's also an extension of the skill set to the right, towards the business side. And I think that full-stack extension is really something AI analysts should embrace to move away from the commoditization you talked about.
Martin (37:01)
And think about what an LLM is: it's not a person. They don't have lives. They're basically a blank slate that went into a library and read every single book, literally every book in the library. That's who they are. But they don't have to negotiate with people, they don't drive to the store, they don't do all the things we do as human beings and take for granted, like dealing with other people, difficult and not so difficult. So that whole element of being human, which is so important for marketing because it's the message we're transferring to our consumer, how we connect with them, is something an LLM has to mimic based on what it's read. It's never going to be as convincing as when we do it, or at least guide it, we as human beings, because we will always be better at being human. So I think this idea of common sense is underestimated, simply because we humans take it for granted. For instance, the first version of GPT wasn't good at adding. If you put in "what's three plus three," it would stumble. Why didn't it know that? It knows the history of the French Revolution. The reason is that arithmetic is sort of common sense, and I guess it didn't show up in enough books. So
Katrin (38:17)
Yes. So there is still hope for us humans, right? Agent Force just came out in June, right? I think it was June 2025. Yeah. So, I mean, you're a writer, so I imagine you're going to continue writing. Are you already thinking about your next book? Is there a sequel to the agent revolution? Are you going for a sci-fi novel? What's next?
Martin (38:47)
I don't think I would inflict my fiction on anybody, so I'm not sure about that. I mean, I have a bunch of ideas. I had the Paleo Ad Tech podcast, so I wanted to do a book based on that, a history of ad tech. I think that would be really interesting to a small group of people, so it'll probably be a niche title, because most people don't really care how ads are served. Really, they should.
Katrin (38:59)
That would be
Martin (39:13)
But I got very interested, while thinking about the future of work and reading all those very thoughtful pieces on where work is going, in trying to predict the future. So I'm working on something that's more of a futurist point of view on where we're headed. It's very interesting. You have to do scenarios; I don't think anyone can know exactly where we're going. But there's a lot there. And AI will change things. The world of tomorrow will not be the same as the world of today, but then it never is.
Katrin (39:46)
It never is. And yeah, I'm looking forward to reading that. So are you at work on the book already, or are you still in the phase where you're thinking about it?
Martin (39:59)
Well, I always do a lot of research first; the writing part is actually easy. The Agent Force book I wrote in about a month, but the research took many months. The research part is basically assembling the facts and putting them in the right order. You can see how, with that in front of me, the writing part becomes much easier. Yes. So I'm in the fact-gathering phase.
Katrin (40:20)
I understand. When I produce content, I do the same thing. I have to have my structure in front of me, and then wrapping the writing around it is easy.
Martin (40:28)
And the information, like the actual data points, yeah.
Katrin (40:32)
So where can people find Agent Force?
Martin (40:36)
Well, the best place is Amazon, amazon.com, though it's available from any online bookseller. It's called Agent Force, that's the name, and the author is me, so it's easy to find. I also have a website, martinkihn.com or martykihn.com, either one works, K-I-H-N. Good. Yeah, years ago I did. And there is another Martin Kihn out there,
Katrin (40:57)
You got both URLs?
Martin (41:03)
in South America, and we became friends because we would sometimes get each other's email. That's another story. Luckily, he was a hipster and his Instagram feed was so impressive, he really improved my brand.
Katrin (41:20)
Marty, it was really great. Thank you very much for being Knowledge Distillation's first guest. And
Martin (41:28)
Yeah,
thank you for inviting me. I'm honored.
Katrin (41:30)
We'll talk to you soon. Bye, Marty. That's it for our first episode of Knowledge Distillation. If you're building your own AI analyst workflow, check out ask-y.ai, where we're practicing what Marty and I just preached about context engineering and intelligent analytics automation. Thanks for listening, and remember: bots won't win, AI analysts will.
Thanks to Tom Fuller for the editing magic on this episode. If you want to work with Tom, head to ask-y.ai and check out the show notes for his contact info.