Katrin Ribant (00:00)
So welcome to Knowledge Distillation, where we explore the rise of the AI analyst. I'm your host, Katrin Ribant, founder of Ask-Y. And today we're tackling something every analyst is thinking, or should be thinking, about: how do you upskill during a hype cycle without betting on the wrong thing? In our previous episode with Mike Driscoll, we talked about the rise of the data scientist during the MPP hype. So for those of you who are too young to
remember that, that was around 2010. The role was all the rage at the time. And it still exists, but the question is in what shape, and which roles used to exist but don't anymore and have been replaced by the data scientist. So what actually stuck? What skills mattered? And when the hype settled, because all hypes settle, that's a golden rule of hypes.
And most importantly, what can we learn from other cycles as we navigate the rise of the AI analyst? And who better to talk about the tech hype cycle than a Silicon Valley VC. And yes, Shomik, I shall call you a Silicon Valley VC, because you are. And by that I mean real-life Silicon Valley, not the TV show.
Shomik (01:15)
That's true. I live in Silicon Valley.
Katrin Ribant (01:23)
Although I have to say, I would really love to have Peter Gregory on the show. So Peter Gregory, if you're out there, find me on LinkedIn. I'll get you on the pod. So today I'm talking to Shomik Ghosh. Shomik, you have experience investing across the entire span of the tech life cycle. So from inception, which is the very beginning of a company, often before the company's even incorporated.
In fact, you led Ask-Y's seed round. We were incorporated, though. Up to late stage in companies like Anaplan, CircleCI, Remitly, Area 1 Security, some of which went through IPOs. So you really have a view across the entire span. You've seen multiple hype cycles from both sides, as a growth-stage investor backing companies and as a day-zero investor working with founders before they
sometimes even had a name for the company. So you also think about this stuff a lot. I mean, you think a lot in general, but you think about this stuff in particular a lot in your Software Snack Bites podcast and newsletter. So let's distill some knowledge. First, Shomik, what does a tech hype cycle look like to you? And specifically,
how does this particular gen AI cycle compare to what you've seen before? Walk us through a technology wave you invested in from beginning to end. What did the hype cycle actually look like on the ground?
Shomik (02:54)
Yeah, well, Katrin, thank you for having me on, first of all. And also, you know, it has been a lot of fun working with you, from just when we were making the first hires and shipping the first product, to see where it's come now, and really, you know, how you are, I think, spearheading this rise of the AI analyst and the tooling that will be needed, right, to enable that. But
you know, I think the hype cycles are really funny, right? Because you see that Gartner chart all the time, right? Where it's like, you start at the bottom, then you climb up, and it's like, oh, things are great. Then you have the trough of disillusionment. And then things go back up, right? And so it's kind of funny, because every one of those rhymes. It may not follow the exact same path, but it follows that curve at some point, right? And I think, like,
one that was really interesting that I saw actually fairly recently was in the DevTools ecosystem. So you had this whole dynamic, especially with the rise of cloud, where...
Katrin Ribant (04:00)
So just one thing, because
our audience is mostly analysts, not software engineers, could you explain a little bit about DevTools and the rise of that? What is that really?
Shomik (04:12)
Yeah, so DevTools kind of started with, basically: if you were a software engineer, you need products that can help you do your job better, right? And now what you have is these cloud systems that you're spinning up. They require a number of microservices to be chained together to do certain tasks. And that is quite complex, right?
The tools you could use come directly from the cloud providers, which would be AWS, GCP, Azure, and others. But those companies do not necessarily prioritize developer experience. They're not prioritizing the individual developer's experience. In many cases they're almost purposely complex, because the Accentures and Deloittes of the world get a lot of money helping
Katrin Ribant (04:47)
Hmm.
Yes.
Shomik (05:02)
large customers implement these. Or it's
just not the top-of-mind thing for them, because they are serving the largest companies in the world. And when you talk about an individual developer that's using the product, that is something that they just sometimes don't pay attention to. And that's where these companies step into the void. They would help with spinning up a continuous integration pipeline,
Katrin Ribant (05:22)
Mm-hmm.
Shomik (05:29)
help you with testing, help you with all these different aspects, so that when you're going to chain all these complex microservices together to make an application run, this was a more seamless way to do it, with tools that you knew how to use. And so back in probably, you know, 2016, going up until probably COVID, through COVID times, you had this bring-your-own-tools-to-work trend, right? Where basically individuals were bringing tools into work,
especially software engineers, because people started to realize, wow, software engineers cost a lot of money to hire. If we can make them more efficient, then that's a good investment. And so they were starting to bring these tools into work, and people would pick them up, they'd use them in open source. Then they'd say, hey, we'd love to use it, but with security around it and so on, and they'd pay for these developer tools. By 2020, 2021, that's when we really reached a peak, because everyone's working from home.
Katrin Ribant (06:18)
Mm-hmm.
yeah. Yeah.
Shomik (06:27)
So then you just needed more and more of these tools. You'd have a software engineer in one group using one tool and the next group using a different tool. And you'd have like five products that were the same being used in one organization. Crazy, right?
And so that was kind of that peak, right? That's when everything was amazing and we were talking DevTools of the future. And then we hit the trough of disillusionment, right? Budgets dried up. People started to consolidate to one tool, or say, like,
you know what, we're just going to use the cloud provider's tool, because go figure it out, right? Because that's just what we can afford to spend. And then, ironically, you're actually now seeing it come up the other side, right? With AI, you're starting to see this all over again, where you might have someone that's using Copilot from Microsoft, right? Cursor, maybe Windsurf, or more likely Claude Code or something like that as well. Because they...
Katrin Ribant (07:07)
Mm-hmm.
Shomik (07:23)
It's just, that's what they wanna use. And that is happening right now because, again, it's bringing about efficiency. And so you see these curves keep happening, but at the same time, we're back on that hype cycle again. And true adoption has always continued to pick up, because these are still tried-and-true things that are happening. It's just, they sometimes get over-hyped, and then they kind of trough, and then they go back up again.
Katrin Ribant (07:44)
And when you think about one of those cycles, let's take the data scientist cycle and the DevTools cycle.
In both cases, you have a technology evolution or change that fosters a new role, right? Ultimately, bring-your-own-tool-to-work is kind of a new role in a way, right? So you've got a new technology that's bringing about a new role, and a certain specific skill or specific set of skills
gets you hired. It gets you hired more than anything else, and it gets you hired at a rate that is considerably different from the one you would have had before, right? And so that peaks, and then other people catch up to you. That sort of normalizes. But then you still go back onto that cycle where, you know, it continues, but it continues differently.
So in the data scientist hype, the skill set was really being able to code analytics functions, generally with SQL or Python, depending. You know, R, there was a lot of R too, really depending. But you would have the religion of the database. There was a certain database that you would work with, and you would be either a SQL specialist or a Python specialist, and you would
Katrin Ribant (09:16)
code your complex analytics functions on large amounts of data and do that effectively at scale. And that was a skill that was obviously very valuable at the time and, you know, highly priced. In the DevOps peak, what was that skill? Like, how did that work?
Shomik (09:32)
So ironically, in the DevOps peak, where it came from was actually more around: could you understand how Kubernetes worked? And could you understand how to use tools to manage Kubernetes and the containers that were used to deploy on Kubernetes? And so these orchestration engines, these abstractions of core systems, were, again, making it easier.
Katrin Ribant (09:54)
Mm-hmm.
Shomik (09:59)
And so if you knew these sorts of tools, these core abstractions that had become standard, right? That was really useful to your career, your employment, your salary, things like that. Now, where that is today would probably be, if we're using software engineers again, the Cursors of the world and stuff like that. But also, what did we just standardize upon? We just standardized upon MCP,
the Model Context Protocol, for tool calling, right? So now all of a sudden, if you are still stuck in the world of REST APIs, and that's how you're used to calling tools, you're going to be in for a world of hurt, because everybody else is going to be using MCP from their model, from Claude or from, you know, ChatGPT or whatever, to call out to the tools that they want to use, and boom, they're just going to get going.
Katrin Ribant (10:45)
Yeah.
Shomik (10:54)
Right. And meanwhile, the rest of the tooling ecosystem is catching up. Microsoft is shipping MCP stuff, Google and AWS, right? Everybody is shipping it to go and make this easier. So if you do not embrace it, you will get left behind. And then what happens is, you're not even going to be on any part of the hype curve. You're going to be on the train that's at the back, you know, that is getting left behind. And unfortunately, it's really hard to catch up.
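To make the REST-versus-MCP contrast concrete: MCP messages follow a standard JSON-RPC 2.0 envelope with a `tools/call` method, while a REST-style tool call is shaped entirely by one vendor's API. A minimal sketch; the endpoint URL, tool name, and SQL query are all made up for illustration.

```python
import json

# REST style: the verb, URL, and payload shape are specific to one
# vendor's API (hypothetical endpoint shown).
rest_request = {
    "method": "POST",
    "url": "https://api.example.com/v1/run_query",  # hypothetical
    "body": {"sql": "SELECT count(*) FROM orders"},
}

# MCP style: one standard JSON-RPC 2.0 envelope with the tools/call
# method, regardless of which server implements the tool.
mcp_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_query",  # hypothetical tool name
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

print(json.dumps(mcp_request, indent=2))
```

The point of the standard envelope is that a model only needs to learn one calling convention, not one per tool.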
Katrin Ribant (11:18)
And so there are these two aspects, right? The skill-specific aspect and the tool-specific aspect, and, you know, usage of a tool as a skill, right? But there really is an aspect where you have to catch up with certain tools and you have to develop certain skills. And so that's kind of what we want to look at: for the AI analyst, what does that look like? And in a way, as analysts, we're lucky that we can look towards software engineers and get some understanding of how AI is disrupting a domain, because
you have seen a number of startups at different stages have to adapt to the rise of the AI engineer already. It seems like that's already there and already almost gone, or it's just a given. It's only really been two years, which is crazy. The term was coined two years ago. What do you see, basically,
Shomik (12:02)
I know, it's crazy.
Katrin Ribant (12:14)
for analysts as the skills and the types of tools that they should look at?
Shomik (12:22)
Yeah, so I think when this first started out, the skills were very much similar to what the AI engineers had to go through, right? It was like, you just have to learn what AI is. And I'll plug your blog posts. You have a bunch of them that are quite good on this topic. But I think one of the things that you actually taught me about this was really how code can be fairly deterministic, but then in analytics,
that same output becomes more probabilistic when you're using the models applied to that data, right? And that is quite challenging, because you needed to...
Katrin Ribant (13:00)
It's really the fact that
as analysts we live in a world where there is no unit test for our output. The unit test is a human being accepting the result of your work. It's not very deterministic.
Shomik (13:13)
Yeah, and so that's a crazy thing, right? A skill is, first of all, understanding what AI is; two, understanding how it could help you in your work, right? But then three, also understanding where the shortcomings are. And so, for example, there were a lot of text-to-SQL tools that came out, right? And people started going and running and using them.
Katrin Ribant (13:15)
It is.
Yes.
Shomik (13:37)
Pretty much none of them have reached broad adoption because of the fact that, at the time, and models have improved since, so I think that's going to start to change, but at the time, the analysts were going and doing this and then they were realizing, like, God, the output's kind of shitty. We should probably just...
Katrin Ribant (13:57)
Yeah, we have an explicit rating
so you can go ahead and we can use normal language.
Shomik (14:04)
Okay, okay. But, you know... So people would have this weird thing happen where they'd look at this and they'd be like, wait, this tool is supposed to help me, right? It's supposed to help me do the job better. But instead, actually, it was like, you should just write the SQL yourself, because you might actually be able to get to the answer, because you understand the shape of the data, the schema, all that sort of stuff. Now, I think that's actually where, you know, it's kind of funny, because that's still going to be true
in this next era, right? Like, the AI analyst still needs to understand all of that. But now you need to start to weave in the tools that you need to understand on top of that, right? And I'm curious what you see around that.
Katrin Ribant (14:45)
Well, you said one thing that I think is, for me, the number one skill for the AI analyst: really know LLMs deeply. And that's one of the reasons why I do the Fluffy series, to help people grasp very complex AI and ML concepts
in a very, very simple and hopefully entertaining manner. And also, you know, it gets me to wrap my head around generative video, which is a lot of fun. But I mean, I actually mapped every feature of the Fluffies to an AI/ML feature. I did that with AI. And so that allows me to build very complex stories about what is actually
happening when you poison your context, when you have attention dilution, how a forward and a backward pass work, what a loss function is. So that understanding of what an LLM really is, I think, is extremely important. And also the difference between the different LLMs.
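As a toy illustration of one of those concepts: a loss function is just a number scoring how wrong the model's prediction was. A minimal sketch of cross-entropy loss, the one typically used for next-token prediction, over a made-up three-token vocabulary:

```python
import math

def cross_entropy(predicted_probs, true_token_index):
    """Negative log of the probability the model assigned to the
    correct next token. Lower is better; 0 means certain and right."""
    return -math.log(predicted_probs[true_token_index])

# Model A puts 70% probability on the right token; model B only 10%.
confident = cross_entropy([0.1, 0.7, 0.2], true_token_index=1)
unsure = cross_entropy([0.6, 0.1, 0.3], true_token_index=1)
assert confident < unsure  # confidence in the right answer lowers the loss
```

Training is, at bottom, nudging the model's weights so this number goes down across billions of examples.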
Katrin Ribant (15:50)
I think increasingly we are going to start to see divergence between all the different LLMs. They're not all going to be equally good at everything. They already aren't. And knowing that is really a fundamental aspect. But more importantly, what you said about the SQL code generators, I think this is probably the first time in history where we can say, I wrote it, but I didn't read it. And really, like,
Yes, maybe you don't have to write as much code, but you have to read a lot of code. And reading code is a skill because you have to understand from something that you haven't written, what is the logic? What does it actually do? And is that the logic that you want to apply? You're responsible for that. Like that's what you do as an analyst, right? And so, you know, whether the query actually works or doesn't, that I think is a problem that was
more difficult a few months ago; today, the queries will generally work. Do they do what you want them to do? That's a different story, right? And being able to do that reliably and fast, read through, understand, does it do it? Yes, execute; no, is the result actually the result of the logic that I wanted to apply? That's a skill. That is real. And that is a very different skill from writing code,
because it happens at a very different rhythm.
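One way to make that read-and-verify loop concrete: run the generated query against a small fixture where you already know the right answer, and check both that it runs and that it implements the logic you intended. A sketch using SQLite; the `orders` table and the "generated" query are invented for illustration.

```python
import sqlite3

# A tiny fixture whose correct answer you can compute by hand.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 100.0), (2, "EU", 50.0), (3, "US", 200.0)],
)

# Pretend this came back from a text-to-SQL model.
generated_sql = (
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
)

# 1. Does it even parse and plan? EXPLAIN checks without executing.
conn.execute("EXPLAIN " + generated_sql)

# 2. Does it implement the logic I wanted? Compare against the
#    expectation computed independently on the fixture data.
result = conn.execute(generated_sql).fetchall()
assert result == [("EU", 150.0), ("US", 200.0)]
```

The fixture doesn't prove the query is right on production data, but it catches the common failure mode: SQL that runs fine and answers a different question.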
Shomik (17:16)
Yeah.
Yeah, and I think what you're saying is almost the most important thing: in order to be an AI whatever, analyst, engineer, or even just a person using it... Like, my mom uses AI, right? And it's kind of crazy that she does, right? Because she's quite a bit older and she doesn't need to go and learn new things, but she's embraced it.
Katrin Ribant (17:40)
I was going to say, your mom
actually really uses AI? That's cool.
Shomik (17:43)
Yeah, yeah, she actually
loves it for cooking, right? Because she can just put in the things that she has. So what I think is really interesting is embracing this learning mindset, right? That's the biggest thing that we all need to do, right? I mean, because it's very easy for all of us to get into our jobs and just say, okay, yeah, we can do these tools and stuff like that. The AI engineers who are really, really successful, it's not...
Katrin Ribant (17:48)
amazing, yes.
Shomik (18:11)
It's not that they... Like, even the young ones who are just starting off purely coding with Cursor, they still understand the architecture, right? They still understand the components that are needed in the application. And same thing with the analyst, right? If I'm trying to be an AI analyst and I don't understand the shape of the data, that to me is like, good luck. You can have the best tools in the world. You will not be able to even understand the output from those tools. You will not be able to
write the right prompts; you will not be able to understand what you should even do. And so there's still fundamental learning that you need to do. But then, once you get past that stage, if you don't adopt the tools, if you don't think about how the LLMs could work and stuff, that's where you're going to get left behind, because the people who are doing it are going to get it done quicker. They're going to get promoted. They're going to be exploring more topics. They're going to be taking on more scope, right? All that's going to happen. And then, you know, you're going to be left behind. And so I think that's the weird thing: like,
you still have to do the work. If anything, you might actually have to do more learning, I would say, than previously. Yeah, yeah.
Katrin Ribant (19:15)
Considerably more learning.
You know, this is really what we want to talk about. So we have this aspect of the skills, for example, learning about LLMs, learning how to read code rather than write code, analyzing the logic of code fast. Those are all skills. They're generic. And then you have the aspect of sort of latching onto certain tools, learning how to use certain specific tools, because that's what gets you hired, that's what is sought after during the hype cycle, et cetera. That second aspect is probably the most fragile to the hype, and the lull, and then the continuation, right? So that's probably where you need to be a little more choosy about what you're going to learn, because you don't have infinite time for learning. And if you think about those
two different sort of terms: during a hype cycle, when it comes to managing change and choosing exactly how to upskill, what to upskill in, and how to position oneself on the job market, what patterns do you see that people miss when they're caught in the moment? Like, what do you see as the big misses of people who basically just missed a wave?
Shomik (20:33)
Oh.
Yeah. I mean, so the classic one is: 15, 20 years of experience, I know how these systems work like the back of my hand, and doesn't actually embrace the learning mindset.
And, you know, unfortunately, those people, they have such rich knowledge, right? And they could be superheroes. And they probably are actually, right now, currently superheroes, right? In terms of just the extent of stuff that they can do. But when it comes to hiring, especially, I would say, on the startup side, but even in large enterprises as well, if that person is just kind of saying, like, you know,
I know all this stuff and this is all hokey pokey and I'm not going to use it. That person's not going to get hired. I mean, I can tell you that from the boards that I'm on, the conversations that we have... I think you have even said, you've even told everyone on the team that you need to be using AI tools. You need to be learning about stuff. You need to just be building with it. Even if they don't use it in the actual...
Katrin Ribant (21:50)
We've
been AI native from the beginning, right? We really, radically made that choice from the very beginning. And we're a company where if you don't code, you get a side project. If you don't have a side project, you work on one of my nonprofits. They always need some help. But you are going to wrap your head around code generation, vibe coding,
every possible flavor of using AI to generate code, to do analysis, and to understand how to prompt, because there is just no way out of that.
Shomik (22:29)
Yeah, and even the learnings that we got, right? Just from when the company first started until now: as we started to explore the models more, we started to be like, wow, you know, the hires that we thought we would do, we actually want them to look a little bit different. Because originally we were like, oh, we know we're going to hire senior this, senior that, right? And then now we're like, hmm, actually...
Katrin Ribant (22:37)
Yes.
Shomik (22:52)
We kind of need them to be embracing this mindset instead and that's actually more important than if they're senior, junior, middle, whatever. If they're embracing that, then that's what we need. And if they're not, then we don't need them.
Katrin Ribant (23:04)
Yeah,
because you cannot give somebody curiosity or make somebody want to learn. That's just not possible. Especially in a startup, you just can't have the overhead of pushing people. That's not possible. You need people to have drive. I think that's probably true everywhere anyway, right? But in a startup, it's particularly true.
And so I see what you mean about the attitude, you know, to bring to an interview, and more specifically about the tools that you're using. So if I put myself in the shoes of an analyst today who is, you know, maybe looking for a job, or looking for a change of
direction, career, specialty, et cetera, or just wants to change jobs for whatever reason: AI is going to be part of what they're going to have to talk about during their interview. There is no question, right? I can't see a company that would not make that part of the interview process. So currently, it's all about generic LLMs.
It's all about knowing generic LLMs, and maybe some MCP, and maybe understanding the difference between ChatGPT and Claude when it comes to manipulating Excel or generating code, which one is better at which. That's probably about the extent of it. But quite soon it's going to be about which specific AI component of a database can you use well? Which specific
generic AI system can you use well? And there I can see why one would hesitate to learn too many of them, because there are quite a few and you never know which ones are going to survive. My theory is that if you take the DevOps hype or the data science hype, let's say you became a specialist of Vertica, you know, analysis on Vertica.
That was a little later than 2010, but, you know, whatever. Let's say that. Okay, well, maybe Vertica ended up being a little less successful at some point and not so much of a differentiator as a skill anymore. However, your ability to work on that columnar database with R or with SQL or with Python, that's something generic that you can take to another database very quickly. I suppose in the DevOps
era, in the DevOps hype, if you were a Kubernetes specialist, that's also translatable to another tool of the same type, right? Can you talk about that a little bit?
Shomik (25:53)
Yeah. If you're using Snowflake, right, and you're the best Snowflake user in the world, that's amazing, right? And there are a lot of people that use Snowflake. But you may go to a company where they use Redshift, or they use BigQuery, or they use Azure's, whatever they call their thing. I don't actually know the Azure name for it.
What's interesting, kind of what you're talking about, is: what are the key concepts of storage and compute, how do you actually run queries, what's the cost associated with running those queries, and how do you do it most efficiently in terms of schemas and things like that, right? That's all stuff that's transferable between different tools. You may have to relearn a small aspect here or there, but in general, it's transferable. But then there's stuff that you will have to relearn completely,
which is, like, how do you actually run the query in that specific tool, right? What is the cost associated with running it? How do you make connections? How do you output it to a dashboard? All these sorts of things, right? Those are all going to be very specific to that tool. And that's where, you know, you want to learn that, but you don't want that to be your whole worldview.
Right? Because if that's it, if that's all you know, and you don't understand these other components, then it's not transferable. Then you are truly only the Salesforce administrator that can only do Salesforce. You can't do HubSpot, right? Like, that's the challenge.
Katrin Ribant (27:14)
Yes.
And I mean, you know, pre-MPP, you had the Oracle admin. I grew up in the age of the Oracle admin. You were not hiring a database specialist. You were hiring an Oracle admin.
Shomik (27:19)
Yeah.
Yep. Remember that? I mean, speaking of hype cycles, DBAs are not even really a term that's used anymore. Right? Isn't that crazy? That was the hottest and most sought-after job, and extremely, extremely important, right? You got paid a ton. It's gone, because technology moves so quickly. Now, the really good DBAs, they are still around, and they're still doing exceptional jobs.
Katrin Ribant (27:37)
Exactly.
Shomik (27:58)
It's just that their title has changed, because they've changed with the tooling. But there are a lot of others who are still stuck, and the only jobs that they can get are, like, you know, in an Oracle shop where, you know, probably they're not innovating much. And that's okay, too. But you just get pigeonholed into that.
Katrin Ribant (28:15)
Well, that still exists, and if you don't want to, or somehow can't, work in a competitive environment, that's perfectly fine, right? But that is not the case for most people today. I really do feel like this AI disruption is going faster and more across the board than really anything that happened before. And ignoring it just feels like not a good idea.
Shomik (28:42)
Well, I think, at least this is my view and I'm curious to hear what you think about it, but for me, when I think about what modern AI, what this generation of AI is: we've had platform shifts. We had, like, hey, we're going to mobile, and then you were going to the cloud, and so on and so forth. But now, I almost don't think of this as a platform shift so much as I think of it as an enablement shift.
And why I make that distinction is: this is revolutionary, but also, because it's so revolutionary, the pace at which it's going to be adopted, since it's an enablement shift versus a platform shift, means that it's going to be adopted and permeate through the economy much, much faster. Because enablement means me and you and everyone else who's listening can get much better at our jobs today using the tool. What happened with cloud, for example, was you had to relearn systems. You had to port them over. You had to
do all this work before you could get to that value. Here, in some cases, just by using ChatGPT, you can actually improve your own work. Like, when I research something, my research is augmented immediately by ChatGPT, right? And this is the reason why that learning mindset and just moving quickly is so important, because that's why you're going to get left behind: people are just being enabled much quicker, and they're picking it up, they're using it, and then they're going on, learning the next thing and doing the next thing.
And that's why the cycle is so fast. But I'm curious, like, I don't know, do you agree with that?
Katrin Ribant (30:13)
I think it goes actually further than that. I think that it's an enablement across every layer of, basically, our existence. It's a sociological enabler, it's a cultural enabler. It really permeates every layer of our lives, whether people work or don't work. And it's one of these things that
Shomik (30:36)
That's it.
Katrin Ribant (30:41)
is creating such a profound change that if you do not embrace it even just in your daily life, not even talking about work, right, even just in your daily life, the world is going to feel very foreign to you very fast.
In my opinion, it goes way beyond simply work. It really is existential.
Shomik (30:57)
I think that's...
Yeah, I think that's right. But in my mind it all goes to this concept of: it is just happening at a pace faster than we all think, because of the fact that we can all use it for various tasks. Like you said, even though my mom is not working, she's using it for personal tasks, she's using it for figuring out gifts. She's just using it. But
if you don't do it, well, everybody else is doing it, and it will just change things. Right? And so that's where, you know, I don't know if that's the right term, but I truly do think about this: when people think platform shifts, there's still this phase where you have to go through that adoption, right? When mobile came out, it was still a while before we got Uber and Airbnb and stuff like that. Right? People were trying to figure out, well, do we design for iOS? Do we design for Android? How does that even work? All this sort of stuff, right?
Well, first, before that, PalmPilot, you know? And BlackBerry, right? Just so many things. This is not that, right? This is: today, in Africa, in India, in the US, people who are working, students, everybody is using it, and it's enabling them to do something, to learn faster, to do a task faster, to, you know, just... it's doing something. And that's why this urgency is there.
Katrin Ribant (32:00)
Yes.
Yes.
It is often compared to electricity, right? It's like, you know, a multiplier of brain power, the way electricity was a multiplier of muscle power. But it is as if electricity had been discovered and mastered when the grid was already there.
Because we already have the platform for this. The enablement is possible at this pace because the diffusion platform is already there. So hence it can go at that pace. And I don't think that ever really happened before
Shomik (32:55)
So.
Katrin Ribant (33:08)
for any revolution really.
Shomik (33:09)
I think that's right. I don't think even the Industrial Revolution, when we're talking about manufacturing cars and things like that, moved like this. It still took time, because again, you're changing fundamental systems. Whereas here, even if you don't change the system, you can still go and use it. But then what's going to happen is...
Katrin Ribant (33:30)
There was never something where the diffusion mechanism was universally already there, adopted, with a, I wouldn't say nil because that's not true, but very, very low barrier to usage. That just has never existed before.
Shomik (33:46)
Yeah. And I think that's where these terms, right, AI analyst, AI engineer, can almost feel a little bit scary. Because it's like, well, am I an AI engineer? Am I an AI analyst? I'm just an analyst, that's what I am. And it's not meant to be scary in that way. It's meant to truly be like, the world is our burrito. We get this; this is the role.
Katrin Ribant (34:12)
It's not meant to be scary at all. It's meant to be the opposite. It's meant to basically say: it's not magic, it's not going to replace you. The bots won't win; AI analysts will. It's a tool, and you have to learn it. And you have finite amounts of time, you know, so we'll try and help you figure out what to learn.
Shomik (34:35)
Yeah, well, "have to" is one part of it, but it's also that you almost want to learn it, because the coolest part...
Katrin Ribant (34:42)
Yes, I mean, you know, not everybody wants to learn, I suppose. I do.
Shomik (34:46)
No, but I mean it in this way. The reason why I think most people want to learn it is because even if they know how to do their analytics workflow exactly right every single time, if you could speed up a part of that process, wouldn't you want to do that? I don't know a single person that loves going through the same manual toil every single time, right? Even just one step of that process. I'm just saying, like,
Katrin Ribant (35:08)
I would.
Shomik (35:16)
you know, generate me the SQL query, right? That seems pretty cool. I would want that. I'd just be like, okay, cool, that's done, and then I'll just go through and continue. Now, in terms of, frankly, keeping pace with the world, I think you're going to have to do more than that. But I just mean everyone should want to adopt at least something in this chain, because it's making your lives easier.
Katrin Ribant (35:22)
Yeah, definitely take that.
Shomik (35:42)
And that just means we're going to have more time to do other stuff. As a data analyst, the cool part is actually doing the analysis, right? That's the fun part.
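To make the "generate me the SQL query" idea concrete, here is a minimal sketch of how an analyst might hand a model a table schema and a question. Everything here is invented for illustration: the table name, columns, and question are hypothetical, and the call to an actual model is deliberately left out, since which model and client you use is a matter of your own stack.

```python
# Sketch: turning an analyst's question plus a table schema into an LLM prompt.
# The table, columns, and question below are hypothetical examples.

def build_sql_prompt(table: str, columns: dict[str, str], question: str) -> str:
    """Assemble a prompt asking a model to write a single SQL query."""
    schema_lines = "\n".join(f"  {name} {dtype}" for name, dtype in columns.items())
    return (
        "You are a data analyst's assistant. Write a single SQL query.\n"
        f"Table `{table}` has columns:\n{schema_lines}\n"
        f"Question: {question}\n"
        "Return only the SQL, no explanation."
    )

prompt = build_sql_prompt(
    table="campaign_spend",
    columns={"campaign_id": "TEXT", "channel": "TEXT",
             "spend_usd": "NUMERIC", "day": "DATE"},
    question="Total spend by channel for the last 7 days.",
)
print(prompt)
# In practice you would send `prompt` to whatever model your stack uses,
# then review the returned SQL yourself before running it.
```

The point of the review step in the last comment echoes the conversation: the model speeds up one step, but the analyst still owns the result.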
Katrin Ribant (35:51)
Yes. And so, you've invested in, we talked about what people may miss, you've invested in companies that went through major role transformations during technology shifts. Watching these transformations, talking to these founders, being on those boards, hearing them talk about their clients and how their clients' roles shifted, how their companies' roles shifted: from your investor point of view, what separated the people who successfully evolved with the technology from those who didn't?
Shomik (36:36)
It's a hard question because there are so many different aspects to it. I will say this: there's this modern data stack, right? And talk about a hype cycle, but that one is real.
Katrin Ribant (36:46)
Yes.
It's starting to be a little
comical to say modern data stack at this point.
Shomik (36:54)
I know, I know. But you just saw Fivetran and dbt merge, right? There are a lot of other players that have merged, Alteryx is putting itself up for sale. There's a lot of stuff happening as a result of this. And I think what happened is...
Katrin Ribant (36:59)
Mm-hmm.
Shomik (37:14)
It was the birth, again, of kind of a new platform, so to speak: Snowflake and Databricks and Redshift and all these things that brought this about, right? And everyone got very excited. This is the new role, this is the new way to do it. And yes, it was, for some of those large platforms. For some of the ecosystem around them, what happened was the budget went really, really high, really, really fast,
Katrin Ribant (37:20)
Mm-hmm.
Shomik (37:41)
because people were trying to get used to using these tools, and anything that made that easier got budget. But then what did those platforms do? They turned around, looked at the ecosystem, and said, okay, there's a bunch of data quality tools, what if we just built some data quality internally as well? And then they started building data quality internally. And ETL pipelines, what if we did some of that internally? And so on and so forth. And all of a sudden, what happened was, when you got past that initial cycle and you...
Katrin Ribant (37:44)
Mm-hmm.
Shomik (38:10)
Now you're in the trough, right? The budgets froze up very quickly for anything that wasn't, let's just use Snowflake as an example, that wasn't Snowflake. And so maybe a Fivetran had built so much density within the Snowflake ecosystem, or a dbt had built so much density, that they were still getting a decent attach rate to the Snowflake spend. But a lot of others were left behind, right? And what's interesting is there are certain companies
Katrin Ribant (38:20)
Yes.
Shomik (38:38)
where, for example, I have one company in mind and I'll actually say the name: Barry at Hex has done an exceptional job of building a notebook in that modern data stack era, riding the tailwinds of that, then seeing AI and asking, hmm, how would AI change the concept of the notebook, adapting it, moving with it. So now, all of a sudden, he's riding the next tailwind in AI, even though the budget from that era, the buyers and the tool requests and all that, has dried up. I give that as an example. What I've seen the best people do is, they don't glom on to just the here and now, right? Like, people want to use Snowflake better, so let's go and do that. They are thinking much more holistically about,
how do we make the role better for this person, not for a specific platform? Right? And when you do that, when you really focus on the end user and their role, that leads to you understanding, when AI happens, wow, this could enable that user a lot better. Versus saying, well, can I even make that change? Because that might break my Snowflake connector, and that's most of my business. You see, it's a very different way.
Katrin Ribant (40:08)
So, to help our audience see the forest for the trees a little bit in this whole crop of new tools that is here, that is coming, and that is obviously going to keep coming: you've written about the need to escape the Silicon Valley filter bubble, that everybody is building the same thing and chasing the same trends. Do you want to talk about that a little?
Shomik (40:34)
Yeah, well, first off, I would say it was really funny, because when you started building Ask-Y, you were an example of not doing what was normal in Silicon Valley, right? You were going into a space where people were just like, oh, there are already platforms there, people are going to use those, all that sort of stuff. And you were just like, oh yeah, no, this is where I want to focus. Meanwhile, the rest of Silicon Valley was seeing what was happening with CodeGen,
and they were building more CodeGen. And they were seeing what was happening with Clay and GTM enablement, and they were building more Clay competitors. And I almost sometimes think of it a little bit like,
you see what's successful and you immediately think, okay, well, I can take some share of that, and that's going to make me successful, right? Instead, what made those companies successful was innovating ahead of where others were, or fast following. Windsurf is a great example. I think you and I talked about it a lot, right? Cursor had come out, and then Windsurf came out, and we were both like, hmm, this is actually a pretty cool product, because they were a fast follower that was thinking it through. But the long tail of others who raised
hundreds of millions of dollars have all faded away into nothing. It's kind of crazy. There's a bunch of them, right? And they were all extremely high-caliber folks who hired amazing talent. It was just market dynamics that caused this shift. And this is what I mean by Silicon Valley just doing the same thing. You know, the reason I even wrote the piece was actually kind of funny. At the time,
Katrin Ribant (42:02)
Yes.
Shomik (42:27)
people were saying Google was going to lose in AI. But if you thought it through from first principles: they have TPUs, right? The tensor processing units, which are essentially their own native GPUs that they can use. They have all the AI talent in the world. But because they botched one model release, the initial, I forget, what was it even called? Gemini? I don't remember. But the first model they released was not good, and everyone was like, oh, Perplexity is going to win, and this and that.
And that was creating this whole Silicon Valley ethos of, yeah, it's the innovator's dilemma, they're going to lose, and all this sort of stuff. Because everyone has read the same books and they think the same way. You need to be able to step out. Like Elon Musk, regardless of some of his other stuff, as an entrepreneur, he thinks from first principles. And I think that's where
Katrin Ribant (43:07)
Yes, my god, yes, that's so true.
Shomik (43:22)
you came in, and you were thinking from first principles. If someone had said, I'm going to go and attack the marketing data analytics space, at that point in time I would have been like, I think you're crazy. Right? But it wasn't actually crazy, because of the stuff you were actually doing, the learner's mindset. You were showing me stuff you were doing in ChatGPT that I hadn't seen from anybody else, including the CodeGen people. The people doing CodeGen, who should be on the high end, were not doing the same level of work in ChatGPT that you were doing. And this was back when ChatGPT was still nascent. Yeah.
Katrin Ribant (43:59)
Very nice. Well, yeah, I kind of got a little lucky. I invested in an AI company in 2021, so it was, you know, not very hypey yet; this was way pre-ChatGPT. And as a result, it was also the pandemic, and there was not much to do, so I ended up just watching all these lectures on YouTube, MIT and Stanford lectures about AI and ML at the time. Some of the math was a little over my head, but most of it, you know, is not that complex. You can, in principle, get it, and understand what this is about and how it works. So when we got ChatGPT, my first thinking was patterns.
And I mean, I struggled with patterns my entire analyst career. So I was like, wouldn't it be amazing if it could do this parsing of the things we're doing in marketing? Because in marketing, when you create your data layer, you have to use the systems that exist. They only have the fields that exist, so everybody has to hijack those fields, and everybody hijacks those fields in different ways,
which means you structurally end up with a mess, always. Even if you organize it, you'll end up with a mess. And putting that together is basically most of the use case of digital analytics: ETL, transformation, data preparation, all of that. And when I saw what it could do, I was like, wow, this is
another universe from everything I'd done before in terms of ETL for non-technical users, helping with data prep, helping do all of this more easily. It's one of the reasons I decided to do this again: I just don't want to miss this era in technology development. It's too interesting.
Shomik (46:10)
But you know what's crazy? It was still revelatory when we saw what you were doing. And I forget what you had called it, there was a name you had for it, but it was like a data bot you were running, right? You were like, look at the analysis I can do with this. And I remember you showing me that; my mind was blown. I was like, I didn't know it could do that sort of analysis. And you were figuring out the prompts and all that stuff, right? And what's funny is, it's laughable now, thinking about what you were doing back then. Because now...
Katrin Ribant (46:33)
Mm-hmm.
yeah, I mean yeah, absolutely.
Shomik (46:40)
Now, the stuff we thought was so amazing, it spits out like it's nothing, right? But I think what's really interesting is, because you did that learning, it actually helped you when the models started to improve, because you could push, you could start to understand the capabilities more. You could be like, wow, okay, what is the step function that I can now do, that I can now enable?
And that allowed Ask-Y to start moving really rapidly forward, because you were already pushing it: okay, I already ran these queries before, now I rerun them. Wow, it's doing this much faster, it's giving me much richer context. Now what else can I push the envelope on?
Katrin Ribant (47:24)
Yeah, I think you need to build as a function of where you think the technology is going to be a few years from now. A very hard thing to do in the case of AI, but to a certain degree it's somewhat possible. And for that, you do need to have an understanding of how it actually works, right? But for an analyst sitting here in 2025, they're seeing this explosion of AI analyst job postings, where AI either gets tacked onto a job description, or it's in the title, or the job description is an amalgam of every possible skill on the planet that has anything to do with AI, et cetera, et cetera. And that's just a lack of maturity; that will pass. But the question really is what to learn. And what to learn: they're being told to learn to prompt engineer, understand the LLMs, get comfortable with AI, all of these sorts of things. In your opinion, which of these skills will stick?
I know, hard question.
Shomik (48:34)
That is
Yeah.
Katrin Ribant (48:36)
Looking to the future, tell us. Well, you do see very forward-thinking technologies every day, obviously, and a lot of people, and you do see patterns, right? So
Shomik (48:52)
Yeah.
Katrin Ribant (48:52)
it's not a question of, you know, I mean, we're not going to hold it against you in two years if you're wrong on this podcast, but it would be really cool to get your gut feeling on what's actually likely to stick.
Shomik (49:07)
Yeah. So first off, I would say, you mentioned prompt engineering. Do you remember, even last year, that was the hot term? Everyone was hiring prompt engineers. And now nobody's hiring prompt engineers. It's not even a role anymore.
And I think what's really interesting is, we looked at a bunch of prompt engineering tools at the time too. And that's where the patterns started to come into play: hmm, if we think a little further ahead, first off, are we going to be the ones doing the prompting? I don't know, right? But now we can probably say we won't be the only ones doing the prompting. In fact, an agent might prompt another agent to go do something, based on maybe some high-level instructions or specifications we've given it, or we pass it some data and give it context around what we're trying to achieve, and it starts to go and do that. And so I think what's really going to stick, especially for the AI analyst,
Katrin Ribant (49:57)
Mm-hmm.
Shomik (50:20)
is this fundamental understanding of what I think we would call the shape of the data. Understanding what is in a revenue file, by category. Why do SKU prices need to be in there? Why does geography need to be in there? That is still fundamentally going to be helpful, because when you're starting to flesh out the work for the agents to go and do, you'll be able to draw up better specifications for them. And the outcome you're trying to engineer is going to be much better because you understand that. And if you don't, that's going to be a problem. It's funny, because Jeff Bezos has that question, what things will not change, right? And this is one thing that will not change: if you don't still deeply understand your actual
Katrin Ribant (51:12)
Yes.
Shomik (51:20)
data, it will be very, very challenging, even with the best agents in the world, to achieve the outcomes that you want. And I think tooling will help with that, right, because it's going to help you do that analysis and understand better. But that's still something you're going to have to prompt to learn, and then go in and execute.
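One way to read "the shape of the data" is as a checkable specification: before handing a file to an agent, state which columns must exist and what types they carry. A minimal sketch of that idea; the column names, types, and sample rows are all invented for illustration, not any real spec format.

```python
# Sketch: a "shape of the data" spec for a hypothetical revenue file.
# Column names and sample rows are invented for illustration.

REVENUE_FILE_SPEC = {
    "category": str,     # product category
    "sku_price": float,  # unit price per SKU
    "geography": str,    # sales region
    "revenue": float,    # revenue for the row
}

def check_shape(rows: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the file matches the spec."""
    problems = []
    for i, row in enumerate(rows):
        for col, typ in REVENUE_FILE_SPEC.items():
            if col not in row:
                problems.append(f"row {i}: missing column {col!r}")
            elif not isinstance(row[col], typ):
                problems.append(f"row {i}: {col!r} should be {typ.__name__}")
    return problems

rows = [
    {"category": "batteries", "sku_price": 4.99, "geography": "EMEA", "revenue": 1200.0},
    {"category": "batteries", "sku_price": 4.99, "revenue": 300.0},  # geography missing
]
print(check_shape(rows))  # → ["row 1: missing column 'geography'"]
```

An analyst who understands why geography and SKU price belong in the file can write this kind of spec; an agent can then be told to validate against it before doing any analysis, which is exactly the "better specifications" point above.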
Katrin Ribant (51:21)
Mm-hmm.
Yeah, I definitely agree with that. Would that be close to the idea maze framework that you use to evaluate founders?
Shomik (51:50)
Yeah, I would say that's very similar, right? The idea maze really starts from this concept, and Balaji Srinivasan, who I think was the CTO of Coinbase and started a couple of different companies, kind of coined it and did some lectures on it. But really, it's about understanding, if we're using a tree analogy, the roots of the tree. And then as you start to go through and test these different things,
where are the roots? How are the roots expanding? How are they getting water? How do they cross each other, things like that, right? You start to explore all those different nuances, and then you start to ask, okay, how does this come together to form a tree? How does the tree start to grow? All those components. That's part of this idea maze. And again, if you just start with, look, there are leaves on the tree, and you don't understand those underlying components, you don't understand how the tree works. So if it starts to blow over in the wind, you don't understand,
well, why is this happening? Right? You need to get that deep understanding from the roots. And that's the same thing with what we're talking about here: the skill that will not change is that you still need to understand the roots. But if you do understand the roots, then the tooling will let you figure out everything else exponentially faster. And, by the way, I also think,
even to understand the roots, AI will help with that too. But I'm just saying, that is where the course still needs to start: you still need to understand the data, you still need to understand its components. And then you can actually interpret that data as well. Because if the agent returns something back, and it's giving you revenue by geo, but you have no sense for what that should look like, maybe it gave you something incorrect, right? How would you know, if you don't understand the data? And so that's, I think, the key concept.
Katrin Ribant (53:47)
Yeah, I definitely agree that AI will allow you to do that exploration much better, because we've never really had the ability to have long conversations about deeply technical or philosophical or whatever subjects with, you know, somebody who has read all the books about it and never gets tired of answering your questions. I mean, you can run out of credits, that's another problem, but it really... My best experience of that was building the Fluffy's framework and going through cycles and cycles of, okay, so the eyes are attention and the ears are the context window. But does that work for every case, every possible mechanism? Does that hold up crossed against every possible way? If I had done that with my CTO, he would have quit. Guaranteed.
Shomik (54:20)
Okay. Okay.
Katrin Ribant (54:44)
So we talked about the hype cycle, we talked about past transformations. What are you seeing right now in the companies you're working with that are building tools, generally in this space with AI, not necessarily in analytics? What are they building for? What are they thinking the direction is?
Shomik (54:59)
Okay.
Yeah. So I would say the biggest change that has happened in general has been the rise of Lovable and Cursor and basically this whole vibe coding ethos, right? And when I say vibe coding, I don't just mean coding for engineers. I mean, literally, in Lovable you can just say, I want to create a calendar app to track what my family's doing, and it can go and do that. So this sort of...
And even with ChatGPT, right? We type in a question, we get an answer. That is so fundamentally different from where we've been before. You used to type a question into Google and get articles that were relevant for you to go and read, to explore the learning you were trying to get. Now you're getting the summarization of that directly in your hands. And that sort of shift
to this answer engine format, to this upfront value, is something that is changing for the builders in the ecosystem. Everyone now needs to build with that in mind. There's an example of a product where they're taking tons of complex data sources and delivering these outputs and really robust automations. But in the end, what was happening was, nobody was going and using the product, because what people actually wanted was, first off, to just be able to ask a question and have it give an answer, and then realize, wow, okay, I asked this thing, it gave a result, and that result proved to be correct. Okay, great, now let me see what I can do with it, right? So what I mean is, the initial work we used to do in products has gone away, and users no longer have tolerance for that initial piece of work.
Katrin Ribant (57:08)
Oh yes, I can see that. So something you wrote about incumbents comes to mind. You wrote a piece about why incumbents love AI, and your thesis was that because AI is an enablement shift, it often benefits the incumbents who already have the data, the workflows, the relationships, et cetera. If that's true for companies, would you think it's also true for people? Do analysts established at companies, with deep domain knowledge, have a defensible moat? I really do think they do, right, if they upskill. Versus situations I've seen recently where companies just eliminate an entire data team and replace it with a team built under a leader with some AI experience, who starts by bringing new people in. First of all, do you see that situation? And if yes, what do you think of it?
Shomik (58:05)
I think you see all sorts of different situations. You definitely do see the one where they bring in the so-called AI expert, which, by the way, whenever that is the thing that's happening, is usually not a good sign. Because AI is everywhere, right? The actual expert could be the IC doing the work, versus this manager who somehow has this AI expertise. Regardless, I think it's a pretty terrible anti-pattern when people just say, I need a VP or a CTO or a head of data, whatever it is, who is an AI expert. Well, I mean, okay, what does an AI expert mean? I actually don't know.
Katrin Ribant (58:51)
I don't know either.
Shomik (59:03)
Does an AI expert mean the lead researcher at Meta? Okay, well then, good luck getting that person. And would that person know how to run your specific organization, your specific thing? So really, AI expertise comes in the form factor of: it is just everywhere. Like we've talked about, it's this enabler that is pervasive throughout every single thing you're doing. So it is a mindset shift. It is not a so-called person or role shift; it is a mindset shift. You have to go and embrace these things. You have to learn them. You have to understand what they can do for you, where their shortcomings are today, and how those shortcomings may not be there six months from now, which is the pace things are going; honestly, they go away pretty quickly. I think that's the most common thing I'm seeing with the successful companies.
Katrin Ribant (59:48)
Yeah.
Shomik (1:00:01)
They're not trying to hire for the so-called AI expert. They're hiring much more for curiosity, right? The curiosity component, the learning component, things like that. And also making sure that you still have the fundamentals. And this is the biggest thing, I think, for people coming out of college or something like that: how do you still have those fundamentals? How are you still learning those fundamentals
Katrin Ribant (1:00:05)
Mm-hmm.
Yes.
Mm-hmm.
Shomik (1:00:28)
while you're using Cursor? Because it's quite easy not to absorb them as well.
Katrin Ribant (1:00:34)
I think that makes total sense. And the way you demonstrate that curiosity, that mindset, is by having projects of your own, knowledge of your own, that you can showcase in your interviews. You have to have something to show for yourself about how you use AI, what you do with AI. That's one of the things I want to develop on the podcast: get people to come in and talk about their day-to-day as practitioners, as AI analysts,
how they actually use AI, and just have people talk about the cool stuff they do. Because everybody's tinkering, everybody's discovering different things, and this era will never come back. So I just want to hear from everybody about that. But specifically, on the way AI is transforming the role of the analyst, and really, I think, most jobs: what I see is an evolution where,
increasingly, and I'll just talk about the analyst role, though I really do think it's applicable everywhere, we'll go towards situations where you'll have AI agents that are increasingly autonomous, doing larger and larger parts of the work by themselves, with checkpoints with the human, and you'll essentially be running a virtual team of specialists, orchestrating and architecting them. Does that seem pertinent to you?
Shomik (1:02:09)
I think that's a hundred percent where we're heading. And I think we're going to have new terminology around this, a new language to describe what this is. But it's much more focused on the outcomes, and then working backwards from that to enable the agents to go and achieve those outcomes. So for example, if the
outcome that you would like is, sell more batteries of this particular kind, well, then it's, okay, now what do I need to know? What campaigns did we run? What are the channels that we sold through? All that stuff. Again, you still need to understand the process, but you're going to be much more focused on, hey, this agent
Katrin Ribant (1:02:43)
Hmm?
Shomik (1:03:05)
is going to be focused specifically on figuring out the channels and segmenting those. And then maybe that passes off to another agent that is doing analysis in some way, or whatever it is. And then maybe that one is calling out over MCP to get Salesforce data into the mix, and so on and so forth. This is just going to happen. That's the cool part about it: you don't need to be trying to figure out,
Katrin Ribant (1:03:16)
You
Yeah.
Shomik (1:03:35)
well, how do I build that API integration to Salesforce? How do I query that? That will eventually not be something you need to worry about. Instead, you do need to worry about: what is the outcome that I want? And there are still going to be processes and ways to get that outcome done that get you the right answer, or not necessarily the right answer, but the more correct answer for what you're looking for. And what I mean by that is, if the process skips over integrating Stripe data, then you've missed a large component of what could be adding value to the outcome you're looking for. So there are all these little components that you will need to understand. And that's where the orchestration part is really interesting: again, that base understanding is necessary to get there. But like,
Katrin Ribant (1:04:06)
Yeah.
Shomik (1:04:34)
I don't know, we need to come up with a new term, but we are going to be focused on outcomes. That's it.
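The hand-off described here, one agent segmenting channels, passing to an analysis agent, which may pull CRM data along the way, can be sketched as a simple pipeline of steps with a human checkpoint. Everything below is a stub: the agent functions, the numbers, and the checkpoint are invented to show the shape of the orchestration, not any real agent framework or MCP client.

```python
# Sketch of outcome-driven agent orchestration. All agents are stubs; a real
# system would call models or MCP servers where these functions return canned data.

def channel_agent(goal: str) -> list[str]:
    # Stub: would ask a model to identify relevant sales channels for the goal.
    return ["retail", "online", "distributors"]

def crm_agent(channels: list[str]) -> dict[str, float]:
    # Stub: would fetch revenue per channel, e.g. via an MCP server for a CRM.
    return {"retail": 120_000.0, "online": 95_000.0, "distributors": 40_000.0}

def analysis_agent(revenue: dict[str, float]) -> str:
    # Stub: would run the actual analysis; here, just pick the top channel.
    top = max(revenue, key=revenue.get)
    return f"Focus on {top}: it drives {revenue[top]:,.0f} of revenue."

def human_checkpoint(finding: str) -> str:
    # The analyst reviews each finding before the next step runs.
    print(f"[checkpoint] {finding}")
    return finding

goal = "sell more batteries of this particular kind"
channels = channel_agent(goal)
revenue = crm_agent(channels)
summary = human_checkpoint(analysis_agent(revenue))
# prints: [checkpoint] Focus on retail: it drives 120,000 of revenue.
```

The orchestration itself is just sequencing and hand-offs; the analyst's judgment lives in the goal, the checkpoint, and knowing whether the returned numbers look right.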
Katrin Ribant (1:04:38)
Yes.
So, I would love to continue this conversation. I mean, every time we've talked, you've come up with a point of view that wasn't mine and was interesting. I've learned from every single one of our conversations. So, last question before we wrap up: if you were starting your career as a data analyst today, knowing everything you know about hype cycles, like a very wise college graduate, right, you know about the hype cycles, you have knowledge of the world and technologies and all of that, what would you focus on?
Shomik (1:05:21)
Well, so first off, I would say the proliferation of podcasts is the biggest boon to learning you could ever ask for. And, by the way, so are the tools you can now start to use around them. NotebookLM is a godsend from Google, right? Google's NotebookLM. If I'm ever trying to learn something new, anything, for example,
Katrin Ribant (1:05:33)
That's cool.
Shomik (1:05:50)
if I were to look at investing in Ask-Y, for example, I would currently go into NotebookLM. I'd upload the website. I'd look at blog posts. I'd put Katrin's LinkedIn there. I'd put in the team's LinkedIns. Maybe I'd put in a competitor's website. I'd just do everything I potentially could: put it into NotebookLM, write some prompts into that, choose the longest setting, and just be like, boom, spit something out to me.
That's where my research starts. It's completely different from before. That is where the research starts. And then I start to query all those sources. Because now I've got this podcast, that notebook I generated. I listen to it for, call it an hour, 30 minutes, whatever. Now I've got a decent base of understanding to start with. Now I go and start to query: okay, hey, this is what I'm thinking about.
Katrin Ribant (1:06:22)
That's amazing.
Shomik (1:06:46)
These are some questions that came up. Let me start to ask those, right? And going through that. And I would just say, as a college grad — or not even a college grad, even someone who is senior in their role — listening to the Dwarkesh Podcast, which I know you and I are both big fans of, right? When Andrej Karpathy came on that, I have listened to it four times, and that's, what, two hours long? That is eight hours of my life
that I've spent on that one podcast.
Katrin Ribant (1:07:17)
done that a few times
for a bunch of episodes as well, yeah.
Shomik (1:07:21)
Yeah, but each time it brings about a different question or a different thread or something like that. And then I go and explore that thread, and it comes back to a learning, right? Or it comes back to a research paper, or to something which is building my overall understanding. And that's what I would say: it's not even for new grads, it's for all of us. Do not have information anxiety or anything like that. Figure out how to distill it in the way that makes sense for you.
Katrin Ribant (1:07:30)
Mm-hmm.
Shomik (1:07:51)
For me, it's this NotebookLM thing, right? I have anxiety over, like, there's so much content being created — how do I know what's going on? And it's like, okay, cool. Whatever I think is relevant that comes through a Twitter feed, that Katrin sends me, that whoever sends me, I'm just like, boom, NotebookLM. Tell me what's happening, you know? And then now I can start.
Katrin Ribant (1:08:09)
Yes, I did not know
that. So when I send you stuff, it goes into NotebookLM. I had no idea. You see, I always learn something with you.
Shomik (1:08:18)
But you know, this sort of thing is just the most fun that we will have. It's also, I think, hard, because you're having to use new muscles. You're having to just figure things out. But the reward is just so cool, because you get to do less of the mundane stuff — the stuff that we don't want to do.
Katrin Ribant (1:08:40)
You know, I would say
you get to use new muscles. It's a privilege.
Shomik (1:08:46)
That's true. That's true. I would say that's true. And by the way, I would encourage anyone who's listening to go through the analytics-versus-engineering blog posts that you've written. For me, that was where I learned a ton about this difference. Like, wow, okay, actually, why isn't Cursor just being used for analytics, right?
Katrin Ribant (1:08:47)
It really is.
Shomik (1:09:14)
And then when you start to break it apart: like, wow, code is very deterministic, but analytics is not. And that to me was like, I don't know why that wasn't in my brain before, right? But then finally, when you started to break it apart, I was like, oh my God, this is really different. And now I get why.
Katrin Ribant (1:09:20)
Yeah. Yeah.
We never
had a really good reason to think about that before.
We really didn't. Yes, so thank you so much, Shomik. This was really incredibly valuable. So, Shomik, plug time. Where can people find you — your podcast, Substack, your writings? Do you want people to contact you, or do you not want people to contact you?
Shomik (1:09:40)
That's.
So, on Twitter, I'm at shomikghosh21. Anyone can follow me on Twitter. I'm also on LinkedIn, Shomik Ghosh. If you want, you can reach out to Katrin to get in touch with me. But also, I would say, Software Snack Bites — on YouTube, on Substack, wherever you want to go. I think right now, I would just say, again, I'm trying to be a sponge and learn everything. So anybody who has interesting...
ideas, interesting articles, things that you want to talk about, whatever — I'm open to all of it, because this is some of the most fun that I've had in my career at this point. And so, yeah. And of course, shameless plug, but I think the Ask-Y product is exceptional. I think it really helps with a lot of this stuff, because you've thought through even how to expose the data, for example. The data is something that you can still see.
So when we talk about the shape-of-the-data concept, it's not just that you're trusting the AI to do it; you can still go and see it yourself, visualize it, and understand it even just from the interface, let alone then having the templates, the workflows, the agents doing everything that you would like to achieve. So yeah, I think bright future ahead. I think it's coming: the AI Engineer Summit is a big thing, and the AI Analyst Summit is going to be, too.
It sometimes just takes a little bit of time, but it's coming and it's going to be big. Yeah. I love it. Yes, that's right. That's right.
Katrin Ribant (1:11:21)
It's gonna be great. It's gonna be great. And it's gonna be called Knowledge Distillation.
So thank you so much, Shomik. And for our audience at Ask-Y: we're building Prism around exactly this insight — that the fundamentals survive every hype cycle: context engineering, critical thinking, understanding what good analysis actually looks like. So you can check out what we're doing and try the Prism platform at asquire.ai. And that's it for our third episode of Knowledge Distillation. If you're building your own AI analyst workflows, using AI in any way, check out asquire.ai.
We're practicing what Shomik and I just preached. And please remember: bots execute workflows, AI analysts navigate ambiguity.
Shomik (1:12:12)
I love it. That's great.