When we started Ask-Y to build our analytics platform, we made a conscious decision that would impact us more deeply than we could have imagined: everyone at our company would work with AI tools daily—not just our engineers.
We instituted this approach from the get-go, and I wanted to share some observations on how it’s shaping our company culture and product development in unexpected ways. Spoiler alert: the robot revolution hasn’t happened yet, but our Product Manager’s dog has been caught chewing through ethernet cables in what we can only assume is a preemptive strike against our digital overlords.
The “Coding Pet Project” Rule
Perhaps our most unusual policy is that everyone—regardless of role—must maintain a side project that involves using AI coding assistants. We also encourage experimenting with any type of AI tool to help with design, analytics, and ideation, and we build tools internally to streamline every process we can make more efficient (more on this when we talk about product development).
I’m not a developer by training (my background is in data analytics), but I’m currently building a Brazilian Jiu-Jitsu training app that helps classify techniques.
Our product manager, who’s actually quite proficient with code, is developing a website for an environmental nonprofit.
Everyone else uses AI tools for code production. I’ll write a more detailed post about our learnings, on both the technical and the workflow side, at a later date.
Tools of Choice
We’re primarily a WindSurf/Lovable shop, but we maintain a “use what works for you” philosophy. If a team member wants to try another tool and stick with it, we encourage the experimentation: the point isn’t standardization; it’s ensuring everyone has firsthand experience with these tools and understands their capabilities and limitations.
This process also helps new employees get a practical sense of the knowledge and skill gaps they need to close as they onboard—a great benefit for everyone, but particularly important for an early-stage startup.
We document the team’s good, bad, and ugly experiences with each tool and build a knowledge base of the use cases where they are effective and the issues encountered—such as a tool getting stuck in a loop, and how to break out of it.
We invest in buying licenses for any tool we want to experiment with, as long as the person using the tool takes the time to compare and test it in a structured way and document their learnings.
One of our core tenets as a company is to learn how others build and use AI tools. We spend time and resources actively investigating what works, how it works, and when it works—as well as what, how, and when it doesn’t—and we build and test hypotheses as to why.
The Unexpected Benefits
Writing Our Knowledge Base for AI
We’ve also fundamentally rethought how we document Ask-Y’s knowledge and processes. Our goal is for AI to serve as our company memory and knowledge repository, capable of answering questions ranging from the mundane “How do I submit expense reports?” to technical questions such as “Explain how our parsing engine handles nested JSON structures” to historical knowledge about our platform like “Who refactored this module and why?”.
This led us to design our Notion workspace with AI readability in mind, establish clear documentation standards that make information AI-accessible, and implement a data architecture that lets us both query our history and use our knowledge base as context for new work.
The unexpected benefit? Our documentation has become significantly more structured and useful for humans too—turns out what makes content digestible for AI often makes it clearer for everyone else.
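To make the idea concrete, here is a minimal sketch of what “AI-readable documentation” can look like in practice: pages carry structured metadata (tags, titles) so relevant context can be selected and fed to a model before it answers a question. The page names, fields, and helper functions below are hypothetical illustrations, not our actual schema.

```python
# Sketch: documentation pages carry structured metadata so an AI assistant
# can retrieve the right context before answering a question.
# All page names and fields here are hypothetical examples.

from dataclasses import dataclass, field


@dataclass
class DocPage:
    title: str
    tags: set = field(default_factory=set)
    body: str = ""


# A tiny stand-in for a knowledge base such as a Notion workspace.
KNOWLEDGE_BASE = [
    DocPage("Submitting expense reports", {"finance", "process"},
            "File expenses through the finance portal by the 25th."),
    DocPage("Parsing engine: nested JSON", {"engineering", "parser"},
            "The parsing engine flattens nested JSON structures depth-first."),
]


def retrieve_context(question_tags: set) -> str:
    """Select pages whose tags overlap the question's tags, then join
    their titles and bodies into a context block for the LLM prompt."""
    hits = [p for p in KNOWLEDGE_BASE if p.tags & question_tags]
    return "\n\n".join(f"## {p.title}\n{p.body}" for p in hits)


def build_prompt(question: str, question_tags: set) -> str:
    """Assemble a grounded prompt: retrieved context first, question last."""
    context = retrieve_context(question_tags)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


if __name__ == "__main__":
    print(build_prompt("How do I submit expense reports?", {"finance"}))
```

The useful property is that the same tagging that lets a model find the right page also gives humans a navigable structure to browse.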
LLMing: Working with LLMs
As LLMs develop rapidly, we’ve discovered they’re not yet universal problem-solvers—each has distinct strengths and limitations. When we hit a wall with an AI tool, we don’t just shrug and move on. Instead, we launch what we call a “LLMing” investigation: a deep dive to understand exactly why something isn’t working, followed by developing a tailored approach or “recipe” that leverages the LLM’s capabilities while working around its constraints. Our recipe book now includes everything from simple automation scripts (like smart document management systems that properly name and file documents) to sophisticated workflows for complex tasks (such as generating comprehensive article summaries from multiple sources without hitting context limits).
These recipes aren’t just productivity hacks—they’re building our deep knowledge of how to design and develop our LLM-driven products.
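As one illustration of what such a recipe can look like, the multi-source summarization workflow mentioned above is often implemented as a map-reduce pattern: summarize each chunk independently, then summarize the summaries. The sketch below assumes a generic `llm()` call, stubbed out here; in practice it would wrap whichever model API you use, and the chunk sizes would be tuned to the model’s context window.

```python
# Sketch of a "recipe" for summarizing multiple sources without exceeding
# a model's context window: summarize each chunk separately (map), then
# merge the partial summaries (reduce), recursing if the merge is still
# too long. The llm() function is a stub for illustration only.


def llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    # For this sketch, just return a truncated "summary" of the input.
    return prompt[:80]


def chunk(text: str, max_chars: int) -> list:
    """Split text into pieces small enough to fit the context window."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def summarize_sources(sources: list, max_chars: int = 2000) -> str:
    # Map step: summarize every chunk of every source independently.
    partials = [
        llm(f"Summarize:\n{piece}")
        for src in sources
        for piece in chunk(src, max_chars)
    ]
    # Reduce step: merge the partial summaries; recurse if still too long.
    merged = "\n".join(partials)
    if len(merged) > max_chars:
        return summarize_sources([merged], max_chars)
    return llm(f"Combine these summaries into one:\n{merged}")
```

The recipe trades one large call for many small ones, which is exactly the kind of workaround that turns a model’s context limit from a blocker into a design constraint.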
This approach has created some fascinating team dynamics:
For context, we are a fully remote and distributed team.
- Reduced communication noise: We’ve practically eliminated Slack. By using AI to answer questions and solve problems, we’ve cut down on unnecessary discussions and interruptions, letting the team focus on deeper work.
- Better documentation: Our knowledge base is not only more organized and better linked, but also more detailed with minimal effort. Since we designed it to be AI-readable—with clear structure and contextual relationships—it’s become far more usable for humans too. The necessity of making content “findable” for AI forced us to create a truly navigable information architecture.
- Agility and productivity: Our learning and experimentation culture means team members constantly discover better tools and “LLMing recipes,” preventing us from settling for mediocre workflows. When something isn’t working, we improve it.
- Enforced standards: Routing our code, documentation, and knowledge through AI interfaces has naturally enforced higher standards. Our code is more consistent, our English more precise, and our configurations more standardized—all because AI interaction requires clarity.
- Boosted language capabilities: Working with AI has improved everyone’s communication skills. For non-native English speakers, it’s become an invaluable language practice tool. For everyone, it sharpened our ability to articulate problems clearly and concisely.
- Critical LLM discipline: Partnering with AI has strengthened our critical thinking. We’ve developed a healthy skepticism—we use AI extensively but rigorously test, fix, and debug everything it produces. We remain in control of, and responsible for, all output. We even use AI to review our AI prompts, which helps us check that what has been written is unambiguous, technically sound, and covers all the requirements of the task.
- Higher collective standards: These new tools have raised our expectations of ourselves. We produce more, at higher quality, and hold each other to increasingly ambitious standards that would have seemed unrealistic before our AI-native approach.
- Empathy for our users: Since we’re building an analytics platform where LLMs are the primary interface, having the entire team—from different technical backgrounds—work with these tools gives us diverse perspectives on the user experience.
How This Shapes Our Product
Perhaps the most valuable outcome is how this approach informs our product decisions. In traditional companies, there’s often a significant experience gap between the engineers building the product and the non-technical team members who might better represent user perspectives.
When everyone actively uses AI tools, those boundaries blur.
This creates a built-in feedback loop where the entire team naturally emulates different user archetypes—from power users to novices—allowing us to build a more intuitive product.
The Challenges
For most engineers, this is a fundamentally new way of organizing their work, and it requires changing some mental models.
- Navigating uncharted territory: We’re building with technology that’s still maturing. There’s an inevitable inefficiency in discovering which tools work best for which tasks—sometimes we invest time in approaches that ultimately fail. Waiting a couple of years would make this journey smoother, but that’s not an option for companies aiming to stay competitive in this rapidly evolving landscape.
- Infrastructure gaps: The tooling ecosystem around AI workflows is still developing. We frequently find ourselves needing to build custom solutions for knowledge management, recipe automation, and workflow integration. Adapting existing tools or developing new ones requires significant time investment and organizational discipline.
- Unpredictable process boundaries: LLMs enable rapid progress but often reveal their limitations only after we’ve invested considerable effort. The confidence with which AI presents incorrect information can lead teams down costly rabbit holes, requiring new verification protocols and quality checks throughout our development process.
For non-engineers, the necessity to understand how AI tools affect the product building process might be an even greater challenge.
We encourage everyone to build an understanding of how the currently dominant transformer architecture works. This can be done by listening to podcasts such as Latent Space, Gradient Dissent (gold medal for the best podcast name in AI history), or Machine Learning Street Talk. We also discuss how to make AI work for our use cases by intentionally referencing the fundamentals of how current models work, fostering a culture of working with AI from first principles.
This avoids getting stuck in routines and known mechanics and encourages experimentation and creative thinking.
If you’re building products in the AI space, I’d encourage you to consider how your company’s relationship with these tools might extend beyond your engineering team. The insights might surprise you.
And if nothing else, you’ll get to witness the unique joy of watching your marketing team try to explain SEO concepts to an AI, only to receive a literal interpretation involving actual spiders (and yes, this is the best AI joke I could get for my conclusion).
If you’ve read this far, the whole team at Ask-Y (including the Product Manager’s suspicious dog) thanks you and wants to talk to you about joining!