The rise of large language models (LLMs) and foundational AI models has revolutionized the software landscape, offering immense potential for product engineers to enhance user experiences. But how can companies effectively leverage this technology?
To find out, I spoke with Igor Luchenkov, an Artificial Intelligence Product Engineer who has built infrastructure for running LLMs at scale and created the hackathon platform HackathonParty, to gain insights into applied AI in product engineering and how it can make users happy.
Defining AI in SaaS Solutions
Luchenkov defines applied AI in the current SaaS context as centered on foundational models, of which LLMs are the most prominent example.
“A foundational model is a machine-learning algorithm trained on a massive amount of data,” Luchenkov said. “These models can understand text, images, sound, and virtually any input within a specific domain.”
He points to well-known examples like OpenAI’s GPT and Anthropic’s Claude, as well as open-source alternatives like DeepSeek R1, Mistral, Gemma, and Llama. The applications are vast, ranging from chatbots and meeting summarization tools to code generation and AI-powered data analysis platforms.
“The spectrum of possible use cases is very high and yet to be determined,” Luchenkov said.
Evaluating the Need for Applied AI
Luchenkov advises a pragmatic approach to AI implementation.
“First, build a product that works and brings value to the customers. Do it without AI,” Luchenkov told me. This allows for a baseline against which to compare AI initiatives. The key question then becomes: is there a good use case for AI?
“We’re looking for product opportunities where human-like, thoughtful decision-making is needed,” Luchenkov said. “The focus should be on automating tasks and increasing user productivity.”
Luchenkov illustrates this with his work at Clarify, where AI powers meeting preparation, email drafting, and deal summarization within their CRM.
“We took a clear problem known for decades in the space (customer relationship is a long, thorough process) and made it easier with AI,” Luchenkov said. Companies should “identify the problem that needs an intelligent system to be solved and make sure this problem is worth solving.”
He also recommends consulting Google’s “Rules of Machine Learning” guide for guidance on building AI/ML systems.
Crucial Infrastructure Considerations
Luchenkov stresses that applied AI applications are, first and foremost, applications requiring solutions to traditional software engineering challenges like scalability, response times, monitoring, and alerts. However, AI introduces its own set of considerations.
“You need to look for model performance decay, data distribution shifts, and other things specific to your particular task,” Luchenkov said. Observability is crucial for understanding the impact of system changes on performance and business metrics. Foundational models also present unique challenges, particularly in evaluating open-ended responses.
Luchenkov cites the example of a model summarizing text: “How do you know whether LLM correctly summarizes a text and doesn’t make things up?” Metrics like LLM-as-judge scores and perplexity can be used, but the specific choice depends on the use case.
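Luchenkov didn’t prescribe a particular check, but a minimal, purely illustrative heuristic for the summarization case might flag named entities in a summary that never appear in the source text, as a crude proxy for made-up facts (the function name, regex, and examples below are hypothetical, not from any production stack):

```python
import re

def flag_unsupported_entities(source: str, summary: str) -> set[str]:
    """Return capitalized entities in the summary that never appear in the
    source text -- a crude proxy for hallucinated facts."""
    source_lower = source.lower()
    entities = re.findall(r"\b[A-Z][a-zA-Z]+\b", summary)
    return {e for e in entities if e.lower() not in source_lower}

source = "Acme signed a two-year contract with Globex in March."
good = "Acme and Globex agreed to a two-year contract."
bad = "Acme and Initech agreed to a two-year contract."

print(flag_unsupported_entities(source, good))  # set() -- nothing flagged
print(flag_unsupported_entities(source, bad))   # {'Initech'}
```

A heuristic like this catches only the most blatant fabrications; richer setups feed the source and summary to a second model acting as a judge.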
“Generally speaking, evaluate and monitor metrics that make sense for your particular task,” Luchenkov said.
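For the performance decay and data-distribution shifts Luchenkov warns about, a toy sketch, assuming prompt length is the tracked statistic and that a z-score-style comparison against a baseline window is enough (both assumptions are illustrative, and real thresholds are a per-task judgment call):

```python
import statistics

def drift_score(baseline: list[float], current: list[float]) -> float:
    """How many baseline standard deviations the current window's mean
    has moved away from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(statistics.mean(current) - mean) / stdev

# Baseline: prompt lengths (in tokens) observed during normal operation.
baseline_lengths = [120.0, 95.0, 110.0, 130.0, 105.0, 115.0, 98.0, 125.0]
# Today's traffic looks very different -- a candidate distribution shift.
todays_lengths = [220.0, 240.0, 210.0, 260.0, 230.0]

score = drift_score(baseline_lengths, todays_lengths)
if score > 3.0:  # alert threshold chosen for illustration only
    print(f"possible distribution shift (score={score:.1f})")
```

The same shape of check applies to any per-request statistic worth alerting on, from input length to downstream business metrics.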
Democratizing AI Usage
Luchenkov believes applied AI should be accessible to everyone in an organization.
“AI is a commodity nowadays,” Luchenkov said. Restricting access hinders innovation. Beyond product teams, he suggests establishing a dedicated AI R&D team to track emerging models and techniques and explore new use cases.
“The goal of such a team is to uncover new use cases for using AI in the product and innovate across various areas of the product,” Luchenkov said.
He also recommends the books “Designing Machine Learning Systems” and “AI Engineering” by Chip Huyen for further information on infrastructure and evaluation.
Mitigating the Risks of AI
AI, trained on vast datasets often containing biases and misinformation, carries inherent risks. Luchenkov highlights the potential for AI to generate harmful or inappropriate responses, citing a chatbot that suggested suicide.
“Any precedent like that is a tragedy for people and a huge reputation loss for the company,” Luchenkov said.
Even seemingly harmless errors, like incorrect customer support responses, can damage trust and lead to negative publicity. He reiterates the importance of constant monitoring and evaluation to ensure performance and identify potential problems.
Addressing Reputation Concerns
Luchenkov acknowledges the potential for reputation damage due to AI’s unpredictability. He points to examples of AI assistants making bizarre statements or generating biased responses.
“That’s why it’s crucial to have proper safeguards in place, like content filtering and human oversight,” Luchenkov said.
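Luchenkov didn’t detail an implementation, but the simplest form of such a safeguard is a gate that routes risky model output to a human reviewer instead of delivering it directly; the keyword list and function below are purely illustrative (production systems typically rely on trained moderation classifiers rather than keyword matching):

```python
# Hypothetical safeguard: escalate risky model output to human review
# instead of sending it straight to the user.
RISKY_KEYWORDS = {"suicide", "self-harm", "diagnosis", "lawsuit"}

def gate_response(model_output: str) -> tuple[str, str]:
    """Return (action, payload): deliver the output as-is, or escalate
    it for human oversight before it reaches the user."""
    lowered = model_output.lower()
    if any(keyword in lowered for keyword in RISKY_KEYWORDS):
        return ("escalate_to_human", model_output)
    return ("deliver", model_output)

print(gate_response("Here is the meeting summary you asked for."))
print(gate_response("It sounds like a lawsuit is your best option."))
```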
He notes that human supervision is essential in sensitive areas like healthcare, finance, and legal services to ensure accuracy, compliance, and ethical responsibility. The ultimate goal, Luchenkov concludes, is to “harness AI’s benefits while protecting your company’s reputation and maintaining customer trust.”