Not every problem needs AI: A solution architect’s view on responsible tech

By Indranilla Tsybikova
September 12, 2025
in Artificial Intelligence

This article was originally published in Dataconomy’s Expert Articles section. The guest author of this article is Indranilla Tsybikova, a leading enterprise solutions architect with extensive experience in enterprise-wide digital transformation and responsible technology implementation. Invited by Dataconomy to share her professional insights, Indra provides a solution architect’s perspective on the risks of unchecked AI adoption and explains why, in many cases, traditional automation or software may be the smarter, more cost-effective choice.


Across industries, the demand for AI shows no sign of slowing down. As McKinsey observed in 2023, nearly a quarter of C-suite executives reported personally using generative AI tools, and more than 40% said their organisations would invest even more in AI. Boardroom pressure fuels a rush to implement AI or risk being left behind. Yet even McKinsey acknowledges that fewer than 15% of organisations have made it past pilots, and most pilots never generate genuine ROI to speak of. Industry peers likewise caution that companies are under “ever-growing pressure to scale the use of AI technologies” even as more than 70% of AI pilots leave no real traces behind.

In short, CEOs are under enormous pressure to use AI but consistently underestimate the gap between hype and real results.


Risks of unchecked deployment of AI

Though thrilling, AI is no panacea, and premature adoption has consequences. In finance, to name just one industry, businesses and regulators have learned the hard way. The U.S. Consumer Financial Protection Bureau noted that “several large banks have limited use of…OpenAI’s ChatGPT by employees” because of data-security and regulatory concerns. Similarly, Morgan Stanley analysts recently described how ChatGPT continues “to make up facts” and hallucinates fictional answers, cautioning that generative AI should be used as an aid to augment human effort, “not” as a replacement for professional judgment.

These warnings show how poorly fitted AI assignments can fall apart. High-risk industries like finance and health care carry particularly high stakes: an AI chatbot giving counterfeit advice or breaching confidentiality can ruin customer trust and draw regulatory fines. Even outside regulated industries, excessive reliance on AI can reduce quality: automated software generally misses the nuance and context that a seasoned professional would absorb.

In practice, a number of high-profile AI blunders surfaced during 2023. Take retailing as an example: online computer hardware store Newegg launched a ChatGPT-powered “PC Builder” to suggest what to buy. In testing, the AI regularly exceeded budgets and chose incompatible options – e.g. suggesting a $1,351 “tiny” build on a $500 budget. PCWorld criticised Newegg’s AI as “flat-out broken”, reporting that the software itself confessed “I’m an AI just starting out” as its suggestions went wrong. This was a misapplication of an otherwise well-intentioned feature: instead of streamlining customer purchasing, the bot produced outrageous and useless outputs. In other words, AI was the wrong tool to bring to bear on the problem.

Choosing the right tool for the job

Here’s a fundamental lesson: not every problem needs AI. Most tasks, especially routine, rule-based ones, are better handled with conventional solutions. Try simple automation or old-school software before turning to AI. Consider, for example, a consulting case study in which a client requested “AI” to categorise digitised documents. In reality, the documents were already properly organised; the “intelligence” needed was simply a way of sorting files. The architect’s team created a simple script to interpret filenames and route each file into its rightful folder. This automation option was about 90% cheaper and took a matter of weeks to put in place, compared to the months it would have taken to build an AI model. Similarly, a business looking for an “AI chatbot” to assist with customer service found that 80% of questions involved simple issues like order status, store hours, and returns. The architects set about improving the FAQ section, integrating order-tracking tools, and automating answers to those routine questions, reserving AI for the remaining 20% of more complex questions. As their analysis correctly put it: “AI excels when you need interpretation and reasoning, while automation efficiently handles information retrieval and routing.”
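To make the document-sorting example concrete, here is a minimal sketch of what such a filename-routing script could look like. The keyword rules and folder names are illustrative assumptions; the actual rules from the case study were not published.

```python
import shutil
from pathlib import Path

# Hypothetical mapping from filename keywords to destination folders.
ROUTING_RULES = {
    "invoice": "finance/invoices",
    "contract": "legal/contracts",
    "report": "operations/reports",
}

def route_files(inbox: Path, archive: Path) -> None:
    """Move each file into a folder chosen from keywords in its name."""
    for f in inbox.iterdir():
        if not f.is_file():
            continue
        name = f.name.lower()
        # First matching keyword wins; unmatched files go to a review folder.
        target = next(
            (folder for keyword, folder in ROUTING_RULES.items() if keyword in name),
            "unsorted/needs_review",
        )
        dest = archive / target
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest / f.name))

if __name__ == "__main__":
    route_files(Path("inbox"), Path("archive"))
```

A few dozen lines of deterministic code, fully auditable and testable, is exactly the kind of “90% cheaper” alternative the case study describes.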

These case studies reinforce a solution architect’s first rule of thumb: make the solution fit the problem at hand. Work that depends on hard-and-fast rules or lookup tables (billing, simple alerts, data routing) is often best handled with Robotic Process Automation (RPA) or straightforward business logic, usually at higher efficiency and lower cost than with AI. Problems involving unstructured data, complex predictions, or natural language, by contrast, may genuinely require ML or AI. Even then, the architect should first insist on proper data hygiene: garbage in, garbage out. More often than not, the wiser approach is to clean and normalise data, streamline current processes, or enhance human expertise, saving AI as a last resort where its value is clearly evident.
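The “data hygiene first” principle can also be shown in code. Below is a minimal sketch, assuming a hypothetical customer-record schema, of the kind of normalisation pass that should precede any model: rows no algorithm could repair are dropped rather than fed downstream.

```python
from datetime import datetime

def normalise_record(raw: dict) -> dict | None:
    """Return a cleaned record, or None if it is unusable."""
    email = raw.get("email", "").strip().lower()
    if "@" not in email:
        return None  # garbage in, garbage out: drop unfixable rows
    try:
        created = datetime.strptime(raw["created"].strip(), "%d/%m/%Y")
    except (KeyError, ValueError):
        return None
    return {
        "email": email,
        "name": " ".join(raw.get("name", "").split()).title(),
        "created": created.date().isoformat(),
    }

rows = [
    {"email": " ANA@EXAMPLE.COM ", "name": "ana  lopez", "created": "03/01/2024"},
    {"email": "not-an-email", "name": "??", "created": "oops"},
]
clean = [r for r in map(normalise_record, rows) if r]
print(clean)  # one valid, normalised record survives
```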

Case Study: Finance – cautious investment

The banking industry offers a prudent example. Large banks were skeptical of generative AI from the start. JPMorgan Chase barred its employees from using ChatGPT in early 2023 over compliance considerations, a move soon followed by other Wall Street banks. Goldman Sachs and Citigroup declined to deploy the tool for fear of leaking confidential proprietary information, and Bank of America listed ChatGPT among unauthorized business applications. These moves reflected not technophobia but risk management: financial institutions recognise that an AI system may inadvertently expose sensitive information or dispense poor financial advice. By holding off on customer-facing AI until sufficient protections are in place, the banks protect their customers and their reputations.

For the solution architect, these situations highlight the imperative of due diligence. Before placing confidence in AI’s abilities, architects should think long and hard about privacy, regulation, and ethics. When an AI tool can handle personal information or influence key decisions, good governance is the order of the day. As Morgan Stanley convincingly argues, generative AI is there to augment analysts, not displace them. Using AI to feed investment ideas into the pipeline may be fine, as long as a human reviews them first. But relying on an AI tool alone to approve bank loans or generate compliance documents would be premature with today’s technology. If clever architects take one lesson from the financial world, it is to apply multiple levels of control rather than view AI’s promises through rose-coloured glasses.

Case Study: Retail – Newegg’s overreach of AI

In retail, AI innovation projects can go off course just the same. The Newegg PC builder is one well-meaning failure; dynamic pricing is another risk area. Giant retailers such as Amazon have employed algorithmic pricing for many years, and in 2023 the FTC even sued Amazon over allegedly anticompetitive use of a pricing algorithm. This demonstrates how “smart” pricing technology can invite antitrust scrutiny in the wrong hands. Smaller retailers, meanwhile, that jump on the bandwagon and deploy a chatbot or recommendation engine without a firm strategy gain little benefit. More often than not, simply optimising inventory management, loyalty schemes, and supply-chain workflows brings greater ROI than deploying flashy AI modules.

These retail examples show how overselling AI pushes customers away (e.g. through clumsy bot failures) and wastes funds. Solution architects should therefore introduce AI only when it directly answers an ongoing customer or business requirement. If an issue is analytical (e.g. “which products are running low on stock?”), proven analytics or rules engines are a safer bet. If it involves language or creativity (e.g. “create customised marketing copy”), AI can help, but only with the rigour of thorough testing. The Newegg episode reminds us that simply bolting ChatGPT onto a tool does not make it intelligent. The architect must test outputs, add guardrails, and keep rollbacks on standby in case the system disappoints customers.
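What might such a guardrail look like in the Newegg scenario? Here is a minimal sketch, assuming a hypothetical part/price/socket data structure (not Newegg’s actual system), that validates an AI-suggested build against hard business rules before it ever reaches a customer.

```python
def validate_build(build: list[dict], budget: float) -> list[str]:
    """Return a list of rule violations; an empty list means the build passes."""
    problems = []
    total = sum(part["price"] for part in build)
    if total > budget:
        problems.append(f"Over budget: ${total:.2f} > ${budget:.2f}")
    cpus = [p for p in build if p["type"] == "cpu"]
    boards = [p for p in build if p["type"] == "motherboard"]
    if cpus and boards and cpus[0]["socket"] != boards[0]["socket"]:
        problems.append("CPU and motherboard sockets do not match")
    return problems

# An AI suggestion resembling the article's failure case: $1,351 on a $500 budget.
suggestion = [
    {"type": "cpu", "price": 320.0, "socket": "AM5"},
    {"type": "motherboard", "price": 180.0, "socket": "LGA1700"},
    {"type": "gpu", "price": 851.0},
]
issues = validate_build(suggestion, budget=500.0)
if issues:
    # Fall back to a safe default instead of showing a broken suggestion.
    print("Rejected AI suggestion:", "; ".join(issues))
```

The point is that deterministic checks, not the model itself, enforce the constraints the business actually cares about.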

Evaluating AI: A solution architect’s checklist

Every solution architect should conduct a thorough review before any choice of AI. Key questions include:

Problem fit: Does the task require learning from large data sets or complex pattern recognition? If not, choose simpler automation or conventional software. If yes, consider an AI pilot.

Data readiness: Do we have good, governed data? If not, invest in data cleanup first. (See the example above of saving about 90% of costs by repairing data and processes rather than developing AI.)

Value & ROI: What specific advantages (cost saved, revenue generated, risk mitigated) will AI achieve? Begin with a modest, quantitative pilot and measure the results. Don’t do “AI for AI’s sake.”

Cost vs. complexity: Line up the cost of an AI solution against the alternatives (time, talent, money). Frequently, a rule-based system can be put together for a fraction of the budget, in weeks rather than months.

Governance and ethics: In regulated environments, check for regulatory, fairness, and privacy concerns. Adhere to accepted frameworks – e.g., NIST’s 2023 AI Risk Management Framework calls for building trustworthiness into AI systems from design through deployment. Engage legal, security, and ethics stakeholders early.

Human oversight: Plan for review and intervention. One finance firm offers a wise word of advice: treat AI as an aid (“augmentation”) and have experts review its output. Maintain an audit trail so that decisions are always traceable to either AI or human intervention (a minimal logging sketch follows this list).

Iterate and learn: AI adoption is iterative. Treat small missteps as learning opportunities (as BCG and KPMG leaders advise), not as catastrophes. Refine use cases with every experiment and scale cautiously.
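As promised above, here is a minimal audit-trail sketch for human-in-the-loop review. The schema and file format are illustrative assumptions: every AI output is logged together with the reviewer’s verdict in an append-only file, so each outcome is traceable to AI or human.

```python
import json
from datetime import datetime, timezone

def log_decision(path: str, ai_output: str, reviewer: str,
                 approved: bool, note: str = "") -> None:
    """Append one reviewed AI decision to a JSON Lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_output": ai_output,
        "reviewer": reviewer,
        "approved": approved,
        "note": note,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # append-only: nothing is overwritten

log_decision("audit.jsonl", "Suggest overdraft product X",
             "j.doe", False, "Unsuitable for this client")
```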

For practical implementation, architects typically use decision matrices or flowcharts to map these criteria. As a simple example: if the business goal, the data readiness, and the risk profile all point the same way, an ML solution may be justified; otherwise, a workflow engine or a report will suffice. The recommended question is “What particular process or decision are we automating?” rather than “How do we implement AI?”. This requirement-driven approach ensures the tool (AI or otherwise) serves the requirement rather than the requirement serving the tool.
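Such a decision flow is easy to encode directly. The sketch below maps the checklist questions above to a recommended class of solution; the criteria and their ordering are illustrative assumptions, not a standard.

```python
def recommend_approach(rule_based: bool, data_clean: bool,
                       needs_interpretation: bool, regulated: bool) -> str:
    """Map the checklist questions to a recommended class of solution."""
    if rule_based:
        return "RPA / scripted automation"      # cheapest and fastest option
    if not data_clean:
        return "Fix data and processes first"   # garbage in, garbage out
    if needs_interpretation:
        if regulated:
            return "AI pilot with human review and audit trail"
        return "AI pilot with measurable KPIs"
    return "Conventional software / analytics"

print(recommend_approach(rule_based=False, data_clean=True,
                         needs_interpretation=True, regulated=True))
```

Even this toy version makes the point: AI appears only on the branch where interpretation is genuinely required and the data can support it.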

Recommendations for responsible tech adoption

Put business need before technology hype: Begin with clear goals. If cost savings or better service is the goal, first test whether rapid, simple solutions (process improvement, RPA, analytics) get you there. Don’t deploy AI unless there is a specific gap that better algorithms would fix.

Adopt a governance strategy: Use frameworks like NIST’s AI RMF to manage risk, and build fairness, transparency, and security controls into any AI project. Establish in-house policies (as some banks have done) so that employees know when they may use AI tools and how to use them securely.

Run iterative pilots with measurable results: Pilot AI under controlled conditions. Measure success on actual KPIs (e.g., customer satisfaction, error rate, time saved), not just “innovation” (see the KPI sketch after these recommendations). Don’t be afraid to pivot or eliminate projects that aren’t clearly contributing value.

Keep humans in the loop: Design systems that allow human professionals to check and monitor AI outputs, particularly in mission-critical applications. Train personnel to recognise the tool’s shortcomings; as Morgan Stanley suggests, cultivate “highly educated users” who can spot the AI’s errors. Involve end-users early so that the solution actually meets their requirements.

Establish AI literacy and data foundations: Invest in staff talent and data quality. A poorly trained staff or an undisciplined data stack condemns most AI projects; a solid data and technology foundation does the opposite, enabling safe experimentation with AI where it counts.

Pick AI as a “tool,” not a silver bullet: Last but not least, stay problem-first. Technology keeps evolving and new AI tools will keep emerging, but the final rule still stands: pick the simplest tool that solves the problem. That will most often be an automation script, business process re-engineering, or traditional software. If you do pick an AI solution, do so with eyes wide open to the costs and caveats.
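As referenced in the pilot recommendation above, here is a minimal sketch of judging a pilot on actual KPIs rather than “innovation”: compare the measures before and during the pilot against a pre-agreed kill criterion. The numbers below are placeholders, not measured results.

```python
# Placeholder baseline and pilot metrics; real values come from monitoring.
baseline = {"error_rate": 0.082, "avg_handle_minutes": 11.5}
pilot = {"error_rate": 0.074, "avg_handle_minutes": 9.1}

for kpi, before in baseline.items():
    after = pilot[kpi]
    change = (after - before) / before * 100
    print(f"{kpi}: {before} -> {after} ({change:+.1f}%)")

# A pre-agreed kill criterion makes "pivot or eliminate" an easy call.
if pilot["error_rate"] > baseline["error_rate"]:
    print("Pilot worsens quality: pivot or stop.")
```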

By guiding ourselves this way, businesses can take advantage of innovation responsibly. Not every problem requires the flash of the AI era, and the smart choice is often a humbly familiar technology. An IT executive’s job is to illuminate the choice: balancing business objectives, data realities, and risk so that technology, whether AI, automation, or something older, serves the strategy and the customers. With wise judgment and good governance, decision-makers can seize the advantages of the AI era when it makes economic or business sense, and proudly say “no thanks” when the simpler solution is the better choice.

