Conferences are loud. Panels, demos, hallway debates.
Strip away the noise and one idea keeps cutting through: software is moving from something you operate to something that delivers results. The leaders who win next will redesign pricing, teams, and guardrails around outcomes rather than effort.
Framing the lens
Across three conversations I moderated at Web Summit Qatar 2025, the same pattern appeared in very different contexts. In software engineering, the meaningful unit is no longer lines of code. In SaaS, it is not licenses or seats. In industry and energy, it is not CPU hours or dashboards. The unit is the outcome. Fewer defects. More revenue. Less flaring. Stable throughput under real constraints.
This shift changes the human role. Operators become directors. The most valuable skill is not typing the solution. It is specifying the problem, setting constraints, and orchestrating AI agents that execute reliably. As Andrew Filev, CEO and Founder of Zencoder, put it, “engineers who adopt AI are starting to get good at breaking down complex problems into simple ones.” As Anand Kulkarni, CEO and Founder of Crowdbotics, added, “the more that we can start to see these tools be adopted, the more important it’s going to be for us to think about everybody else in that organization.” Writing the right brief becomes the new superpower.
Once you start looking for outcomes, you see the pattern everywhere.
From typing code to directing agents
A simple question hides a lot of anxiety. Is AI a friend or a foe for developers? Most engineers I meet feel both things at once. Generative tools can write code on demand. Search results sometimes claim developer jobs are collapsing.
Andrew disagreed with the collapse narrative. He called out one viral job statistic as a signal about a single recruiting platform, not the profession: “I don’t remember the last time I posted a job for engineers on Indeed.” The more interesting point is not whether engineers vanish. It is how their work changes. He described internal and public benchmarks where coding agents “can solve half of the bugs or half of the problems, and that’s amazing.” The exact percentage will vary by codebase and setup, but the direction is clear.
Agents are not toys. They are tools. Ignore them and you fall behind.
Anand’s perspective complements this. When teams audit a sprint, the time engineers spend typing is smaller than most leaders expect. “It’s something like, at worst, 10% of an engineer’s time,” he said. The rest is planning, coordination, searching, debugging, and aligning with product intent. If you only point AI at the code editor, you miss the bigger win. The best teams compress the whole cycle. They generate drafts of requirements, test plans, and scaffolding. They let agents map large repositories and propose changes that a human then inspects and approves. When you measure the full loop, you can trim a third of a sprint and sometimes more, not because a bot types faster, but because the whole system thinks better.
What does this do to the role? It pushes engineers toward product judgment and design thinking and away from the rote parts of implementation. Anand called it becoming “a director of agents.” I like that phrase because it captures the dignity of the work. No one becomes a software engineer in order to memorize syntax. They do it to create. This change gives them more time to design what should exist and less friction to make it real.
The education question always comes up. Should you still study computer science? Andrew’s answer was immediate: “Now more than ever.” Anand agreed: “There’s no substitute, I think, for learning the foundational elements of computer science today, more than ever.” The abstractions keep moving up. We went from Assembly to C to Python to the cloud and open source. With each step, the same fundamentals let you do more. The difference if you are starting today is scope. Your first projects can be larger because your tools already carry more weight. Anand put it more plainly: “Now is perhaps the most exciting time to be a beginner in computer science because you’ve got so much at your fingertips.” And Andrew’s career advice still applies: “Be the best person at these new technologies,” he said, “and that’s going to launch your career much faster and further.”
Here is the takeaway for leaders. If you measure tickets closed, you will get tickets. If you measure customer-visible outcomes, you will get outcomes. Start writing metrics that match the results you actually care about and set your agents to work on those. Then train your teams to write excellent requirements and guardrails. That is where the new leverage lives.
Pricing the outcome, not the seat
If agents can do the work, why are you still pricing the tool? Kanchan Ray, CTO of Nagarro, offered a simple taxonomy that helps. There are user-directed agents, where a human sets constraints and asks for a plan or artifact. There are copilot agents, which sit alongside a human and suggest. And there are autonomous agents with guardrails, which execute within defined bounds and escalate when they need approval. “That’s going to be huge,” he said of autonomy.
Each mode invites a different business model. If the agent only suggests, a seat may still make sense. If the agent acts, the logic shifts. Kanchan’s sales example is easy to grasp: “I’ll only charge if the sale happens, not just because you’ve used some resources.” Sellers will worry about losing predictable revenue. Buyers will love the lower barrier to entry. Micropayments for micro outcomes can unlock adoption at a scale that a seat price never could.
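To see how that metering could work, here is a minimal sketch of outcome-linked billing. Everything in it is an assumption for illustration (the `OutcomeMeter` class, the per-outcome rate, the verified-event flow), not any vendor’s actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical outcome-linked billing: charge only when a verified outcome
# occurs, never for seats or raw usage.
@dataclass
class OutcomeMeter:
    rate_per_outcome: float  # price of one verified outcome, e.g. a closed sale
    events: list = field(default_factory=list)

    def record_outcome(self, outcome_id: str, verified: bool) -> None:
        """Log an outcome event; only verified outcomes become billable."""
        self.events.append({
            "id": outcome_id,
            "verified": verified,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def invoice_total(self) -> float:
        """Bill strictly on verified outcomes; usage alone costs nothing."""
        return self.rate_per_outcome * sum(1 for e in self.events if e["verified"])

meter = OutcomeMeter(rate_per_outcome=5.00)
meter.record_outcome("sale-1042", verified=True)   # the sale happened: billable
meter.record_outcome("sale-1043", verified=False)  # agent worked, no sale: free
print(meter.invoice_total())  # 5.0
```

The invariant is the point: the agent consuming resources bills nothing; only the sale does.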
Yuri Dvoinos, CIO at Aura, described the broader picture as “almost like a virtual workforce working for us.” We used to operate tools. Now the tools operate on our behalf. That means two important things for leaders.
- First, the agents sit on top of your existing stack. They cross the silos. They will start a process in one system, fetch data from another, and complete the transaction in a third.
- Second, management becomes orchestration. Your job is to specify what gets done, by which class of agent, with what authority, and at what threshold you step back in.
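One way to picture that orchestration job is as an explicit policy table: task class in, agent class and authority limit out. The sketch below is a minimal illustration; the task names, agent classes, and dollar thresholds are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical orchestration policy: which agent class handles a task, and
# the value above which a human steps back in.
@dataclass(frozen=True)
class AgentPolicy:
    agent_class: str            # "user-directed", "copilot", or "autonomous"
    authority_limit_usd: float  # largest action value the agent may take alone

POLICIES = {
    "refund_request":  AgentPolicy("autonomous", authority_limit_usd=100.0),
    "contract_change": AgentPolicy("copilot",    authority_limit_usd=0.0),
}

def route(task: str, value_usd: float) -> str:
    """Return who acts: the agent within its bounds, a human above them."""
    policy = POLICIES[task]
    if policy.agent_class == "autonomous" and value_usd <= policy.authority_limit_usd:
        return "agent executes, logged for review"
    return "escalate: human approval required"

print(route("refund_request", 40.0))    # within bounds: agent executes
print(route("refund_request", 400.0))   # above the threshold: human approves
print(route("contract_change", 50.0))   # copilot only suggests: human approves
```

Writing the policy down this explicitly is what turns “management becomes orchestration” from a slogan into something you can review and adjust.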
Gautam Rege, Co-Founder of Josh Software, pushed the personalization angle further: agents will “not just [be] suggesting, but could also be deciding for us.” That is attractive for routine tasks. It is also where healthy caution belongs. The line for how far to delegate is not a vendor’s choice. It is yours. Draw it intentionally.
The risks are real, and we should talk about them plainly. Explainability matters when an agent gives a discount or reverses a charge. As Kanchan reminded us, “keep the empathy, right? Keep the human in the loop.” Data security and compliance determine the shape of what you can legally do. Creativity gets better, not worse, when a human stays in the loop. And guardrails are not posters on the wall. When Kanchan says “I will put up guardrails,” he means approval limits, revocation paths, and logs that humans can audit.
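A minimal sketch of guardrails as running code, assuming a hypothetical discount flow: an approval limit the agent cannot exceed on its own, an append-only audit log a human can read, and a revocation path. The function names, the limit, and the log shape are illustrative, not any specific product’s API.

```python
import json, time

# Hypothetical guardrail sketch: approval limit, auditable log, revocation path.
AUDIT_LOG = []           # in production: durable, append-only storage
MAX_DISCOUNT_PCT = 15.0  # above this, the agent must escalate to a human

def apply_discount(order_id: str, pct: float, agent_id: str) -> bool:
    """Apply a discount if within the limit; either way, log the attempt."""
    approved = pct <= MAX_DISCOUNT_PCT
    AUDIT_LOG.append({
        "ts": time.time(), "actor": agent_id, "action": "discount",
        "order": order_id, "pct": pct,
        "result": "applied" if approved else "escalated to human",
    })
    return approved

def revoke(order_id: str, reviewer: str) -> None:
    """Revocation path: a human reverses an agent action after review."""
    AUDIT_LOG.append({"ts": time.time(), "actor": reviewer,
                      "action": "revoke discount", "order": order_id})

apply_discount("ord-77", 10.0, "sales-agent-3")  # within limit: applied, logged
apply_discount("ord-78", 40.0, "sales-agent-3")  # over limit: blocked, logged
revoke("ord-77", "reviewer-1")                   # human reversal, also logged
print(json.dumps(AUDIT_LOG, indent=2))           # the log a human audits
```

The shape matters more than the specifics: every action the agent takes, allowed or blocked, lands in a record a person can replay.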
If you are a SaaS leader, the actionable path is straightforward. Identify one outcome that customers value this quarter. Instrument it so both sides can see it. Offer a pilot that prices against that outcome in one narrow flow. Keep a copilot mode while you learn. Codify guardrails early. Then review incidents and drift every month and adjust. You do not need pure autonomy everywhere to begin moving your model.
When outcomes are physical
Greg Fallon, the CEO of Geminus AI, put it simply: “in the industrial environment, precision counts. High precision counts.” When you control a refinery, a well, or a pipeline, an approximate answer is not good enough. You can damage equipment. You can hurt people. That is why industrial AI evolved differently from consumer chatbots. There is a lineage in scientific and engineering circles that focused on high-precision predictive models, often trained with a different mix of data and validated against physical constraints.
What does that look like in practice? In one deployment that Greg described, the AI monitors a natural gas network. It watches conditions at the wells, the atmosphere that affects gas in the pipeline, and the state of downstream processing. Then it recommends actions. “Turn up this pump, turn down that valve.” It is not writing an email. It is operating a system that moves real molecules through space.
The outcome is clear and not negotiable. Reduce flaring. Keep throughput high without breaking constraints. Avoid spikes and stalls as inputs change. The data strategy is different too. Industrial firms sit on massive sensor streams. A single field can produce staggering volumes. Those streams drift and degrade, especially in harsh environments like offshore rigs. So the system blends sensor data with synthetic data from simulators. It cross-checks sources to catch drift. It often runs locally because data sovereignty rules restrict what can leave the site. It sometimes needs to run on older edge hardware because that is what the plant has available.
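A minimal sketch of that cross-checking idea: compare a live sensor stream against an independent reference, here a stand-in for a physics simulator, and flag disagreement for recalibration. The `simulate` stub, its coefficients, and the tolerance are illustrative assumptions, not a real plant model.

```python
from statistics import mean

TOLERANCE = 0.05  # 5% sensor-versus-model disagreement triggers review

def simulate(inputs: dict) -> float:
    """Stand-in for a physics-based simulator's predicted pipeline pressure."""
    return inputs["flow"] * 0.8 + inputs["ambient_temp"] * 0.1

def drift_check(sensor_readings: list[float], inputs: dict) -> str:
    """Cross-check averaged sensor readings against the simulator's prediction."""
    predicted = simulate(inputs)
    observed = mean(sensor_readings)
    rel_error = abs(observed - predicted) / predicted
    if rel_error > TOLERANCE:
        return f"DRIFT: sensor and model disagree by {rel_error:.1%}, recalibrate"
    return f"OK: within {rel_error:.1%} of model"

print(drift_check([84.1, 84.3, 83.9], {"flow": 100.0, "ambient_temp": 40.0}))
# model predicts 84.0, readings agree: OK
print(drift_check([92.0, 93.5, 92.8], {"flow": 100.0, "ambient_temp": 40.0}))
# readings run about 10% high: flagged for recalibration
```

Nothing about the check is exotic; the discipline is running it continuously and trusting its escalations.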
The architecture is changing as well. Just as we talk about foundation models for language, there is a push toward enterprise and sector foundation models for energy and industry. Greg called it an “enterprise foundation model or an energy foundation model,” and he was unapologetically bullish. “We have some pilots ongoing… at a country level and also at an enterprise level.” The benefits compound. A small efficiency gain multiplied across an entire chain is not small. It is a strategic lever.
One more quote landed with the builders and policymakers in the room: “It took 130 years for the world to create all the wires that generate the electricity we have today. We have to double that in the next 20 years.” This is where the outcome economy meets the physical world. You cannot hide behind vanity metrics. Either the flaring went down or it did not. Either the process held steady across shifting inputs or it did not. The methods and guardrails look different from a sales agent, but the managerial task is the same. Specify what success is, instrument the system so you can see it, give the agent authority within limits, review what happened, and improve the loop.
The new organizational chart for outcomes
If you accept the shift, your organization changes shape.
Product managers gain leverage because they write the contracts between intent and execution. They define the outcome, the constraints, and the escalation path. Business analysts become translators who turn messy goals into agent-readable requirements and data contracts. Reliability and security leaders move closer to the center because they design and monitor the guardrails. Engineers keep their craft, and they also grow their role as directors who decompose problems, review proposals from agents, and make the final call when the system hits an edge case.
Education matters more, not less. Fundamentals give you judgment. They let you smell a brittle solution before it breaks. They also free you to delegate the right things.
You cannot orchestrate a system you do not understand.
A simple checklist to start:
- Pick one outcome that matters this quarter. Write it in one sentence.
- Instrument for proof. Decide what data makes the outcome visible on a shared dashboard.
- Start in copilot mode. Use an agent that suggests and ask your team to accept or reject. Track speed and quality.
- Codify guardrails. Put approval limits, rollback steps, and incident playbooks in writing and in code.
- Reprice one flow. Where you control the commercial model, try an outcome-linked pilot with a willing customer.
- Upskill the team. Run short workshops on writing requirements for agents, designing prompts that look like specs, and reviewing agent proposals.
- Audit and adapt. Review drift, incidents, and false positives every month. Adjust guardrails and authority levels.
Risk, trust, and the human edge
It is tempting to wave away risk in the excitement of a new capability. We should resist that. As Yuri reminded the room, adversaries get more efficient too. “AI is the thing that multiplied scams,” he warned, “and that’s what made these scams so efficient.” The same leverage now powers social engineering, deepfakes, and enterprise fraud. The correct response is not to hide. It is to build defenses that are just as efficient: verification by default, tighter identity controls, and agents that detect anomalies in the same places they now automate work.
Reliability is not only a security problem. In industry it is also a data problem. Sensors drift. Hardware weathers. Models that worked last week can slide out of calibration this week. The answer is not a single clever trick. It is a process. Validate data from multiple sources. Recalibrate on a routine cadence. Give the system a way to tell you when it is outside its comfort zone and needs a human.
The most encouraging thread in all three conversations was about creativity. We like to believe it is what makes us human. Yuri’s line captured the balance leaders should aim for: “AI-assisted, human-generated creativity is the future.” He also cautioned against a common trap: “over-reliance on AI is another bias that a lot of people have.” I do not believe everything becomes automated. I do believe AI-assisted, human-generated creativity becomes the norm. It respects human judgment. It scales human taste and intent.
A leader’s playbook for the outcome economy
If you lead a team or a product, here is how to start.
Pick one outcome. Make it specific and near term. For example, reduce average resolution time for a certain class of customer issues by a clear percentage. Instrument your system so you can see it. If you cannot measure it, you cannot price it or govern it.
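As one concrete illustration, here is a minimal sketch of instrumenting that example outcome, average resolution time for a single issue class, against a baseline. The ticket fields and the baseline figure are hypothetical.

```python
from datetime import datetime

# Hypothetical ticket records; in practice these come from your support system.
tickets = [
    {"class": "billing", "opened": datetime(2025, 3, 1, 9),  "resolved": datetime(2025, 3, 1, 15)},
    {"class": "billing", "opened": datetime(2025, 3, 2, 10), "resolved": datetime(2025, 3, 2, 14)},
    {"class": "login",   "opened": datetime(2025, 3, 2, 11), "resolved": datetime(2025, 3, 3, 11)},
]

def avg_resolution_hours(issue_class: str) -> float:
    """Average open-to-resolve time, in hours, for one class of issues."""
    durations = [(t["resolved"] - t["opened"]).total_seconds() / 3600
                 for t in tickets if t["class"] == issue_class]
    return sum(durations) / len(durations)

BASELINE_HOURS = 8.0  # measured before the agent was deployed
current = avg_resolution_hours("billing")
print(f"billing: {current:.1f}h, {1 - current / BASELINE_HOURS:.0%} below baseline")
# billing: 5.0h, 38% below baseline
```

The number on the last line is the thing you price, govern, and review, not the agent activity that produced it.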
Deploy one copilot agent where you have human capacity to review. Use it to draft requirements, plans, or code changes, and track accept or reject rates. As acceptance rises, allow the agent to execute a narrow class of actions within guardrails. Treat each expansion like a product launch. You are giving authority. That deserves care.
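Gating that expansion can be as mechanical as the accept/reject data you are already collecting. A minimal sketch, with the acceptance bar and the sample-size floor as assumptions you would tune to your own risk tolerance:

```python
# Hypothetical gate: unlock a narrow class of autonomous actions only after
# humans have accepted enough of the agent's proposals.
class CopilotGate:
    ACCEPTANCE_BAR = 0.90  # 90% of proposals accepted over the window
    MIN_REVIEWS = 50       # enough samples before any expansion

    def __init__(self) -> None:
        self.accepted = 0
        self.reviewed = 0

    def review(self, accepted: bool) -> None:
        """Record one human accept/reject decision on an agent proposal."""
        self.reviewed += 1
        self.accepted += int(accepted)

    def may_execute_autonomously(self) -> bool:
        """True only when acceptance is high across a meaningful sample."""
        if self.reviewed < self.MIN_REVIEWS:
            return False
        return self.accepted / self.reviewed >= self.ACCEPTANCE_BAR

gate = CopilotGate()
for ok in [True] * 48 + [False] * 2:  # 96% acceptance across 50 reviews
    gate.review(ok)
print(gate.may_execute_autonomously())  # True: expand one narrow action class
```

Each newly unlocked action class then gets its own gate, which is what treating expansion like a product launch looks like in practice.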
Where you sell software, pick one flow where you can link price to value. Offer a pilot where a customer pays only when the outcome happens. Use it to learn how to meter and how to share risk and reward.
Train your people in the skills that compound in this new world. Problem decomposition. Requirements writing. Guardrail design. Data contracts. These are not side notes. They are the new craft.
Institute a monthly review that looks like an air safety meeting. What went right. What drifted. What failed safe. What failed in a way we did not expect. Improve the playbook and the code.
This is the simplest way to navigate the noise.
Buy outcomes. Price outcomes. Staff for outcomes. Judge your AI by what it accomplishes, not what it consumes. Ship value, not just software.