Enterprise software still makes people work for the software. Teams click through nested menus, reconcile scattered data, and interpret charts on their own before they can act. We digitised forms and moved spreadsheets to the web, but we didn’t eliminate the effort. The promise of automation remained mostly aspirational.
That promise becomes real only when intelligence sits at the core of the product. AI‑first (not AI‑layered) systems understand intent, bring the right context together in seconds, explain what’s happening, and, when appropriate, take the next step autonomously under guardrails.
This article distils our Seven Peaks keynote on Intelligent Apps. It outlines why the traditional model is broken, what AI‑core design looks like, and how data, engineering, and product teams can ship it responsibly. It closes with concrete narratives and a pragmatic way to start.
The next decade of B2B software will be defined by applications that think, decide, and act—turning back‑office tools from record‑keepers into decision partners.
Most of the systems we build and buy—HRIS, procurement, CRM, service operations, finance—are digitised versions of paperwork. They centralise data and standardise processes. That’s progress. But the experience is often a maze: finding the right module, filtering the right list, opening the right record, scanning the right tab, reconciling values across screens, deciding what it means, and then switching context to take action.
We replaced clipboards with dashboards. We did not replace the work.
When interpretation lives in each user’s head, decisions vary wildly. Every extra click, tab, and query bleeds intent and attention. Software becomes a destination users must navigate, not a partner that delivers outcomes.
Digital isn’t helpful if insight is still DIY.
In the last 18 months, many teams tried to add AI to existing tools—drop a chatbot in the corner or sprinkle generative text on forms. Helpful at times, yes. Transformative, rarely.
When AI is attached as a surface‑level layer over legacy architectures, three frictions persist. The model has poor context because the data beneath is fragmented. The old menu‑driven journeys stay intact, now with an extra step. And the user still does the heavy lifting to reach and execute a decision.
Real value emerges only when we re‑architect the product so that AI has grounded context, can reason across steps, and is authorised to act inside well‑defined guardrails. That is what we mean by AI‑core.
An intelligent app changes the unit of work from clicks to outcomes. It behaves less like a page you operate and more like a colleague you brief.
It starts by understanding intent. A manager might say, "Show last quarter's top performers and the skills gap in my team." The system assembles the relevant context across sources—performance reviews, project outcomes, learning data—and returns the answer as a clear explanation with the right visual attached. Crucially, it proposes next steps and can, with permission, carry out part of the plan.
Answers first. Drill‑downs second. Action close at hand.
The most important shift is mental. Don’t design pages. Design decisions. For every core decision ask: what intent triggers it, which signals matter, what rules and approvals apply, and what safe automations are acceptable?
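As a concrete sketch, one way to capture a decision is as a small, explicit specification that the rest of the product is built around. The field names and the leave example below are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class DecisionSpec:
    """Illustrative specification for one core decision."""
    name: str                      # the decision in a sentence
    triggering_intents: list[str]  # phrasings that should route here
    signals: list[str]             # data the system must assemble
    rules: list[str]               # policies and thresholds that apply
    approvals: list[str]           # roles that must sign off on actions
    safe_automations: list[str]    # steps the agent may take unprompted

leave_review = DecisionSpec(
    name="investigate unusual leave pattern",
    triggering_intents=["why has {employee} taken so much leave",
                        "leave anomaly for {employee}"],
    signals=["attendance records", "workload metrics", "policy changes", "1:1 notes"],
    rules=["emergency leave above 5 days per quarter triggers review"],
    approvals=["line manager confirms outreach", "HR approves escalation"],
    safe_automations=["draft check-in invite", "compile leave summary"],
)
```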
Every object in the system—employee, supplier, invoice, asset—should be addressable via natural language. The command bar is not an accessory; it is the front door to the product’s intelligence.
Default output is a reasoned answer with just‑enough evidence and an option to drill. The system should explain why and what next, not just what.
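To make the last two ideas concrete, here is a minimal sketch, with hypothetical names and a canned handler, of a command bar that routes a request to a decision handler and returns an answer card: explanation first, evidence and next steps attached. A real product would replace the keyword routing with an intent classifier or a model.

```python
from dataclasses import dataclass

@dataclass
class AnswerCard:
    """Default output: a reasoned answer, supporting evidence, proposed next steps."""
    answer: str
    evidence: list[str]
    next_steps: list[str]
    confidence: float

def top_performers(utterance: str) -> AnswerCard:
    # A real handler would assemble reviews, project outcomes, and learning data.
    return AnswerCard(
        answer="Three engineers exceeded targets; the team gap is in data modelling.",
        evidence=["Q3 review scores", "project delivery outcomes", "skills matrix"],
        next_steps=["Propose a data-modelling course", "Draft recognition notes"],
        confidence=0.8,
    )

# Toy routing table: key phrase to handler.
HANDLERS = {"top performers": top_performers}

def handle_command(utterance: str) -> AnswerCard:
    """The command bar as the front door: route intent, return an answer card."""
    for phrase, handler in HANDLERS.items():
        if phrase in utterance.lower():
            return handler(utterance)
    return AnswerCard("I could not map that request to a known decision.", [], [], 0.0)

card = handle_command("Show last quarter's top performers and the skills gap in my team")
print(card.answer)
```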
Role‑based scopes, soft confirmations, approvals on sensitive steps, and full audit trails make automation trustworthy. Safety is a product feature, not a legal footnote.
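A minimal sketch of what that can look like, assuming an in-memory scope table and audit log; a real system would back these with the identity provider and a durable store.

```python
from datetime import datetime, timezone

# Illustrative role scopes: what an agent acting for this role may read and do.
SCOPES = {
    "line_manager": {"read": {"team_attendance", "team_performance"},
                     "act": {"schedule_check_in", "draft_hr_escalation"}},
}
SENSITIVE_ACTIONS = {"draft_hr_escalation"}  # require explicit confirmation
AUDIT_LOG: list[dict] = []

def attempt_action(role: str, action: str, confirmed: bool) -> str:
    """Allow actions only within the role's scope; gate sensitive steps on confirmation."""
    allowed = action in SCOPES.get(role, {}).get("act", set())
    needs_confirmation = action in SENSITIVE_ACTIONS
    if not allowed:
        outcome = "denied: outside role scope"
    elif needs_confirmation and not confirmed:
        outcome = "held: awaiting user confirmation"
    else:
        outcome = "executed"
    AUDIT_LOG.append({  # full trail: who, what, when, what happened
        "role": role, "action": action, "confirmed": confirmed,
        "outcome": outcome, "at": datetime.now(timezone.utc).isoformat(),
    })
    return outcome

print(attempt_action("line_manager", "schedule_check_in", confirmed=False))   # executed
print(attempt_action("line_manager", "draft_hr_escalation", confirmed=False)) # held
```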
Decouple the interface from the model and from the business services. This allows you to swap models, scale cost‑effectively, and keep the UX responsive even when decisions are complex.
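As an illustration of that separation, the sketch below (all names hypothetical) keeps the interface talking only to an orchestrator, which in turn depends on narrow protocols for the model and the business services, so either can be swapped without touching the UX.

```python
from typing import Protocol

class ModelProvider(Protocol):
    """Any reasoning backend the orchestrator can call."""
    def complete(self, prompt: str) -> str: ...

class BusinessService(Protocol):
    """Any system of record the orchestrator can read from or act on."""
    def fetch(self, query: str) -> dict: ...

class StubModel:
    def complete(self, prompt: str) -> str:
        return f"(model output for: {prompt[:40]}...)"

class HrService:
    def fetch(self, query: str) -> dict:
        return {"employee": "Jane", "emergency_leave_days": 9}

class Orchestrator:
    """The interface talks only to this layer, never to models or services directly."""
    def __init__(self, model: ModelProvider, services: dict[str, BusinessService]):
        self.model = model
        self.services = services

    def answer(self, utterance: str) -> str:
        context = self.services["hr"].fetch(utterance)   # ground the request
        prompt = f"Context: {context}\nQuestion: {utterance}"
        return self.model.complete(prompt)               # reason over it

# Swapping the model, or scaling it separately, changes one constructor argument.
app = Orchestrator(model=StubModel(), services={"hr": HrService()})
print(app.answer("Why has Jane used nine emergency leave days this quarter?"))
```

Because the interface depends only on the orchestrator, model upgrades and cost tuning become deployment decisions rather than redesigns.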
Capture explicit feedback at the answer level, verify actions post‑hoc, and monitor model drift. Reliability earns adoption.
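One lightweight way to operationalise this, sketched with assumed numbers below, is to capture a confirmation flag on every answer card and alert when the rolling confirmation rate drops well below its launch baseline.

```python
from collections import deque

# Explicit feedback captured on each answer card: did the user confirm it was right?
recent_feedback: deque[bool] = deque(maxlen=200)   # rolling window
baseline_rate = 0.85                               # assumed confirmation rate at launch

def record_feedback(confirmed: bool) -> None:
    recent_feedback.append(confirmed)

def drift_alert(threshold: float = 0.10) -> bool:
    """Flag possible drift when the rolling confirmation rate falls well below baseline."""
    if not recent_feedback:
        return False
    current = sum(recent_feedback) / len(recent_feedback)
    return (baseline_rate - current) > threshold

for ok in [True, True, False, True, False, False, False, False]:
    record_feedback(ok)
print("Investigate drift:", drift_alert())
```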
Intelligence requires clean, connected, contextual data. The quick wins rarely come from bigger models; they come from better grounding.
The first step is unifying sources into a governed warehouse or lakehouse. Normalise and enrich records with stable identifiers, timestamps, and consistent units. Add meaning through simple ontologies or graphs so the system can step naturally from employee → team → product line → revenue → churn risk. Vectorise policy and knowledge so conversational answers can be grounded in the company’s own content. Track lineage, freshness, and quality so answers are explainable and repeatable.
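A toy example of the "simple ontology" idea, using plain dictionaries and hypothetical entities: once relationships are named, the system can walk them instead of asking the user to join screens.

```python
# A tiny illustrative ontology: typed relationships between business entities.
EDGES = {
    ("employee:jane", "member_of"): "team:payments",
    ("team:payments", "owns"): "product:checkout",
    ("product:checkout", "contributes_to"): "metric:revenue",
    ("metric:revenue", "signals"): "risk:churn",
}

def walk(start: str, relations: list[str]) -> list[str]:
    """Follow a chain of named relationships from a starting entity."""
    path, node = [start], start
    for rel in relations:
        node = EDGES.get((node, rel))
        if node is None:
            break
        path.append(node)
    return path

print(walk("employee:jane", ["member_of", "owns", "contributes_to", "signals"]))
# ['employee:jane', 'team:payments', 'product:checkout', 'metric:revenue', 'risk:churn']
```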
When the data layer is trustworthy, conversational analytics and agentic workflows stop being demos and start being operations.
Consider the familiar journey of investigating an employee’s leave pattern. Today, a manager opens an HR dashboard, hunts for the employee module, searches and opens the record, finds the leave tab, exports dates, reads notes, interprets patterns, decides what to do, and then switches tools to take action.
That is eight steps before the conversation even starts.
In an AI‑core HR app, the journey collapses to one line: "Why has Jane used nine emergency leave days this quarter? Is this a policy issue or a wellness risk? Draft my options." The system assembles context across attendance, workload spikes, policy changes, prior notes, and sentiment in recent one‑to‑ones. It explains the pattern, assigns confidence, and offers actions such as scheduling a check‑in, triggering a coaching plan, or escalating to HR with a drafted message ready for approval.
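Under the hood, that flow can be sketched roughly as below; the signals, thresholds, and wording are invented for illustration, and every proposed action still waits for the manager's approval.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    explanation: str
    confidence: float
    actions: list[str]   # each action still requires manager approval

# Hypothetical signals assembled from HR, workload, and notes systems.
SIGNALS = {
    "emergency_leave_days": 9,
    "policy_threshold": 5,
    "workload_spike_weeks": [3, 4, 5],
    "one_to_one_sentiment": "declining",
}

def investigate_leave(signals: dict) -> Recommendation:
    """Explain the pattern and propose next steps instead of exporting raw data."""
    over_threshold = signals["emergency_leave_days"] > signals["policy_threshold"]
    wellbeing_flag = (signals["one_to_one_sentiment"] == "declining"
                      and bool(signals["workload_spike_weeks"]))
    if over_threshold and wellbeing_flag:
        explanation = ("Leave exceeds the policy threshold and clusters around workload "
                       "spikes; recent one-to-one sentiment is declining, so this looks "
                       "more like a wellness risk than a policy issue.")
        actions = ["Schedule a check-in", "Trigger a coaching plan",
                   "Escalate to HR with a drafted message for approval"]
        confidence = 0.7
    else:
        explanation = "Leave is within expected bounds for this period."
        actions = []
        confidence = 0.95
    return Recommendation(explanation, confidence, actions)

rec = investigate_leave(SIGNALS)
print(rec.explanation)
print(rec.actions)
```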
Old way: five clicks times five tabs over fifteen minutes. New way: five seconds to insight and one click to act.
The difference is not cosmetic. It is a new unit of work for managers.
In sales, live call analysis suggests talking points and compliance reminders, scores the call, drafts the follow‑up, and schedules the next touch. Managers monitor outcomes, not keystrokes.
In procurement, the system screens suppliers, flags contract risks, auto‑routes approvals, and proposes alternatives when prices rise or availability drops. Nobody waits for a custom report to understand exposure.
In finance, conversational closing becomes practical: "List unreconciled items over 500,000 THB with likely causes. Prepare a remediation plan." The agent assembles evidence, proposes entries, and stages tasks to the right owners.
In customer service, the platform triages tickets, drafts responses with links to policy and history, and triggers back‑office actions such as credits, replacements, or escalations, all with guardrails and audit.
Intelligence without governance is a liability. The product must make the safe path the easy path.
Start with role‑based scopes so the agent can only see and do what the human could. Use soft confirmations for sensitive actions and keep a human in the loop where risk is non‑trivial. Log prompts, context, decisions, and outcomes to create an auditable trail. Guard personal data with masking, minimisation, and time‑boxed retention. Evaluate models for correctness and bias, and monitor drift over time. Be transparent about sources, assumptions, and confidence so people understand why the system recommends a step.
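Two of those practices, masking and time‑boxed retention, are easy to picture in code. The sketch below is deliberately crude (a couple of regular expressions and a 90‑day window are assumptions, not recommendations); production systems would use proper PII detection and storage policies.

```python
import re
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)   # assumed retention window for logged context

def mask_pii(text: str) -> str:
    """Crude illustrative masking: strip emails and long digit runs before logging."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)
    text = re.sub(r"\b\d{4,}\b", "[number]", text)
    return text

audit_log: list[dict] = []

def log_interaction(prompt: str, decision: str) -> None:
    audit_log.append({
        "prompt": mask_pii(prompt),
        "decision": decision,
        "at": datetime.now(timezone.utc),
    })

def purge_expired() -> None:
    """Time-boxed retention: drop entries older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    audit_log[:] = [entry for entry in audit_log if entry["at"] >= cutoff]

log_interaction("Escalate jane.doe@example.com, badge 123456, to HR", "held for approval")
purge_expired()
print(audit_log[0]["prompt"])   # "Escalate [email], badge [number], to HR"
```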
Trust is not a feature you add later. It is the reason people accept automation at all.
The best way to start is not with a platform overhaul, but with a single high‑frequency, high‑friction decision in one domain. Pick a decision such as investigating unusual leave patterns. Write the outcome in a sentence. Describe the signals the system would need, the acceptable actions it could take, and where approval is required.
Connect the minimum set of systems that contain those signals. Define just enough shared meaning—names, IDs, and relationships—to reliably assemble the context. Create a small retrieval index of policies and procedures. Make the command bar your entry point and the answer card your default output, with one click to approve the next step. Pilot with a small group and measure how long it takes to get to an answer and how often answers lead to action.
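For the retrieval index, something as small as the sketch below is enough for a pilot; the scoring here is plain keyword overlap with made‑up policy snippets, and a real implementation would typically swap in embeddings behind the same retrieve interface.

```python
# A minimal retrieval index over policy snippets: score by keyword overlap.
POLICIES = {
    "leave": "Emergency leave beyond five days per quarter triggers a manager review.",
    "coaching": "Coaching plans require employee consent and a four-week check-in.",
    "escalation": "HR escalations must include dates, context, and prior conversations.",
}

def tokenize(text: str) -> set[str]:
    return {word.strip(".,?").lower() for word in text.split()}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k policy snippets sharing the most terms with the query."""
    q = tokenize(query)
    scored = sorted(POLICIES.values(),
                    key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

for snippet in retrieve("How many emergency leave days trigger a review?"):
    print("-", snippet)
```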
Harden what works. Expand gradually to the next decision. Each new decision becomes easier as your data and patterns mature.
Time to insight becomes the headline metric: seconds from intent to answer. Action rate shows whether answers lead to meaningful steps. Correctness can be measured with user confirmation and post‑hoc validation. Trust is captured directly on each answer card. Cycle time shrinks as steps disappear. Cost to serve falls when routine work is automated and handoffs are reduced. The most important signal is outcome impact: fewer escalations, reduced attrition, faster approvals, higher satisfaction.
Measure week by week. Intelligent apps should compound.
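A weekly rollup over an event log of answer cards, sketched here with invented fields and numbers, is often all the instrumentation a pilot needs.

```python
from statistics import mean

# Illustrative event log: one record per answer card shown during the week.
events = [
    {"seconds_to_answer": 6.1, "action_taken": True,  "user_confirmed": True},
    {"seconds_to_answer": 4.8, "action_taken": False, "user_confirmed": True},
    {"seconds_to_answer": 9.3, "action_taken": True,  "user_confirmed": False},
]

def weekly_rollup(events: list[dict]) -> dict:
    """Compute the headline metrics: time to insight, action rate, correctness proxy."""
    return {
        "time_to_insight_s": round(mean(e["seconds_to_answer"] for e in events), 1),
        "action_rate": sum(e["action_taken"] for e in events) / len(events),
        "confirmed_correct_rate": sum(e["user_confirmed"] for e in events) / len(events),
    }

print(weekly_rollup(events))
```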
Product and design shift from page maps to outcome maps. Data becomes a product in its own right—modelled, governed, and observable. Engineering moves from large releases to orchestration of events, tool‑use, and guardrails. Operations pivots from policing to enabling: defining safe autonomy and measuring value. Leadership sponsors fewer, clearer bets and protects the space for learning.
This isn’t a new widget. It’s a new way to build.
Is this just a chatbot with a new name? It shouldn't be. A chatbot answers; an intelligent app answers and acts, because it is wired into your data and workflows with permissions and audit.
Does it replace people? It replaces steps, not people. The value is in moving humans to approvals, coaching, and strategy—work that benefits most from judgment and empathy.
What about risk? Risk is exactly why we build guardrails. Start in low‑stakes areas, measure carefully, and expand as confidence grows.
What about lock‑in? Abstract models and tools behind an orchestration layer. Keep your data model, retrieval index, and business logic portable.
Industry momentum is clear. A growing majority of enterprise software vendors are embedding generative and agentic capabilities. Interfaces are shifting from screens of options to intent and outcomes. Teams that redesign around an AI core—supported by governed data and cloud‑native architecture—will set the standard for how back‑office work is done.
We finally have the tools to keep the original promise of automation.
For years we tried to deliver automation by digitising steps. We got better forms and nicer dashboards, but the work stayed largely the same. Now we can change that. If we build with AI at the core—conversation as the control surface, agents as the engine, and governed data as ground truth—our products can finally deliver on the promise of seamless, intelligent workflows.
Not as an overlay. As the operating core.
Seven Peaks designs and ships enterprise‑grade management systems, dashboards, and mobile applications for B2B organisations across Southeast Asia and beyond. Our multidisciplinary teams bring together design, data, and engineering to modernise critical workflows and unlock new business value with AI‑first product thinking.