What Is Agentic AI, Where Did It Come From, and Where Is It Going?
From Planning Machines to Acting Ones
Few terms in technology have gained traction as quickly as “agentic AI,” and with that speed has come both genuine promise and a fair amount of noise. Separating the two requires understanding where the idea actually came from – because the ambition to build systems that can reason, plan, and act on their own is not new. It is, in fact, one of the oldest ambitions in the field.
The earliest era of AI research (1950s–1970s) was built on the belief that intelligence could be expressed as symbolic logic. In 1957, Allen Newell and Herbert Simon developed the General Problem Solver (GPS) – a program that could decompose a goal into subgoals and work toward a solution through means-ends analysis.1 GPS was, in a meaningful sense, the first attempt at goal-directed autonomous reasoning – the conceptual ancestor of every agentic system running today. In parallel, researchers were laying the perceptual foundations that would prove equally important. Kunihiko Fukushima’s Neocognitron, published in 1980, introduced the first multilayer neural network capable of recognizing visual patterns regardless of position – the architecture that would eventually evolve into the convolutional neural networks underpinning modern computer vision.2 The reasoning half and the perception half of what an agent needs were both taking shape, though decades apart from being unified.
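The means-ends analysis at the heart of GPS can be sketched in a few lines: compare the current state to the goal, pick an operator that reduces the difference, and recursively treat the operator's unmet preconditions as subgoals. The sketch below is illustrative only; the state names and operators are invented for the example, not drawn from the original program.

```python
# A toy sketch of means-ends analysis in the spirit of GPS. The operators
# and state names are hypothetical; each operator achieves one goal but
# may have preconditions, which become subgoals solved recursively.
OPERATORS = {
    "drive to shop": {"pre": {"car works"}, "adds": "at shop"},
    "call mechanic": {"pre": {"phone works"}, "adds": "car works"},
}

def achieve(goal, state):
    """Recursively reduce the difference between state and goal."""
    if goal in state:
        return True
    for name, op in OPERATORS.items():
        if op["adds"] == goal:
            # Means-ends step: solve each unmet precondition as a subgoal.
            if all(achieve(p, state) for p in op["pre"]):
                state.add(goal)
                return True
    return False

state = {"phone works"}
achieve("at shop", state)  # recursively achieves "car works" first
```

The recursion is the essential idea: the program does not follow a fixed sequence but works backward from the goal, discovering the intermediate steps it needs.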
The second era (1980s–2000s) advanced both threads. On the learning side, Rumelhart, Hinton, and Williams demonstrated in 1986 that neural networks could be trained through backpropagation – propagating errors backward through layers to adjust weights – solving the fundamental training problem that had stalled neural network research for a generation.3 On the reasoning side, Rao and Georgeff formalized the Belief-Desire-Intention (BDI) model, giving AI researchers a structured way to build agents that maintain beliefs about the world, form goals, and commit to plans of action.4 The BDI architecture found its way into practical systems including NASA’s space shuttle fault diagnosis program – early evidence that autonomous agents could operate in high-stakes, real-world environments.
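The BDI control loop described above can be caricatured in code: the agent maintains beliefs, selects a desire, and commits to it as an intention executed step by step. This is a toy sketch of the idea, not Rao and Georgeff's formal semantics; the plan library and actions are invented for illustration.

```python
# A toy BDI loop: beliefs about the world, desires (goals), and a committed
# intention (a plan in progress). All names here are illustrative.
class BDIAgent:
    # Hypothetical plan library: desire -> ordered list of actions.
    PLANS = {"be outside": ["open door", "walk through door"]}

    def __init__(self):
        self.beliefs = {"door": "closed"}
        self.desires = ["be outside"]
        self.intention = None  # the plan the agent has committed to

    def deliberate(self):
        """Commit to a plan for the first pending desire."""
        if self.intention is None and self.desires:
            self.intention = list(self.PLANS[self.desires[0]])

    def act(self):
        """Execute one step of the current intention, updating beliefs."""
        if self.intention:
            action = self.intention.pop(0)
            if action == "open door":
                self.beliefs["door"] = "open"
            elif action == "walk through door":
                self.beliefs["location"] = "outside"

agent = BDIAgent()
agent.deliberate()
while agent.intention:
    agent.act()
```

The separation matters: because the intention is explicit state rather than a hard-coded sequence, the agent can in principle reconsider it when its beliefs change, which is what made BDI suitable for environments like fault diagnosis.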
The third era (2010s–present) brought these threads together. Deep learning scaled perception and language understanding to levels that would have seemed improbable a decade earlier. Large language models demonstrated that a single system could reason across domains, generate coherent text, and – critically – follow complex instructions. ChatGPT made this capability visible to hundreds of millions of people almost overnight. But the step from chatbot to agent required one more development: in 2022, Yao et al. published the ReAct framework, demonstrating that language models could interleave reasoning with action – thinking about what to do, doing it, observing the result, and adjusting.5 That paper is now widely regarded as the foundational work of agentic AI as we know it. Standardization followed quickly: Anthropic’s release of the Model Context Protocol (MCP) in late 2024 established an open standard for connecting AI systems to external tools and data sources, providing the integration layer that agentic systems need to operate across real enterprise environments.6
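The ReAct pattern described above is simple enough to sketch. In the sketch below the "model" is a stub function standing in for an LLM call, and the tool registry and task are invented for the example; the point is the loop structure, which is the pattern's real contribution: think, act, observe, repeat.

```python
# A minimal sketch of a ReAct-style loop. model_step is a stand-in for an
# LLM call; the tool and task are illustrative, not from the paper.
def model_step(goal, history):
    """Stub 'model': returns a (thought, action, argument) triple."""
    if not history:
        return ("I need the population figure.", "lookup", "population of Berlin")
    return ("I have the figure; finish.", "finish", history[-1][1])

TOOLS = {"lookup": lambda query: "3.7 million"}  # toy tool registry

def react(goal, max_steps=5):
    history = []  # (action, observation) pairs fed back to the model
    for _ in range(max_steps):
        thought, action, arg = model_step(goal, history)
        if action == "finish":
            return arg                         # final answer
        observation = TOOLS[action](arg)       # act...
        history.append((action, observation))  # ...then observe and adjust
    return None  # give up after max_steps rather than loop forever

answer = react("What is the population of Berlin?")
```

Everything a production agent framework adds (tool schemas, error handling, memory) elaborates on this loop; the interleaving of reasoning and action is the core.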
What Makes Agentic AI Fundamentally Different
A conventional chatbot, however capable, operates within a single turn: it receives a prompt, produces a response, and waits. The human reads the output, decides what to do with it, and performs the work. The AI advises. The human acts.
Agentic AI inverts this relationship. An agentic system receives a goal – not a question – and works toward it across multiple steps. It decides which tools to use, retrieves information from external systems, makes intermediate decisions, handles exceptions, and delivers a completed outcome. The human defines what needs to happen. The AI determines how to make it happen and does so.
This distinction matters because it changes where the productivity gain occurs. A chatbot makes the human faster at individual subtasks. An agentic system can eliminate entire sequences of subtasks from the human’s workload altogether – the research, the cross-referencing, the drafting, the formatting, the data entry, the follow-up – leaving the human to focus on judgment, oversight, and the decisions that actually require human context.
It is also important to distinguish agentic AI from traditional workflow automation. Robotic process automation (RPA), scripted pipelines, and rule-based engines have existed for years. They follow predetermined paths: if condition A, then action B. They are effective when processes are stable and predictable. They break when conditions change. Agentic AI differs because it reasons about what to do next based on the current state of the task, not a fixed script. It can handle ambiguity, adapt when an initial approach does not work, and coordinate across systems that were never designed to talk to each other.
This distinction is worth emphasizing because the market has noticed. As agentic AI has gained attention, some vendors have relabeled existing deterministic workflows – sequences with no genuine reasoning or adaptability – as “agentic.” Organizations evaluating these solutions should look for the capabilities that define genuine agency: goal-directed planning, dynamic tool selection, multi-step execution, and the ability to recover and adjust without returning to the human at every decision point. If a system can only follow the path it was built to follow, it is automation. If it can reason about a new path when the original one fails, it is beginning to be agentic.
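The contrast between the two can be made concrete. The sketch below is deliberately simplified: the "agentic" side merely iterates over fallback approaches, where a real agent would reason with a model about what to try next, and the functions are hypothetical stand-ins for tool calls. But it captures the structural difference: the scripted version halts on anything off its path, while the adaptive version recovers.

```python
# Illustrative contrast: a fixed script vs. minimal adaptive recovery.
# Both "approaches" are hypothetical stand-ins for real tool calls.
def scripted(task):
    """RPA-style: one predetermined path; any deviation is a failure."""
    if task == "invoice":
        return "processed via template A"
    raise RuntimeError("unhandled case")

def agentic(task, approaches):
    """Agent-style: try an approach, observe failure, move to the next."""
    for approach in approaches:
        try:
            return approach(task)
        except RuntimeError:
            continue                  # recover and adjust instead of halting
    return "escalate to human"        # fall back to oversight when all fail

result = agentic("contract", [scripted, lambda t: f"processed {t} via extraction"])
```

A vendor demo that only ever exercises the `scripted` path, however polished, is automation wearing an agentic label.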
Where Is It Going?
The trajectory is becoming clearer as major research and advisory firms converge on similar findings. McKinsey’s 2025 State of AI report found that 23% of surveyed organizations are already scaling agentic AI systems, with an additional 39% experimenting, and estimated that generative and agentic AI could unlock between US$2.6 and US$4.4 trillion in annual economic value.7 Gartner predicts that 40% of enterprise applications will embed task-specific AI agents by 2026, up from less than 5% in 2025, and that at least 15% of day-to-day work decisions will be made autonomously by agentic AI by 2028.8
These projections come with a sobering caveat. Deloitte’s Tech Trends 2026 report found that only 14% of organizations currently have agentic solutions ready for deployment, with 42% still developing their strategy. Deloitte’s central insight is that organizations automating existing processes with agentic AI see limited returns, while those redesigning their operations around agentic capabilities see transformative ones.9 The technology is not the bottleneck. The thinking is.
For individuals, the research points to a counterintuitive finding. While early studies of chatbot-level AI suggested the greatest productivity gains accrued to less experienced workers, more rigorous research tells a different story when the work is complex. A Harvard Business School field experiment with 758 BCG consultants found that participants who uncritically delegated to AI were 19 percentage points less likely to produce correct solutions on tasks outside the model’s reliable capability – they effectively stopped thinking and followed the AI’s lead. The consultants who performed best were those with enough domain expertise to recognize where AI was reliable and where it was not, leveraging it deliberately rather than deferring to it wholesale.10 In an agentic context, where the AI is not just drafting a response but executing multi-step work across systems, this dynamic is amplified. The professional who understands the domain well enough to frame the right goal, evaluate intermediate outputs, and intervene when the agent’s reasoning drifts is the one who extracts the most value. Agentic AI does not replace the need for expertise. It raises the return on it.
At the organizational level, the value proposition compounds: shorter cycle times, lower error rates, faster decision-making, and the ability to scale operations without proportional increases in headcount. Culturally, the shift is subtler but no less significant – organizations will need to develop governance frameworks, trust models, and management practices for a workforce that increasingly includes digital participants alongside human ones. Deloitte has described this as the emergence of a “silicon-based workforce,” and the metaphor, while imperfect, captures something real about the organizational change ahead.9
Looking Ahead
The path from research concept to operational capability has taken seven decades, but the last three years have compressed what once seemed like theoretical ambition into practical reality. Agentic AI is not a distant prospect. It is here, it is maturing rapidly, and the organizations that engage with it thoughtfully – beginning with strategy and process understanding, building genuine technical capability, integrating with existing enterprise systems, developing internal competence, and equipping leadership to guide the transition – are the ones positioned to realize its value.
B-Sharp AI was built for exactly this moment – by a team that has been working at the intersection of AI research and enterprise operations since long before the current wave of attention. Our services span the full adoption curve, from initial strategy through to application development, enterprise integration, capability building, and executive advisory. If your organization is working through what agentic AI means for its operations, we would welcome the conversation.
1 Newell, A., Shaw, J.C., & Simon, H.A. (1959). Report on a General Problem-Solving Program. Proceedings of the International Conference on Information Processing, UNESCO.
2 Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36, 193–202.
3 Rumelhart, D.E., Hinton, G.E., & Williams, R.J. (1986). Learning representations by back-propagating errors. Nature, 323, 533–536.
4 Rao, A.S. & Georgeff, M.P. (1995). BDI Agents: From Theory to Practice. Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95). AAAI Press.
5 Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., & Cao, Y. (2022). ReAct: Synergizing Reasoning and Acting in Language Models. ICLR 2023.
6 Anthropic (2024). Introducing the Model Context Protocol.
7 McKinsey & Company (2025). The State of AI in 2025: Agents, Innovation, and Transformation. McKinsey QuantumBlack.
8 Gartner (2025). Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026. Gartner Newsroom.
9 Deloitte (2026). The Agentic Reality Check: Preparing for a Silicon-Based Workforce. Tech Trends 2026. Deloitte Insights.
10 Dell’Acqua, F., McFowland, E., Mollick, E., et al. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. Harvard Business School Working Paper No. 24-013.