Agentic AI Systems: Opportunities, Challenges, and the Need for Robust Governance

AI agents, also known as intelligent agents, are software programs designed to perceive their environment, take actions, and achieve specific goals autonomously. They differ from traditional computer programs in their ability to learn and adapt, make decisions, interact with their surroundings, and operate with limited supervision. Agentic AI systems integrate one or more AI agents that collaborate with one another to deliver a unified, seamless experience and outcome for the end user.

Chatbots have been around for a while. Are chatbots AI agents, or are they something different?

AI agents and chatbots share similarities in their ability to interact with users, but differ significantly in their capabilities and underlying technologies:

Chatbots vs. AI agents

Focus
  • Chatbots: Predefined tasks, scripted responses, and simple interactions
  • AI agents: Complex tasks, dynamic decision-making, and collaboration with other agents

Technology
  • Chatbots: Often rule-based, relying on keyword matching and pre-programmed responses
  • AI agents: Built on AI techniques such as machine learning, natural language processing, natural language understanding, knowledge representation, and basic causal reasoning

Capabilities
  • Chatbots:
    • Follow scripts: Provide predefined responses based on user input, keywords, or decision rules
    • Limited adaptivity: Cannot learn or adapt significantly beyond their initial programming
    • Independent: Function individually and typically do not collaborate with other agents
  • AI agents:
    • Learn and adapt: Continuously learn from data and user interactions, improving their performance over time
    • Reason and plan: Understand context, reason through problems, and make informed decisions based on their goals
    • Collaborate: Work together with other agents to achieve complex goals that require coordination and communication

Example
  • Chatbots: A customer service bot that follows a predefined script, provides basic information, and directs you to appropriate resources based on predefined options
  • AI agents: A highly trained personal assistant that understands your needs, learns your preferences, and takes initiative to help you achieve your goals

In essence, AI agents are like general-purpose tools that can be adapted to various tasks requiring intelligence, decision-making, and collaboration. Chatbots, in contrast, are like specialized tools that excel at specific, well-defined tasks but lack the flexibility and adaptability of AI agents.

How to Build Agentic AI Systems

AI Agent Frameworks for Building Collaborative Intelligence

AI agent frameworks combine libraries and workflows that facilitate the creation and management of intelligent software agents. The following are some of the latest frameworks for building AI agents:

AutoGen

AutoGen from Microsoft is an open-source library that provides a multi-agent conversation framework as a high-level abstraction. It enables next-generation LLM applications in which users build LLM workflows with multi-agent collaboration and personalization. Agent modularity and conversation-based programming simplify development and encourage reuse.

Use case: An enterprise knowledge management system with a conversational interface using a knowledge base agent, retrieval agent, and dialogue agent.
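AutoGen's real API centers on conversational agent classes (for example, its assistant and user-proxy agents); as a framework-neutral illustration of the pattern in this use case, here is a minimal plain-Python sketch. The agent classes, document store, and keyword matcher below are hypothetical stand-ins for a real knowledge base and embedding search:

```python
# Illustrative sketch (plain Python, not AutoGen's API) of a knowledge
# management pipeline: a retrieval agent searches the knowledge base
# agent's store, and a dialogue agent phrases the answer for the user.

class KnowledgeBaseAgent:
    def __init__(self, docs):
        self.docs = docs  # doc_id -> document text

    def all_docs(self):
        return self.docs

class RetrievalAgent:
    def retrieve(self, query, kb):
        # Naive keyword overlap stands in for real embedding search.
        terms = set(query.lower().split())
        return [text for text in kb.all_docs().values()
                if terms & set(text.lower().split())]

class DialogueAgent:
    def respond(self, query, passages):
        if not passages:
            return f"Sorry, I found nothing about: {query}"
        return f"Based on {len(passages)} document(s): {passages[0]}"

kb = KnowledgeBaseAgent({"hr-1": "Vacation policy allows 20 days per year."})
hits = RetrievalAgent().retrieve("vacation policy", kb)
answer = DialogueAgent().respond("vacation policy", hits)
print(answer)  # Based on 1 document(s): Vacation policy allows 20 days per year.
```

In AutoGen itself, each of these roles would be a conversable agent, and the hand-offs would happen through multi-agent conversation rather than direct method calls.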

CrewAI

CrewAI is an open-source framework built on top of LangChain for creating and managing collaborative AI agents. It enables developers to build cohorts of specialized AI agents that can work together to achieve complex tasks.

Use case: A marketing team could use CrewAI to create a series of agents: one to gather customer data from social media, another to analyze sentiment, and a third to generate targeted marketing campaigns based on the insights.
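CrewAI organizes such agents into a "crew" whose tasks hand results to one another. The following plain-Python sketch only illustrates that sequential hand-off pattern; the agent functions and their toy data are hypothetical, not CrewAI's actual Agent/Task/Crew API:

```python
# Illustrative sketch (plain Python, not CrewAI's API) of a sequential
# crew: each agent's output becomes the next agent's input.

def gather_agent(_):
    # Stand-in for a social-media scraping tool.
    return ["Love the new app!", "Checkout keeps failing."]

def sentiment_agent(posts):
    # Toy rule standing in for a real sentiment model.
    return [("positive" if "love" in p.lower() else "negative", p) for p in posts]

def campaign_agent(labeled):
    negatives = [p for label, p in labeled if label == "negative"]
    return f"Campaign brief: address {len(negatives)} pain point(s) first."

def run_crew(tasks, seed=None):
    result = seed
    for task in tasks:  # sequential hand-off between agents
        result = task(result)
    return result

brief = run_crew([gather_agent, sentiment_agent, campaign_agent])
print(brief)  # Campaign brief: address 1 pain point(s) first.
```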

LangGraph

LangGraph is another open-source framework built on top of LangChain. It represents multiple agents as nodes in a graph network, coordinating their integration and collaboration.

Use case: An AI research assistant comprising a web-scraping agent that gathers research content, a processing agent that identifies relevant content and synthesizes and stores it as curated material, and a generation agent that crafts initial drafts of research papers based on the user's goals and objectives.
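LangGraph's core idea is that agents are nodes in a directed graph that pass shared state along edges. The sketch below illustrates that idea in plain Python with a hypothetical three-node pipeline; LangGraph's real API (a state graph with typed state, compiled and then invoked) differs in the details:

```python
# Illustrative sketch (plain Python, not LangGraph's API) of agents as
# nodes in a directed graph that pass a shared state dict along edges.

def scrape(state):
    state["raw"] = ["LLM agents survey ...", "celebrity gossip ..."]
    return state

def process(state):
    # Keep only documents relevant to the research goal (toy filter).
    state["curated"] = [d for d in state["raw"] if "agent" in d]
    return state

def draft(state):
    state["draft"] = f"Draft based on {len(state['curated'])} source(s)."
    return state

nodes = {"scrape": scrape, "process": process, "draft": draft}
edges = {"scrape": "process", "process": "draft", "draft": None}

def run_graph(entry, state):
    node = entry
    while node is not None:  # walk the graph until a terminal node
        state = nodes[node](state)
        node = edges[node]
    return state

final = run_graph("scrape", {"goal": "LLM agent research"})
print(final["draft"])  # Draft based on 1 source(s).
```

The graph representation makes it easy to add branches (for example, routing back to the scraper when the processing agent finds too little relevant content).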

Challenges of Adopting Agentic AI

Despite the significant advancements in Agentic AI, there are several key challenges that still need to be addressed, such as:

  • Unforeseen consequences: Agentic AI systems, due to their adaptability and ability to learn, can potentially engage in unforeseen actions or decisions, leading to unintended consequences.
  • Limited understanding of internal workings: The complex decision-making processes within these systems can be opaque. This can make it difficult to identify the root cause of errors or failures.
  • Transparency in data usage and processing: Concerns exist regarding potential misuse of user data by agentic AI systems and the need for transparent practices in data collection, storage, and utilization.
  • Unmitigated bias: Training data and algorithms can contain inherent biases that agentic AI systems may learn and perpetuate, leading to discriminatory or harmful outcomes.
  • Understanding decision-making: It’s often challenging to understand how agentic AI systems arrive at specific decisions, hindering user trust and hampering troubleshooting or improvement efforts.

Guide to Successful Development & Implementation of Agentic AI

As agentic AI systems go mainstream with their ability to accomplish complex goals, a robust governance framework is needed to overcome these challenges. A recent paper from OpenAI titled “Practices for Governing Agentic AI Systems” outlines guidelines for the safe and responsible development and deployment of such systems. The following key insights from the paper can enable the responsible development and adoption of these systems:

  • Defining Responsibilities:
    • Clear roles and liabilities: Clearly define who is responsible for the actions of agentic AI systems throughout their lifecycle, including developers, deployers, and users. This promotes accountability and mitigates potential harm.
Attributability: Assign each AI agent a unique identifier so that the source of an error can be traced when required.
  • Ensuring Safety:
    • Robust safety measures: Implement safeguards like regular audits, human oversight for critical decisions, and clear guidelines for acceptable actions to minimize potential risks and unintended consequences.
    • Constrain the action space and seek approval: In some cases, prevent agents from taking specific actions entirely to ensure safe operation. It is prudent to have human-in-the-loop for review and approval when the cost of wrong decisions and actions can be catastrophic.
    • Timeouts: Implement mechanisms to periodically pause the agent operation and require human review and reauthorization, preventing unintended harm from continuous unsupervised operation.
Setting the agent’s default behavior: Reduce the likelihood of the agentic system causing accidental harm by proactively shaping the model’s default behavior so that it reiterates user preferences and goals and steers toward the least disruptive actions that still achieve the agent’s goal.
  • Transparency and Explainability: Ensure the reasoning and decision-making processes of agentic AI systems are clear and understandable to the extent possible. This fosters trust and allows for identification of potential biases or flaws.
  • Automatic Monitoring: Set up a Monitoring AI system that automatically reviews the primary agentic system’s reasoning and actions to check that they are in line with the user’s goals and expectations.
Ad hoc Interruption and Maintaining User Control: Users should always be able to trigger a graceful shutdown procedure for their agent at any time, both to halt a specific category of actions and to terminate the agent’s operation entirely.
  • Ethical Considerations: Ensure that the development and deployment of agentic AI systems adhere to ethical principles and societal values. This includes promoting fairness, non-discrimination, privacy, and overall human well-being.
  • Public Dialogue and Participation: Encourage open discussions and collaboration between experts, policymakers, and the public to shape responsible AI development and ensure it aligns with societal values.
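Several of the safeguards above (a constrained action space, human-in-the-loop approval, and timeouts) can be sketched in code. The `GovernedAgent` class, action names, and step limit below are hypothetical illustrations of the pattern, not a prescribed implementation:

```python
# Minimal sketch of three governance safeguards: an action allowlist,
# human approval for high-risk actions, and a step-count timeout that
# forces human reauthorization of continued autonomous operation.

ALLOWED_ACTIONS = {"read_record", "draft_email", "send_email"}
REQUIRES_APPROVAL = {"send_email"}  # human-in-the-loop for costly actions
MAX_STEPS = 100                     # pause for review after this many steps

class GovernedAgent:
    def __init__(self, approve):
        self.approve = approve  # callback that asks a human reviewer
        self.steps = 0

    def act(self, action, payload):
        self.steps += 1
        if self.steps > MAX_STEPS:                  # timeout safeguard
            return "paused: human reauthorization required"
        if action not in ALLOWED_ACTIONS:           # constrained action space
            return f"blocked: {action} is outside the allowed action space"
        if action in REQUIRES_APPROVAL and not self.approve(action, payload):
            return f"denied: human reviewer rejected {action}"
        return f"executed: {action}"

agent = GovernedAgent(approve=lambda action, payload: False)  # reviewer says no
print(agent.act("read_record", {"id": 7}))    # executed: read_record
print(agent.act("delete_record", {"id": 7}))  # blocked: delete_record is outside the allowed action space
print(agent.act("send_email", {"to": "x"}))   # denied: human reviewer rejected send_email
```

A monitoring AI system, as described above, could sit behind the `approve` callback for routine cases, escalating to a human only when the proposed action looks inconsistent with the user's goals.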

It’s important to note that OpenAI’s framework is just a starting point, and ongoing research and discussion are crucial in developing comprehensive and effective governance models for agentic AI systems.

The Road Ahead

Real-world systems combine multiple capabilities, which warrants the design of agentic AI systems that use multiple AI agents. We are seeing the emergence of such design patterns, given the limitations of large language models (LLMs) in producing outputs for complex tasks with just one API call. Since a system's end-to-end output quality is the product of the output quality of each subsystem, each subsystem needs its own output verification, validation, and feedback loop to ensure reliable and trustworthy outcomes. Governance of agentic AI is an ongoing process that requires continuous adaptation and improvement as the technology evolves.
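The multiplicative-quality point can be made concrete with a toy calculation (the 90% figure is an assumed illustration): if each of four subsystems independently produces a correct output 90% of the time, the end-to-end pipeline succeeds only about 66% of the time, which is why per-subsystem verification loops matter.

```python
# End-to-end reliability of a pipeline is the product of the per-stage
# reliabilities, assuming independent failures.
stage_quality = [0.90, 0.90, 0.90, 0.90]  # four subsystems, each 90% reliable

pipeline_quality = 1.0
for q in stage_quality:
    pipeline_quality *= q

print(round(pipeline_quality, 4))  # 0.6561
```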

By fostering collaboration, promoting transparency, and prioritizing ethical considerations, we can navigate the development and deployment of agentic AI responsibly and reap its benefits for the betterment of society. Agentic AI systems offer immense potential and are poised to be game changers in the years ahead.


Jayachandran Ramachandran
Jayachandran has over 25 years of industry experience and is an AI thought leader, consultant, design thinker, inventor, and speaker at industry forums with extensive...