
Key Takeaway:

This training deepened my understanding of how to design AI agents with modular architecture, feedback loops, and governance in mind. It also reinforced the importance of balancing innovation with safety: making agents not just powerful but also trustworthy.

📘 Demystifying AI Agents – Microsoft Training (Cihan Cinar, 5/21/2025)


Overview
This session unpacked what AI agents are, how they differ from simple models, and how Microsoft tools support building and deploying them. Cihan Cinar (Software & AI Architect) emphasized both the architecture of agents and the practical risks and opportunities they bring.


Key Concepts

  • Definition: AI agents are systems that perceive inputs, make decisions, take actions, and adapt through feedback.

  • Core Components:

    • Perception (data, sensors, user input)

    • Reasoning (rules, LLMs, planning, ML models)

    • Action (executing tasks, tool/API integration)

    • Learning (adapting via reinforcement or supervised learning)

  • Agent Architectures: modular design with pipelines, memory, and orchestration. Multi-agent systems allow agents to collaborate.

  • Microsoft Integration: Azure AI Agent Service and orchestration frameworks for building scalable, tool-using, and governed agents.
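The perceive → reason → act → learn loop from the components list above can be sketched as a toy agent. This is a minimal illustration under my own naming (nothing here comes from the Azure AI Agent Service SDK); in a real agent, an LLM or planner would replace the rule-based `reason` step.

```python
# Minimal sketch of the perceive -> reason -> act -> learn loop.
# All class and method names are illustrative, not from any Microsoft SDK.

class EchoAgent:
    """A toy agent: perceives text, reasons with a rule, acts, and adapts."""

    def __init__(self):
        self.memory = []  # simple episodic memory (the "learning" store)

    def perceive(self, user_input: str) -> str:
        # Perception: normalize raw input into an observation.
        return user_input.strip().lower()

    def reason(self, observation: str) -> str:
        # Reasoning: rule-based here; an LLM or planner would slot in.
        return "recall" if observation in self.memory else "store"

    def act(self, decision: str, observation: str) -> str:
        # Action: produce a response (tool/API calls would happen here).
        if decision == "recall":
            return f"I have seen '{observation}' before."
        return f"Noted: '{observation}'."

    def learn(self, observation: str) -> None:
        # Learning: update internal state from the interaction.
        self.memory.append(observation)

    def step(self, user_input: str) -> str:
        obs = self.perceive(user_input)
        decision = self.reason(obs)
        result = self.act(decision, obs)
        self.learn(obs)
        return result

agent = EchoAgent()
print(agent.step("Hello"))   # first encounter: stored in memory
print(agent.step("hello"))   # second encounter: recalled from memory
```

Even in this toy form, the modular split (each stage its own method) is the same design that lets real agent pipelines swap a component, add memory, or orchestrate several agents together.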


Challenges & Risks

  • Hallucinations and ambiguity in LLM-based agents.

  • Trust, transparency, and the need for observability/logging.

  • Security, privacy, and bias concerns.

  • Importance of clear scope and responsible autonomy.
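The observability point above can be made concrete: one common pattern is to wrap every tool call in a structured audit log so an agent's decisions can be inspected after the fact. This is a hypothetical sketch of that pattern (the function and field names are my own, not a specific logging framework).

```python
# Sketch of observability for agent tool calls: every call produces a
# structured audit record. Names and fields are illustrative assumptions.
import json
import time

def logged_action(agent_name, tool, arguments, execute):
    """Run a tool call and emit a structured audit entry."""
    entry = {
        "ts": time.time(),
        "agent": agent_name,
        "tool": tool,
        "arguments": arguments,
    }
    try:
        entry["result"] = execute(**arguments)
        entry["status"] = "ok"
    except Exception as exc:
        # Failures are logged too; silent errors undermine trust.
        entry["status"] = "error"
        entry["error"] = str(exc)
    print(json.dumps(entry))  # in production this would go to a log sink
    return entry

entry = logged_action("demo-agent", "add", {"a": 2, "b": 3},
                      lambda a, b: a + b)
```

Logging arguments, results, and failures per call is what makes hallucinated or out-of-scope actions detectable later, which is the transparency the session emphasized.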


Best Practices

  • Define the agent’s purpose clearly and limit scope.

  • Build a minimal viable agent first, then expand capabilities.

  • Include monitoring, feedback, and human-in-the-loop safeguards.

  • Use sensitivity labeling, policy enforcement, and compliance frameworks for responsible deployment.
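The scope-limiting and human-in-the-loop practices above can be combined into a simple approval gate: low-risk actions on an allow-list run autonomously, while anything else is escalated to a human. This is a sketch under assumed names (`LOW_RISK`, `run_action`, `approver` are all illustrative).

```python
# Sketch of a human-in-the-loop safeguard: actions outside the agent's
# approved scope require explicit sign-off. All names are illustrative.

LOW_RISK = {"search", "summarize"}  # assumed allow-list of autonomous actions

def run_action(action, payload, approver=None):
    """Execute low-risk actions directly; escalate everything else."""
    if action in LOW_RISK:
        return f"executed {action}({payload})"
    # High-risk path: only proceed with an explicit human approval callback.
    if approver is not None and approver(action, payload):
        return f"executed {action}({payload}) after approval"
    return f"blocked {action}: awaiting human approval"

print(run_action("summarize", "report.txt"))                      # autonomous
print(run_action("send_email", "draft"))                          # blocked
print(run_action("send_email", "draft", approver=lambda a, p: True))
```

Starting with a small allow-list and widening it as monitoring builds confidence mirrors the "minimal viable agent first, then expand" practice from the session.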
