Agent Architecture Review Sample (Azure Samples)

An official Azure Sample that reviews architecture inputs, detects risks, and generates interactive Excalidraw + PNG diagrams, with CLI, API, and hosted-agent deployment paths.

2026 | Python, Agent Framework, Azure OpenAI, FastAPI + React | GitHub

This project is now part of the official Azure Samples ecosystem, and I maintain it as a practical reference for architecture review workflows.

Instead of forcing teams to manually redraw system diagrams before every review, this sample accepts architecture context in almost any format and produces a complete, review-ready package:

  • inferred or parsed architecture model,
  • severity-grouped risks,
  • concrete recommendations,
  • and editable + exportable visuals.

Motive / Why I Built This

I built this because architecture review quality is often limited by documentation friction, not engineering capability.

Across real projects, I repeatedly saw the same bottlenecks:

  • design intent spread across YAML, markdown, code comments, and meeting notes,
  • delayed review cycles because visuals had to be recreated manually,
  • inconsistent risk analysis depth across teams,
  • and weak handoff from local experimentation to hosted deployment.

The goal was simple: meet engineers where they already work, then convert raw architecture context into something immediately reviewable.


Evolution: StructureIQ → Official Azure Sample

This sample evolved from my earlier project, StructureIQ.

The Azure Samples version significantly expanded the scope:

  • from a primarily local tooling flow to multi-experience delivery (CLI, Web App, Hosted Agent),
  • from project-only docs to sample-grade onboarding + deployment guidance,
  • from single output path to report + diagram + export bundles,
  • from “works on my machine” to enterprise-friendly deployment + RBAC clarity.

What I Built

I implemented a full architecture-review pipeline designed for practical engineering use:

  1. Smart parsing for structured input (YAML/Markdown/plaintext arrows).
  2. LLM inference fallback for unstructured documents.
  3. Risk analysis engine with both deterministic rules and context-aware AI insights.
  4. Diagram generation via Excalidraw MCP + PNG export.
  5. Structured reporting (summary, risks, recommendations, component map).
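The five stages above compose into a single pipeline. The sketch below shows the shape of that composition under stated assumptions: every function and field name here is illustrative, not the sample's actual API, and the LLM and rendering stages are stubbed out.

```python
# Sketch of the five-stage review pipeline. All names are illustrative
# assumptions, not the sample's actual API.

def try_parse_structured(raw: str):
    """Stage 1: deterministic parse; returns None when no structure is found."""
    if "->" not in raw:
        return None
    parts = [p.strip() for p in raw.split("->")]
    return {"components": parts,
            "connections": [[a, b] for a, b in zip(parts, parts[1:])]}

def infer_with_llm(raw: str):
    """Stage 2: LLM fallback for free-form documents (stubbed here)."""
    return {"components": [], "connections": []}

def analyze_risks(model):
    """Stage 3: rule pass; a real run also adds context-aware LLM findings."""
    if not model["connections"]:
        return []
    return [{"severity": "medium", "kind": "single-chain",
             "detail": "linear dependency chain with no redundancy"}]

def review_architecture(raw: str) -> dict:
    model = try_parse_structured(raw) or infer_with_llm(raw)   # stages 1-2
    risks = analyze_risks(model)                               # stage 3
    diagram = {"excalidraw": None, "png": None}                # stage 4 (stubbed)
    return {"model": model, "risks": risks, "diagram": diagram}  # stage 5
```

The key design point the sketch preserves is the ordering: the cheap deterministic parse is always attempted first, and model inference only runs when it fails.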

I also exposed the same core engine through three usage models:

  • Local CLI (run_local.py)
  • REST API + Web UI (api.py + frontend/)
  • Hosted Agent mode (main.py + agent.yaml) for Foundry deployment

Core Capabilities

  • Input adaptability: supports YAML, markdown, plaintext chains, and free-form docs
  • Risk intelligence: identifies single points of failure (SPOFs), scalability bottlenecks, security gaps, and anti-pattern concerns
  • Visual artifacts: interactive .excalidraw + high-resolution .png
  • Automation-ready outputs: structured review_bundle.json
  • Deployment Flexibility:
    • Web App on Azure App Service (FastAPI + React)
    • Hosted Agent on Microsoft Foundry (OpenAI Responses-compatible)
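Input adaptability means one entry point has to route several notations to the right parser. A minimal format detector in that spirit might look like the following; the heuristics and the `detect_format` name are my assumptions, not the repo's parser.

```python
import re

def detect_format(raw: str) -> str:
    """Guess which parser should handle the input.

    'freeform' is the signal to fall back to LLM inference.
    Heuristics here are illustrative, not the sample's actual rules.
    """
    stripped = raw.strip()
    # YAML: a document marker, or any line that is a bare "key:" mapping.
    if stripped.startswith("---") or re.search(r"^\s*\w[\w -]*:\s*$",
                                               stripped, re.MULTILINE):
        return "yaml"
    # Markdown: headings or fenced blocks.
    if stripped.startswith("#") or "```" in stripped:
        return "markdown"
    # Plaintext arrow chains like "frontend -> api -> db".
    if "->" in stripped:
        return "arrows"
    return "freeform"
```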

Internal Review Flow

At runtime, the experience is designed to stay predictable:

  1. Parse known structure when available for fast deterministic results.
  2. Fall back to model inference when architecture intent is embedded in prose/code/docs.
  3. Build a normalized component + connection model.
  4. Run risk passes and create prioritized recommendations.
  5. Render diagrams and return exportable artifacts.

This keeps the workflow useful for both formal architecture specs and “messy but real” engineering notes.
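Step 3 of the flow hinges on one normalized model that both parsing paths produce. A hypothetical shape for it, sketched with dataclasses (field names are assumptions, not the sample's schema):

```python
# Assumed shape of the normalized component + connection model that the
# risk passes consume; not the sample's actual schema.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str = "service"            # e.g. service, database, queue, gateway
    tags: list[str] = field(default_factory=list)

@dataclass
class Connection:
    source: str
    target: str
    protocol: str = "unknown"        # e.g. https, amqp, sql

@dataclass
class ArchitectureModel:
    components: list[Component]
    connections: list[Connection]

    def fan_in(self, name: str) -> int:
        """Count edges converging on one component (useful for SPOF checks)."""
        return sum(1 for c in self.connections if c.target == name)
```

Normalizing early is what lets the same risk rules run regardless of whether the input arrived as YAML, markdown, or free-form prose.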


Scenario Walkthrough (What You See End-to-End)

Typical flow:

  1. Upload or paste architecture input
  2. Agent parses or infers components + connections
  3. Risk engine produces severity-grouped findings
  4. Diagram renders in Excalidraw format
  5. Recommendations and downloadable artifacts are generated

Generated outputs:

  • architecture.excalidraw
  • architecture.png
  • review_bundle.json
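Of the three outputs, review_bundle.json is the automation-facing one. A plausible shape for it, written as a Python dict for illustration (this is an assumed schema, not the sample's exact one):

```python
import json

# Assumed shape of review_bundle.json; the sample's real schema may differ.
bundle = {
    "summary": "Three-tier web app with a single shared database.",
    "components": [{"name": "web"}, {"name": "api"}, {"name": "db"}],
    "connections": [["web", "api"], ["api", "db"]],
    "risks": [
        {"severity": "high", "kind": "SPOF",
         "detail": "db has no replica or failover target"},
    ],
    "recommendations": [
        "Add a read replica or failover group for the database tier.",
    ],
    "artifacts": {"diagram": "architecture.excalidraw",
                  "png": "architecture.png"},
}

print(json.dumps(bundle, indent=2))  # what a consumer would read from disk
```

A structured bundle like this is what makes the output pipeline-friendly: CI gates or review bots can key off severity without scraping prose.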

Screenshots & Demo

  • Architecture overview diagram
  • UI walkthrough screenshots
  • Demo video

These assets are intentionally included so users can quickly validate UI behavior and output quality without first deploying the full stack.


Deployment Paths

Option A — Web App (Azure App Service)

  • FastAPI backend + React frontend
  • custom REST endpoints for integration workflows
  • drag-and-drop UI and downloadable outputs

Option B — Hosted Agent (Microsoft Foundry)

  • deploy via VS Code Foundry extension
  • managed identity + platform scaling + conversation state
  • publish to Teams / M365 Copilot / stable endpoint

A detailed deployment guide is included with the sample.


Technical Stack

  • Microsoft Agent Framework (azure-ai-agentserver-agentframework)
  • Azure OpenAI (GPT-4.1 recommended in sample docs)
  • Excalidraw MCP Server
  • FastAPI + React
  • PyYAML, Pillow, Rich

Engineering Decisions

  • Dual-path risk engine: rule-based for consistency, LLM-based for context depth.
  • Editable-first visual output: Excalidraw chosen so teams can continue design discussion after generation.
  • Multi-surface delivery: same core logic powers CLI, API, and hosted agent paths.
  • Deployment parity: supports both infra-controlled (App Service) and managed-agent (Foundry) operating models.
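The deterministic half of the dual-path risk engine can be as simple as graph rules over the normalized model. A sketch of one such rule; the fan-in threshold and all names are illustrative assumptions, not the sample's actual rules:

```python
def deterministic_risks(components, connections):
    """Rule pass: flag fan-in hotspots as SPOF candidates.

    This is the consistency half of the dual-path engine; the LLM pass
    would add context the rules cannot see (e.g. intent stated in docs).
    Threshold of 2 is an illustrative assumption.
    """
    findings = []
    for name in components:
        fan_in = sum(1 for _, dst in connections if dst == name)
        if fan_in >= 2:
            findings.append({
                "severity": "high",
                "kind": "SPOF",
                "component": name,
                "detail": f"{fan_in} upstream dependencies converge on {name}",
            })
    return findings
```

Because rules like this are pure functions of the model, they return identical findings on every run, which is exactly the consistency property the LLM pass alone could not guarantee.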

Challenges I Solved

  • Input variability: designed parser + LLM fallback so unstructured input still works
  • Actionability gap: made outputs practical (risk buckets + recommendations + exportable artifacts)
  • Visualization friction: generated editable Excalidraw plus PNG for docs/presentations
  • Deployment complexity: supported both custom web path and managed hosted-agent path

Where This Helps Most

  • architecture review preparation for engineering/design councils,
  • modernization planning across service-heavy systems,
  • early risk surfacing before release gates,
  • and faster communication between platform, security, and app teams.

Why This Matters

Many sample repos show isolated features; this one focuses on end-to-end review outcomes.

It emphasizes:

  • how pieces fit together in production-like flows,
  • what output users should expect,
  • and how to transition from local experimentation to hosted deployment.

That makes it a stronger sample for learning, demos, and enterprise discussions.


Maintainer Note

I’m actively maintaining and improving:

  • scenario quality,
  • architecture clarity and review depth,
  • deployment reliability and docs,
  • and UX quality across CLI + Web + Hosted Agent flows.

If you’re building hosted Foundry agents, this repo is intended to be a practical starting point.