Stop Drawing Architecture Diagrams Manually: Meet the Open Source AI Architecture Review Sample
Microsoft Tech Community | February 21, 2026
Motive / Why I Wrote This
Architecture review should improve design quality. In practice, it often gets blocked by preparation work: gathering context, redrawing diagrams, and translating scattered notes into something reviewable.
I wrote this article to document a different approach:
- accept architecture input as teams already produce it,
- generate a useful first-pass review quickly,
- and return artifacts that can be reused in design meetings, docs, and follow-up tasks.
The core motivation was reducing review friction without lowering review rigor.
Problem Context: Why Existing Workflows Break
Across projects, architecture intent is rarely stored in one clean format. It usually exists as a mix of:
- partial YAML and config files,
- markdown design notes,
- issue discussions,
- service diagrams from older versions,
- and ad-hoc plain text descriptions.
When teams manually normalize this for review, they lose time and often lose detail. That delay is exactly where important risks get missed.
What I Built
The Architecture Review Agent is an end-to-end architecture analysis pipeline that:
- accepts architecture input in structured or unstructured form,
- parses or infers components and relationships,
- runs severity-grouped risk analysis,
- renders an interactive Excalidraw diagram,
- exports production-useful artifacts (.png, .excalidraw, .json).
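The stages above can be sketched in a few lines. This is a hypothetical outline, not the sample's actual API: the class names, the "->" edge convention, and the placeholder risk rule are all illustrative assumptions.

```python
# Hypothetical sketch of the review pipeline stages; names and the
# "src -> dst" input convention are illustrative, not the sample's API.
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    kind: str  # e.g. "service", "queue", "datastore"


@dataclass
class ReviewResult:
    components: list = field(default_factory=list)
    risks: dict = field(default_factory=dict)  # severity -> [finding, ...]


def run_review(raw_input: str) -> ReviewResult:
    """Parse (or infer) components, then group findings by severity."""
    result = ReviewResult()
    # 1. Parse or infer components from structured/unstructured input.
    for line in raw_input.splitlines():
        if "->" in line:  # toy convention: "web -> queue" describes an edge
            src, dst = (part.strip() for part in line.split("->", 1))
            result.components.extend(Component(n, "service") for n in (src, dst))
    # 2. Severity-grouped risk analysis (placeholder rule for illustration).
    if not any(c.kind == "datastore" for c in result.components):
        result.risks.setdefault("high", []).append(
            "No data store declared; persistence path is unclear."
        )
    return result
```

The real pipeline additionally renders an Excalidraw diagram and writes the exported artifacts; those steps are omitted here to keep the sketch self-contained.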
I intentionally supported three execution surfaces:
- CLI mode for fast local iteration,
- Web App mode (FastAPI + React) for collaborative UI-based review,
- Hosted Agent mode on Microsoft Foundry for managed enterprise deployment.
Evolution: StructureIQ → Official Azure Sample
This did not begin as an official sample.
It evolved from my original project:
Then expanded into:
The evolution required more than a rename. It involved:
- moving from one usage path to CLI + Web + Hosted Agent experiences,
- improving outputs from “diagram generation” to review-oriented analysis + recommendations,
- producing clearer setup, deployment, and operational guidance,
- and aligning the repository with expectations for official sample consumption.
Why This Matters in Real Teams
Architecture communication often fails at handoff points: between platform teams, app teams, and review stakeholders.
This sample addresses that by making output immediately actionable:
- severity-grouped risks,
- structured recommendations,
- component mapping,
- and editable diagrams teams can refine instead of recreating.
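To make "immediately actionable" concrete, here is an illustrative shape for a review result. The field names and values are assumptions for illustration, not the sample's documented schema.

```python
# Illustrative review-result shape; field names are assumptions,
# not the sample's documented output schema.
example_review = {
    "risks": {  # severity-grouped findings
        "high": [
            {
                "component": "payments-db",
                "finding": "Single-region deployment for a critical data store.",
                "recommendation": "Enable geo-redundancy or document the RTO/RPO trade-off.",
            }
        ],
        "medium": [],
        "low": [],
    },
    "components": [  # component mapping
        {"name": "payments-api", "type": "service"},
        {"name": "payments-db", "type": "datastore"},
    ],
    "artifacts": ["diagram.png", "diagram.excalidraw", "review.json"],
}
```

Because each finding pairs a component with a recommendation, the output can be turned directly into follow-up tasks rather than re-summarized by hand.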
What Readers and Builders Get (Practically)
- A realistic scenario walkthrough with expected review outcomes
- Deployment guidance (deployment.md) for hosted execution
- Clear guidance for both CLI-first and UI-first workflows
- A reusable review model that can fit multiple architecture styles
Scenario Walkthrough Used in the Sample
Primary scenario:
This scenario was chosen because it combines complexity and realism:
- multiple services,
- event-driven flows,
- critical data stores,
- compliance-sensitive transaction paths.
It is useful for demonstrating both parsing quality and risk analysis depth under enterprise-like constraints.
Screenshots & Demo Artifacts
Architecture overview

Product UI screenshots
Demo video
These media assets are included to make evaluation easier for new users before they commit to setup or deployment.
Deployment Story (Why Two Paths)
I kept both deployment paths intentionally because organizations optimize for different constraints.
Option A — Web App on Azure App Service
- FastAPI + React UI
- custom REST endpoints (/api/review, /api/infer)
- ideal for teams that want API ownership and custom UI control
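A minimal sketch of calling the Web App's review endpoint, assuming a JSON POST body; the field names ("input", "format") and the hostname are assumptions about the payload shape, not the documented contract.

```python
# Hedged sketch of calling the Web App's /api/review endpoint.
# The payload fields ("input", "format") and hostname are assumptions.
import json
import urllib.request


def build_review_request(base_url: str, architecture_text: str):
    """Build (but do not send) the POST request for /api/review."""
    payload = {"input": architecture_text, "format": "auto"}
    return urllib.request.Request(
        f"{base_url}/api/review",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_review_request(
    "https://my-review-app.azurewebsites.net",  # placeholder hostname
    "web -> queue -> worker -> db",
)
# urllib.request.urlopen(req) would send it; omitted so the sketch
# runs without a live deployment.
```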
Option B — Hosted Agent on Microsoft Foundry
- managed infra, managed identity, managed scaling
- OpenAI Responses-compatible endpoint
- channel publishing potential (Teams / M365 Copilot)
- ideal for teams that want lower operational overhead and managed deployment behavior
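Because the hosted agent exposes an OpenAI Responses-compatible endpoint, clients can talk to it with a standard Responses-style request. The endpoint URL, path, deployment name, and token source below are all placeholders; only the "model"/"input" body shape follows the Responses convention.

```python
# Sketch of a Responses-style call to the hosted agent. Endpoint URL,
# path, deployment name, and token are placeholders, not real values.
import json
import urllib.request

FOUNDRY_ENDPOINT = "https://<your-project>.services.ai.azure.com"  # placeholder

payload = {
    # "model" and "input" follow the OpenAI Responses request shape.
    "model": "architecture-review-agent",  # assumed deployment name
    "input": "Review this architecture: web -> queue -> worker -> db",
}
req = urllib.request.Request(
    f"{FOUNDRY_ENDPOINT}/openai/responses",  # assumed path
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",  # e.g. from managed identity
    },
    method="POST",
)
# urllib.request.urlopen(req) would execute the call against a live agent.
```

Compatibility with the Responses shape is what makes channel publishing (Teams / M365 Copilot) feasible without a bespoke client.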
Deployment guide:
Design Principles Behind the Sample
- Input realism over template purity: handle what teams actually provide.
- Actionability over novelty: prioritize outputs that change engineering decisions.
- Editable outputs by default: generated artifacts should be starting points, not dead ends.
- Deployment optionality: one core engine, multiple production paths.
GitHub Stats Snapshot
Live repository growth can be tracked directly on the linked project page:
Key Takeaways
- Architecture review tools should adapt to mixed-quality, mixed-format input.
- Quality is improved when outputs are structured, shareable, and editable.
- The combination of deterministic parsing + LLM fallback is practical in real teams.
- Deployment flexibility is essential for adoption across different operating models.
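The "deterministic parsing + LLM fallback" pattern can be reduced to a small sketch: try strict parsing first, and only hand input to a model when it is not machine-readable. `infer_with_llm` here is a stand-in stub for a real model call, and JSON stands in for whatever structured formats the sample accepts.

```python
# Minimal sketch of deterministic parsing with an LLM fallback.
# infer_with_llm is a stub; JSON stands in for any structured format.
import json


def infer_with_llm(text: str) -> dict:
    """Placeholder for an LLM-based inference call."""
    return {"components": [], "inferred": True, "source": text[:40]}


def parse_architecture(raw: str) -> dict:
    try:
        parsed = json.loads(raw)   # deterministic path: valid structured input
        parsed["inferred"] = False
        return parsed
    except json.JSONDecodeError:
        return infer_with_llm(raw)  # fallback path: free-form design notes
```

The design choice is that the cheap, reproducible path always runs first, so the model is only consulted for the inputs that genuinely need inference.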
Links
- Official sample repo: Azure-Samples/agent-architecture-review-sample
- Origin project: StructureIQ
- Project detail page: Agent Architecture Review Sample