
HEROw Marketing Agent

Python LangChain/LangGraph

Overview

HEROw Marketing Agent started from a very practical pain: producing marketing content for several platforms and multiple clients was consuming dozens of hours per week just in manual configuration. Each project had its own voice, its own platform rules, and its own set of specialized agents — something generic no‑code tools couldn’t model precisely enough.

To fix this, I built a Python CLI that orchestrates multiple specialized LLM agents using LangGraph. All projects, agents, and platforms are configured via Markdown files with YAML frontmatter, versioned in git. The tool reads this Markdown “brain”, decides which agents should act, runs tools in parallel, and returns per‑platform content in a single CLI session.
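To make the "Markdown brain" concrete, here is a minimal sketch of what a project file and its parsing might look like. The field names (`name`, `voice`, `platforms`) are illustrative, and the parser is a deliberately simplified stand-in for the python-frontmatter library the project actually uses: it handles flat `key: value` pairs only.

```python
# A hypothetical project file as the tool might store it (field names illustrative).
PROJECT_MD = """\
---
name: acme-shop
voice: playful, concise
platforms: instagram, linkedin
---
Acme Shop sells handmade ceramics. Avoid discount-heavy messaging.
"""

def parse_frontmatter(text: str) -> tuple[dict, str]:
    """Split a Markdown document into frontmatter metadata and body.

    Simplified stand-in for python-frontmatter: flat key/value pairs only,
    which is enough to illustrate the configuration layer.
    """
    _, raw_meta, body = text.split("---\n", 2)
    meta = {}
    for line in raw_meta.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

meta, body = parse_frontmatter(PROJECT_MD)
# meta drives prompt assembly; body becomes the agent's system context.
```

Because the whole configuration is plain text, a behavior change is just a git diff on a `.md` file.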

This was a solo project: I designed the architecture, implemented the LangGraph orchestrator, created the provider abstraction (Anthropic, Gemini, OpenAI), integrated external tools (web search, MCP), and shaped the command‑line UX with questionary and Rich. The result is a consistent ~20 hours/week reduction in internal content-creation time at HEROw, serving multiple projects and up to 9 different platforms without manual reconfiguration.

Key Differentiator

What makes HEROw Marketing Agent stand out is not that it is yet another content-automation tool, but the combination of three engineering decisions:

  1. Full provider abstraction via Protocol: the orchestrator never talks directly to Anthropic, Gemini, or OpenAI SDKs. Every response goes through a single LLMProvider interface and a coerce_content_blocks() function that normalizes content blocks and tool calls into a stable internal format. This makes adding or switching providers possible without touching orchestration logic.
  2. Markdown + YAML as configuration layer: instead of no‑code dashboards, each project and sub‑agent is a .md file with YAML frontmatter (context, voice, specialties). Adjusting behavior becomes editing text in git, not clicking through UIs. This makes the system more traceable, auditable, and maintainable for engineering teams.
  3. LangGraph for tool loops and state: instead of a hand‑rolled while loop with sprawling local variables, execution is modeled as a typed StateGraph with dedicated nodes (call_model, execute_tools, ask_user_interactive) and conditional edges. Tools run in parallel via asyncio.gather, and the state is an immutable TypedDict — easier to test, debug, and extend.
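The first decision can be sketched in a few lines. This is not the project's actual interface, only an illustration of the pattern: a typing.Protocol that every provider adapter satisfies, plus a hypothetical coerce_content_blocks() whose normalization rules (bare strings become text blocks, tagged dicts pass through) are assumptions for the example.

```python
from typing import Any, Protocol

class LLMProvider(Protocol):
    """The only interface the orchestrator sees; each SDK gets an adapter.

    Signature is illustrative — the real project also exposes
    get_model_capabilities and format_tools.
    """
    def create_message(
        self, system: str, messages: list[dict], tools: list[dict]
    ) -> Any: ...

def coerce_content_blocks(raw: Any) -> list[dict]:
    """Normalize a provider response into stable {type, ...} blocks.

    Hypothetical rules: tagged dicts pass through, bare strings become
    text blocks, anything else is stringified.
    """
    blocks = raw if isinstance(raw, list) else [raw]
    out = []
    for block in blocks:
        if isinstance(block, dict) and "type" in block:
            out.append(block)
        else:
            out.append({"type": "text", "text": str(block)})
    return out
```

Downstream, the orchestrator only ever matches on `block["type"]`, so swapping Anthropic for Gemini never touches orchestration code.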

Architecture

  • CLI (questionary + Rich): interactive terminal interface; project and platform selection with recency‑based ordering, progress spinners, and formatted display of generated content.
  • Orchestrator (LangGraph StateGraph): execution graph with call_model, execute_tools, and ask_user_interactive nodes plus a should_continue function that decides between ending, calling tools, or asking the user.
  • Agent Router: module that computes a lexical score between the user briefing and agent descriptions using lightweight tokenization + stemming; ranks candidates and injects routing hints into the system prompt.
  • Parallel Sub‑Agent Prefetch: async layer that runs high‑scoring sub‑agents with asyncio.gather before the first graph step, feeding their outputs as initial context to the orchestrator.
  • Providers Layer (LLMProvider Protocol): provider‑specific implementations for Anthropic, Gemini, and OpenAI, all exposing create_message, get_model_capabilities, and format_tools, integrated via coerce_content_blocks() to normalize {type, ...} blocks.
  • Tools Layer: HTTP and scraping via httpx + lxml, DuckDuckGo search via ddgs, and MCP support, all wired into LangGraph tools with concurrent execution through asyncio.gather.
  • Config Engine (Markdown + python-frontmatter): parser for project and agent .md files, extracting YAML frontmatter (metadata) and body (system context) to assemble prompts dynamically.
  • Testing & Quality: Pytest suite covering orchestrator, providers, sub‑agents, CLI, and tools; Ruff for linting and Black for formatting.
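The Agent Router's lexical scoring can be sketched as follows. The suffix-stripping "stemmer" and the scoring formula here are assumptions chosen for brevity; the real router uses its own tokenization and weighted field scoring.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase word tokens with a crude suffix-stripping stemmer (a sketch)."""
    words = re.findall(r"[a-z]+", text.lower())
    return [re.sub(r"(ing|ed|es|s)$", "", w) for w in words if len(w) > 2]

def score_agent(briefing: str, description: str) -> float:
    """Fraction of briefing tokens that also appear in the agent description."""
    brief = Counter(tokenize(briefing))
    desc = set(tokenize(description))
    hits = sum(count for tok, count in brief.items() if tok in desc)
    return hits / max(1, sum(brief.values()))

# Hypothetical sub-agents; in the real tool these come from .md files.
agents = {
    "instagram-captions": "short punchy instagram captions and hashtags",
    "seo-blog": "long form seo blog posts and keyword research",
}
briefing = "write three instagram captions for our new ceramics line"
ranked = sorted(agents, key=lambda a: score_agent(briefing, agents[a]), reverse=True)
```

The top-ranked names are then injected as routing hints into the system prompt, and the highest scorers are prefetched in parallel before the first graph step.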

Technical Highlights

  • Abstracted three different SDKs (Anthropic, Gemini, OpenAI) behind a single LLMProvider Protocol and a coerce_content_blocks() function, eliminating provider‑specific conditionals in the orchestrator.
  • Replaced a custom tool‑loop while with a LangGraph StateGraph using typed state and conditional edges, enabling robust parallel tool execution and mid‑flow user interaction.
  • Implemented lightweight lexical agent routing (stemming + weighted field scoring) to pre‑select sub‑agents and reduce latency, injecting routing hints into the model’s system prompt.
  • Modeled projects and agents as Markdown + YAML frontmatter, parsed via python-frontmatter, so behavior can be controlled entirely through git‑versioned text files.
  • Built a Python 3.11 CLI with questionary + Rich, including recency‑aware project selection, progress feedback, and a guided 5‑step flow (project → platforms → briefing → review → follow‑up).
  • Integrated HTTP fetching, basic scraping, DuckDuckGo search (ddgs), and MCP drivers as LangGraph tools with concurrent execution via asyncio.gather.
  • Achieved an estimated ~20 hours/week reduction in internal marketing content workload at HEROw, while serving multiple independent project contexts with zero manual reconfiguration between runs.
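The concurrent tool-execution pattern mentioned throughout is the standard asyncio.gather idiom; the sketch below uses a stand-in tool (names and signatures are illustrative, not the project's real tool API).

```python
import asyncio

async def run_tool(name: str, delay: float) -> str:
    """Stand-in for a real tool call (HTTP fetch, search, MCP)."""
    await asyncio.sleep(delay)
    return f"{name}:done"

async def execute_tools(calls: list[tuple[str, float]]) -> list[str]:
    """Run all requested tool calls concurrently; results keep call order."""
    return await asyncio.gather(*(run_tool(n, d) for n, d in calls))

results = asyncio.run(execute_tools([("search", 0.05), ("fetch", 0.01)]))
```

Because asyncio.gather preserves input order regardless of completion order, the orchestrator can pair each result with the tool call that produced it without extra bookkeeping.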