n8n AI Workflow Bundle v3

14 enterprise workflows for local automation with error handling, dual-LLM fallback and structured logging.

Updated: March 2026 · n8n 2.x · v3.0
📋 At a Glance

14 production-ready n8n workflows in 5 categories: Email, Social Media, Revenue, Infrastructure and Lead Generation. Every workflow includes a built-in error handler, dual-LLM fallback (Ollama + cloud) and structured logging. All data stays local – GDPR compliant.

What are AI Workflows in n8n?

An AI workflow in n8n is a chain of nodes that automates a business process while using a Large Language Model (LLM) for text processing. What distinguishes it from a classic n8n workflow: at least one node calls an LLM, either locally via Ollama or through a cloud API.

Typical use cases: email summarization, content generation, lead qualification, support responses and data extraction from unstructured text. The LLM handles tasks that rule-based automation cannot solve – understanding context, tone and intent.
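The local path can be sketched with a plain HTTP call to Ollama's /api/generate endpoint. The payload-building step below is a minimal illustration; the model name and prompt are example choices, not the bundle's actual configuration.

```typescript
// Minimal sketch of the LLM step in such a workflow: build a request body
// for Ollama's /api/generate endpoint (model and prompt are examples).
interface OllamaRequest {
  model: string;
  prompt: string;
  stream: boolean;
}

function buildSummaryRequest(emailBody: string, model = "mistral"): OllamaRequest {
  return {
    model,
    prompt: `Summarize this email in two sentences:\n\n${emailBody}`,
    stream: false, // request a single JSON response instead of a token stream
  };
}

// The payload would be POSTed to http://localhost:11434/api/generate,
// e.g. from an n8n HTTP Request node.
const req = buildSummaryRequest("Hi, can we move the demo to Friday?");
console.log(req.model); // "mistral"
```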

Workflow Categories

Email (3 Workflows)

  • Daily Digest – summarize emails and post to chat
  • Auto-Responder – answer common inquiries with AI
  • Lead Capture – extract contact data from emails

Social Media (3 Workflows)

  • Content Generator – create posts from RSS feeds
  • Post Scheduler – schedule and publish posts
  • Engagement Monitor – track mentions and comments

Revenue (3 Workflows)

  • Stripe Payment Pipeline – webhook to download link
  • Weekly Report – aggregate revenue data and report
  • Subscription Lifecycle – trial, cancellation, upgrade

Infrastructure (3 Workflows)

  • Health Check – monitor containers, Ollama, disk usage
  • Backup Monitor – verify backup status and alert
  • Service Restart – auto-restart failed services

Lead Generation (2 Workflows)

  • Lead Qualification – AI scoring by company size, industry, inquiry quality
  • Lead Nurture Sequence – personalized follow-up emails based on score
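As a rough illustration of score-based routing: the bundle's actual scoring is done by an LLM, so the weights, thresholds and industry list below are invented purely for the example.

```typescript
// Hypothetical lead-scoring sketch. The real workflow delegates scoring to
// an LLM; this only shows the shape a 0-100 score could take.
type Lead = { companySize: number; industry: string; inquiryQuality: number };

function scoreLead(lead: Lead): number {
  let score = 0;
  if (lead.companySize >= 50) score += 30;                      // headcount threshold (example)
  if (["saas", "finance"].includes(lead.industry)) score += 30; // target industries (example)
  score += Math.min(Math.max(lead.inquiryQuality, 0), 40);      // inquiry quality, capped at 40
  return score;
}
```

A threshold on this score (say, 60) could then decide whether the nurture sequence starts for a given lead.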

Workflow Architecture

All 14 workflows follow the same architecture. The error handler is not a separate workflow but integrated as a branch into every workflow.

(Diagram: shared workflow architecture)

Error Handling Pattern

Error handling in v3 is based on the n8n Error Trigger node. This node fires automatically when any node in the workflow throws an error – whether it is an HTTP timeout, an Ollama failure or invalid data.

What the Error Handler Does

  1. Structured logging – workflow name, node name, error message and timestamp as JSON
  2. Send notification – team chat, Slack or email (configurable)
  3. Trigger retry (optional) – configurable delay (default: 30 seconds) and maximum retry count (default: 3)
  4. Escalation – once the retry limit is reached, an escalation message is sent
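The log entry from step 1 can be sketched as follows. The field names are assumptions modeled on the description above, not the bundle's exact schema.

```typescript
// Sketch of a structured error-log entry: workflow name, node name,
// error message and timestamp as JSON (field names are illustrative).
interface ErrorLogEntry {
  workflow: string;
  node: string;
  message: string;
  timestamp: string; // ISO 8601
  retryCount: number;
}

function buildErrorLog(
  workflow: string,
  node: string,
  message: string,
  retryCount = 0,
): ErrorLogEntry {
  return { workflow, node, message, retryCount, timestamp: new Date().toISOString() };
}

const entry = buildErrorLog("Daily Digest", "HTTP Request", "timeout after 30s");
console.log(JSON.stringify(entry)); // one JSON line per failure, easy to grep or ship to a log store
```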
💡 Configuring the Error Handler

The error handler uses the same notification credentials as the main workflow. You only need to set up credentials once – the error handler picks them up automatically.

Dual-LLM Pattern

Workflows with text generation use two LLM sources in a fallback chain. The primary call goes to Ollama (local). Only when Ollama is unreachable or returns an error does the cloud fallback activate.
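The control flow can be sketched as a generic wrapper. This is a sketch only, assuming both LLM calls are exposed as async functions; the real workflows implement the same logic as n8n branches rather than code.

```typescript
// Fallback chain: try the local call first, use the cloud call
// only when the local one throws.
async function withFallback<T>(
  primary: () => Promise<T>,  // e.g. the local Ollama call
  fallback: () => Promise<T>, // e.g. the cloud provider call
): Promise<T> {
  try {
    return await primary();
  } catch {
    // In local-only mode this would rethrow instead of falling back,
    // so the error handler can report the Ollama outage.
    return await fallback();
  }
}
```

Usage would look like `withFallback(callOllama, callCloud)`, where both callables are defined elsewhere in the workflow.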

(Diagram: dual-LLM fallback chain)

Configuration

  • Ollama URL – default: http://localhost:11434, configurable per workflow
  • Model – freely selectable (Mistral, Llama, Qwen, etc.)
  • Cloud provider – OpenAI or Anthropic (credential stored in n8n)
  • Local-only mode – disables the cloud branch; the error handler reports Ollama outages instead
⚠️ GDPR Note

If you enable the cloud fallback, data is sent to external servers. Verify whether this is GDPR compliant for your use case. In local-only mode, no data leaves your network.

Best Practices for n8n AI Workflows

1. Never hardcode credentials in nodes

Use n8n Credentials for all external services. This simplifies rotation and auditing, and keeps API keys out of exported workflows.

2. Test mode before activation

Test every workflow in manual mode before activating it. Especially test the error handler β€” trigger an error deliberately (e.g., wrong Ollama URL) and verify the notification arrives.

3. One workflow per task

Do not build mega-workflows that do everything. Each workflow in the bundle has exactly one responsibility. This simplifies debugging, monitoring and independent activation/deactivation.

4. PostgreSQL over SQLite

For production n8n instances: use PostgreSQL as backend database. SQLite has locking issues with parallel workflow executions and does not scale beyond a single node.

5. Mind n8n 2.x expression syntax

In n8n 2.x, expressions must start with an = sign: ={{ $json.name }} instead of just {{ $json.name }}. Date formatting uses Luxon syntax: yyyy-MM-dd, not YYYY-MM-DD.
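A tiny check illustrates the prefix rule. The helper is not part of n8n, just a way to show which stored values count as expressions.

```typescript
// Illustrative only: n8n stores expression-mode field values with a
// leading "=", followed by the {{ }} template.
function hasExpressionPrefix(value: string): boolean {
  return value.startsWith("=");
}

console.log(hasExpressionPrefix("={{ $json.name }}")); // true
console.log(hasExpressionPrefix("{{ $json.name }}"));  // false – missing the "=" prefix

// Inside an expression, dates use Luxon tokens:
//   {{ $now.toFormat('yyyy-MM-dd') }}  – correct (Luxon)
//   {{ $now.toFormat('YYYY-MM-DD') }}  – wrong: YYYY/DD are Moment.js tokens
```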

Requirements

| Component | Minimum | Recommended |
| --- | --- | --- |
| n8n | 2.0+ | 2.x (latest version) |
| Ollama | 0.3+ | Latest version with GPU support |
| Database | SQLite (built-in) | PostgreSQL 15+ |
| Docker | Docker Compose | Docker Swarm for multi-node |
| GPU (for Ollama) | 6 GB VRAM | 24 GB VRAM (RTX 3090/4090) |

Sources

Related articles: n8n for Beginners · RAG Guide

For implementation support, find resources at ai-engineering.at.

Next step: ship workflows that stay operable

Use proven n8n patterns, templates and integrations for workflows that stay local, documented, and auditable.
