AI Papers — Key Research Made Accessible
The most important papers on LLMs, RAG, agents, and AI safety — summarized and explained.

Attention Is All You Need (2017)
The Transformer paper: Why Self-Attention changed the entire AI landscape.
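The paper's core operation fits in a few lines. A minimal, dependency-free sketch of scaled dot-product attention — one head, no masking, no learned projections — where each query row is turned into a weighted average of the value rows:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V, with plain Python lists of lists.
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # one weight per key/value row, summing to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

In the full Transformer, Q, K, and V come from learned linear projections and many such heads run in parallel; this sketch shows only the attention kernel itself.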
Retrieval-Augmented Generation (2020)
RAG explained: How LLMs become better and more reliable through external knowledge sources.
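The retrieve-then-generate pattern can be sketched without any model at all. Here retrieval is a toy bag-of-words cosine similarity (real systems use dense embeddings), and the "generate" step is just prompt assembly; the document strings are invented for illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and return the top k.
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, docs, k=2):
    # Retrieved passages become grounding context for the LLM call.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The prompt produced by `build_prompt` is what would be sent to the LLM, which is the step that makes the answer more reliable: the model cites retrieved text instead of relying on its parameters alone.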
LoRA: Low-Rank Adaptation (2021)
Parameter-efficient fine-tuning: Adapting large models without retraining everything.
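The arithmetic behind the savings is simple: instead of updating a d×d weight matrix W, LoRA freezes W and trains two small factors B (d×r) and A (r×d), adding ΔW = (α/r)·BA. A toy sketch with plain Python lists — the dimensions and values are chosen only for illustration; real models have d in the thousands:

```python
def matmul(A, B):
    # Plain-Python matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

d, r, alpha = 8, 2, 4
W = [[0.0] * d for _ in range(d)]   # frozen pretrained weight (zeros here)
B = [[0.1] * r for _ in range(d)]   # trainable, d x r
A = [[0.1] * d for _ in range(r)]   # trainable, r x d

delta = matmul(B, A)                # rank-r update, d x d
W_adapted = [[w + (alpha / r) * dw for w, dw in zip(w_row, d_row)]
             for w_row, d_row in zip(W, delta)]

full_params = d * d                 # fine-tuning W directly
lora_params = d * r + r * d         # fine-tuning only B and A
```

Even at this toy scale the trainable-parameter count halves (32 vs. 64); at d = 4096 and r = 8, the same ratio is roughly 0.4% of the full matrix.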
ReAct: Reasoning and Acting (2022)
The ReAct agent pattern: How LLMs solve tasks by alternating between thinking and acting.
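The pattern itself is just a loop over model calls: each step the model emits either a Thought, an Action (whose tool result is appended back as an Observation), or a Final Answer, which ends the loop. A sketch of that control flow — the `Action: name[arg]` format follows the paper's traces, but the parsing and tool names here are simplified stand-ins:

```python
def react_loop(question, llm, tools, max_steps=5):
    # Alternate between model-generated steps and tool observations.
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)            # the model sees the full trace so far
        transcript += "\n" + step
        if step.startswith("Final Answer:"):
            return step[len("Final Answer:"):].strip()
        if step.startswith("Action:"):
            call = step[len("Action:"):].strip()
            name, _, arg = call.partition("[")     # e.g. "lookup[Austria]"
            observation = tools[name](arg.rstrip("]"))
            transcript += f"\nObservation: {observation}"
    return None                           # step budget exhausted
```

With a real LLM, `llm` would be a completion call primed with few-shot ReAct examples; for testing, a scripted stand-in that replays fixed steps is enough to exercise the loop.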
Constitutional AI (2022)
AI Safety by Anthropic: How to align AI systems through principles instead of human feedback alone.
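The supervised phase of the method is a critique-and-revise loop: for each principle in the constitution, the model critiques its own draft against that principle and then rewrites it. A sketch of that control flow with stubbed model calls — the principles and stub logic are invented for illustration, and the real method continues with a reinforcement-learning phase this sketch omits:

```python
# Hypothetical constitution; Anthropic's actual principles differ.
CONSTITUTION = [
    "Avoid giving instructions that could cause harm.",
    "Be honest about uncertainty.",
]

def constitutional_revision(draft, critique_model, revise_model,
                            principles=CONSTITUTION):
    # One critique -> revision pass per principle over the answer.
    for principle in principles:
        critique = critique_model(draft, principle)
        if critique:                     # empty critique = no violation found
            draft = revise_model(draft, principle, critique)
    return draft
```

The revised answers then serve as fine-tuning targets, which is how the principles, rather than per-example human feedback, steer the model.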
Next step: move from knowledge to implementation
If you want more than theory: setups, workflows, and templates from real-world operations, built for teams that want local, documented AI systems.
Why AI Engineering
- Local and self-hosted by default
- Documented and auditable
- Built from our own operations
- Made in Austria
Not legal advice.