Infrastructure doesn't stop at OpenAI. Learn how to extend NodeLLM to support proprietary gateways like Oracle Cloud's Generative AI Service using a clean, zero-dependency, interface-driven approach.
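The interface-driven approach can be sketched generically: define a small provider contract and implement it for the new gateway. Everything below is illustrative — the `Provider` and `ChatRequest` names and the stub gateway are assumptions, not NodeLLM's actual extension point.

```typescript
// Hypothetical provider contract; NodeLLM's real interface will differ.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

interface Provider {
  name: string;
  complete(req: ChatRequest): Promise<string>;
}

// A stub gateway provider implementing the contract. A real adapter
// would POST to the gateway's HTTP endpoint instead of echoing.
const oracleGenAI: Provider = {
  name: "oci-genai",
  async complete(req) {
    return `[${this.name}] echoing: ${req.messages.at(-1)!.content}`;
  },
};

oracleGenAI
  .complete({ model: "example-model", messages: [{ role: "user", content: "hi" }] })
  .then(console.log); // → [oci-genai] echoing: hi
```

Because the core only depends on the interface, swapping gateways means writing one adapter object — no extra dependencies.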
NodeLLM 1.7.0 introduces a unified interface for reasoning-focused models. Standardize how you handle Claude 3.7 Thinking, DeepSeek R1 Reasoning, and OpenAI o1/o3 Effort with a single, fluent API.
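One way such a unified interface can be shaped is to map a single portable "reasoning level" onto each provider's native parameter. The function and mappings below are a sketch of the idea, not NodeLLM 1.7.0's actual API; the token budgets are placeholder values.

```typescript
// Hypothetical normalization of provider-specific reasoning knobs
// behind one portable setting; not NodeLLM's actual API.
type ReasoningLevel = "low" | "medium" | "high";

function toProviderParams(provider: string, level: ReasoningLevel) {
  switch (provider) {
    case "anthropic":
      // Claude-style thinking is controlled via a token budget.
      return { thinking: { budget_tokens: { low: 1024, medium: 8192, high: 32768 }[level] } };
    case "openai":
      // o1/o3-style models take a reasoning effort setting directly.
      return { reasoning_effort: level };
    default:
      // Models like DeepSeek R1 reason by default; nothing to pass.
      return {};
  }
}

console.log(toProviderParams("openai", "medium")); // → { reasoning_effort: 'medium' }
```

The caller sets one option; the translation to each vendor's dialect stays in one place.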
AI Agents are more than chatbots—they're systems that can reason, plan, and act. This post breaks down what an agent actually is and how the execution loop works under the hood, then gives you a copy-paste-ready agent built with NodeLLM.
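The execution loop the post refers to can be sketched in a few lines: call the model, execute any tool it requests, feed the result back, and stop when it produces a final answer. The fake model and all names here (`runAgent`, `ToolCall`) are illustrative stand-ins, not NodeLLM's agent API.

```typescript
// Hypothetical agent loop: each model turn either requests a tool
// call or returns a final answer. Not NodeLLM's actual API.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelTurn = { toolCall?: ToolCall; answer?: string };

const tools: Record<string, (args: Record<string, unknown>) => string> = {
  // Stand-in tool the fake model below will request.
  time: () => "2025-01-01T00:00:00Z",
};

// Fake model: asks for a tool on the first turn, answers on the second.
function fakeModel(history: string[]): ModelTurn {
  if (!history.some((m) => m.startsWith("tool:"))) {
    return { toolCall: { name: "time", args: {} } };
  }
  return { answer: `It is ${history.at(-1)!.slice(5)}` };
}

// The loop itself, with a step budget so a confused model can't spin forever.
function runAgent(prompt: string, maxSteps = 5): string {
  const history = [`user:${prompt}`];
  for (let step = 0; step < maxSteps; step++) {
    const turn = fakeModel(history);
    if (turn.answer) return turn.answer;
    const result = tools[turn.toolCall!.name](turn.toolCall!.args);
    history.push(`tool:${result}`);
  }
  throw new Error("agent exceeded step budget");
}

console.log(runAgent("What time is it?")); // → It is 2025-01-01T00:00:00Z
```

The reason → act → observe cycle is the whole trick; everything else is plumbing around it.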
Every time I start a new AI project, I find myself redesigning the same four database tables. Here is why we need a universal persistence layer for AI, and how @node-llm/orm brings Rails-like sanity to the Node.js AI ecosystem.
Building a chatbot that lists models is easy. Building one that remembers users, persists tool calls, and handles real-time streaming in a production Next.js app is hard. Here is how we do it with @node-llm/orm.
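The real-time streaming part boils down to consuming the response as an async iterable and flushing each chunk to the client as it arrives. The sketch below uses a fake stream to show the shape of that loop; it is not @node-llm/orm's actual streaming API.

```typescript
// Hypothetical streaming consumer; the fake generator stands in for
// a real streamed LLM response.
async function* fakeStream(): AsyncGenerator<string> {
  for (const chunk of ["Hel", "lo, ", "world"]) yield chunk;
}

async function collect(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk; // in a Next.js route you'd write each chunk to the response here
  }
  return text;
}

collect(fakeStream()).then(console.log); // → Hello, world
```

Persisting the message only after the loop completes is what keeps the database copy consistent with what the user actually saw.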