Claude Code Routines: A Practical Guide to Automating Your AI Coding Workflow

A practical guide to Claude Code Routines from someone who has been automating AI coding workflows for months. Real workflow examples, real limits, and where to layer in additional tooling.

Karl Wirth

Yesterday, Anthropic released Routines for Claude Code. The feature lets you package a prompt, a repository, and a set of connectors into a configuration that runs on a schedule, responds to API calls, or triggers on GitHub events. It runs on Anthropic’s cloud infrastructure, so it keeps working even when your laptop is closed.

I’ve been building automation workflows for AI coding sessions for several months now, using Nimbalyst’s automations feature. Routines formalizes something I’ve been doing informally: taking repetitive Claude Code tasks and turning them into reliable, repeatable processes. This is a guide to what Routines can actually do, where they shine, and where you’ll want to layer in additional tooling.

What Routines Actually Are

A routine is a saved Claude Code configuration. You specify three things:

  1. A prompt that tells Claude what to do
  2. One or more repositories for context
  3. Connectors for external services (Slack, Linear, etc.)

Once configured, a routine can be triggered three ways:

  • Scheduled: Hourly, daily, weekdays, or weekly
  • API: POST to a per-routine HTTP endpoint with a bearer token
  • GitHub: Automatically on repository events (pull requests, releases, pushes)

Each routine run creates a full Claude Code cloud session. No permission prompts, no approval flows during execution. The session can run shell commands, use skills from the repo, and call any connectors you’ve included.

The Workflows That Make Sense

After automating similar patterns for months, I’ve found three categories where scheduled AI agent work consistently delivers value.

Backlog and Issue Triage

Every morning, new issues and bug reports pile up. A routine can scan new issues nightly, apply labels based on content analysis, estimate priority from the affected components, and post a summary to Slack or Linear.

The practical version of this: set up a nightly routine that reads new GitHub issues, cross-references them against your codebase to identify affected modules, and posts a structured triage summary. The AI doesn’t just categorize: it reads the code and makes informed assessments about severity.

I run a version of this daily with Nimbalyst’s automations. The difference between doing this manually and having it automated is substantial. You walk into your morning with a pre-triaged backlog instead of an unstructured list.
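To make “structured triage summary” concrete, here is a sketch of the kind of output I have such a routine produce. The issue fields and priority labels are illustrative, not a format Routines prescribes.

```python
# Sketch of a triage summary a nightly routine might post to Slack.
# Field names ("number", "title", "priority", "module") are assumptions.

def format_triage_summary(issues: list[dict]) -> str:
    """Group issues by priority and render a plain-text summary."""
    lines = ["Nightly triage summary"]
    for priority in ("high", "medium", "low"):
        group = [i for i in issues if i["priority"] == priority]
        if not group:
            continue
        lines.append(f"{priority.upper()} ({len(group)}):")
        for issue in group:
            lines.append(f"  #{issue['number']} {issue['title']} -> {issue['module']}")
    return "\n".join(lines)

summary = format_triage_summary([
    {"number": 412, "title": "Login loop on Safari", "priority": "high", "module": "auth"},
    {"number": 413, "title": "Typo in CLI help", "priority": "low", "module": "cli"},
])
print(summary)
```

In the real routine, Claude fills in the priority and module fields by reading the code; the value of pinning down an output shape like this is that the summary stays scannable day after day.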

Documentation Drift Detection

Docs go stale. Everyone knows this, and nobody has time to systematically check. A weekly routine can scan merged PRs, identify docs that reference changed APIs or modified interfaces, and open update PRs for the affected documentation.

This is one of the most practical applications because the cost of not doing it is invisible until someone hits outdated instructions. A routine that checks this weekly catches drift before it becomes a support burden.
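The core of the drift check can be reduced to a toy heuristic: flag any doc file that mentions a symbol changed in recent PRs. A real routine would have Claude read the diffs and judge whether the reference is actually stale; this keyword match just illustrates the shape of the output.

```python
# Toy drift detector: flag docs that reference changed symbols.
# The symbol names and doc contents below are illustrative.

def stale_docs(changed_symbols: set[str], docs: dict[str, str]) -> list[str]:
    """Return doc paths that reference at least one changed symbol."""
    return sorted(
        path for path, text in docs.items()
        if any(sym in text for sym in changed_symbols)
    )

flagged = stale_docs(
    {"create_session", "RetryPolicy"},
    {
        "docs/api.md": "Call create_session() before anything else.",
        "docs/faq.md": "See the pricing page for limits.",
    },
)
print(flagged)
```

The heuristic over-flags (a doc may mention a symbol whose behavior didn’t change), which is exactly why handing the judgment call to an agent that reads the diff beats pure string matching.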

Deploy Verification and Smoke Testing

After a deploy, a routine can run smoke checks against the new build, scan error logs and monitoring dashboards for regressions, and post a go/no-go assessment to your release channel.

This works best when you give the routine access to your monitoring connectors. It’s not replacing your test suite. It’s adding an AI-powered review layer that reads logs and metrics with more context than a simple threshold check.
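A minimal version of the go/no-go logic, assuming the routine can pull an error rate and a latency figure from your monitoring connector. The thresholds are placeholders to tune per service; the agent’s value is layering judgment on top of checks like these, not replacing them.

```python
# Minimal go/no-go assessment from two monitoring signals.
# Thresholds are illustrative defaults, not recommendations.

def assess_deploy(error_rate: float, p95_latency_ms: float,
                  max_error_rate: float = 0.01,
                  max_p95_ms: float = 500.0) -> str:
    """Return 'GO' or a 'NO-GO' verdict with the failing signals listed."""
    reasons = []
    if error_rate > max_error_rate:
        reasons.append(f"error rate {error_rate:.2%} > {max_error_rate:.2%}")
    if p95_latency_ms > max_p95_ms:
        reasons.append(f"p95 {p95_latency_ms:.0f}ms > {max_p95_ms:.0f}ms")
    return "GO" if not reasons else "NO-GO: " + "; ".join(reasons)

print(assess_deploy(0.002, 310.0))  # healthy deploy
print(assess_deploy(0.03, 310.0))   # elevated error rate
```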

Where Routines Have Limits

The current research preview has real constraints worth understanding before you build workflows around them.

Usage Caps

Pro users get 5 routine runs per day. Max users get 15. Team and Enterprise get 25. For a nightly triage routine, that’s fine. For an hourly monitoring routine on a Pro plan, you’ll burn through your daily allocation in 5 hours.

If you’re on a Pro plan, think carefully about which workflows justify a routine versus which ones you can run manually. The sweet spot is daily or weekly cadences.
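The cap arithmetic is simple enough to sanity-check before you commit to a cadence. The caps below are the research-preview numbers cited above; they may change at general availability.

```python
# Does a given cadence fit your plan's daily run cap?
# Caps are the research-preview numbers; subject to change.

DAILY_CAPS = {"pro": 5, "max": 15, "team": 25, "enterprise": 25}

def fits_cap(runs_per_day: float, plan: str) -> bool:
    return runs_per_day <= DAILY_CAPS[plan]

print(fits_cap(1, "pro"))   # nightly triage on Pro: fits
print(fits_cap(24, "pro"))  # hourly monitoring on Pro: does not
```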

No Interactive Feedback Loop

Routines run autonomously. There’s no mid-run approval step, no “pause and ask the developer” capability. This means they’re best suited for tasks where the output is a report, a PR, or a message, not tasks where the AI needs human judgment mid-process.

For tasks that require back-and-forth collaboration, you still want an interactive session. Routines are for the work you’d otherwise forget to do or deprioritize because it’s repetitive.

Cloud-Only Execution

Routines run on Anthropic’s infrastructure, which means they clone your repo and work in that environment. If your workflow depends on local tooling, local environment variables, or services only accessible from your network, a routine won’t have access to those.

This is where a local automation tool adds value. Nimbalyst’s automations, for example, run on your machine with full access to your local environment, extensions, and workspace context. The two approaches complement each other: Routines for cloud-native tasks, local automations for environment-dependent work.

Setting Up Your First Routine

Visit claude.ai/code/routines or type /schedule in the Claude Code CLI.

Start with something low-risk and high-value. My recommendation: documentation drift detection on a weekly schedule. Write a prompt like:

Scan all pull requests merged in the past 7 days. For each PR, identify any documentation files (README, docs/, wiki) that reference modified functions, APIs, or configuration options. If the documentation is outdated relative to the code changes, open a PR with suggested updates.

Connect it to your repository, set a weekly schedule, and let it run. Review the first few outputs carefully to calibrate the prompt, then let it operate.

Where Nimbalyst Automations Fit In

Full disclosure: I build Nimbalyst, which also has an automations feature. The two approaches solve different problems, and most teams will use both once Routines is generally available. A quick breakdown of when to reach for which:

  • Reach for Claude Code Routines when the work is repo-centric, runs on a schedule or GitHub event, and doesn’t need your local environment. Nightly issue triage, weekly doc drift, PR-triggered code review.
  • Reach for Nimbalyst automations when the work spans your full workspace (not just code), needs access to local tooling or extensions, or involves non-engineering artifacts like mockups, diagrams, data models, or planning docs.
  • Cadence: Routines cap out at 5-25 runs per day depending on your Claude plan. Local automations have no such cap.
  • Context: Routines get a clean cloned repo. Nimbalyst automations get the live workspace state, including open tabs, pending reviews, and extension data.
  • Handoff: Routines output to GitHub, Slack, or API endpoints. Nimbalyst automations can output directly into your workspace, including opening artifacts for review.

Neither replaces the other. Routines covers the cloud-native automation layer cleanly. Nimbalyst covers workspace-level automation that includes non-code work.

Building a Layered Automation Strategy

Routines are one layer of a broader automation approach. The most productive setup I’ve found uses multiple layers:

Layer 1: Event-driven routines for immediate response tasks. GitHub triggers that run on new PRs (code review, label application, checklist verification).

Layer 2: Scheduled routines for periodic maintenance. Nightly triage, weekly doc checks, daily dependency scanning.

Layer 3: Local automations for environment-specific work. Tasks that need access to your local workspace, custom tooling, running dev server, or project-specific extensions.

Layer 4: Interactive sessions for complex, judgment-heavy work. Architecture decisions, feature implementation, debugging sessions where the AI needs your input.

Each layer handles different types of work. Trying to push everything through one approach, whether that’s fully automated routines or fully interactive sessions, leaves gaps.

What This Means for AI-Assisted Development

The release of Routines is part of a broader pattern. In the first two weeks of April alone, Cursor shipped an agent-first workspace (Cursor 3), Windsurf launched an Agent Command Center with cloud Devin integration (Windsurf 2.0), and Anthropic redesigned Claude Code’s desktop app around parallel sessions with Routines.

All three are converging on the same idea: AI coding is moving from “assistant that helps you write code” to “agents you orchestrate and manage.” The developer’s role is shifting toward architecture, review, and coordination.

Routines represent the automation edge of this shift. Instead of manually starting a Claude Code session every morning to triage issues, you configure it once and let it run. The time you save compounds: a 15-minute daily triage routine saves roughly 60 hours a year (15 minutes across ~250 working days). A weekly documentation check prevents the slow accumulation of tech debt that eventually costs days of developer time.

The practical takeaway: start with one routine for your most repetitive, well-defined task. Run it for two weeks. Calibrate the prompt based on results. Then add a second. Build your automation layer incrementally rather than trying to automate everything at once.