
How to connect your RSS feeds to AI agents

Why RSS plus AI agents works

Most AI agents that need to read the web start by scraping pages. That means dealing with HTML parsing, JavaScript rendering, rate limiting, CAPTCHAs, and layouts that change every time a site redesigns.

SereneReader already does the hard work: it fetches RSS feeds, parses the XML, and stores everything as clean, structured data. The API gives your agent that data as JSON. No scraping, no parsing, no brittle page selectors.

Clean JSON, not raw feeds. Your agent never touches RSS or XML. SereneReader has already consumed the feeds, decoded HTML entities, fixed double-encoded content, and normalized all the encoding quirks that make raw feed data messy. The agent just queries the API and gets clean, structured JSON back.
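For illustration, a response might look something like this. The field names here are assumptions based on a typical setup, not the exact schema; the endpoint docs in your settings page have the real shape:

```json
{
  "articles": [
    {
      "title": "Postgres 17 released",
      "feedName": "Planet PostgreSQL",
      "url": "https://example.com/postgres-17",
      "publishedAt": "2025-01-14T09:30:00Z",
      "snippet": "The PostgreSQL project has announced..."
    }
  ]
}
```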

Human-curated sources. You pick the feeds. You decide what the agent has access to. It's not a general web crawl; it's a focused information stream you control.

No scraping, no blocking. Your agent talks to the SereneReader API, not to the source websites. No robots.txt issues, no anti-bot measures, no legal gray areas.

Chronological and deduplicated. Articles arrive once, in order. The agent doesn't need to track what it's already seen if you filter by publish date.
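In practice, "filter by publish date" just means computing a cutoff timestamp and keeping articles newer than it. A minimal sketch, with an illustrative article shape:

```javascript
// Compute a 24-hour cutoff and keep only articles published after it.
// The article objects here are illustrative sample data.
const DAY_MS = 24 * 60 * 60 * 1000;
const since = new Date(Date.now() - DAY_MS);

const articles = [
  { title: "Old post", publishedAt: "2020-01-01T00:00:00Z" },
  { title: "Fresh post", publishedAt: new Date().toISOString() },
];

const fresh = articles.filter((a) => new Date(a.publishedAt) > since);
console.log(fresh.map((a) => a.title)); // [ 'Fresh post' ]
```

Because articles arrive deduplicated, this one filter is the only state the agent needs; there's no seen-IDs list to maintain.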

How it works with OpenClaw

OpenClaw is an AI agent platform that lets agents call external APIs as tools. To connect it with SereneReader:

  1. Generate an API key in your SereneReader settings.
  2. Define the SereneReader API as a tool in your OpenClaw agent configuration. The endpoint docs in your settings page show the agent exactly what to call.
  3. Give the agent a task. Something like: "Check my RSS feeds every morning, summarize anything related to infrastructure security, and save a summary to my Obsidian vault."

The agent handles scheduling, filtering, and summarization. SereneReader handles feed fetching and article storage.
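OpenClaw's exact configuration syntax is covered by its own docs, but conceptually the tool definition carries three things: an endpoint, an auth header, and a description the agent can reason about when deciding to call it. A hypothetical sketch, with every field name illustrative:

```json
{
  "name": "serenereader_articles",
  "description": "Fetch articles from my RSS subscriptions. Accepts a 'since' ISO-8601 timestamp to limit results to recent items.",
  "method": "GET",
  "url": "https://YOUR_SERENE_API_URL",
  "headers": { "Authorization": "Bearer YOUR_API_KEY" }
}
```

The description field matters more than it looks: it's what the agent reads to decide when and how to use the tool.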

Example agent prompt

Here's roughly what the agent instructions might look like:

You have access to the SereneReader API. Every day at 8am, fetch articles published in the past 24 hours. For each article, determine if it relates to cybersecurity, infrastructure, or DevOps. If it does, add it to a daily summary. Save the summary as a new note in my Obsidian vault under the Daily Digests folder.

The agent calls the API, processes the results, and takes action. No custom code, no cron jobs, no glue scripts.

Example use cases

News monitoring agent

Subscribe to 20-30 industry feeds in SereneReader. Connect an agent that scans for mentions of your company, competitors, or specific topics. Get a daily briefing instead of manually checking each feed.

Research digest

Follow academic preprint feeds (arXiv, bioRxiv) and technical blogs. Have an agent categorize new papers by topic, flag anything relevant to your current projects, and compile a weekly reading list.

Content pipeline

Subscribe to feeds in your niche. Let an agent identify trending topics, extract key points, and draft content briefs. You still write the content, but the research step is handled.

Competitive intelligence

Track competitor blogs, changelog feeds, and press releases. An agent can spot product launches, pricing changes, and feature announcements, then log them in a structured format.

Works with any LLM tool

The SereneReader API returns standard JSON over HTTP. Any tool that can make authenticated API requests can use it:

  • OpenClaw for autonomous agent workflows
  • LangChain / LlamaIndex for retrieval-augmented generation
  • Custom scripts with any LLM API via OpenRouter (see example below)
  • Zapier / Make for no-code automation with AI steps
  • n8n for self-hosted workflow automation

The pattern is always the same: fetch articles from SereneReader, pass them to an LLM for processing, route the output wherever it needs to go.

Example: daily digest script with OpenRouter

A Node.js script that fetches yesterday's articles from SereneReader, summarizes them with any LLM via OpenRouter, and writes the result to a local file. Run it with node digest.mjs or schedule it with cron.

// digest.mjs
// Requires: SERENE_API_KEY, SERENE_API_URL, and OPENROUTER_API_KEY env vars.
// Your API settings page in SereneReader has the base URL and endpoint paths.

import { writeFileSync } from "node:fs";

const SERENE_API_KEY = process.env.SERENE_API_KEY;
const SERENE_API_URL = process.env.SERENE_API_URL; // from your API settings page
const OPENROUTER_API_KEY = process.env.OPENROUTER_API_KEY;

// 1. Fetch yesterday's articles from SereneReader
const yesterday = new Date(Date.now() - 86400000).toISOString();
const res = await fetch(
  `${SERENE_API_URL}?since=${encodeURIComponent(yesterday)}`,
  { headers: { Authorization: `Bearer ${SERENE_API_KEY}` } },
);

if (!res.ok) {
  console.error(`SereneReader API error: ${res.status}`);
  process.exit(1);
}

const { articles = [] } = await res.json();

if (articles.length === 0) {
  console.log("No new articles since yesterday.");
  process.exit(0);
}

// 2. Build a prompt from article titles and snippets
const articleList = articles
  .map((a) => `- ${a.title} (${a.feedName})\n  ${a.snippet}`)
  .join("\n\n");

const prompt = `Here are ${articles.length} articles from my RSS feeds
published in the last 24 hours:

${articleList}

Write a concise daily digest. Group related articles by topic.
For each topic, summarize the key points in 2-3 sentences.
Include the article titles so I know which ones to read in full.`;

// 3. Send to any model via OpenRouter
const llmRes = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "anthropic/claude-sonnet-4",
    messages: [{ role: "user", content: prompt }],
  }),
});

if (!llmRes.ok) {
  console.error(`OpenRouter API error: ${llmRes.status}`);
  process.exit(1);
}

const { choices } = await llmRes.json();
const summary = choices[0].message.content;

// 4. Write the digest to a local markdown file
const date = new Date().toISOString().split("T")[0];
const output = `# Daily Digest - ${date}\n\n${summary}\n`;
writeFileSync(`digest-${date}.md`, output);

console.log(`Digest written to digest-${date}.md (${articles.length} articles)`);

No dependencies, no build step. Two API keys, one file, and you've got a daily research digest. Swap the model, change the prompt, or pipe the output to Obsidian, Notion, or Slack instead.
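To run it on a schedule, a crontab entry along these lines fires the script every morning at 8am. The paths are illustrative, and the env vars need to be set in the crontab or by a wrapper script:

```
0 8 * * * cd /path/to/project && /usr/bin/env node digest.mjs >> digest.log 2>&1
```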

Getting started

API access requires a Pro or Team plan.

  1. Sign up for SereneReader if you haven't already.
  2. Subscribe to the feeds you want your agent to access.
  3. Generate an API key in settings.
  4. Configure your agent to call the SereneReader API.

Full endpoint docs are in your account settings. For more on the API itself, read Building automations with the SereneReader API.