This is a little experiment: I wanted to see if I could build a read-it-later app with no backend, no database, and not even a real frontend, just GitHub Actions and GitHub Pages. Turns out, yes, you can.

You save articles via a bookmarklet; a GitHub Action then extracts the content and stores it as HTML in your repo. Your saved articles get served as a simple static site with GitHub Pages.


How It Works (The Flow)

  1. You click a bookmarklet in your browser.
  2. It sends the current page's URL to GitHub's repository_dispatch API endpoint.
  3. A GitHub Action picks it up.
  4. It runs a Python script that extracts the article text.
  5. It saves the article as an HTML file in your repo.
  6. It updates the index.html with links to all saved articles.
  7. GitHub Pages shows the result.

No servers. No databases. Nothing to host yourself. Just… GitHub.


The Bookmarklet

You’ll need a GitHub Personal Access Token with the repo scope, which is what lets it trigger repository_dispatch events.

Here’s a sample bookmarklet:

javascript:(()=>{
  fetch('https://api.github.com/repos/YOUR_USER/YOUR_REPO/dispatches', {
    method: 'POST',
    headers: {
      'Accept': 'application/vnd.github+json',
      'Authorization': 'Bearer YOUR_PAT',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({event_type: 'new-url', client_payload: {url: location.href}})
  }).then(res => alert(res.status === 204 ? 'Saved!' : 'Error: ' + res.status));
})();

Replace YOUR_USER, YOUR_REPO, and YOUR_PAT. Keep in mind the token sits in the bookmarklet in plain text, so scope it as tightly as possible (a fine-grained token limited to this one repo is ideal).
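
Before wiring up the bookmarklet, you can fire the same event from a terminal to check the token and repo name. Here's a quick Python sketch (assuming you have requests installed; the URL is just a placeholder):

# test_dispatch.py - trigger the same repository_dispatch event by hand
import requests

resp = requests.post(
    "https://api.github.com/repos/YOUR_USER/YOUR_REPO/dispatches",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": "Bearer YOUR_PAT",
    },
    json={"event_type": "new-url", "client_payload": {"url": "https://example.com/some-article"}},
)

# GitHub answers 204 No Content when the event is accepted
print(resp.status_code)

If it prints 204, check the repo's Actions tab: a run should already be queued.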


The GitHub Action

In your repo, create a .github/workflows/main.yml file:

name: Publish to GitHub Pages

permissions:
  contents: write

on:
  repository_dispatch:
    types: [new-url]
  push:
    branches: [main]

# Runs are not serialized by default; queue them so rapid-fire saves
# don't race each other on the git push below.
concurrency:
  group: read-it-later
  cancel-in-progress: false

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      - run: pip install -r requirements.txt
      # A quoted heredoc survives quotes inside the JSON, unlike echo '...'
      - run: |
          cat > payload.json <<'EOF'
          ${{ toJson(github.event) }}
          EOF
          python extract.py
      # Commit the new article back to main so the archive persists between
      # runs (pushes made with GITHUB_TOKEN don't re-trigger this workflow)
      - run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add entries index.html
          git diff --cached --quiet || git commit -m "Save article"
          git push
      - run: |
          mkdir -p site
          cp index.html site/
          cp -r entries site/
      - uses: peaceiris/actions-gh-pages@v4
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./site
          publish_branch: gh-pages

This one workflow does the whole job: it writes the event payload to disk, runs the extractor, commits the new article back to main, and deploys the static site to GitHub Pages.
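
The pip install step assumes a requirements.txt at the repo root. For this setup it only needs the extraction library; note that newer lxml releases moved the HTML cleaner newspaper3k relies on into a separate package, so you may need that too:

newspaper3k
lxml_html_clean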


The Python Script (extract.py)

This script reads the payload, downloads the article, saves it, and updates the index.

import json
import re
from datetime import datetime
from html import escape
from pathlib import Path

from newspaper import Article

ENTRIES_DIR = Path("entries")
INDEX_FILE = Path("index.html")

# 1. Read the webhook payload
def load_payload():
    with open("payload.json") as f:
        payload = json.load(f)
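    # Push events carry no client_payload, so this returns None for them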
    return payload.get("client_payload", {}).get("url")

# 2. Extract content using newspaper3k
def extract_article(url):
    article = Article(url)
    article.download()
    article.parse()
    return article

# 3. Save as HTML
def save_article_html(article, url):
    # Slugify the title: lowercase, collapse runs of punctuation into single dashes
    slug = re.sub(r'[^a-z0-9]+', '-', article.title.lower()).strip('-')[:50]
    timestamp = datetime.now().strftime("%Y-%m-%d-%H%M")
    filename = ENTRIES_DIR / f"{timestamp}-{slug}.html"
    ENTRIES_DIR.mkdir(exist_ok=True)
    # Escape the extracted text so stray < or & can't break the page
    title = escape(article.title)
    body = escape(article.text).replace("\n", "<br><br>")
    html = f"""
    <html><head><title>{title}</title></head>
    <body>
        <a href='../index.html'>← Back</a>
        <h1>{title}</h1>
        <p><em>{escape(url)}</em></p>
        <div>{body}</div>
    </body></html>
    """
    filename.write_text(html, encoding='utf-8')
    return filename.name

# 4. Update index
def generate_index():
    ENTRIES_DIR.mkdir(exist_ok=True)  # keep the dir around so the workflow's cp step works
    # Filenames start with a timestamp, so reverse-sorting puts newest first
    entries = sorted(ENTRIES_DIR.glob("*.html"), reverse=True)
    links = [f"<li><a href='entries/{e.name}'>{e.name}</a></li>" for e in entries]
    html = f"""
    <html><head><title>Saved Articles</title></head><body>
    <h1>Saved Articles</h1>
    <ul>{''.join(links)}</ul>
    </body></html>
    """
    INDEX_FILE.write_text(html, encoding='utf-8')

# 5. Run all
def main():
    url = load_payload()
    if url:
        article = extract_article(url)
        save_article_html(article, url)
    generate_index()

if __name__ == "__main__":
    main()
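
To try the script locally before touching GitHub, you can fake the payload file the Action writes (the URL and filename here are just examples):

# try_local.py - simulate one saved article without GitHub
import json

from extract import main

# Fake the file the Action would write from the dispatch event
with open("payload.json", "w") as f:
    json.dump({"client_payload": {"url": "https://example.com/some-article"}}, f)

main()  # downloads the article, writes entries/..., regenerates index.html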

Hosting with GitHub Pages

Just make sure GitHub Pages is enabled in your repo settings (Settings → Pages) and set to serve from the gh-pages branch.

That’s it. Every time you click your bookmarklet, GitHub will:

  • Fetch the article
  • Save it
  • Regenerate your index
  • Publish everything live

You now have a private (or public) read-it-later app, powered by GitHub itself.


Notes

  • You can use .md instead of .html if you prefer markdown (a sketch follows this list).
  • You can add tags or folders later.
  • Want to make it collaborative? Accept PRs or shared tokens.
  • Runs are not serialized by default, so two quick saves can race each other; that's what the concurrency group in the workflow above is for.
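
For the markdown option from the first note, here's a minimal, untested sketch of what the save function could look like instead (same Article object as above; the helper name is my own):

from pathlib import Path

def save_article_md(article, url, filename: Path):
    # Same role as save_article_html, but emits markdown instead of HTML
    md = f"# {article.title}\n\n[Original]({url})\n\n{article.text}\n"
    filename.write_text(md, encoding="utf-8")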

What’s Next?

Might add search, RSS, tagging. Or maybe not. The point is: you can build cool things with very little. :)