
website-downloader

by pskill9

MCP server that leverages wget to download entire websites, preserve structure, and convert links for offline use.


Tools

download_website

Download an entire website using wget, preserving its structure, converting links for local use, and supporting optional output path and depth parameters.
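The page does not spell out the exact wget flags the server passes. As a rough manual equivalent, a mirror with converted links, a depth limit, and a chosen output directory (stand-ins for the tool's optional parameters) looks like this sketch:

# Rough manual equivalent of download_website (the server's actual flags may differ).
# --convert-links rewrites links for offline use; --level limits recursion depth;
# -P sets the output directory.
wget \
  --recursive \
  --level=5 \
  --page-requisites \
  --convert-links \
  --adjust-extension \
  --no-parent \
  -P downloads \
  https://example.com/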

Installation

1. Prerequisites
• Node.js ≥18
• npm (bundled with Node) or pnpm/yarn
• Optional: Redis (if you switch the queue adapter to Redis for large crawls)
2. Clone the repository
git clone https://github.com/pskill9/website-downloader.git
cd website-downloader
3. Install dependencies
npm install # or pnpm install / yarn
4. Configuration
Copy the sample environment file and adjust settings:
cp .env.example .env

# .env values (defaults shown)
PORT=3000                           # REST server port
OUTPUT_DIR=downloads                # where finished sites are stored
MAX_DEPTH=5                         # crawling recursion depth
CONCURRENCY=4                       # simultaneous requests
# REDIS_URL=redis://localhost:6379  # uncomment to enable the Redis queue
5. Run the server
npm run dev    # hot-reload (nodemon) for development
npm start      # production mode
A sample request against the running server is sketched below, after these steps.
6. Reverse proxy (optional)
When deploying behind nginx or Caddy, proxy /api/* to localhost:3000.
7. Docker (optional)
docker build -t website-downloader .
docker run -p 3000:3000 -v $(pwd)/downloads:/app/downloads website-downloader
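With the server from step 5 running, downloads can presumably be triggered through its REST API. The route and JSON field names below are illustrative assumptions (they are not documented on this page); adjust them to the actual API.

# Hypothetical request; the /api/download route and the url/outputPath/depth
# field names are assumptions, mirroring the tool's documented parameters.
curl -X POST http://localhost:3000/api/download \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "outputPath": "downloads/example", "depth": 3}'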

License: Unknown – no license file or license field detected. Verify before using in production.
Updated 7/30/2025