screenpipe
by mediar-ai
AI app-store platform that continuously records a user’s desktop (screen + mic) locally, indexes it, and exposes an API so developers can build context-aware AI desktop apps (“pipes”) in Next.js, publish them, and monetise them through the built-in store.
Open Source
01 screenpipe
Command-line entry point that starts the ScreenPipe desktop recorder and local API (see the sketch after this list for how a pipe queries that API).
02 pipe create
Generate a new ScreenPipe plugin ("pipe") project scaffold.
03 pipe register
Register a plugin in the ScreenPipe store, optionally setting pricing information.
04 pipe publish
Publish a built plugin to the ScreenPipe store so users can install it.
05 screenpipe terminator
Desktop-automation SDK ("Playwright for your desktop") providing fast, OS-level control of the computer.
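At runtime, a pipe is ordinary Next.js/TypeScript code talking to the local API the recorder exposes. The sketch below shows that pattern against the port used elsewhere in this listing; the /search route, its query parameters, and the CaptureResult shape are illustrative assumptions rather than the documented contract, so check the ScreenPipe docs for the real API of your installed version.

// pipe-sketch.ts: illustrative only; route, params, and response shape are assumed.
const BASE_URL = process.env.SCREENPIPE_URL ?? "http://localhost:6399"; // port from this listing

// Hypothetical shape of one indexed capture.
interface CaptureResult {
  timestamp: string;  // when the frame/audio chunk was recorded
  appName?: string;   // foreground application, if detected
  text: string;       // OCR'd screen text or transcribed audio
}

// Ask the local recorder for recent captures mentioning a keyword.
async function searchCaptures(query: string, limit = 10): Promise<CaptureResult[]> {
  const url = new URL("/search", BASE_URL); // "/search" is an assumed route
  url.searchParams.set("q", query);
  url.searchParams.set("limit", String(limit));
  const res = await fetch(url);
  if (!res.ok) throw new Error(`screenpipe API returned ${res.status}`);
  return (await res.json()) as CaptureResult[];
}

// Example: surface everything the user saw about "standup notes".
searchCaptures("standup notes").then((hits) => {
  for (const hit of hits) {
    console.log(`[${hit.timestamp}] ${hit.appName ?? "unknown app"}: ${hit.text.slice(0, 80)}`);
  }
});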
Installation
1. Prerequisites
• Node.js ≥ 20 and pnpm (recommended) or npm.
• FFmpeg available in PATH (used for muxing screen & audio).
• macOS / Windows: grant screen-capture & microphone permissions.
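To script these checks instead of eyeballing them, the snippet below (a convenience sketch, not part of the ScreenPipe tooling) probes the two items above that can be checked from code: Node ≥ 20 and FFmpeg on PATH. The screen-capture and microphone permissions still have to be granted by hand.

// check-prereqs.ts: optional sanity check before installing.
import { execFileSync } from "node:child_process";

// Node version: the listing asks for >= 20.
const [major] = process.versions.node.split(".").map(Number);
if (major < 20) {
  console.error(`Node ${process.versions.node} found; this project expects >= 20.`);
  process.exitCode = 1;
}

// FFmpeg: execFileSync throws if the binary is not on PATH.
try {
  const banner = execFileSync("ffmpeg", ["-version"], { encoding: "utf8" });
  console.log("FFmpeg found:", banner.split("\n")[0]);
} catch {
  console.error("FFmpeg not found in PATH; install it before running the recorder.");
  process.exitCode = 1;
}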
2. Clone & install
git clone https://github.com/mediar-ai/screenpipe.git
cd screenpipe
pnpm install # or: npm install
3. Build native modules & client
pnpm run build # transpile TypeScript → JS
pnpm run electron:build # package desktop recorder (optional)
4. Run the MCP server locally
# Starts recorder + MCP REST/WS endpoints on http://localhost:6399
pnpm start
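Once pnpm start is running, a one-file probe such as the sketch below confirms the endpoints answer. This listing does not name a health route, so the probe only hits the base URL; adjust the path to whatever your installed version actually exposes.

// smoke-test.ts: quick check that the local server answers (illustrative, not shipped with ScreenPipe).
const port = process.env.SCREENPIPE_PORT ?? "6399";

async function ping(): Promise<void> {
  try {
    const res = await fetch(`http://localhost:${port}/`); // base URL only; no specific route assumed
    console.log(`screenpipe answered on port ${port} with HTTP ${res.status}`);
  } catch (err) {
    console.error(`no response on port ${port}; is the server from step 4 still running?`, err);
    process.exitCode = 1;
  }
}

ping();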
5. Docker (headless servers)
docker build -t screenpipe .
docker run --privileged \
  -e SCREENPIPE_PORT=6399 \
  -v ~/.screenpipe:/data \
  screenpipe
6. Configuration (edit `.env` or supply env-vars)
SCREENPIPE_PORT=6399
SCREENPIPE_DATA_DIR=~/.screenpipe
OPENAI_API_KEY=sk-... # if using cloud LLMs
EMBEDDING_MODEL=local/ggml # alternatively use local models
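If your own pipe or helper script needs the same settings, one straightforward approach (an assumption about how you wire your own code, not behaviour mandated by ScreenPipe) is to read them back from the environment with the listing's values as defaults:

// config.ts: mirrors the .env keys above; defaults match the values shown in this listing.
import os from "node:os";
import path from "node:path";

export const config = {
  // Port the local REST/WS endpoints listen on (step 4).
  port: Number(process.env.SCREENPIPE_PORT ?? 6399),
  // Where recordings and the index are stored.
  dataDir: process.env.SCREENPIPE_DATA_DIR ?? path.join(os.homedir(), ".screenpipe"),
  // Only needed when using cloud LLMs; leave unset to stay fully local.
  openaiApiKey: process.env.OPENAI_API_KEY,
  // Embedding backend; the listing's example points at a local ggml model.
  embeddingModel: process.env.EMBEDDING_MODEL ?? "local/ggml",
};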
7. Access the web UI
Open http://localhost:6399 in a browser; install/enable extensions or agent plug-ins as prompted.
Documentation
License: MIT
Updated 7/30/2025