multi-ai-advisor-mcp
by YuChenSSR
MCP server that orchestrates a "council" of local Ollama models, merges their answers and returns a synthesized response (designed to work with Claude Desktop).
Tools
01 list-available-models: shows all Ollama models on your system
02 query-models: queries multiple models with a question
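Both tools can be exercised interactively with the MCP Inspector once the server is built (see Installation below). This is a minimal sketch that assumes the build output lands in dist/index.js and that the server speaks MCP over stdio:
# launch the Inspector and have it spawn the server as a child process
npx @modelcontextprotocol/inspector node dist/index.js
# in the Inspector UI, run list-available-models first, then query-models with your question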
Installation
1. Prerequisites
• Node.js ≥18 and npm (or pnpm/yarn).
• (Optional) Ollama installed locally if you plan to run local LLMs.
• An OpenAI-compatible API key or other model-provider credentials, depending on which advisers you wish to wire in.
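A quick sanity check before cloning (the ollama commands apply only if you install Ollama for local models):
node --version     # should report v18 or newer
npm --version
ollama --version   # confirms the Ollama CLI is installed
ollama list        # shows the local models available to the advisor council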
2. Clone the repo
git clone https://github.com/YuChenSSR/multi-ai-advisor-mcp.git
cd multi-ai-advisor-mcp
3. Install dependencies
# with npm
npm install
# or with pnpm
pnpm install
4. Build the TypeScript sources (if the project ships as TypeScript rather than prebuilt JavaScript)
npm run build # runs tsc or vite build depending on repo scripts
5. Configure advisers and model back-ends
• Copy .env.example → .env
• Fill in variables such as:
OPENAI_API_KEY=<your-key>
OLLAMA_BASE_URL=http://localhost:11434
ADVISERS=finance,marketing,engineering # comma-separated list used by MCP
• Edit config/*.json or src/config.ts if the project exposes per-adviser prompt settings.
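If you point OLLAMA_BASE_URL at a local Ollama instance, make sure the models you want on the council are actually pulled. The model names below are only examples; substitute whichever models you intend to use:
ollama pull llama3.2    # example advisor model
ollama pull gemma3      # another example advisor model
ollama list             # verify they show up before starting the server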
6. Launch the MCP server
npm start # or: node dist/index.js
# the server usually listens on http://localhost:3000 (check the startup log)
7. Consume the API
curl -X POST http://localhost:3000/query -d '{"question":"Should we launch?"}' -H 'Content-Type: application/json'
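Alternatively, since the project is designed to work with Claude Desktop, you can let Claude Desktop launch the server itself instead of calling it over HTTP. A typical entry in claude_desktop_config.json looks like the sketch below; the server name and the absolute path are placeholders, and this assumes the server communicates over stdio when spawned this way:
{
  "mcpServers": {
    "multi-ai-advisor": {
      "command": "node",
      "args": ["/absolute/path/to/multi-ai-advisor-mcp/dist/index.js"]
    }
  }
}
Restart Claude Desktop after editing the file so it picks up the new server.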
8. (Optional) Docker
docker build -t multi-ai-advisor-mcp .
docker run -p 3000:3000 --env-file .env multi-ai-advisor-mcp
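Note that OLLAMA_BASE_URL=http://localhost:11434 will not reach a host-side Ollama from inside the container, because localhost there refers to the container itself. A common workaround is to override the variable at run time (host.docker.internal resolves automatically on Docker Desktop; the --add-host mapping makes it work on Linux too):
# point the container at the Ollama instance running on the host
docker run -p 3000:3000 --env-file .env \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  multi-ai-advisor-mcp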
License: MIT
Updated 7/30/2025