mcp-server-openai
by pierrebrunelle
Query OpenAI models directly from Claude using the MCP protocol.
Installation
1. Clone the repository
git clone https://github.com/pierrebrunelle/mcp-server-openai.git
cd mcp-server-openai
2. Create and activate a virtual environment (recommended)
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
3. Install Python dependencies
pip install -r requirements.txt
4. Provide your OpenAI credentials
export OPENAI_API_KEY="<your-openai-key>"
# Optional – Azure setup
export OPENAI_API_BASE="https://<azure-endpoint>.openai.azure.com/"
export OPENAI_API_VERSION="2024-04-01-preview"
5. Launch the MCP server
python -m mcp_server_openai # or the entry-point script specified in the repo
6. Point your Claude/MCP client at the server’s host:port (defaults to http://localhost:8080 unless changed via --port).
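For stdio-based clients such as Claude Desktop, the server can instead be registered in the client's MCP configuration file. The snippet below is a sketch only: the server name `openai`, the launch command, and the environment keys mirror the steps above, but the exact file location and whether your client uses Claude Desktop's `mcpServers` layout depend on your setup.

```json
{
  "mcpServers": {
    "openai": {
      "command": "python",
      "args": ["-m", "mcp_server_openai"],
      "env": {
        "OPENAI_API_KEY": "<your-openai-key>"
      }
    }
  }
}
```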
Documentation
# Supercharge Augment Code with Multi-Model AI Access

Transform your Augment Code experience by adding direct OpenAI model access through this MCP server. Instead of being limited to a single AI model, you can query GPT-4, GPT-3.5-turbo, and other OpenAI models directly from your Augment Code interface, giving you the flexibility to choose the right model for each coding task.

## Enhanced Coding Workflows with Augment Code

Once integrated with Augment Code, this MCP server unlocks new capabilities. Working on a complex architectural decision? Ask Augment to query GPT-4 for deep analysis. Need quick code generation? Switch to GPT-3.5-turbo for faster responses. Building a code search feature? Use OpenAI's embeddings model to find semantically similar code patterns across your codebase. This isn't just about having more models; it's about having the right model for the right moment in your development workflow.

## Configuration & Immediate Impact

Add this server to your Augment Code MCP configuration with your OpenAI API key, and you'll immediately have access to the `openai.chat_completions`, `openai.completions`, and `openai.embeddings` tools. Augment can then compare different model outputs for the same coding problem, use specialized models for specific tasks (such as embeddings for semantic code search), and fall back to another model when one is rate-limited or unavailable. For teams working on diverse projects, this multi-model approach becomes a significant productivity multiplier: you're no longer constrained by a single model's strengths and weaknesses.

**Real-world scenario**: While refactoring a legacy codebase, you can ask Augment to use GPT-4 to understand complex business logic, GPT-3.5-turbo to generate boilerplate code, and embeddings to find similar patterns across your existing code, all from the same interface and all integrated into your coding workflow.
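The per-task model selection described above can be sketched as a small routing helper. This is purely illustrative: `pick_model` and its task labels are hypothetical and not part of this server's tool set; the model names are common OpenAI identifiers you would substitute with whatever models your account can access.

```python
# Hypothetical routing helper: map a coding-task category to an OpenAI model.
# The task labels and model choices below are assumptions for illustration.

TASK_MODELS = {
    "analysis": "gpt-4",                  # deep architectural reasoning
    "boilerplate": "gpt-3.5-turbo",       # fast, cheap code generation
    "search": "text-embedding-3-small",   # embeddings for semantic code search
}

def pick_model(task: str) -> str:
    """Return the model suited to a task, falling back to GPT-4 for unknown tasks."""
    return TASK_MODELS.get(task, "gpt-4")

if __name__ == "__main__":
    print(pick_model("boilerplate"))  # gpt-3.5-turbo
```

A client could call `pick_model` before invoking `openai.chat_completions` or `openai.embeddings`, so each request goes to the model best matched to the job.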
License: MIT License
Updated 7/15/2025