
Arize Phoenix MCP Server
Author: Arize AI
Description: MCP server implementation for Arize Phoenix (open-source AI observability & evaluation platform). Connects coding agents (e.g., Cursor/Claude Code) to a Phoenix deployment to access Phoenix capabilities (tracing, datasets/experiments, evals, prompt/playground workflows) through a unified MCP interface. Install/run via Node (npx) and configure with Phoenix baseUrl and an API key; Phoenix itself can be installed via pip/conda or deployed via Docker/Kubernetes/cloud.
Stars: 8.8k
Forks: 742
License: "Other" per repo metadata; the README's "Copyright, Patent, and License" section states Elastic License 2.0 (ELv2) (see LICENSE).
Category: Enterprise
Overview
Installation
Install Phoenix:

```bash
pip install arize-phoenix
```

Run the MCP server against your Phoenix deployment:

```bash
npx -y @arizeai/phoenix-mcp@latest --baseUrl https://my-phoenix.com --apiKey your-api-key
```

FAQs
What are the most common challenges faced when configuring an MCP server?
Beyond Phoenix-specific gotchas, MCP servers commonly struggle with missing dependencies, platform incompatibilities, and OAuth callback misconfigurations. Servers fail when executables aren't on PATH or when containers bundle modules incorrectly. Network issues include orphaned authentication listeners and hard-coded localhost URLs. Security risks stem from unsanitized inputs enabling confused-deputy exploits and from hard-coded credentials that bypass secrets managers; sandboxing helps prevent unauthorized file access.
What are the key security features of an MCP server?
MCP servers implement four core security layers: sandboxing via minimal containers with seccomp profiles and network allowlists, scoped token exchange preventing confused deputy attacks, input validation through schema enforcement and content-hash verification, and runtime monitoring with anomaly detection and kill switches. These defenses address risks like command injection, token mismanagement, and exfiltration inherent in exposing powerful tools to AI agents.
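As a minimal sketch of the input-validation layer described above (this is a generic pattern, not Phoenix's implementation; the schema shape loosely follows JSON Schema), tool-call arguments can be checked against a declared schema before the tool is dispatched:

```python
# Hypothetical validator: reject undeclared, missing, or mistyped tool arguments
# before an MCP tool call reaches the underlying tool.

ALLOWED_TYPES = {"string": str, "integer": int, "boolean": bool}

def validate_tool_args(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the call may proceed."""
    errors = []
    props = schema.get("properties", {})
    # Reject arguments the schema does not declare (blocks injection via extra fields).
    for name in args:
        if name not in props:
            errors.append(f"unexpected argument: {name}")
    # Require every declared-required argument.
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    # Enforce declared types on every supplied argument.
    for name, value in args.items():
        expected = props.get(name, {}).get("type")
        if expected in ALLOWED_TYPES and not isinstance(value, ALLOWED_TYPES[expected]):
            errors.append(f"{name}: expected {expected}")
    return errors

schema = {
    "properties": {"query": {"type": "string"}, "limit": {"type": "integer"}},
    "required": ["query"],
}
print(validate_tool_args(schema, {"query": "latency", "limit": 10}))  # []
print(validate_tool_args(schema, {"limit": "10; rm -rf /"}))          # two errors
```

A real deployment would combine this with the other layers above (sandboxing, scoped tokens, runtime monitoring) rather than rely on validation alone.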
How can I optimize the performance of an MCP server?
Optimize MCP server performance through compute autoscaling, GPU fractioning, and model quantization. Use HTTP transport for firewall compatibility. Implement batching, caching, and connection pooling while limiting enabled tools to reduce token costs. Deploy gateway architectures for multi-tenant workloads, enforce OAuth with short-lived tokens, and monitor with health checks and usage dashboards.
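The caching point above can be sketched as a short-TTL memoizer for idempotent tool results (a generic pattern, not a Phoenix API; `list_datasets` is a stand-in for any backend call):

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds: float):
    """Cache a function's result per argument tuple for ttl_seconds."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # fresh cached value: skip the backend call
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=30.0)
def list_datasets(project: str):
    global calls
    calls += 1                          # counts actual backend hits
    return [f"{project}-dataset-{calls}"]

list_datasets("demo")
list_datasets("demo")                   # served from cache
print(calls)                            # 1
```

Keep TTLs short for data that changes (traces, experiment results) and reserve longer TTLs for near-static metadata; pair caching with the batching and connection pooling mentioned above.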
How do I configure the Arize Phoenix MCP Server for use with Augment Code or other unlisted MCP clients?
Register the Phoenix MCP server in your client's configuration file using the npx command with baseUrl and apiKey arguments, adapting the JSON structure to your client's format. For Augment Code, add the npx command as a local MCP server entry in its configuration schema; the entry needs the same command, args, and environment fields shown in the Cursor and VS Code examples in the Phoenix documentation.
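Most clients wrap server entries in a JSON object; as a sketch, an entry for the Phoenix server might look like the following (the key "phoenix" is an arbitrary server name, and the baseUrl/apiKey values are placeholders; adapt the surrounding wrapper object, e.g. "mcpServers", to your client's schema):

```json
{
  "mcpServers": {
    "phoenix": {
      "command": "npx",
      "args": [
        "-y",
        "@arizeai/phoenix-mcp@latest",
        "--baseUrl",
        "https://my-phoenix.com",
        "--apiKey",
        "your-api-key"
      ]
    }
  }
}
```

If your client supports environment variables for server entries, the API key can usually be moved out of args into an env block instead of being inlined.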
What are the differences between the mcp.json config format for Cursor/VS Code and the claude_desktop_config.json format for Claude Desktop?
The primary difference is the top-level key that wraps server entries: claude_desktop_config.json (Claude Desktop) and Cursor's mcp.json both nest configurations under a "mcpServers" object, while VS Code's mcp.json uses a "servers" key instead. The command and args structure of each entry is identical underneath, but the differing wrapper keys cause configuration errors when copying settings verbatim between clients.
How can I use Phoenix MCP to trace and debug multi-step agent interactions across MCP client-server boundaries?
Phoenix offers openinference-instrumentation-mcp, a Python package that creates unified trace views across MCP client-server boundaries. Install it in your agent application to automatically instrument MCP calls; the resulting traces appear in Phoenix, where they can be queried from your coding agent using the Phoenix MCP Server's tools. In other words, instrument the interactions first, then query the trace data.
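A sketch of the setup in an agent application, assuming the arize-phoenix-otel and openinference-instrumentation-mcp packages are installed (verify the exact names and signatures against the Phoenix and OpenInference docs for your versions):

```python
# Hedged sketch, not a verified end-to-end example.
from phoenix.otel import register
from openinference.instrumentation.mcp import MCPInstrumentor

# Point the OpenTelemetry tracer at your Phoenix deployment
# (endpoint placeholder; use your deployment's collector URL).
tracer_provider = register(endpoint="https://my-phoenix.com/v1/traces")

# Propagate trace context across the MCP client-server boundary so that
# client-side and server-side spans join into a single trace.
MCPInstrumentor().instrument(tracer_provider=tracer_provider)

# Run the agent as usual; MCP calls now appear as spans in Phoenix.
```

Once traces are flowing, the Phoenix MCP Server's tools let your coding agent query that trace data back out for debugging.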