Austin LangChain AI Middleware Users Group (AIMUG)

April 23, 2025

AIMUG Update: MCP, A2A & Voice Apps Office Hours Discussion

Dear AIMUG Members,

Thank you to everyone who joined our lively office hours session yesterday! We had a fantastic turnout with six participants exchanging ideas and sharing updates on their latest AI middleware projects. If you missed it, here's a summary of the key discussions:

MCP Architecture & Ecosystem

Joseph kicked things off with insights from his MCP research, demonstrating how straightforward implementation can be. His Tableau LangChain library is gaining significant traction, even receiving acknowledgment from an executive VP at the recent Tableau conference!
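For anyone curious how small an MCP server can be, here's a minimal sketch using the official MCP Python SDK's FastMCP helper (the add tool below is just a placeholder, not Joseph's Tableau integration):

    from mcp.server.fastmcp import FastMCP

    # Name the server; desktop clients show this name when they connect
    mcp = FastMCP("demo-server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""
        return a + b

    if __name__ == "__main__":
        # Defaults to the stdio transport used by desktop MCP clients
        mcp.run()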

Karim sparked an interesting debate about Docker's new MCP registry, questioning whether it truly solves the right problems for enterprise environments. The group identified a clear gap in Kubernetes integration for MCPs, with Karim suggesting a server-sent events (SSE) approach might offer better scalability and security than Docker's current implementation.
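To make the SSE idea concrete: the same FastMCP sketch above can be served over HTTP instead of stdio, which is what would let it run as a long-lived pod in a cluster (a sketch, assuming the Python SDK's SSE transport option):

    if __name__ == "__main__":
        # Serve over HTTP with server-sent events instead of a local stdio pipe,
        # so the server can sit behind a Kubernetes Service or Ingress
        mcp.run(transport="sse")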

We also had a productive discussion about UV for Python dependency management, with many participants praising its efficiency compared to traditional approaches.

LangChain & MCP Coexistence

The question "Do MCPs threaten LangChain?" yielded interesting perspectives. Karim made the compelling case that MCPs actually expand the tool ecosystem rather than replace existing frameworks, noting that MCPs primarily target plugin use cases for desktop environments while LangChain excels at agent-based workflows.

Several members highlighted opportunities for LangChain to integrate with MCP architecture, possibly using LangChain as an MCP tool for rapid prototyping.
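One concrete direction for that integration is pulling an MCP server's tools into a LangChain agent. Here's a rough sketch assuming the langchain-mcp-adapters package and a hypothetical local server script (demo_server.py):

    import asyncio

    from langchain_mcp_adapters.tools import load_mcp_tools
    from langchain_openai import ChatOpenAI
    from langgraph.prebuilt import create_react_agent
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Hypothetical MCP server launched over stdio; swap in any real server command
    server_params = StdioServerParameters(command="python", args=["demo_server.py"])

    async def main():
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Convert the server's MCP tools into LangChain-compatible tools
                tools = await load_mcp_tools(session)
                agent = create_react_agent(ChatOpenAI(model="gpt-4o"), tools)
                result = await agent.ainvoke({"messages": "Add 3 and 5"})
                print(result["messages"][-1].content)

    asyncio.run(main())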

Voice Application Development

Karim shared his experience with FastRTC, demonstrating how it dramatically reduces boilerplate code when building voice applications. This led to an engaging discussion about creating unified voice servers that work seamlessly across web, mobile, and telephone interfaces.
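FastRTC's hello-world gives a feel for how little code is involved. This sketch, based on the library's documented quickstart with a placeholder echo handler, stands up a voice endpoint with a built-in test UI:

    import numpy as np
    from fastrtc import ReplyOnPause, Stream

    def respond(audio: tuple[int, np.ndarray]):
        # ReplyOnPause hands us the caller's audio once they stop speaking.
        # A real voice agent would run speech-to-text, an LLM, then text-to-speech;
        # here we simply echo the audio back.
        yield audio

    stream = Stream(ReplyOnPause(respond), modality="audio", mode="send-receive")
    stream.ui.launch()  # Gradio-based web UI for quick local testing

FastRTC also documents mounting the same Stream onto a FastAPI app and a fastphone() helper for a temporary dial-in number, which is what makes the web/mobile/telephone unification plausible.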

Robert and Karim debated the merits of native iOS apps versus web-based applications for voice interactions, with Progressive Web Apps (PWAs) emerging as a compelling middle-ground option.

Agent-to-Agent Protocol (A2A)

Colin introduced the A2A protocol concept for service advertisement and capabilities exchange between agents, comparing it to a "welcome networking protocol" for AI microservices. Karim suggested creating a practical demonstration to showcase real-world applications.
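For those who haven't looked at A2A yet, the discovery side is easy to picture: each agent publishes an "agent card" describing itself and its skills at a well-known URL. Here's a rough sketch of reading one, assuming the draft spec's /.well-known/agent.json path and a placeholder host:

    import httpx

    # Placeholder host; a real A2A agent would serve its card here
    base_url = "https://agent.example.com"

    card = httpx.get(f"{base_url}/.well-known/agent.json").json()

    print(card["name"], "-", card.get("description", ""))
    for skill in card.get("skills", []):
        # Each advertised skill is a capability other agents can request
        print(" *", skill.get("id"), skill.get("description"))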

Colin announced an upcoming meeting with Google Cloud to discuss their agent space implementation and A2A protocol — stay tuned for updates!

Voice-to-Text & AI Coding Workflows

Colin demonstrated his workflow using Mac Whisper for voice-to-text transcription to command AI agents, sparking a broader conversation about "vibes coding" and the convergence of no-code tools with coding agents.
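For anyone who wants to try the voice-to-prompt part of that workflow without the desktop app, the open-source Whisper library (standing in here for Mac Whisper) handles the transcription step; instruction.wav is a hypothetical recording:

    import whisper

    # Open-source Whisper stands in for the Mac Whisper desktop app here;
    # the idea is the same: turn a spoken instruction into text for a coding agent
    model = whisper.load_model("base")
    result = model.transcribe("instruction.wav")
    prompt = result["text"].strip()

    print("Prompt for the coding agent:", prompt)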

The group explored the potential for voice-based coding interfaces that enable development while performing other activities — imagine coding while walking your dog!

Action Items

  • Colin: Following up with Google Cloud about agent space and A2A protocol

  • Karim: Developing a showcase for A2A protocol implementation

  • Karim: Creating a voice agent using WebRTC/FastRTC for AI workflow interaction

  • Robert: Exploring mobile app options for voice interaction with AI agents

  • Ryan: Completing workflows for project management tool integration

Next Office Hours Meeting

Our next office hours session will be held on Discord on Tuesday, April 29th at 2:00 PM Central (join here). We'll follow up on action items and explore new developments in the AI middleware space.

Have questions or topics you'd like to discuss? Reply to this email or post in our Discord channel!

Building the future of AI middleware together,

Colin McNamara
AIMUG Core Team
