First, let’s talk about LLVM, the modular and reusable compiler toolchain:

With LLVM as a compiler backend, we can write code in the language of our choice and compile it to any number of supported target architectures. In other words, we can
“develop a frontend for any programming language and a backend for any instruction set architecture.”

LLVM’s extensible architecture lets us compile code (using Clang, Rustc, etc.) from any supported language into an intermediate representation (IR). That intermediate layer then takes care of optimization and compilation to any supported target platform.
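To make the intermediate layer concrete, here is a tiny hand-written LLVM IR module in its textual form. Any frontend (Clang, Rustc, etc.) emits IR like this, and any LLVM backend can lower it to native code, for example with `llc`. The function itself is just an illustrative two-integer add:

```python
# A minimal LLVM IR module (textual form). A frontend would normally emit
# this for us, e.g.:  clang -S -emit-llvm add.c -o add.ll
# A backend then lowers it to machine code, e.g.:  llc add.ll -o add.s
ir_module = """
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add i32 %a, %b
  ret i32 %sum
}
"""

print(ir_module.strip())
```

The point is that neither side needs to know about the other: the frontend only targets the IR, and the backend only consumes it.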

Now that we have a deep understanding of LLVM, let’s talk about MCP (for real this time). MCP is just like LLVM, except that it’s totally different. MCP was introduced by Anthropic in 2024, and it gives AI assistants a standard way to reach data sources and call tools over HTTP or stdio.

With MCP, we can connect any tool or data source to any supported LLM, like Claude, Llama, or OpenAI’s models. In other words, we can
“develop a frontend for any tool or data source and a backend for any LLM.”
Pretty much anything can be a data source for MCP; you just have to write the corresponding MCP integration for that data source. Anthropic has some nice examples.
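To sketch what such an integration looks like, here is the shape of a single MCP tool: a description the server advertises (via the protocol’s `tools/list` method) plus a handler it dispatches to when the tool is called. The tool name, the toy in-memory data source, and the handler are all made up for illustration; a real integration would use an MCP SDK and back onto a real data source.

```python
import json

# Illustrative tool description, roughly the shape MCP servers advertise
# via "tools/list". The name and schema here are hypothetical.
TOOL = {
    "name": "read_note",
    "description": "Read a note from a toy in-memory data source",
    "inputSchema": {
        "type": "object",
        "properties": {"id": {"type": "string"}},
        "required": ["id"],
    },
}

# Stand-in data source; in practice this could be a database, an API, etc.
NOTES = {"1": "MCP decouples tools from models."}

def handle_tool_call(name, arguments):
    # MCP tool results wrap their payload in a list of typed content blocks.
    if name == "read_note":
        text = NOTES.get(arguments["id"], "not found")
        return {"content": [{"type": "text", "text": text}]}
    raise ValueError(f"unknown tool: {name}")

print(json.dumps(handle_tool_call("read_note", {"id": "1"})))
```

Swap out the handler body and you have an integration for a different data source; nothing on the LLM side has to change.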

MCP’s extensible architecture lets us fetch data or run tools on any supported platform, providing a well-known intermediate layer. That intermediate layer can then be invoked by any supported LLM platform.
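That intermediate layer is JSON-RPC 2.0 on the wire. Here is a sketch of a `tools/call` exchange of the kind MCP carries over stdio or HTTP; the method name comes from the MCP spec, while the tool name, arguments, and result text are invented for illustration:

```python
import json

# Sketch of an MCP tool invocation as JSON-RPC 2.0 messages.
# "tools/call" is a real MCP method; the payload below is hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "run_query",
        "arguments": {"sql": "SELECT 1 FROM dual"},
    },
}

# The matching response echoes the request id and returns typed content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "[(1,)]"}]},
}

print(json.dumps(request))
print(json.dumps(response))
```

Because every server speaks this same framing, any MCP-aware client can drive any MCP server without bespoke glue code.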

Again, fewer lines on the diagram is better. Fewer bespoke integrations, more plug-and-play functionality.
There’s an MCP server for Oracle Database?
Curious? Try it out. It works with Oracle Database Free, supporting versions 19c, 21c, and 23ai.
Or, watch a demo to see how MCP works in action (with Oracle Database).
