Where we should take AI in the coming year: LLMs, vector search, MCP, GraphRAG and cybersecurity.
According to Dominik Tomicevic, CEO & Founder, Memgraph, as we look to 2026, we can expect strong interest from builders of practical AI chatbots and enterprise LLMs in using the Model Context Protocol (MCP) as a standard for connecting to external data sources, along with growing momentum behind GraphRAG — while still recognizing that vector search will continue to play an important and useful role.
Here, we share Tomicevic’s expert perspective on AI for the coming year:
Will MCP be a dominant and critical topic in AI discussions in 2026, due to its central role in enabling AI agents to securely and reliably interact with real-world data and systems?
Tomicevic: MCP yes, but it’s not a silver bullet on its own.
The moment you start connecting different types of data from multiple silos — CRM, ordering systems, manufacturing data — anything beyond simple retrieval becomes much more challenging. If you’re just building a documentation bot, basic search works fine. But once you start adding more sources, disappointment grows.
MCP is certainly under the spotlight right now as a potential way to solve the problem by providing a standardized way for an LLM to query different data sources. However, the LLM doesn’t understand your enterprise data, how you operate, your schema, how things are linked, or the implicit knowledge that isn’t documented.
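To make the point concrete, here is a minimal sketch of what MCP standardizes, assuming the official MCP Python SDK and its FastMCP interface; the server and the CRM lookup it exposes are purely illustrative. The protocol gives the model a uniform way to call the tool, but it says nothing about what the data means.

```python
# Minimal sketch of an MCP server that exposes one enterprise data source as a
# tool an LLM client can call. Assumes the official MCP Python SDK ("mcp"
# package) and its FastMCP interface; the CRM lookup is a hypothetical stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-connector")

@mcp.tool()
def lookup_customer(customer_id: str) -> dict:
    """Return basic CRM details for a customer (illustrative stub)."""
    # In a real deployment this would query the CRM. The model only sees the
    # tool's name, docstring and arguments; it still knows nothing about how
    # this record relates to data in other silos.
    return {"id": customer_id, "status": "active", "open_orders": 2}

if __name__ == "__main__":
    mcp.run()  # serve the tool over the Model Context Protocol
```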
In 2026, CIOs will need to capture as much structured and unstructured implicit knowledge as possible and start thinking about knowledge graphs and how everything ties together. Simply throwing more tools and more data at vectors and LLMs will only lead to worse performance and more hallucinations, making this an increasingly critical enterprise AI challenge.
Should vector search still have a place in developers’ tool bags?
Tomicevic: While vector search isn’t nearly as comprehensive as GraphRAG, it still performs very well for simple retrieval tasks. We’ll see GraphRAG, and possibly other advanced techniques, used to synthesize data across complex organizational systems, providing LLMs with structured context and helping reduce hallucinations.
However, for many problems, vector search remains a perfectly sensible option and should always be considered. In other words, for simpler use cases, vector-only approaches are entirely adequate; they can get you in the ballpark and deliver good results.
I expect engineers will continue testing vector search on information retrieval cases where they already know the answers, as a way to evaluate their LLMs.
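As a rough illustration of how lightweight that baseline is, the sketch below performs vector-only retrieval end to end: embed a handful of documents, embed the query, and rank by cosine similarity. It assumes the sentence-transformers library; the model name and documents are placeholders.

```python
# Vector-only retrieval sketch: embed documents once, embed the query, rank by
# cosine similarity. Assumes sentence-transformers; the data is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "How to reset your password",
    "Quarterly revenue report for EMEA",
    "Setting up SSO for the admin console",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q          # cosine similarity, since vectors are unit length
    top = np.argsort(-scores)[:k]  # indices of the k highest-scoring documents
    return [docs[i] for i in top]

print(search("I forgot my login credentials"))
```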
Do you see GraphRAG on the rise in 2026?
Tomicevic: For some teams, knowledge graphs can initially be a challenge if you’re not familiar with graph modeling. But once you use graph technology to structure the context you give your LLM, results improve dramatically. MCP gives LLMs significant power, and if you are building agents that can execute complex workflows, many tasks can actually be automated — but you still need good context.
Going forward, I expect many teams will agree that success requires both graphs and a proper flow of internal knowledge for each task.
Essentially, to succeed with generative AI in your business, you need two things: first, model your data and structure it as a knowledge graph to systematize your knowledge; second, use GraphRAG to extract and curate the right knowledge for the task at hand.
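A minimal sketch of what that second step can look like, assuming a Bolt-compatible graph database such as Memgraph reached through the neo4j Python driver; the schema, labels and connection details are illustrative. The idea is to pull the neighborhood of an entity from the knowledge graph and hand the LLM curated facts rather than raw document chunks.

```python
# GraphRAG sketch: fetch an entity's neighborhood from a knowledge graph and
# turn it into structured context for the prompt. Assumes a Bolt-compatible
# database (e.g. Memgraph) and the neo4j Python driver; the schema is illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))

CONTEXT_QUERY = """
MATCH (c:Customer {id: $customer_id})-[r]-(n)
RETURN type(r) AS relation, labels(n) AS labels, properties(n) AS props
LIMIT 25
"""

def build_context(customer_id: str) -> str:
    with driver.session() as session:
        rows = session.run(CONTEXT_QUERY, customer_id=customer_id)
        facts = [f"{row['relation']} -> {row['labels']}: {row['props']}" for row in rows]
    # These curated facts go into the prompt in place of raw document chunks.
    return "Known facts about this customer:\n" + "\n".join(facts)

print(build_context("C-1042"))
```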
What do you think will be the next AI frontier?
Tomicevic: With so much AI-enabled fraud, cybersecurity is huge right now. I expect more and more CISOs will come to recognize that graphs are a natural fit for cybersecurity. Much of the ongoing activity by bad actors, combined with sensor data, naturally forms a graph — this is why many cybersecurity companies using graphs have been so successful.
Graphs don’t just monitor individual behavior; they correlate data across multiple actors and multi-pronged attacks. This is where graphs really shine.
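For illustration, that kind of correlation is a short graph query. The sketch below assumes the same Bolt-compatible setup as the GraphRAG example earlier and a purely hypothetical schema; it links otherwise separate accounts into one suspected campaign through the infrastructure they share.

```python
# Hypothetical correlation query: find accounts that reused infrastructure
# already touched by a flagged account, tying separate events into one campaign.
CORRELATION_QUERY = """
MATCH (bad:Account {flagged: true})-[:LOGGED_IN_FROM]->(ip:IP)
      <-[:LOGGED_IN_FROM]-(other:Account)
WHERE other <> bad
RETURN ip.address AS shared_ip, collect(other.id) AS possibly_compromised
"""

def correlate(session) -> list[dict]:
    # `session` is a neo4j driver session, as in the GraphRAG sketch above.
    return [record.data() for record in session.run(CORRELATION_QUERY)]
```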
And AI will make this even more powerful. Currently, systems surface insights for analysts, but humans are often overwhelmed by the sheer volume. By building agents to take automated actions, you could have millions of automated ‘bots’ — essentially a massive workforce protecting your organization.
I predict that these agents will initially work alongside human workflows, learning from experts until they’re ready to handle specialized tasks independently.
Will the “AI bubble” burst soon?
Tomicevic: The “AI bubble” isn’t a functionality problem; it’s an equity issue. If the money runs out, expectations will drop. Many AI initiatives are currently subsidized, which is good for consumers and for progress, but paying just $20 or $200 per month for AI is unrealistic without subsidy, as the true costs are far higher. OpenAI and others can’t charge a fair price like $2,000 a month because almost no one would pay it.
What I see playing out is some disruption potentially, but eventually prices will stabilize or efficiencies will improve enough to make the economics work. Hopefully, it doesn’t end up like micro-transportation — all those e-scooters! — where huge funding led to a boom and then everything wound up in the ditch.
I think we can agree that LLMs provide far more utility than scooters or even that other source of recent hype, blockchain. There may be a bubble because of the funding frenzy, but the underlying utility is real, and AI progress will continue to bring tangible benefits.