
At the Enterprise Search and Discovery 23 conference, several speakers talked about neural search's power to handle tricky keyword search cases. One compelling business example involves the query "acquire" and the context carried by word order.
If you know the companies' names, Google will probably return great results, but finding the acquisitions made by a company that may itself have been acquired requires a lot of reading and query refinement. The query "red hat acquires" is a great example: all the top results are about IBM's massive acquisition of Red Hat, when the intent was to find what Red Hat itself has acquired.
Keyword search depends on indexing. Like the index at the back of a book, a search index relates words to the pages on which they appear. But even a sophisticated index may not retain details like the exact order of the words. An advertising-oriented index may also assume that anyone searching for anything related to Red Hat is looking for information about the company, not about its acquisitions.
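To see why word order gets lost, consider a minimal sketch of a bag-of-words inverted index. The documents and query below are illustrative, not from any real index:

```python
from collections import defaultdict

# A minimal bag-of-words inverted index: it maps each term to the
# documents containing it, discarding word order entirely.
docs = {
    1: "IBM acquires Red Hat in landmark deal",
    2: "Red Hat acquires StackRox to boost security",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# The query "red hat acquires" matches both documents equally --
# from the index alone, there is no way to tell who acquired whom.
query_terms = "red hat acquires".split()
matches = set.intersection(*(index[t] for t in query_terms))
print(sorted(matches))  # both documents match: [1, 2]
```

Both documents contain all three query terms, so a pure term-matching index scores them identically even though only one answers the question.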
Large Language Models are vastly more capable interpreters of information than indexes. Because they model language itself rather than mere term occurrences, they understand, at minimum, that the order of words is frequently all-important. When SWIRL re-ranks the same search results using LLMs, the correct result surfaces first instead of being buried below advertisements and unrelated articles.
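The re-ranking step itself is straightforward to sketch. In the toy example below, `phrase_score` is a deliberately simple stand-in for an LLM or cross-encoder relevance model (not SWIRL's actual implementation); the point is that keyword retrieval produces candidates, and a second model re-orders them against the full query:

```python
# Sketch of LLM-style re-ranking: keyword search returns candidates,
# then a relevance model re-orders them against the whole query.
# phrase_score is a toy stand-in for a real LLM/cross-encoder scorer.

def phrase_score(query: str, doc: str) -> float:
    # Toy scorer: reward documents that preserve the query's word
    # order, approximating what a real model captures about context.
    return 1.0 if query.lower() in doc.lower() else 0.0

def rerank(query: str, results: list[str]) -> list[str]:
    # Stable sort: highest-scoring candidates surface first.
    return sorted(results, key=lambda d: phrase_score(query, d), reverse=True)

# Hypothetical keyword-search results for "red hat acquires".
results = [
    "IBM acquires Red Hat for $34 billion",        # top keyword hit
    "Red Hat acquires StackRox, a security firm",  # the intended answer
]
reranked = rerank("red hat acquires", results)
print(reranked[0])  # the intended answer now surfaces first
```

In production the scorer would be a model that embeds or jointly encodes the query and each document; the surrounding retrieve-then-rerank pipeline stays the same.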
Google does much better when the search terms are quoted as an exact phrase. But in the enterprise and within applications, keyword search without LLM-powered re-ranking may tell a very different story. That's why LLMs matter for enterprise search.



