
Radiant Logic Unveils Its Next-Generation Chatbot

Radiant Logic is proud to announce the launch of our new chatbot—a dynamic solution engineered to revolutionize how customers and internal teams interact with data and knowledge. This post offers a high-level look at the core technology powering the chatbot and previews future innovations that will further streamline support, knowledge sharing, and enterprise data interactions. 

Smarter Answers with Retrieval-Augmented Generation (RAG) 

At the heart of our chatbot lies a retrieval-augmented generation (RAG) engine that handles every user query. This means the chatbot doesn’t just generate responses – it also pulls in real information from a curated and constantly updated knowledge base to ground each answer. The result? More accurate, complete, and trustworthy responses.
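
For readers who like to see code, here is a minimal, self-contained sketch of the general retrieve-then-generate flow. The toy corpus and the llm_generate stub are hypothetical placeholders rather than the rocknbot implementation, and the retrieval step is deliberately naive here; the real retrieval blend is described in the next section.

```python
# Minimal retrieve-then-generate sketch. The toy corpus and llm_generate stub
# are hypothetical placeholders, not the rocknbot implementation.

CORPUS = [
    "Doc 1: steps for connecting a new data source to the platform.",
    "Doc 2: troubleshooting tips for synchronization errors.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Deliberately naive keyword-overlap retrieval; the production engine uses
    # the hybrid search described in the next section.
    terms = set(query.lower().split())
    ranked = sorted(CORPUS, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return ranked[:k]

def llm_generate(prompt: str) -> str:
    # Placeholder for whichever LLM backend the deployment uses.
    return f"(answer grounded in the retrieved context)\n{prompt[:80]}..."

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context.\nContext:\n{context}\nQuestion: {query}"
    return llm_generate(prompt)

print(answer("How do I fix synchronization errors?"))
```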

Cutting-Edge Information Retrieval Techniques 

To make sure users get the best possible answers, we use a blend of modern search techniques (a simplified code sketch follows this list): 

  • Semantic Search helps the chatbot understand the meaning behind a question—not just the keywords
  • Keyword-Based Search (BM25) ensures that important exact matches are not overlooked
  • Re-ranking using advanced AI models ensures that the most relevant content is always prioritized
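
To make the blend concrete, here is a hedged sketch of hybrid retrieval with re-ranking. The libraries and model names used here (rank_bm25 and the sentence-transformers MiniLM models) are illustrative assumptions; this post does not specify which components rocknbot ships with.

```python
# Hybrid retrieval sketch: BM25 + semantic search, score fusion, then re-ranking.
# Library and model choices here are assumptions for illustration only.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, CrossEncoder

docs = [
    "How to rotate API keys in the admin console.",
    "Troubleshooting LDAP bind failures.",
    "Configuring SCIM provisioning for downstream applications.",
]

# Keyword-based signal (BM25) over a tokenized copy of the corpus.
bm25 = BM25Okapi([d.lower().split() for d in docs])

# Semantic signal via sentence embeddings.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

# Cross-encoder re-ranker for the final ordering.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def normalize(scores: np.ndarray) -> np.ndarray:
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-9)

def search(query: str, k: int = 2) -> list[str]:
    keyword = np.asarray(bm25.get_scores(query.lower().split()))
    semantic = doc_vecs @ embedder.encode(query, normalize_embeddings=True)
    combined = 0.5 * normalize(keyword) + 0.5 * normalize(semantic)
    # Take a candidate pool from the fused scores, then let the re-ranker decide.
    pool = [docs[i] for i in np.argsort(combined)[::-1][: k * 2]]
    rerank_scores = reranker.predict([(query, passage) for passage in pool])
    ranked = [p for _, p in sorted(zip(rerank_scores, pool), reverse=True)]
    return ranked[:k]

print(search("LDAP bind error"))
```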

Read more about our document search strategy on Medium. 

Built-In Intelligence for Better Answers 

Even the best AI models can occasionally miss the mark. That’s why we have included a “golden-qa” dataset—a collection of frequently asked questions and high-confidence answers that help fill in any gaps and ensure reliable responses to common or complex queries. 
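
One straightforward way to use such a dataset, sketched below under assumptions, is to check whether an incoming question closely matches a curated question before falling back to retrieval. The embedding model, similarity threshold, and sample entries here are illustrative, not the values used in production.

```python
# Golden-QA lookup sketch; the embedding model, threshold, and sample entries
# are illustrative assumptions, not the production values.
from sentence_transformers import SentenceTransformer

GOLDEN_QA = {
    "How do I reset my admin password?": "Sample curated answer about password resets.",
    "Where can I download the latest release?": "Sample curated answer with download steps.",
}

embedder = SentenceTransformer("all-MiniLM-L6-v2")
golden_questions = list(GOLDEN_QA)
golden_vecs = embedder.encode(golden_questions, normalize_embeddings=True)

def golden_answer(query: str, threshold: float = 0.85) -> str | None:
    """Return a curated answer when the query closely matches a known question."""
    sims = golden_vecs @ embedder.encode(query, normalize_embeddings=True)
    best = int(sims.argmax())
    if sims[best] >= threshold:
        return GOLDEN_QA[golden_questions[best]]
    return None  # fall back to the regular RAG pipeline

print(golden_answer("how do i reset the admin password"))
```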

We also use a ReAct agent to refine questions before they are answered, making sure the chatbot clearly understands what is being asked and can follow up intelligently when needed. 
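
The sketch below shows the general shape of a ReAct-style loop (thought, action, observation) applied to question refinement; the prompt format, tool set, and llm stub are placeholders rather than the rocknbot implementation.

```python
# ReAct-style question refinement sketch; the llm() stub and tools are placeholders.

def llm(prompt: str) -> str:
    """Placeholder for the actual LLM backend."""
    return "FINAL: Which product version are you running when the error appears?"

TOOLS = {
    "clarify": lambda q: f"Ask the user to clarify: {q}",
    "search_docs": lambda q: f"Top passages related to: {q}",
}

def refine(question: str, max_steps: int = 3) -> str:
    scratchpad = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(
            "You may THINK, call a tool as 'ACTION: tool_name | input', "
            "or finish with 'FINAL: <refined question>'.\n" + scratchpad
        )
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        if step.startswith("ACTION:"):
            name, _, arg = step.removeprefix("ACTION:").partition("|")
            observation = TOOLS.get(name.strip(), lambda a: "unknown tool")(arg.strip())
            scratchpad += f"{step}\nObservation: {observation}\n"
        else:
            scratchpad += step + "\n"
    return question  # fall back to the original question

print(refine("it doesn't work, why?"))
```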

Optimizing Tabular Data Handling 

Many support questions involve structured data like tables—and that is something traditional chatbots often mishandle. Ours includes a unique approach to processing and presenting tabular data, improving clarity and reducing the risk of AI errors. 
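
As a rough illustration of one common technique for this problem, the sketch below serializes each table row together with its headers so every indexed chunk is self-describing. The post does not detail rocknbot’s exact method, so treat this as an assumption-laden example with made-up sample data.

```python
# Row-wise table serialization sketch: each row becomes a self-describing text
# chunk that can be embedded and retrieved on its own. Illustrative only; the
# exact tabular-data handling in rocknbot may differ.

def table_to_chunks(headers: list[str], rows: list[list[str]], caption: str = "") -> list[str]:
    chunks = []
    for row in rows:
        cells = "; ".join(f"{header}: {value}" for header, value in zip(headers, row))
        chunks.append(f"{caption} - {cells}" if caption else cells)
    return chunks

chunks = table_to_chunks(
    headers=["Setting", "Default", "Description"],
    rows=[
        ["timeout", "30s", "How long to wait for a backend response"],
        ["max_results", "100", "Upper bound on returned entries"],
    ],
    caption="Sample configuration table",
)
print(chunks[0])
# Sample configuration table - Setting: timeout; Default: 30s; Description: How long to wait for a backend response
```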

Learn more in our evaluation of LLMs and tables on Medium. 

A Robust and Flexible Architecture 

The backbone of our chatbot is a FastAPI-based answering engine designed for speed, scalability, and integration (a minimal endpoint sketch appears after this list): 

  • A web-based interface is available to customers at chatbot.radiantlogic.com
  • An internal Slack bot helps our teams get rapid answers and manage content directly using simple commands
  • Built-in feedback tools allow users to rate responses, helping us improve the chatbot over time
  • Users can also access it via the chat icon in the bottom-right corner of developer.radiantlogic.com/ or via the “Documentation Assistant” link on support.radiantlogic.com 
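
For a sense of what such an engine looks like, here is a minimal FastAPI sketch with an answering route and a feedback route. The route names, payload shapes, and answer() helper are assumptions for illustration and are not the actual rocknbot API.

```python
# Minimal FastAPI answering-engine sketch with a feedback route.
# Route names, payloads, and the answer() helper are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Question(BaseModel):
    text: str

class Feedback(BaseModel):
    conversation_id: str
    rating: int  # e.g., 1 = thumbs down, 5 = thumbs up

def answer(query: str) -> str:
    # Placeholder for the RAG pipeline sketched earlier.
    return f"(generated answer for: {query})"

@app.post("/ask")
def ask(q: Question) -> dict:
    return {"answer": answer(q.text)}

@app.post("/feedback")
def feedback(fb: Feedback) -> dict:
    # A real deployment would persist the rating to improve answers over time.
    return {"status": "recorded", "conversation_id": fb.conversation_id}

# Run locally with: uvicorn this_module:app --reload
```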

Additionally, our tech support team can manage the chatbot’s content directly in Slack using specialized admin commands (a sample handler sketch follows this list):

  • /get_golden_qa_pairs
  • /update_golden_qa_pairs
  • /get_conversations
  • /rebuild_docs 
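
As an illustration, here is a hedged sketch of how one such command could be wired up with slack_bolt; the handler body and the rebuild_documentation_index() helper are hypothetical and not the actual rocknbot code.

```python
# Slack admin command handler sketch using slack_bolt; the helper function and
# environment variable names are illustrative assumptions.
import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

def rebuild_documentation_index() -> int:
    """Placeholder: re-crawl the docs and rebuild the retrieval index."""
    return 0  # number of documents indexed

@app.command("/rebuild_docs")
def handle_rebuild_docs(ack, respond):
    ack()  # Slack requires an acknowledgement within 3 seconds
    count = rebuild_documentation_index()
    respond(f"Rebuilt the documentation index ({count} documents).")

if __name__ == "__main__":
    app.start(port=3000)
```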

Embracing Open Source 

In the spirit of collaboration and community, we have open-sourced the entire codebase for our chatbot under the Apache 2.0 license. The project, known as rocknbot, is available on GitHub: https://github.com/radiantlogicinc/rocknbot

We warmly invite developers looking to implement a RAG-based chatbot solution in their own organizations to use rocknbot. Your feedback, collaboration, and contributions are highly valued as we continue to push the envelope of what is possible with AI. 

Looking Ahead: Future Enhancements 

We are already hard at work on a number of exciting enhancements, including: 

  • Login functionality with shared conversation history for a more personalized experience
  • Technical discussion summaries that facilitate deeper dives into specific topics
  • Fine-grained citations and curated web search results within generated answers for enhanced transparency
  • Scheduled conversation logs to periodically review and refine answer quality

Join Us on This Exciting Journey 

This chatbot is not just a new feature—it represents a significant leap forward in how advanced retrieval and generation techniques can be harnessed to solve real-world problems.

Whether you are a customer seeking accurate, quick responses or a developer eager to contribute to our open-source project, we invite you to join us on this exciting journey.

Stay tuned for more technical deep dives and updates on our cutting-edge work in AI/ML. The future of conversational AI is here, and we are thrilled to lead the charge. 

References 

  • Deep Dive into the Best Chunking & Indexing Method for RAG – https://medium.com/@carlos-a-escobar/deep-dive-into-the-best-chunking-indexing-method-for-rag-5921d29f138f 
  • Benchmarking Large Language Models on Tabular Data: A Comprehensive Evaluation – https://medium.com/@anshoo.jani/benchmarking-large-language-models-on-tabular-data-a-comprehensive-evaluation-ed3c3c6523a0 
  • rocknbot on GitHub – https://github.com/radiantlogicinc/rocknbot 

