Sep 26, 2024, 3:00 AM

NEC enhances AI reliability in Heidelberg for business growth

Highlights
  • NEC Laboratories Europe is enhancing its generative AI services with the introduction of LLM Explainer to improve the reliability of large language models.
  • The LLM Explainer technology helps users detect and correct hallucinations in AI-generated outputs by comparing them with original source documentation.
  • This advancement aims to streamline the verification process, ultimately facilitating more accurate and efficient use of AI in business applications.
Story

On September 26, 2024, NEC Laboratories Europe announced advancements in its generative AI services, specifically targeting the Japanese market. The introduction of LLM Explainer aims to improve the reliability of large language models (LLMs) by addressing hallucinations, incorrect outputs generated by these models. The technology lets users compare LLM outputs with the original source documentation, enabling efficient verification and correction of generated text.

Dr. Carolin Lawrence, Manager and Chief Research Scientist at NEC Laboratories Europe, highlighted the challenges organizations face when using LLMs, whose outputs often require manual checks for accuracy. LLM Explainer streamlines this process by detecting discrepancies in meaning and content, saving time and reducing the costs associated with manual verification.

The technology employs advanced natural language processing techniques to identify omissions, duplications, and changes in meaning, and presents users with the relevant source sentences for comparison. This is designed to simplify the correction process, enabling users to quickly adapt LLM-generated text to their specific needs. NEC plans to further enhance LLM Explainer with features that identify hallucinated entities and contradictions.

Integration of the service into NEC cotomi generative AI API services is set to begin at the end of October, and an on-premise version will also be made available, marking a significant step forward in the reliability of AI applications for business-critical operations.
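NEC has not published implementation details of LLM Explainer, but the general idea described above, aligning each generated sentence with supporting source sentences and flagging unsupported or missing content, can be illustrated with a minimal sketch. The example below uses the open-source sentence-transformers library, an arbitrary model name, and a hypothetical similarity threshold; it is an assumption-laden illustration of sentence-level alignment, not NEC's actual method.

```python
# Illustrative sketch only: aligns LLM output sentences with source sentences
# by cosine similarity and flags likely hallucinations (output sentences with
# no close source match) and omissions (source sentences with no close output
# match). The model name and the 0.6 threshold are assumptions, not NEC's.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def check_output(source_sentences, output_sentences, threshold=0.6):
    src_emb = model.encode(source_sentences, convert_to_tensor=True)
    out_emb = model.encode(output_sentences, convert_to_tensor=True)
    sims = util.cos_sim(out_emb, src_emb)  # rows: output, cols: source

    report = {"unsupported": [], "omitted": []}
    for i, sent in enumerate(output_sentences):
        best = sims[i].max().item()
        if best < threshold:
            report["unsupported"].append((sent, round(best, 2)))
    for j, sent in enumerate(source_sentences):
        best = sims[:, j].max().item()
        if best < threshold:
            report["omitted"].append((sent, round(best, 2)))
    return report

source = [
    "Integration into NEC cotomi API services begins at the end of October.",
    "An on-premise version will also be made available.",
]
summary = [
    "The service will be integrated into cotomi at the end of October.",
    "Pricing starts at 100 euros per month.",  # not supported by the source
]
print(check_output(source, summary))
```

A production system of the kind NEC describes would go further than surface similarity, for example using entailment or meaning-comparison models to catch subtle changes in meaning and duplications, and would present the matched source sentences to the user for correction rather than only flagging them.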
