AI Trends & News
In recent developments, LLMs like GPT-4 and Gemini have been designed to handle much longer contexts, up to 1 million tokens. However, long-context LLMs can introduce unnecessary or irrelevant chunks, resulting in lower precision. Standard RAG places the retrieved chunks in score-descending order, whereas Order-Preserving RAG (OP-RAG) places them in their original order within the document to maintain content structure.
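A minimal sketch of the idea (assuming chunks are indexed by their position in the source document; `top_k` and the precomputed scores are placeholders):

```python
def op_rag_select(chunks, scores, top_k=8):
    """Select the top-k chunks by retrieval score, but return them in their
    original document order rather than score-descending order."""
    # pick the k highest-scoring chunk indices, as standard RAG would
    best = sorted(range(len(chunks)), key=lambda i: scores[i], reverse=True)[:top_k]
    # re-sort those indices by document position to preserve content structure
    return [chunks[i] for i in sorted(best)]
```

The only change from standard RAG is the final sort: relevance decides which chunks are kept, while document order decides how they are presented to the model.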
llama-deploy provides a simplified way to deploy workflows as scalable microservices in just a few lines of code. Its built-in retry mechanisms and microservices architecture help keep your multi-agent AI system robust, scalable, and resilient in production environments.
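A minimal sketch of that deployment flow, modeled on the launch examples (the service name, port, and exact signatures are assumptions and may differ between llama-deploy versions):

```python
import asyncio

from llama_deploy import ControlPlaneConfig, WorkflowServiceConfig, deploy_workflow
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

class EchoWorkflow(Workflow):
    """A trivial workflow standing in for a real multi-agent pipeline."""

    @step
    async def echo(self, ev: StartEvent) -> StopEvent:
        return StopEvent(result=f"echo: {ev.get('message')}")

async def main():
    # registers the workflow as its own microservice with the control plane
    await deploy_workflow(
        EchoWorkflow(),
        WorkflowServiceConfig(host="127.0.0.1", port=8002, service_name="echo"),
        ControlPlaneConfig(),
    )

if __name__ == "__main__":
    asyncio.run(main())
```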
Built on Meta's open-source Llama 3.1 70B Instruct, Reflection 70B introduces Reflection-Tuning, a new training paradigm in which the model learns to reflect on its own reasoning before providing the final response.
This Speech-to-Speech library unifies several voice-processing stages into one efficient system. By merging multiple state-of-the-art (SOTA) models into a single modular framework, it addresses privacy and latency challenges, reaching latencies as low as 500 milliseconds, a notable achievement in real-time speech processing.
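The cascade itself is simple to picture; the event loop below is purely illustrative (all module names are hypothetical stand-ins, not the library's actual classes):

```python
def speech_to_speech_loop(vad, stt, llm, tts, mic, speaker):
    """Cascaded pipeline: voice activity detection -> speech-to-text ->
    language model -> text-to-speech. Keeping each stage modular lets you
    swap in whichever SOTA model fits your latency and privacy budget."""
    for audio_chunk in mic.stream():
        if not vad.is_speech(audio_chunk):
            continue  # skip silence so downstream models never run idle
        user_text = stt.transcribe(audio_chunk)
        reply_text = llm.generate(user_text)
        speaker.play(tts.synthesize(reply_text))
```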
Traditional approaches that rely on pre-trained models require extensive domain-specific data and considerable reward feedback, and their lack of real-time adaptability hinders their effectiveness in dynamic environments. This paper introduces the GRATR framework, which leverages Retrieval-Augmented Generation (RAG) to bolster trustworthiness reasoning in agents.
LlamaRank is a language model specialized for document relevancy ranking. It performs at least comparably to leading APIs on general document ranking and shows a marked improvement on code search. The model supports up to 8,000 tokens per document, well beyond competing rerankers such as Cohere's.
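LlamaRank itself is served behind an API; as a sketch of the same reranking pattern using an open cross-encoder (this is not LlamaRank's actual interface):

```python
from sentence_transformers import CrossEncoder

# any query-document cross-encoder works for illustrating the pattern
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how do I rotate API keys safely?"
docs = [
    "Rotate credentials on a fixed schedule and revoke old keys.",
    "A guide to seasonal pasta recipes.",
    "Store secrets in a managed vault, never in source code.",
]

# score each (query, document) pair, then sort documents by relevance
scores = reranker.predict([(query, d) for d in docs])
for doc, score in sorted(zip(docs, scores), key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```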
This benchmark targets how well LLMs can follow instructions to format their output according to a particular JSON template. Generative Feedback Loops depend on outputs that follow these instructions, since downstream processing breaks otherwise. The researchers ran 24 experiments, each designed to test a model's ability to follow the specified JSON format instructions.
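A check like this is straightforward to reproduce; a minimal sketch using the `jsonschema` package (the schema below is an invented example, not one of the benchmark's templates):

```python
import json

from jsonschema import ValidationError, validate  # pip install jsonschema

SCHEMA = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["sentiment", "confidence"],
}

def follows_template(llm_output: str) -> bool:
    """True only if the raw model output parses as JSON and matches the template."""
    try:
        validate(json.loads(llm_output), SCHEMA)
        return True
    except (json.JSONDecodeError, ValidationError):
        return False

assert follows_template('{"sentiment": "positive", "confidence": 0.9}')
assert not follows_template('Sure! Here is the JSON: {"sentiment": "positive"}')
```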
Agent Q is a significant milestone for agents, combining search, self-critique, and reinforcement learning to create state-of-the-art (SOTA) autonomous, self-healing web agents. It innovates by combining Monte Carlo Tree Search with AI self-critique, leveraging reinforcement learning from human feedback (RLHF) methods such as the Direct Preference Optimization (DPO) algorithm.
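For reference, the DPO objective at the heart of that training recipe is compact; a minimal PyTorch sketch over per-response summed log-probabilities (tensor names are illustrative):

```python
import torch.nn.functional as F

def dpo_loss(pol_chosen_logp, pol_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization: push the policy to prefer the chosen
    response over the rejected one, relative to a frozen reference model."""
    chosen_margin = pol_chosen_logp - ref_chosen_logp
    rejected_margin = pol_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```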
Qwen2-Audio is a large-scale audio-language model that accepts various audio inputs and either performs audio analysis or responds directly in text to spoken instructions. Users can engage in free-form voice interactions or provide audio together with text instructions for analysis.
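A minimal sketch of querying the instruct variant through Hugging Face transformers (adapted from the model card; the audio file and question are placeholders, and the API may shift between transformers releases):

```python
import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B-Instruct")
model = Qwen2AudioForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-Audio-7B-Instruct", device_map="auto"
)

conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "sample.wav"},
        {"type": "text", "text": "What is the speaker saying?"},
    ]},
]
prompt = processor.apply_chat_template(
    conversation, add_generation_prompt=True, tokenize=False
)
audio, _ = librosa.load("sample.wav", sr=processor.feature_extractor.sampling_rate)

inputs = processor(text=prompt, audios=[audio], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
# drop the prompt tokens, keep only the newly generated answer
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```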
Improving large language models traditionally relies on costly human data; recent self-rewarding mechanisms have shown that LLMs can improve by judging their own responses instead of relying on human labelers. Meta-Rewarding adds a step to this self-improvement process: the model judges its own judgements and uses that feedback to refine its judgment skills.
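Schematically, the loop adds one role on top of self-rewarding; the sketch below is a simplified illustration with hypothetical prompts, not the paper's exact procedure:

```python
def meta_rewarding_round(llm, prompt, n=4):
    """One illustrative round: the actor answers, the judge scores the
    answers, and the meta-judge scores the judgments themselves."""
    answers = [llm(prompt) for _ in range(n)]                                       # actor
    judgments = [llm(f"Score this answer to '{prompt}':\n{a}") for a in answers]    # judge
    meta_scores = [llm(f"How sound is this judgment?\n{j}") for j in judgments]     # meta-judge
    # preference pairs built from `judgments` train the actor, while pairs
    # built from `meta_scores` train the judge, refining judgment skills too
    return answers, judgments, meta_scores
```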
Reliability and traceability are major challenges in successfully deploying retrieval-augmented language models (RALMs). A novel self-reasoning framework constructs self-reasoning trajectories through three processes: a relevance-aware process, an evidence-aware selective process, and a trajectory analysis process.
STUMPY is a powerful, scalable Python library for modern time series analysis that efficiently computes the matrix profile. Importantly, once you have computed the matrix profile, it can be reused for a variety of time series data mining tasks, such as motif and anomaly (discord) discovery.
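For example, a minimal sketch on a synthetic series (the window size `m` is the main knob):

```python
import numpy as np
import stumpy

ts = np.random.rand(1000)   # stand-in for a real time series
m = 50                      # subsequence window length

mp = stumpy.stump(ts, m)    # matrix profile: column 0 holds nearest-neighbor distances

motif_idx = int(np.argmin(mp[:, 0]))    # best-matching pair -> motif candidate
discord_idx = int(np.argmax(mp[:, 0]))  # most isolated subsequence -> anomaly candidate
print(f"motif at {motif_idx}, discord at {discord_idx}")
```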
A comprehensive comparison of leading AI models. Each model brings unique capabilities and improvements, reflecting the ongoing evolution of AI technology: Llama 3.1, developed by Meta, extends context length to a remarkable 128K tokens; GPT-4o balances versatility and depth in language understanding and generation; and Claude 3.5 raises the industry standard for intelligence, emphasizing speed and precision.
Domain-specific and task-specific LLMs for the medical and finance industries are outperforming general-purpose models like GPT-4 and Claude 3.5 in their respective domains. Palmyra-Med averaged 85.9% across medical benchmarks, surpassing GPT-4 and Med-PaLM 2. Palmyra-Fin passed the CFA Level III exam with a 73% score, a first for any AI model.
Gemma is a family of lightweight, state-of-the-art open models from Google. Gemma 2B is optimized for edge inference and was trained on 2 trillion tokens using knowledge distillation (KD) from a larger Gemma model. ShieldGemma is a set of instruction-tuned models for evaluating the safety of text prompts and text responses against a set of defined safety policies.
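The objective behind that kind of distillation setup is standard; a minimal sketch of the soft-label KD loss (the temperature value is illustrative):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Match the student's softened next-token distribution to the teacher's.
    Scaling by T^2 keeps gradient magnitudes comparable across temperatures."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```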
DCLM-7B is an open-source LLM released together with its weights, training code, and dataset. The model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
A state-of-the-art small 12B model with a 128k context length, built in collaboration with NVIDIA and released under the Apache 2.0 license. Aimed at global, multilingual applications, it is trained on function calling, has a large context window, and is particularly strong in English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.
Workflows to automatically and dynamically improve LLM applications using production data. The framework doesn't rely on sophisticated tools, ML platforms, or enterprise-grade monitoring systems; instead, it emphasizes thoughtful metric selection, consistent monitoring, and data-driven improvements.
A holistic AI framework for evaluating user-facing performance in LLM inference systems, introducing metrics such as the fluidity-index and fluid token generation rate. Metron's fluidity-index sets deadlines for token generation based on desired time-to-first-token (TTFT) and time-between-tokens (TBT) values, adjusting them for prompt length and observed system performance.
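A rough sketch of the deadline idea (a simplification: the adjustments for prompt length and observed performance that Metron applies are omitted here):

```python
def fluidity_index(token_arrival_times, ttft_deadline, tbt_deadline):
    """Fraction of deadlines met, where token i (0-indexed) is due at
    ttft_deadline + i * tbt_deadline seconds after the request starts."""
    met = sum(
        t <= ttft_deadline + i * tbt_deadline
        for i, t in enumerate(token_arrival_times)
    )
    return met / len(token_arrival_times)

# e.g. a stream that keeps pace for three tokens, then slips: 3/5 deadlines met
print(fluidity_index([0.4, 0.55, 0.7, 1.4, 1.6], ttft_deadline=0.5, tbt_deadline=0.2))
```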
A multi-agent workflow framework for enhancing synthetic data quality and diversity in AI model training. This agentic framework automates the creation of diverse, high-quality synthetic data, using raw sources like text documents and code files as seeds.