As most AI models acknowledge, they are not 100% accurate in all instances and can sometimes give wrong responses, so there is a need for a moderator that can fact-check and provide feedback to these large language models (LLMs). That's exactly where Meta's newest AI tool, Shepherd, comes into the picture. What better way to monitor an AI tool than with another AI tool? Through Shepherd, Meta's objective is to overcome the problem of generative AI tools producing inaccurate or misleading responses by, ironically, using AI itself.
Launched on August 16, 2023, the Shepherd LLM is designed to critique other LLMs' responses and suggest refinements in order to produce more accurate generative AI outputs. The objective is to develop automated moderation techniques that tackle factual errors, misleading information, logical flaws, and coherence and alignment issues in the outputs of the large language models that power generative AI tools. The research team behind Shepherd leveraged feedback from online communities to improve the model's output significantly. The newly developed AI tool delivers its feedback in natural language.
Meta shared an overview of Shepherd in a research paper, Shepherd: A Critic for Language Model Generation. The example questions are from the Stack Exchange community, and the responses are generated by the Alpaca model. Shepherd is able to review and critique Alpaca's generations by either identifying errors or providing constructive feedback, as the paper's examples show.
Feedback from two online communities, Stack Exchange and the Pushshift Reddit Dataset, was used to help Shepherd generate better output. Pushshift is a social media data collection, analysis, and archiving platform that has been collecting data from Reddit since 2015 and making it available to researchers. Pushshift's Reddit dataset is updated in real time and includes historical data dating back to Reddit's inception.
Stack Exchange is a network of question-and-answer websites on topics in diverse fields, each site covering a specific topic, where questions, answers, and users are subject to a reputation award process.
The data from both sources was structured into a question-answer-critique format. The team believes that by incorporating diverse feedback from these platforms, Shepherd can offer a wide range of critiques that reflect real-world user perspectives. To ensure the quality of the critiques, the team employed various data refinement techniques like keyword filtering, analysing user edit histories, and incorporating community vote scores. The researchers explained that these methods helped in identifying critiques that led to the refinement of original answers.
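The question-answer-critique structuring described above can be sketched in code. The field names, keywords, and vote threshold below are illustrative assumptions for the sake of the example, not the actual filters the Shepherd team used:

```python
# Hypothetical sketch of structuring community feedback into
# question-answer-critique triples. Keywords and thresholds are
# assumptions, not the Shepherd paper's actual pipeline.

REFINEMENT_KEYWORDS = {"incorrect", "wrong", "instead", "actually", "error"}
MIN_VOTE_SCORE = 3  # assumed community-vote threshold

def build_critique_examples(threads):
    """Turn raw community threads into question-answer-critique triples.

    Each thread is a dict with 'question', 'answer', and a list of
    'replies'; every reply has 'text' and 'score' keys.
    """
    examples = []
    for thread in threads:
        for reply in thread["replies"]:
            # Keep only well-voted replies that look like critiques.
            if reply["score"] < MIN_VOTE_SCORE:
                continue
            words = set(reply["text"].lower().split())
            if words & REFINEMENT_KEYWORDS:
                examples.append({
                    "question": thread["question"],
                    "answer": thread["answer"],
                    "critique": reply["text"],
                })
    return examples
```

In practice, the team's refinement also drew on user edit histories, which this simplified sketch omits.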
The LLaMA-7B model was used as the base for Shepherd's training. LLaMA 7B is the smallest LLM variant from Meta, trained on one trillion tokens. When compared with other models like Alpaca 7B, SelFee-7B, and ChatGPT, Shepherd's performance was notable. The research paper mentioned, "Shepherd achieved a win-rate of 53-87% against competitive alternatives." This indicates that while Shepherd has a relatively small size of 7B parameters, its critiques are on par with, or even preferred over, those of established models like OpenAI's ChatGPT.
The formulation of Shepherd raises a pertinent question: "Why not just build this capability into the main AI model of the other tools and produce better results without this middle step?" That does seem likely to be Meta's end goal: to facilitate better responses by pushing generative AI systems to reassess their incorrect or misleading answers and produce better replies to users' queries. OpenAI, for its part, says that its GPT-4 model, used in the current version of ChatGPT, already produces far better results than earlier GPT systems.
Some platforms are also seeing good results from using GPT-4 as the basis for content moderation tasks, often rivalling human moderators in performance. That could lead to significant advances in how social media platforms use AI. And while such systems will never match humans at detecting nuance and meaning, platforms could soon apply far more automated moderation to user posts. For general queries, additional checks and balances like Shepherd will also help refine the results provided, or help developers build better models to meet demand.
The introduction of Shepherd highlights the potential of refining AI outputs using community feedback. As the research paper suggests, "Shepherd's strength lies in its ability to leverage a high-quality feedback dataset curated from community feedback and human annotations." Meta's new approach could pave the way for future models that prioritise real-world feedback to enhance their outputs. In the end, the push will see these tools getting more intelligent and better at understanding each of our queries. With generative AI being impressive in what it can provide already, it's getting closer to being more reliable as an assistive tool, personally and professionally.