Overview
Welcome to the METIS project website. This research explores computational techniques for quantifying artificial metacognition, encompassing self-monitoring and self-regulation, in ensembles of Large Language Models (LLMs).
📰 News
Research Overview
Large Language Models (LLMs) struggle to assess their own uncertainty, detect knowledge conflicts, or recognize when problems exceed their expertise; these limitations undermine reliability and trust. We present a metacognitive framework for LLM ensembles that addresses these challenges through explicit self-monitoring and control.
Our system computes a Metacognitive State Vector (MSV) quantifying five dimensions derived from cognitive psychology research. MSV values automatically trigger System 1 (fast, single-node) or System 2 (deliberative, multi-node) processing based on query complexity and metacognitive needs.
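The routing idea can be sketched as follows. This is a minimal illustration, not the METIS implementation: the dimension names, thresholds, and routing rule here are all hypothetical placeholders, since the overview does not name the five MSV dimensions.

```python
from dataclasses import dataclass

@dataclass
class MetacognitiveStateVector:
    # Five dimensions, per the framework; these field names are
    # illustrative placeholders, not the actual METIS dimensions.
    uncertainty: float   # each value assumed normalized to [0, 1]
    conflict: float
    confidence: float
    novelty: float
    difficulty: float

def route(msv: MetacognitiveStateVector, threshold: float = 0.5) -> str:
    """Choose System 1 (fast, single-node) or System 2 (deliberative,
    multi-node) processing from the MSV. Threshold rule is a stand-in
    for whatever trigger the real system uses."""
    if max(msv.uncertainty, msv.conflict, msv.difficulty) > threshold:
        return "system2"  # deliberative, multi-node
    return "system1"      # fast, single-node

# A routine query with low metacognitive load stays on the fast path
print(route(MetacognitiveStateVector(0.1, 0.05, 0.9, 0.2, 0.3)))  # system1
```

The point of the sketch is that routing is a pure function of the metacognitive state, so the same signals can later drive role transitions and explanations.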
Beyond simple routing, the MSV enables three key innovations: First, role transition dynamics allow ensemble nodes to dynamically assume specialized roles (expert, critic, synthesizer, etc.) driven by real-time metacognitive signals: when uncertainty spikes or conflicts emerge, the system reorganizes itself accordingly. Second, this framework advances explainable AI (xAI) by making the reasoning process transparent; users can see why the system chose a particular processing strategy, what triggered deliberation, and how confidence evolved. Third, we extend traditional teacher-student knowledge distillation by conditioning the transfer on metacognitive state; the student model learns not just what to think, but when to think harder, creating GenAI systems that inherit both knowledge and some semblance of cognitive self-awareness.
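A toy sketch of the first innovation, role transition dynamics, assuming the same kind of metacognitive signals; the role names come from the text, but the reassignment policy and spike threshold below are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    uncertainty: float
    conflict: float

def assign_roles(node_ids: list[str], s: Signals, spike: float = 0.6) -> dict:
    """Reassign ensemble roles from real-time metacognitive signals.
    The policy here is a hypothetical stand-in for the actual dynamics."""
    if s.conflict > spike:
        # Emerging conflicts: promote a critic to adjudicate disagreements
        roles = ["critic", "expert", "synthesizer"]
    elif s.uncertainty > spike:
        # Uncertainty spike: gather more expert opinions before synthesizing
        roles = ["expert", "expert", "synthesizer"]
    else:
        roles = ["expert", "synthesizer", "synthesizer"]
    return dict(zip(node_ids, roles))

# A conflict spike reorganizes the ensemble around a critic
print(assign_roles(["n1", "n2", "n3"], Signals(uncertainty=0.2, conflict=0.8)))
```

Because the role assignment is an explicit function of the signals, the same mapping doubles as an explanation: the trigger (e.g. a conflict spike) and the resulting strategy are both inspectable, which is the xAI claim in the paragraph above.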
Please click below to: