A key question about LLMs is whether they solve reasoning tasks by learning transferable algorithms or simply memorizing training data. This distinction matters: while memorization might handle ...
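One common way to operationalize this distinction is to test whether accuracy survives a distribution shift that training data is unlikely to cover, such as much longer operands. Below is a minimal sketch of such a probe; the `query_model` helper is a hypothetical stand-in for a real LLM call, not part of any cited work:

```python
import random

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with a real API client."""
    raise NotImplementedError

def probe_generalization(n_digits: int, trials: int = 100) -> float:
    """Accuracy on random n-digit addition. Accuracy that collapses at operand
    lengths unlikely to appear verbatim in training data points toward
    memorization; accuracy that stays flat is more consistent with a
    transferable algorithm."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        answer = query_model(f"Compute {a} + {b}. Answer with the number only.")
        correct += answer.strip() == str(a + b)
    return correct / trials

# Compare in-distribution (short) vs. out-of-distribution (long) operands:
# for d in (2, 4, 8, 16):
#     print(d, probe_generalization(d))
```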
This repository contains the code for the ACL 2024 paper: An Investigation of Neuron Activation as a Unified Lens to Explain Chain-of-Thought Eliciting Arithmetic Reasoning of LLMs. How to Cite: If you ...
Transformer models have significantly advanced machine learning, particularly on complex tasks such as natural language processing, as well as on arithmetic operations like addition and multiplication.
Large Language Models (LLMs) excel at natural language understanding, but their capability for complex mathematical reasoning over a combination of structured tables and unstructured text is ...
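In such table-and-text settings, a common baseline is to serialize the table into the prompt alongside the passage. A minimal sketch is shown below; the markdown-style serialization and the helper names are illustrative assumptions, not taken from any of the studies above:

```python
def serialize_table(headers: list[str], rows: list[list[str]]) -> str:
    """Render a table as markdown so it can be embedded in an LLM prompt."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in rows]
    return "\n".join(lines)

def build_prompt(text: str, headers: list[str],
                 rows: list[list[str]], question: str) -> str:
    """Combine unstructured text and a structured table into one prompt."""
    return (f"Passage:\n{text}\n\nTable:\n{serialize_table(headers, rows)}\n\n"
            f"Question: {question}\nThink step by step, then give the answer.")

# Example: a question that requires reading both the passage and the table.
prompt = build_prompt(
    "Revenue figures are reported in millions of USD.",
    ["Year", "Revenue"],
    [["2022", "410"], ["2023", "455"]],
    "By how much did revenue grow from 2022 to 2023?",
)
```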
Researchers have conducted a systematic analysis of large language models' capabilities for inductive and deductive reasoning. The study reveals both surprising strengths and clear limitations of ...
The purpose of this study was to determine whether a student's accuracy in arithmetic could be maintained, and assignment completion in arithmetic increased, by providing free time contingent on such ...