FinToolBench: Evaluating LLM Agents for Real-World Financial Tool Use
Abstract
FinToolBench is the first real-world, runnable benchmark for evaluating financial tool-learning agents, pairing 760 executable tools with evaluation criteria that go beyond simple execution success.
The integration of Large Language Models (LLMs) into the financial domain is driving a paradigm shift from passive information retrieval to dynamic, agentic interaction. While general-purpose tool learning has witnessed a surge in benchmarks, the financial sector, characterized by high stakes, strict compliance, and rapid data volatility, remains critically underserved. Existing financial evaluations predominantly focus on static textual analysis or document-based QA, ignoring the complex reality of tool execution. Conversely, general tool benchmarks lack the domain-specific rigor required for finance, often relying on toy environments or a negligible number of financial APIs. To bridge this gap, we introduce FinToolBench, the first real-world, runnable benchmark dedicated to evaluating financial tool learning agents. Unlike prior works limited to a handful of mock tools, FinToolBench establishes a realistic ecosystem coupling 760 executable financial tools with 295 rigorous, tool-required queries. We propose a novel evaluation framework that goes beyond binary execution success, assessing agents on finance-critical dimensions: timeliness, intent type, and regulatory domain alignment. Furthermore, we present FATR, a finance-aware tool retrieval and reasoning baseline that enhances stability and compliance. By providing the first testbed for auditable, agentic financial execution, FinToolBench sets a new standard for trustworthy AI in finance. The tool manifest, execution environment, and evaluation code will be open-sourced to facilitate future research.
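As a rough illustration of what scoring "beyond binary execution success" can mean, the sketch below evaluates a single tool call along the finance-critical dimensions named in the abstract. All field names, value ranges, and weights here are hypothetical assumptions for illustration; the abstract does not specify FinToolBench's actual scoring schema.

```python
from dataclasses import dataclass

@dataclass
class ToolCallResult:
    execution_success: bool  # did the call run without error?
    timeliness: float        # in [0, 1]: freshness of retrieved data vs. the query's needs
    intent_match: float      # in [0, 1]: does the chosen tool fit the query's intent type?
    domain_alignment: float  # in [0, 1]: does the call respect the relevant regulatory domain?

def aggregate_score(r: ToolCallResult,
                    weights: tuple = (0.4, 0.2, 0.2, 0.2)) -> float:
    """Weighted mean over all four dimensions. A failed execution zeroes
    the score, since the other dimensions are moot for a call that never ran."""
    if not r.execution_success:
        return 0.0
    dims = (1.0, r.timeliness, r.intent_match, r.domain_alignment)
    return sum(w * d for w, d in zip(weights, dims)) / sum(weights)
```

For example, `aggregate_score(ToolCallResult(True, 0.9, 1.0, 1.0))` yields 0.98 under the default weights, while any execution failure scores 0 regardless of the other dimensions.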
Community
We introduce FinToolBench, a benchmark for evaluating LLM agents in realistic financial tool-use scenarios. It focuses not only on tool-calling capability, but also on finance-specific requirements such as timeliness, intent alignment, and domain compliance.
We release a runnable benchmark with real-world financial tools, evaluation protocols, and a finance-aware baseline (FATR).
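To make "finance-aware tool retrieval" concrete, here is a minimal sketch in the spirit of FATR: hard-filter candidate tools on the query's timeliness and regulatory-domain constraints, then rank the survivors by embedding similarity. The `Tool` metadata fields, filter rules, and scoring are assumptions for illustration, not FATR's published algorithm.

```python
import math
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    description: str
    domain: str              # e.g. "equities", "fx", "derivatives" (hypothetical tags)
    data_staleness_sec: int  # worst-case age of the data the tool returns

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_emb, tools, embed, required_domain=None, max_staleness_sec=None, k=5):
    """Finance-aware retrieval: hard-filter on compliance/timeliness, then
    rank the admissible tools by semantic similarity to the query embedding."""
    def admissible(t: Tool) -> bool:
        if required_domain is not None and t.domain != required_domain:
            return False  # regulatory-domain mismatch
        if max_staleness_sec is not None and t.data_staleness_sec > max_staleness_sec:
            return False  # data too stale for a time-sensitive query
        return True

    candidates = [t for t in tools if admissible(t)]
    candidates.sort(key=lambda t: cosine(query_emb, embed(t.description)), reverse=True)
    return candidates[:k]
```

Treating domain and staleness as hard filters rather than soft ranking signals is one plausible way to encode the paper's compliance emphasis; a learned re-ranker over the same metadata would be another.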
The following related papers were recommended by the Semantic Scholar API:
- CryptoAnalystBench: Failures in Multi-Tool Long-Form LLM Analysis (2026)
- Learning to Rewrite Tool Descriptions for Reliable LLM-Agent Tool Use (2026)
- One-Eval: An Agentic System for Automated and Traceable LLM Evaluation (2026)
- SciAgentGym: Benchmarking Multi-Step Scientific Tool-use in LLM Agents (2026)
- MCP-Atlas: A Large-Scale Benchmark for Tool-Use Competency with Real MCP Servers (2026)
- ToolMATH: A Math Tool Benchmark for Realistic Long-Horizon Multi-Tool Reasoning (2026)
- ReLE: A Scalable System and Structured Benchmark for Diagnosing Capability Anisotropy in Chinese LLMs (2026)