Agents & Automation 3.0 · Worth Reading 2026-03-23 · Paper

LLM Agent

The paper systematically surveys intelligent agent systems built on large language models (LLMs), constructing a unified taxonomy along three dimensions, methodology, applications, and challenges, and revealing the fundamental connections between agent design principles and the emergent behaviors that arise in complex environments.

Highlights:

- Methodology-centered taxonomy: proposes the three-dimensional Build-Collaborate-Evolve framework, systematically deconstructing how agents are constructed, how they collaborate, and how they evolve.
- Unified architectural perspective: links the four core components of role definition, memory mechanisms, planning capability, and action execution, revealing the connection between design principles and emergent behavior.
- Frontier applications and real-world challenges: covers practical issues such as security, privacy, and ethics...
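The four core components named above (role definition, memory, planning, action execution) can be sketched as a minimal agent loop. This is a hypothetical illustration, not code from the survey; the `Agent` class and its method names are assumptions made for clarity, and a real system would back `plan` and `act` with LLM calls and tool use.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Role definition: the persona/goal that conditions the agent's behavior.
    role: str
    # Memory mechanism: here a simple append-only trajectory log.
    memory: list = field(default_factory=list)

    def plan(self, task: str) -> list[str]:
        # Planning capability: decompose the task into steps.
        # (Trivial placeholder; a real agent would query an LLM here.)
        return [f"step: {task}"]

    def act(self, step: str) -> str:
        # Action execution: carry out one step.
        # (Placeholder; a real agent would invoke tools or APIs.)
        return f"{self.role} executed {step}"

    def run(self, task: str) -> list[str]:
        results = [self.act(s) for s in self.plan(task)]
        self.memory.append((task, results))  # persist the trajectory
        return results

agent = Agent(role="researcher")
print(agent.run("summarize paper"))
# → ['researcher executed step: summarize paper']
```

The point of the sketch is only the separation of concerns: swapping the placeholder `plan`/`act` bodies for model calls is what distinguishes concrete agent designs in the survey's taxonomy.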


来源: https://arxiv.org/abs/2503.21460

[2503.21460] Large Language Model Agent: A Survey on Methodology, Applications and Challenges


Computer Science > Computation and Language

arXiv:2503.21460 (cs) [Submitted on 27 Mar 2025]

Title: Large Language Model Agent: A Survey on Methodology, Applications and Challenges

Authors: Junyu Luo, Weizhi Zhang, Ye Yuan, Yusheng Zhao, Junwei Yang, Yiyang Gu, Bohan Wu, Binqi Chen, Ziyue Qiao, Qingqing Long, Rongcheng Tu, Xiao Luo, Wei Ju, Zhiping Xiao, Yifan Wang, Meng Xiao, Chenwu Liu, Jingyang Yuan, Shichang Zhang, Yiqiao Jin, Fan Zhang, Xian Wu, Hanqing Zhao, Dacheng Tao, Philip S. Yu, Ming Zhang

Abstract: The era of intelligent agents is upon us, driven by revolutionary advancements in large language models. Large Language Model (LLM) agents, with goal-driven behaviors and dynamic adaptation capabilities, potentially represent a critical pathway toward artificial general intelligence. This survey systematically deconstructs LLM agent systems through a methodology-centered taxonomy, linking architectural foundations, collaboration mechanisms, and evolutionary pathways. We unify fragmented research threads by revealing fundamental connections between agent design principles and their emergent behaviors in complex environments. Our work provides a unified architectural perspective, examining how agents are constructed, how they collaborate, and how they evolve over time, while also addressing evaluation methodologies, tool applications, practical challenges, and diverse application domains. By surveying the latest developments in this rapidly evolving field, we offer researchers a structured taxonomy for understanding LLM agents and identify promising directions for future research. The collection is available at this https URL.

Comments: 329 papers surveyed, resources are at this https URL

Subjects: Computation and Language (cs.CL)

Cite as: arXiv:2503.21460 [cs.CL] (or arXiv:2503.21460v1 [cs.CL] for this version)

https://doi.org/10.48550/arXiv.2503.21460


Submission history: From: Junyu Luo. [v1] Thu, 27 Mar 2025 12:50:17 UTC (814 KB)
