ESP Journal of Engineering & Technology Advancements
© 2025 by ESP JETA | Volume 5, Issue 2 | Year of Publication: 2025
Author: Vishnu Lakkamraju
DOI: 10.56472/25832646/JETA-V5I2P115

Citation: Vishnu Lakkamraju, 2025. "Goal Decomposition & Self-Planning in Agentic AI with LLM Backends", ESP Journal of Engineering & Technology Advancements 5(2): 141-154.
Large Language Models (LLMs) such as GPT-4 and its successors have driven the rapid development of Artificial Intelligence (AI), enabling machines to understand, generate, and respond to natural language with unprecedented fluency and contextual awareness. These models are now being embedded in agentic AI systems: autonomous agents designed to operate with minimal human intervention, changing how machines interpret and act on complex instructions. Goal decomposition and self-planning capabilities are a major development in this area, since they enable agentic AI to break abstract goals into actionable tasks and carry out plans with adaptability and flexibility in dynamic environments.

This work thoroughly explores the principles underlying goal decomposition and self-planning in LLM-powered agentic AI. We discuss several strategic techniques, including recursive task modelling, interleaved planning-execution cycles, and decomposition-first approaches. We also examine recent frameworks such as ADaPT (As-needed Decomposition and Planning), AdaPlanner, and GoalAct, which let agents decide on their own when and how to decompose tasks and adjust their plans in response to environmental changes or real-time data. These techniques not only improve operational effectiveness but also allow agents to exhibit cognitive abilities such as introspection, reasoning, and learning from past mistakes.

The study also examines architectural patterns such as Chain of Thought (CoT) and Tree of Thought (ToT) prompting, as well as self-reflective models like ReAct and Reflexion, which together form the backbone of contemporary self-planning agents. These patterns give agents structured reasoning processes with which to evaluate several approaches to a task and select the most practical one.
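The "as-needed" decomposition strategy discussed above can be made concrete with a minimal sketch: attempt to execute a task directly, and only when execution fails ask the LLM to split it into subtasks and recurse. The `stub_llm_decompose` and `stub_executor` functions below are hypothetical stand-ins for an LLM call and a low-level action executor; this is an illustrative sketch of the control flow, not the ADaPT authors' implementation.

```python
def stub_llm_decompose(task):
    # Hypothetical stand-in for an LLM call that splits a task into subtasks.
    plans = {
        "make tea": ["boil water", "steep tea bag"],
        "boil water": ["fill kettle", "heat kettle"],
    }
    return plans.get(task, [])

def stub_executor(task):
    # Hypothetical stand-in for a low-level executor; only primitive actions succeed.
    primitives = {"fill kettle", "heat kettle", "steep tea bag"}
    return task in primitives

def adapt(task, depth=0, max_depth=3):
    """Try to execute `task` directly; on failure, decompose and recurse."""
    if stub_executor(task):
        return [task]                      # executed as a primitive action
    if depth >= max_depth:
        raise RuntimeError(f"cannot solve: {task}")
    trace = []
    for sub in stub_llm_decompose(task):   # decompose only when execution fails
        trace += adapt(sub, depth + 1, max_depth)
    return trace

print(adapt("make tea"))
# prints ['fill kettle', 'heat kettle', 'steep tea bag']
```

The key design point, shared by the frameworks surveyed here, is that decomposition is lazy: a composite goal is expanded only at the moment direct execution proves infeasible, keeping plans shallow when the environment cooperates.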
We further investigate the translation of goals into machine-readable plans and the integration of LLMs with external symbolic planners, which extends the value of agentic AI systems in challenging fields such as legal reasoning, education, and virtual environments. Through a synthesis of current research and empirical performance benchmarks, this work shows how goal decomposition and self-planning yield scalable, interpretable, and robust AI agent architectures. In doing so, it prepares the way for future developments that push the limits of artificial agents' autonomy and intelligence.
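Integration with external symbolic planners typically means rendering a goal into a planner-readable format such as PDDL. The sketch below shows one plausible shape for that translation step: a structured goal (objects, initial facts, goal facts) is serialized into a PDDL problem string that an off-the-shelf planner could consume. The `blocks` domain name and the fact schema are illustrative assumptions, not taken from any specific system in this paper.

```python
def to_pddl_problem(name, objects, init, goal):
    """Serialize a structured goal into a PDDL problem definition string."""
    objs = " ".join(objects)
    init_facts = " ".join(f"({f})" for f in init)
    goal_facts = " ".join(f"({f})" for f in goal)
    return (
        f"(define (problem {name})\n"
        f"  (:domain blocks)\n"
        f"  (:objects {objs})\n"
        f"  (:init {init_facts})\n"
        f"  (:goal (and {goal_facts})))"
    )

problem = to_pddl_problem(
    "stack-ab",
    objects=["a", "b"],
    init=["ontable a", "ontable b", "clear a", "clear b", "handempty"],
    goal=["on a b"],
)
print(problem)
```

In a hybrid neuro-symbolic pipeline, the LLM would produce the structured goal (or the PDDL text directly), and a classical planner would return a verified action sequence, combining the LLM's flexibility with the planner's soundness guarantees.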
[1] Ahn, Y., Choi, E., & Singh, S. (2023). Language models as zero-shot planners: Extracting actionable plans from LMs (arXiv:2302.05022). arXiv. https://arxiv.org/abs/2302.05022
[2] Bai, Y., Kadavath, S., & Kundu, S. (2023). Constitutional AI: Harmlessness from AI feedback (arXiv:2306.04762). arXiv. https://arxiv.org/abs/2306.04762
[3] Chen, J., Li, H., Yang, J., et al. (2025). Enhancing LLM-based agents via global planning and hierarchical execution (arXiv:2504.16563). arXiv. https://arxiv.org/abs/2504.16563
[4] Chen, Y., Li, Z., Zhang, H., & Xu, D. (2022). Planning with large language models for open-ended tasks (arXiv:2212.09611). arXiv. https://arxiv.org/abs/2212.09611
[5] Dohan, D., Gehrmann, S., & Gilmer, J. (2022). Language model cascades (arXiv:2207.10342). arXiv. https://arxiv.org/abs/2207.10342
[6] Dziri, N., Alabbas, M., & Chang, K. (2023). Faithfulness in chain-of-thought reasoning (arXiv:2302.03494). arXiv. https://arxiv.org/abs/2302.03494
[7] Gao, X., Liu, J., & Liu, B. (2023). PAL: Program-aided language models (arXiv:2211.10435). arXiv. https://arxiv.org/abs/2211.10435
[8] Schick, T., Dwivedi-Yu, J., Dessì, R., et al. (2023). Toolformer: Language models can teach themselves to use tools (arXiv:2302.04761). arXiv. https://arxiv.org/abs/2302.04761
[9] Huang, B., & Wang, J. (2023). Prompting large language models for planning and acting in open worlds (arXiv:2301.04180). arXiv. https://arxiv.org/abs/2301.04180
[10] Hu, M., Zhao, P., Xu, C., et al. (2024). AgentGen: Enhancing planning abilities for large language model-based agents via environment and task generation (arXiv:2408.00764). arXiv. https://arxiv.org/abs/2408.00764
[11] Jiang, Y., Zhang, Q., & Wang, M. (2023). Decision transformer: Reinforcement learning via sequence modeling (arXiv:2106.01345). arXiv. https://arxiv.org/abs/2106.01345
[12] Khot, T., Sabharwal, A., & Clark, P. (2022). Explaining via chains of reasoning. Transactions of the Association for Computational Linguistics, 10, 1340–1356. https://doi.org/10.1162/tacl_a_00508
[13] Kim, B., & Rudin, C. (2021). Interpretable machine learning: A guide for making black box models explainable. Springer.
[14] Kojima, T., Gu, S. S., Reid, M., et al. (2022). Large language models are zero-shot reasoners (arXiv:2205.11916). arXiv. https://arxiv.org/abs/2205.11916
[15] Li, L., & Liang, P. (2023). Planning with multistep reasoning in language models (arXiv:2303.04761). arXiv. https://arxiv.org/abs/2303.04761
[16] Liu, Y., Li, Q., & Lin, Z. (2023). APE: Ask, plan, execute (arXiv:2305.05087). arXiv. https://arxiv.org/abs/2305.05087
[17] Long, D., & Fox, M. (2003). The language of the PDDL2.1 planning competition. Journal of Artificial Intelligence Research, 20, 279–292. https://doi.org/10.1613/jair.1176
[18] Miller, T., & Binns, R. (2022). Explanation in artificial intelligence: Insights from the social sciences. AI Magazine, 43(1), 7–18. https://doi.org/10.1002/aaai.12041
[19] Nye, M., Hewitt, C., & Lake, B. (2021). Skill acquisition through language and planning. In Proceedings of the Annual Meeting of the Cognitive Science Society.
[20] OpenAI. (2023). GPT-4 Technical Report. https://openai.com/research/gpt-4
[21] Prasad, A., Koller, A., Hartmann, M., et al. (2023). ADaPT: As-needed decomposition and planning with language models (arXiv:2311.05772). arXiv. https://arxiv.org/abs/2311.05772
[22] Reimers, N., & Gurevych, I. (2020). Making monolingual sentence embeddings multilingual using knowledge distillation (arXiv:2004.09813). arXiv. https://arxiv.org/abs/2004.09813
[23] Schaeffer, R., Mirhoseini, A., & Bengio, Y. (2023). Visual reasoning with chain-of-thought prompting (arXiv:2304.12789). arXiv. https://arxiv.org/abs/2304.12789
[24] Shinn, N., & Labash, A. (2023). Reflexion: Language agents with verbal reinforcement learning (arXiv:2303.11366). arXiv. https://arxiv.org/abs/2303.11366
[25] Sun, H., Zhuang, Y., Kong, L., et al. (2023). AdaPlanner: Adaptive planning from feedback with language models (arXiv:2305.16653). arXiv. https://arxiv.org/abs/2305.16653
[26] Tamkin, A., Brundage, M., Clark, J., & Ganguli, D. (2021). Understanding the capabilities, limitations, and societal impact of large language models (arXiv:2102.02503). arXiv. https://arxiv.org/abs/2102.02503
[27] Touvron, H., Martin, L., Stone, K., et al. (2023). Llama 2: Open foundation and fine-tuned chat models (arXiv:2307.09288). arXiv. https://arxiv.org/abs/2307.09288
[28] Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems (NeurIPS), 30. https://papers.nips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
[29] Wang, B., Lin, Z., & Liu, P. (2023). ToolLLM: Facilitating tool-use for language models via prompt engineering (arXiv:2305.12047). arXiv. https://arxiv.org/abs/2305.12047
[30] Wang, J., & Li, D. (2023). Code-as-policies: Language model programs for embodied control. In International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=BsQ6w2h-4f
[31] Wei, J., Wang, X., Schuurmans, D., et al. (2022). Chain of thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems (NeurIPS), 35, 24824–24837. https://papers.nips.cc/paper_files/paper/2022/file/9d5609613524fd94be3c2142c4cf4020-Paper-Conference.pdf
[32] Wu, S., Agarwal, M., & Xu, Q. (2023). Zero-shot tool selection with large language models (arXiv:2303.08128). arXiv. https://arxiv.org/abs/2303.08128
[33] Xie, X., & Pu, Y. (2023). Multi-agent planning with LLMs (arXiv:2307.09299). arXiv. https://arxiv.org/abs/2307.09299
[34] Yao, S., Zhao, J., Yu, D., et al. (2023). Tree of Thoughts: Deliberate problem solving with large language models (arXiv:2305.10601). arXiv. https://arxiv.org/abs/2305.10601
[35] Ye, Y., Li, X., & Lin, J. (2023). ToolBench: Evaluating tool use of large language models (arXiv:2305.17126). arXiv. https://arxiv.org/abs/2305.17126
[36] Yu, W., & Wang, Z. (2023). Knowledge-grounded multi-step reasoning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://aclanthology.org/2023.emnlp-main.101
[37] Zhang, J., & Liu, X. (2023). Prompt-based planning for generalist agents (arXiv:2302.08295). arXiv. https://arxiv.org/abs/2302.08295
[38] Zhao, P., Yu, H., & Xu, C. (2023). Integrating PDDL planners into LLMs: A hybrid symbolic-neural framework (arXiv:2307.01468). arXiv. https://arxiv.org/abs/2307.01468
[39] Zheng, Y., & Chen, B. (2023). Multi-hop symbolic reasoning with LLMs (arXiv:2304.08510). arXiv. https://arxiv.org/abs/2304.08510
[40] Zhou, W., Zhang, R., & Wang, J. (2023). Can large language models solve planning problems? (arXiv:2303.12705). arXiv. https://arxiv.org/abs/2303.12705
Keywords: Agentic Artificial Intelligence, Large Language Models (LLMs), Goal Decomposition, Self-Planning, Autonomous Agents, Adaptive Planning, Task Decomposition, Artificial Reasoning, Chain of Thought (CoT), Tree of Thought (ToT), Reflexive AI, ReAct, Reflexion, Symbolic Planning, Cognitive AI.