| ESP Journal of Engineering & Technology Advancements |
| © 2026 by ESP JETA |
| Volume 6 Issue 1 |
| Year of Publication : 2026 |
| Authors : Sohith Sri Ammineedu Yalamati |
| DOI : 10.5281/zenodo.18389063 |
Sohith Sri Ammineedu Yalamati, 2026. "LLM-Enhanced Java APIs for Intent-Driven Backend Invocation in Full-Stack Systems", ESP Journal of Engineering & Technology Advancements 6(1): 37-47.
One of the most persistent challenges in modern full-stack development is invoking backend services from high-level user intent, particularly as application complexity grows. Conventional API integration in Java systems forces developers to couple front-end workflows directly to backend endpoints, making those systems more expensive to build and slower to evolve. Recent progress on large language models (LLMs) has shown that they can bridge natural-language understanding and code semantics, creating a new opportunity to express backend functionality in terms of high-level goals. This paper proposes a paradigm in which Java APIs are enhanced with LLMs so that user intentions can be described semantically and dispatched dynamically to existing backend services without hard-coded API routing logic. The proposed solution builds on prior work in LLM-assisted code analysis, semantic trace-link recovery, and intent-driven systems, and comprises an intent parser and a routing engine that adapt dynamically to different frontend contexts. The model is realized as a full-stack hybrid implementation on React.js and Spring Boot and is evaluated against conventional API invocation. The observed benefits include reduced response time, lower integration complexity, and improved maintainability and routing precision. The evaluation follows an open, repeatable methodology based on measures such as invocation latency, backend-selection accuracy, and performance under varying loads. Beyond improving developer productivity, the approach offers a scalable paradigm for LLM orchestration in future large-scale intelligent systems.
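The abstract does not give implementation details; as a rough illustration of the intent-parser-plus-routing-engine idea it describes, the minimal Java sketch below (all class, record, and method names are hypothetical, and the LLM call is stubbed out) maps a free-text user request to a registered backend service descriptor instead of a hard-coded endpoint.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

/** Hypothetical sketch: route a natural-language request to a backend service
 *  via an LLM-backed intent parser instead of hard-coded API routing. */
public class IntentRouterSketch {

    /** Description of one invokable backend operation. */
    record ServiceDescriptor(String intentName,
                             String description,
                             Function<Map<String, String>, Object> handler) {}

    /** Structured result the intent parser is expected to produce. */
    record ParsedIntent(String intentName, Map<String, String> parameters) {}

    /** Placeholder for the LLM call; a real system would prompt a model with the
     *  user text plus the catalog of service descriptions and parse its reply. */
    interface IntentParser {
        ParsedIntent parse(String userText, List<ServiceDescriptor> catalog);
    }

    /** Routing engine: looks up the parsed intent in the service registry. */
    static Object route(String userText,
                        IntentParser parser,
                        List<ServiceDescriptor> registry) {
        ParsedIntent intent = parser.parse(userText, registry);
        Optional<ServiceDescriptor> match = registry.stream()
                .filter(s -> s.intentName().equals(intent.intentName()))
                .findFirst();
        return match
                .map(s -> s.handler().apply(intent.parameters()))
                .orElseThrow(() -> new IllegalArgumentException(
                        "No backend service matches intent: " + intent.intentName()));
    }

    public static void main(String[] args) {
        // Example registry; in the paper's setting these would be Spring Boot services.
        List<ServiceDescriptor> registry = List.of(
                new ServiceDescriptor("order.status",
                        "Look up the status of an order by id",
                        params -> "Order " + params.get("orderId") + " is SHIPPED"),
                new ServiceDescriptor("order.cancel",
                        "Cancel an order by id",
                        params -> "Order " + params.get("orderId") + " cancelled"));

        // Stub standing in for the LLM; a real parser would call a model API.
        IntentParser stubParser = (text, catalog) ->
                new ParsedIntent("order.status", Map.of("orderId", "42"));

        System.out.println(route("Where is my order 42?", stubParser, registry));
    }
}
```

Under these assumptions, the point of the design is that adding a backend capability only requires registering a new descriptor; the frontend never hard-codes which endpoint serves a given intent.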
Intent Recognition; Java APIs; Backend Invocation; Large Language Models (LLMs); Full-Stack Systems; Semantic Routing.