IJCOPE Journal


International Journal of Creative and Open Research in Engineering and Management

A Peer-Reviewed, Open-Access International Journal Supporting Multidisciplinary Research, Digital Publishing Standards, DOI Registration, and Academic Indexing.
Journal Information
ISSN: 3108-1754 (Online)
Crossref DOI: Available
ISO Certification: 9001:2015
Publication Fee: 599/- INR
Compliance: UGC Journal Norms
License: CC BY 4.0
Peer Review: Double Blind
Volume 02, Issue 04

Published on: April 2026

GENERATING REALISTIC 3D VIEWS FROM AI-POWERED TEXT

Yeludanda Hruthika, M Nandini, G Sri Sai, G GopalaKrishna

M. Hari Krishna

Department of CSE (Data Science), ACE Engineering College, Hyderabad, Telangana, India

Article Status

Plagiarism Check: Passed | Peer Reviewed | Open Access


Abstract

Text-to-3D is an emerging deep learning-based approach that converts natural language descriptions into realistic 3D models. The system uses Stable Diffusion to generate a semantically aligned 2D image from the user's text prompt. The image is passed to One-2-3-45, which synthesizes multiple viewpoints to capture spatial and geometric information. These multi-view outputs are processed by TripoSR, which reconstructs a complete 3D polygonal mesh by estimating depth, surface normals, and geometric features. The final model is exported in .obj format for interactive visualization. This pipeline reduces manual modeling effort and offers accessible 3D content generation for gaming, virtual reality, and animation applications.
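The pipeline's last stage, exporting the reconstructed mesh as a .obj file, can be sketched with plain Python. This is an illustrative fragment, not code from the authors' system: the `export_obj` helper and the placeholder quad mesh are hypothetical stand-ins for the mesh that TripoSR would produce.

```python
def export_obj(vertices, faces):
    """Serialize a triangle mesh to Wavefront .obj text.

    vertices: list of (x, y, z) floats.
    faces: list of (i, j, k) 0-based vertex indices; the .obj format
    uses 1-based indices, so each index is offset by one on output.
    """
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    lines += [f"f {a + 1} {b + 1} {c + 1}" for a, b, c in faces]
    return "\n".join(lines) + "\n"

# Hypothetical placeholder mesh: a unit quad split into two triangles.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2), (0, 2, 3)]
obj_text = export_obj(verts, tris)
```

Any .obj-aware viewer (e.g. Blender or MeshLab) can load the resulting text once written to a file, which is what makes the format convenient for the interactive visualization step the abstract describes.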

How to Cite this Paper

Hruthika, Y., Nandini, M., Sai, G. S., & GopalaKrishna, G. (2026). Generating Realistic 3D views From AI-powered Text. International Journal of Creative and Open Research in Engineering and Management, 02(04). https://doi.org/10.55041/ijcope.v2i4.095

Hruthika, Yeludanda, et al. "Generating Realistic 3D views From AI-powered Text." International Journal of Creative and Open Research in Engineering and Management, vol. 02, no. 04, 2026. https://doi.org/10.55041/ijcope.v2i4.095.

Hruthika, Yeludanda, M Nandini, G Sai, and G GopalaKrishna. "Generating Realistic 3D views From AI-powered Text." International Journal of Creative and Open Research in Engineering and Management 02, no. 04 (2026). https://doi.org/10.55041/ijcope.v2i4.095.


References


  1. Kim and S. Kim, “Multi-View Fusion and Attention-Guided Optimization for View-Consistent 3D Scene Editing with 3D Gaussian Splatting,” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. DOI: 10.1109/CVPR52734.2025.01040

  2. B. K. and U. D. R., “Advances in Text Detection and Recognition in Multi View Image Scenes,” IEEE Conference Proceedings, 2025. DOI: 10.1109/IACIS65746

  3. A. Ramesh et al., “Zero-Shot Text-to-Image Generation,” Proceedings of the International Conference on Machine Learning (ICML), 2021. DOI: 10.48550/arXiv.2102.12092

  4. B. Poole, A. Jain, J. T. Barron, and B. Mildenhall, “DreamFusion: Text-to-3D using 2D Diffusion,” arXiv preprint, 2022. DOI: 10.48550/arXiv.2209.14988

  5. J. Ho, A. Jain, and P. Abbeel, “Denoising Diffusion Probabilistic Models,” Advances in Neural Information Processing Systems (NeurIPS), 2020. DOI: 10.48550/arXiv.2006.11239

  6. B. Mildenhall et al., “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis,” European Conference on Computer Vision (ECCV), 2020. DOI: 10.48550/arXiv.2003.08934

  7. A. Vaswani et al., “Attention Is All You Need,” Advances in Neural Information Processing Systems (NeurIPS), 2017. DOI: 10.48550/arXiv.1706.03762

  8. S. Reed et al., “Generative Adversarial Text to Image Synthesis,” Proceedings of the International Conference on Machine Learning (ICML), 2016. DOI: 10.48550/arXiv.1605.05396

  9. I. Goodfellow et al., “Generative Adversarial Nets,” Advances in Neural Information Processing Systems (NeurIPS), 2014. DOI: 10.48550/arXiv.1406.2661

Ethical Compliance & Review Process

  • All submissions are screened with plagiarism detection software.
  • Review follows the journal's editorial policy.
  • Authors retain copyright.
  • Peer Review Type: Double-Blind Peer Review
  • Published on: Apr 06 2026

This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. You are free to share and adapt this work for non-commercial purposes with proper attribution.
