Effective Multi-Class Sentiment Analysis Using Fine-Tuned Large Language Model with KNIME Analytics Platform
Jin-Ching Shen, Nai-Jing Su, Yi-Bing Lin

The rapid advancement of large language models (LLMs) has revolutionized natural language processing (NLP), yet fine-tuning these models for domain-specific applications remains a resource-intensive challenge. Odds ratio preference optimization (ORPO) is a novel fine-tuning methodology that unifies supervised fine-tuning (SFT) and alignment into a single optimization objective. By circumventing the traditional multi-stage pipeline of base model → SFT → reinforcement learning from human feedback (RLHF), ORPO achieves significant reductions in computational complexity while enhancing performance. We demonstrate the efficacy of ORPO through its application to multi-class sentiment analysis, a critical task in sentiment modeling with diverse and nuanced label sets. Using the KNIME analytics platform as an accessible, no-code interface, our approach streamlines model development and deployment, making advanced sentiment analysis more usable and cost-effective for enterprises. Experimental results show that the ORPO-tuned LLM achieves high accuracy on a classic, publicly available airline dataset, outperforming traditional fine-tuning and NLP methods in both accuracy and efficiency. This work highlights the transformative potential of ORPO in simplifying fine-tuning and enabling scalable solutions for sentiment analysis and beyond. By integrating ORPO with KNIME, it showcases the synergy between innovative methodologies and user-friendly platforms, advancing AI accessibility. The contributions focus on enhancing neutral sentiment analysis, developing an accessible KLSAS system, and providing key resources for easy implementation, all of which promote the practical use and wider adoption of AI in both research and industry.
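For background, the single objective that ORPO optimizes can be sketched as follows. This is the general form from the original ORPO proposal (Hong et al.), not a formula specific to this paper: $\lambda$ is a weighting hyperparameter, and $y_w$, $y_l$ denote the favored and disfavored responses for a prompt $x$.

```latex
% ORPO combines the standard SFT loss with an odds-ratio
% preference term in one objective (sketch, per Hong et al.):
\mathcal{L}_{\mathrm{ORPO}}
  = \mathbb{E}_{(x,\,y_w,\,y_l)}
    \left[ \mathcal{L}_{\mathrm{SFT}} + \lambda \, \mathcal{L}_{\mathrm{OR}} \right],
\qquad
\mathcal{L}_{\mathrm{OR}}
  = -\log \sigma\!\left(
      \log \frac{\mathrm{odds}_\theta(y_w \mid x)}
                {\mathrm{odds}_\theta(y_l \mid x)}
    \right),
\qquad
\mathrm{odds}_\theta(y \mid x)
  = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}.
```

Because the preference term is folded directly into the SFT loss, no separate reward model or RLHF stage is needed, which is the source of the computational savings the abstract describes.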