Keywords: Autoencoders; Collaborative Filtering; Content-Based Filtering; Reinforcement Learning
Collaborative filtering is a cornerstone of modern recommendation systems, leveraging user-item interactions to generate personalized recommendations. This study empirically compares the performance of autoencoders and reinforcement learning in collaborative filtering. We evaluate these models on real-world datasets using TensorFlow/Keras, measuring accuracy with precision, recall, and RMSE. Results indicate that while autoencoders excel at capturing latent user preferences, reinforcement learning dynamically adapts to evolving user behavior. The findings reveal that autoencoders consistently outperform reinforcement learning in static environments with sparse datasets, owing to their robust latent representations. However, reinforcement learning demonstrates superior adaptability in dynamic scenarios where user preferences shift over time. The results also show that autoencoders achieve higher precision and lower RMSE, making them well suited to applications focused on accuracy. In contrast, reinforcement learning provides better long-term engagement by learning from user feedback in an interactive loop. These insights suggest a hybrid approach could leverage the strengths of both methods for enhanced recommendation outcomes.
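To make the autoencoder side of the comparison concrete, the following is a minimal sketch of autoencoder-based collaborative filtering. The paper's experiments use TensorFlow/Keras on real-world datasets; this toy NumPy version only illustrates the core idea: each user's sparse rating row is reconstructed through a low-dimensional latent bottleneck, training only on observed entries, and accuracy is scored with RMSE as in the study. The rating matrix, latent size, and learning rate below are illustrative assumptions, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item rating matrix (0 = unobserved); purely illustrative data.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
    [0, 1, 5, 4],
], dtype=float)
mask = (R > 0).astype(float)  # train only on observed ratings

n_users, n_items = R.shape
k = 2  # latent dimension (bottleneck size)
W1 = rng.normal(0, 0.1, (n_items, k))  # encoder weights
W2 = rng.normal(0, 0.1, (k, n_items))  # decoder weights

def forward(X):
    H = np.tanh(X @ W1)   # latent user representation
    return H, H @ W2      # reconstructed ratings

lr = 0.01
for _ in range(5000):
    H, R_hat = forward(R)
    err = (R_hat - R) * mask           # ignore unobserved entries
    gW2 = H.T @ err                    # gradient of masked MSE w.r.t. decoder
    gH = (err @ W2.T) * (1 - H ** 2)   # backprop through tanh
    gW1 = R.T @ gH                     # gradient w.r.t. encoder
    W2 -= lr * gW2
    W1 -= lr * gW1

_, R_hat = forward(R)
rmse = np.sqrt(((R_hat - R)[mask == 1] ** 2).mean())
print(f"training RMSE on observed ratings: {rmse:.3f}")
```

In a Keras implementation like the one the paper describes, the same structure would be a small `Sequential` model (dense encoder, dense decoder) trained with a masked mean-squared-error loss; the reinforcement learning baseline, by contrast, would update its policy online from user feedback rather than from a fixed reconstruction objective.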