The boom of generative models has paved the way for significant advances in recommender systems. For instance, pre-trained generative models offer unprecedented opportunities to improve recommender algorithms for user modeling. This workshop aims to provide a platform for researchers to actively explore and share innovative ideas on integrating generative models into recommender systems, focusing on five key aspects: i) enhancing recommender algorithms, ii) generating personalized content in scenarios such as micro-videos, iii) changing the user-system interaction paradigm, iv) boosting trustworthiness checks, and v) evaluation methodologies for generative recommendation. With the rapid development of generative models, a growing number of studies along these directions are emerging, underscoring the timeliness and necessity of this workshop. The related research will bring novel features to recommender systems and contribute new tasks and technologies to both academia and industry. In the long run, this research direction may revolutionize the traditional recommender paradigm and lead to the maturation of next-generation recommender systems.
Donald Biggar Willett Professor in Engineering
University of Illinois at Urbana-Champaign (UIUC)
Recent years have seen great success of large language models (LLMs) in performing many natural language processing tasks, especially tasks that directly serve users such as question answering, text summarization, and text generation in general. At the same time, concerns such as trustworthiness and hallucination have raised questions about the actual utility of such models when deployed in applications that serve many users. In this talk, I will systematically examine the opportunities and challenges that LLMs have created for recommender systems. Specifically, I will address the following questions: 1) How can LLMs be leveraged to improve current recommender systems? 2) What is the potential for LLMs to transform future recommender system applications? 3) What are the major challenges in applying LLMs to recommender systems? 4) Given the anticipated growth of LLMs, what will recommender systems look like in the future?
Dr. ChengXiang Zhai (http://czhai.cs.illinois.edu/) is a Donald Biggar Willett Professor in Engineering in the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC), where he is also affiliated with the School of Information Sciences, the Department of Statistics, and the Carl R. Woese Institute for Genomic Biology. He received a Ph.D. in Computer Science from Nanjing University in 1990 and a Ph.D. in Language and Information Technologies from Carnegie Mellon University in 2002. He worked at Clairvoyance Corp. as a Research Scientist and then a Senior Research Scientist from 1997 to 2000. His research interests are in the general area of intelligent information systems, including information retrieval, data mining, natural language processing, machine learning, and their applications in domains such as biomedical informatics and intelligent education systems. He has over 400 publications in these areas, with over 40,000 citations and an h-index of 92 on Google Scholar, and holds 5 US patents. He offers two Massive Open Online Courses (MOOCs) on Coursera, covering Text Retrieval and Search Engines and Text Mining and Analytics, respectively, and is a key contributor to the Lemur text retrieval and mining toolkit. He is Americas Editor of the Springer Information Retrieval Book Series and a Senior Associate Editor of ACM Transactions on Intelligent Systems and Technology. Previously, he served as an Associate Editor of journals in multiple areas, including ACM Transactions on Information Systems, Information Processing and Management, BMC Medical Informatics and Decision Making, and ACM Transactions on Knowledge Discovery from Data, and on the editorial board of the Information Retrieval Journal. He served as a program co-chair of ACM CIKM 2004, NAACL HLT 2007, ACM SIGIR 2009, ECIR 2014, WWW 2015, and ICTIR 2015, and as a general conference co-chair of CIKM 2016, WSDM 2018, and IEEE BigData 2020.
He is an ACM Fellow and a member of the ACM SIGIR Academy, and has received a number of awards, including the ACM SIGIR Gerard Salton Award, multiple best paper awards such as the ACM SIGIR 2004 Best Paper Award and the ACM SIGIR Test of Time Award (three times), the 2004 Presidential Early Career Award for Scientists and Engineers (PECASE), an Alfred P. Sloan Research Fellowship, multiple research awards from industry (IBM Faculty Award, HP Innovation Research Award, Microsoft Beyond Search Research Award, and Yahoo Faculty Research Engagement Program Award), the UIUC Rose Award for Teaching Excellence, and the UIUC Campus Award for Excellence in Graduate Student Mentoring. He has graduated 41 PhD students and over 50 MS students.
Huawei Noah’s Ark Lab
Huawei’s vision and mission is to build a fully connected intelligent world. Since 2013, Huawei Noah’s Ark Lab has helped many products build recommender systems and search engines that deliver the right information to the right users. Every day, our recommender systems serve hundreds of millions of mobile phone users and recommend many kinds of content and services, such as apps, news feeds, songs, videos, books, themes, and instant services. The big data and varied scenarios give us great opportunities to develop advanced recommendation technologies. Over the past ten years, we have also witnessed the technical evolution of recommendation models, from shallow, simple models such as collaborative filtering, linear models, and low-rank models to deep, complex models such as neural networks and pre-trained language models. Given this mission, these opportunities, and these technological trends, we have encountered several hard problems in our recommender systems. In this talk, we will share ten important and interesting challenges, and we hope the RecSys community will be inspired to create better recommender systems.
Dr. Zhenhua Dong is a technology expert at Huawei Noah’s Ark Lab, where he leads a research team focused on recommender systems and causal inference. His team has launched significant improvements to recommender systems for several applications, such as news feeds, app stores, instant services, and advertising. With more than 40 patents and 50 research articles in venues such as TKDE, SIGIR, RecSys, WWW, AAAI, and CIKM, he is known for research on recommender systems, causal inference, and counterfactual learning. He also serves as a PC or SPC member for SIGKDD, SIGIR, RecSys, WSDM, and CIKM. He received his BEng degree from Tianjin University in 2006 and his PhD degree from Nankai University in 2012. He was a visiting scholar at the GroupLens lab at the University of Minnesota during 2010-2011.
University of Science and Technology of China
The advancement of Large Language Models (LLMs) has given rise to a new paradigm in recommendation, known as LLM4Rec. However, due to limitations such as the lack of recommendation-related data in their pre-training corpora, LLMs are not naturally suited to recommendation. The key to resolving this issue lies in aligning LLMs with recommendation through instruction tuning. Specifically, we will explore how to unleash the potential of LLMs in the field of recommendation by aligning them with both the recommendation task and the recommendation modality. Finally, we shall contemplate the potential concerns posed by LLM4Rec, as well as envision the prospects that LLM4Rec holds.
Dr. Fuli Feng is a professor at the University of Science and Technology of China. He received his Ph.D. in Computer Science from the National University of Singapore in 2019. His research interests include information retrieval, data mining, causal inference, and multimedia processing. He has over 60 publications in top conferences such as SIGIR, WWW, and SIGKDD, and in journals including TKDE and TOIS. He received the Best Paper Honourable Mention at SIGIR 2021 and the Best Poster Award at WWW 2018. Moreover, he has served as a PC member for several top conferences, including SIGIR, WWW, SIGKDD, NeurIPS, ICLR, and ACL, and as an invited reviewer for prestigious journals such as TOIS, TKDE, and TPAMI. He organized the 1st workshop on Information Retrieval in Finance at SIGIR'20.
The primary aim of this workshop is to foster innovative research centered around the integration of generative models with recommender systems, focusing on five key aspects. First, the workshop will encourage researchers to leverage generative models to improve recommender algorithms for better user modeling. Second, it encourages exploring the use of generative models to produce more diverse content in certain scenarios, supplementing human-generated content to meet users' wide-ranging preferences and information needs. Third, it welcomes major innovations in the way users interact with recommender systems, enabled by the advanced language understanding and generation abilities of generative models in general and of large language models in particular. Fourth, the workshop will emphasize the importance of trustworthiness when using generative models for recommendation, including but not limited to examining the trustworthiness of generated content, addressing biases in recommender algorithms, and ensuring compliance with emerging ethical and legal standards. Last but not least, the workshop will encourage researchers to design evaluation methodologies for examining the use of generative models in recommender systems. This involves the development of new evaluation metrics and standards, as well as the establishment of human evaluation paradigms and interfaces.
The workshop will serve as an invaluable platform for researchers to contribute the latest ideas, advances, and breakthroughs in this rapidly evolving field. We invite original submissions on recommender systems with generative models, including but not limited to the following topics:
Submitted papers (.pdf format) must use the ACM CIKM 2023 template. Please remember to include CCS Concepts and Keywords. Submissions may vary in length from 4 to 8 pages, plus additional pages for references, i.e., the reference page(s) do not count toward the 4-8 page limit. There is no distinction between long and short papers; authors may decide on the appropriate length of their paper. All papers will undergo the same review process and review period. Submissions must conform to the "double-blind" review policy. All papers will be peer-reviewed by experts in the field. Acceptance will be based on relevance to the workshop, scientific novelty, and technical quality.
Submission site: https://easychair.org/conferences/?conf=genrec23. We are also preparing an ACM TOIS special issue on using generative models for recommendation. High-quality submissions will be recommended for this special issue.
|Opening & Introduction
|Keynote 1: Large Language Models and Recommender Systems: Opportunities and Challenges by Dr. ChengXiang Zhai, slide
|Keynote 2: 10 Challenges in Industrial Recommender Systems by Dr. Zhenhua Dong, pdf
|Is ChatGPT a Good Recommender? A Preliminary Study
|Multiple Key-value Strategy in Recommendation Systems Incorporating Large Language Model
|Trustworthy Recommendations through Generative AI: Addressing Bias, Fairness, Privacy, Safety, Authenticity, Legal Compliance, and Identifiability
|RecFusion: A Binomial Diffusion Process for 1D Data for Recommendation
|ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation
|Generative Retrieval as A New Paradigm: A Survey
|Instruction Distillation Makes Large Language Models Efficient Pointwise Rankers
|Contrastive Quantization based Semantic Code for Generative Recommendation
|Smart Query Reformulation for Low Inventory Items in E-Commerce, pdf
|Keynote 3: Aligning Large Language Models to Recommendation: Progress and Future Prospects by Dr. Fuli Feng, slide
Postdoctoral Research Fellow
National University of Singapore
Senior Principal Researcher
Huawei Noah’s Ark Lab, Singapore
University of Science and Technology of China
Huawei Noah's Ark Lab, China