Clarification on Large Language Model (LLM) Policy

We (Program Chairs) have included the following statement in the Call for Papers for ICML 2023: "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper's experimental analysis." This statement has raised a number of questions from potential authors and led some to proactively reach out to us. We appreciate your feedback and comments and would like to clarify further the intention behind this statement and how we plan to implement this policy for ICML 2023.

The Large Language Model (LLM) policy for ICML 2023 prohibits text produced entirely by LLMs (i.e., "generated"). This does not prohibit authors from using LLMs for editing or polishing author-written text. The LLM policy is largely predicated on the principle of being conservative with respect to guarding against potential issues of using LLMs, including plagiarism. We expect this policy may evolve in future conferences as we understand LLMs and their impacts on scientific publishing better.

During the past few years, we have observed and been part of rapid progress in large-scale language models (LLMs), both in research and deployment. This progress has not slowed down but only sped up during the past few months. As many, including ourselves, have noticed, LLMs released in the past few months, such as OpenAI's ChatGPT, are now able to produce text snippets that are often difficult to distinguish from human-written text. Undoubtedly this is exciting progress in natural language processing and generation. Such rapid progress, however, often comes with unanticipated consequences as well as unanswered questions. As we have already seen during the past few weeks alone, there is, for instance, a question of whether text as well as images generated by large-scale generative models are considered novel or mere derivatives of existing work. There is also a question about the ownership of text snippets, images, or any other media sampled from these generative models: who owns it, the user of the generative model, the developer who trained the model, or the content creators who produced the training examples? It is certain that these questions, and many more, will be answered over time, as these large-scale generative models are more widely adopted. However, we do not yet have any clear answers to any of them.

OpenAI released the beta version of ChatGPT at the end of November 2022, less than two months ago. Since how we answer these questions directly affects our reviewing process, which in turn affects members of our research community and their careers, we must be careful and somewhat conservative in considering this new technology.