ACL 2023 Conference

Hello everyone! My name is Justin Xie and I am a co-founder of the Youth for Empathetic AI organization. Recently, I attended the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023) in Toronto, Canada, from the 9th to the 14th of July. ACL is the premier computational linguistics conference in the world, featuring publications from a variety of fields including (but not limited to) natural language processing, computational linguistics, and computational social science.

I attended ACL because I had a paper published in the 13th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA), held jointly with the conference. Written under the mentorship of Professor Ameeta Agrawal from Portland State University, my paper “Emotion and Sentiment Guided Paraphrasing” has the overarching goal of moderating emotions within online conversations, especially those that use text as the primary method of communication. To give a short summary:

With the rise of online forms of communication such as Discord, Reddit, and Twitter, it has become harder for people to express their own emotions and to interpret those of others. This has contributed to “cyber-issues” such as political polarization in online discourse, cyberbullying, and a rise in overall hate speech. Our work aims to create an “automatic emotion moderator” that suggests to online users paraphrases of their input text that are less emotionally intense, in order to foster a calmer, more cooperative and civil cyberspace. As a first step toward this goal, we introduce a new NLP task called Fine-Grained Emotional Paraphrasing along Emotion Gradients. We also propose an emotion-gradient-based approach to this task, along with a full implementation of the approach using fine-tuned pre-trained language models. Through fine-tuning and evaluation, we found that models fine-tuned on a new paraphrasing dataset of emotion-labeled data (which we created) showed significant performance gains on Fine-Grained Emotional Paraphrasing and were better able to lower the emotional intensity of a given segment of text than the base language models.
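To make the “emotion gradient” idea more concrete, here is a minimal, purely illustrative sketch in Python. It assumes fine-grained emotions can be ordered by intensity within a coarse emotion family, and that the paraphrase target is the next-lower step on that gradient. The emotion names and gradients below are hypothetical examples, not the actual taxonomy or implementation from the paper.

```python
# Illustrative sketch only: the gradients and emotion labels here are
# hypothetical, not the taxonomy used in the actual paper.
# Each gradient orders fine-grained emotions from low to high intensity
# within one coarse emotion family.
EMOTION_GRADIENTS = {
    "anger": ["annoyance", "anger", "rage"],
    "joy": ["contentment", "joy", "elation"],
    "sadness": ["pensiveness", "sadness", "grief"],
}

def lower_intensity_target(emotion: str) -> str:
    """Return the next-lower emotion on the same gradient.

    If the emotion is already the lowest step on its gradient, or is
    not found on any gradient, it is returned unchanged.
    """
    for steps in EMOTION_GRADIENTS.values():
        if emotion in steps:
            idx = steps.index(emotion)
            return steps[max(idx - 1, 0)]
    return emotion

print(lower_intensity_target("rage"))       # -> anger
print(lower_intensity_target("annoyance"))  # -> annoyance (already lowest)
```

In the full approach described above, a target like this would guide a fine-tuned paraphrasing model toward a less intense rewording, rather than being applied to the text directly.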

Our full paper can be found at this link:
WASSA Proceedings:

The conference itself welcomed researchers from both academia and industry: undergraduates, master's and PhD students, and professors from universities across the globe, as well as industry professionals from companies including Google, Meta, Amazon, Apple, OpenAI, ByteDance, Baidu, and Huawei. This year, the focus of the conference was on large language models. Understandably, with the rapid rise of machine learning models such as ChatGPT, Midjourney, and DALL-E, much of the conference's attention was directed at probing different aspects of these models.

As a newcomer to the fields of NLP and computational linguistics, and as the youngest individual at the conference (and the only high school student), I found the keynote talks for the main conference and workshops the most intriguing events. Some notable speakers included:
  • Geoffrey Hinton (The Godfather of AI)
  • Diyi Yang (Associate Professor at Stanford University)
  • David Jurgens (Associate Professor at the University of Michigan)
  • Dirk Hovy (Associate Professor at Bocconi University)
  • Vinodkumar Prabhakaran (Senior Research Scientist at Google Research)
  • Iryna Gurevych (President of the Association for Computational Linguistics)
  • Larry Heck (Professor at Georgia Tech)
  • Jason Weston (Research Scientist at Meta AI)

The main message of these keynote talks concerned the future of machine learning and artificial intelligence. Topics of focus included:
  1. Sustainability: How can research and the pursuit of AI, especially LLMs, be more friendly for the environment?
  2. Open-Source vs. Closed-Source: What are the pros and cons of the closed-source approach taken by large industry corporations in building their large AI models?
  3. Personalization: Should AI tools such as ChatGPT and virtual assistants be developed in bulk, or in a more personalized manner?
  4. Diversity and Subjectivity: How can diversity be improved in the fields of NLP and computational linguistics?
    1. Is there only one truth, or are there multiple truths?
    2. Do we consider different cultures, traditions, and views in our current forms of research into NLP?
    3. How can we expand NLP and linguistics to cover more languages and cultures?

My time at ACL 2023 was an amazing experience. I met people from both academia and industry, and I gained new insights into the world of NLP and computational linguistics research. My most important takeaway concerned the future of AI and machine learning: from the topics the keynote speakers addressed, it became abundantly clear that our mission as a non-profit organization is an extremely important one. AI is rapidly developing and growing into more areas of our lives, and it is vital for the next generation of students to learn about the capabilities and challenges that AI and machine learning will pose in the future.