Elena Yunusov and I are excited to announce the Fall series of our reading group, featuring scholars from OpenAI, University of Oxford, Cohere, Hugging Face, Stanford University, and others, as we delve into pivotal papers in NLP/LLM research.

🗓️ Series Kick-off: September 27, 2023 at 3pm (EST)
📄 Opening Paper: “Training language models to follow instructions with human feedback”
🎙️ First Speaker: Long Ouyang, OpenAI
🔗 Join Us on Discord: https://lnkd.in/dgjcbZrn

🗓️ Upcoming Sessions:

✅ Training language models to follow instructions with human feedback with Long Ouyang, OpenAI, on Sept. 27 at 3pm (EST).

✅ The Curse of Recursion: Training On Generated Data Makes Models Forget with Ilia Shumailov, University of Oxford, on Oct. 11 at 12pm (EST).

✅ Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning with Lianhui Qin, University of California San Diego, on Oct. 25 at 12pm (EST).

✅ Theory of Mind May Have Spontaneously Emerged in Large Language Models with Michal Kosinski, Stanford University, on Nov. 8 at 12pm (EST).

✅ Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks with Patrick Lewis, Cohere, on Nov. 22 at 12pm (EST).

✅ Evaluating the Social Impact of Generative AI Systems in Systems and Society with Irene Solaiman and Zeerak Talat, Hugging Face, on Dec. 6 at 12pm (EST).

Mark your calendars 🗓️ and join us for a fun, in-depth exploration of large language models and their expanding role in technology and society.