Embracing the LLM Moment: Diving Head-First Into a Future of NLP & AI Research

~2 minute read

I’m going back for a PhD!

This weekend I accepted an offer to join the PhD program in the University of Utah’s School of Computing where I’ll work with Vivek Srikumar in the Utah NLP Group.

I have no idea what I’m getting myself into, and not for the usual reasons. I have several years of experience in ML & NLP research, both in academia during my master’s degree and in industry as a member of Hugging Face’s science team.

My trepidation comes not from a lack of experience, but because of the rapidly-evolving nature of research in my chosen field.

From my vantage point, there is a seismic shift underway in the landscape of NLP & ML, making it difficult to predict what my work as a researcher will look like over the course of a 5-year doctoral program.

The recent success of LLMs like GPT-4 has turned the world of NLP and ML on its head, and many of us are left wondering what the future looks like for our field. On the one hand, that makes it feel foolish to commit myself to research whose shape is constantly changing – almost like writing a blank check.

On the other hand, that is precisely what makes this the most thrilling time to dive back into academia. The questions we as a field will need to answer haven’t even been articulated yet – what better time to jump in and make an impact?

LLMs and a Seismic Shift in Research

The rise of LLMs and instruction-tuning methods like RLHF has shaken things up for researchers. Many have spent considerable time and attention developing methods for cleverly solving problems only to see LLMs come along and solve them better.

We’ve been building elaborate Rube Goldberg machines, carefully engineering each component, only for a giant LLM to come along and accomplish the task from a prompt. While humbling and exciting to watch, it can also be a tough pill to swallow. It’s called the bitter lesson for a reason.

But with this disruption comes opportunity. As we stand at the precipice of an uncertain future, we find ourselves at the most exciting moment in the history of NLP and ML research. We have the chance to ask new questions, explore new directions, and shape the trajectory of our field.

Steering the Future of Research

With the advent of LLMs, it’s easy to feel like our work is becoming obsolete. But I argue that the opposite is true. Now, more than ever, NLP and ML researchers are essential in shaping the future of AI.

(Embedded Reddit comment by u/needlzor, from the r/MachineLearning discussion “[D] Anyone else witnessing a panic inside NLP orgs of big tech companies?”)
The uncertainty is precisely what is most exciting about this moment in AI research. Sure, it’s a little daunting, but it’s also invigorating. There’s an opportunity to make a serious impact on our field and on the world – to push for higher standards of scientific rigor and to act as grounded voices of sobriety as a counterweight to the throngs of overzealous “AI influencers.”

We have the opportunity to harness the power of LLMs toward a greater understanding of language, computation, and cognition. How can we make AI more transparent, responsible, and ethical? How can we ensure that AI benefits everyone, not just those with access to vast computational resources? How can we leverage LLMs to tackle previously intractable problems? What other key questions remain to be asked?

I, for one, am excited to contribute to this effort.
