Abstract
Large language models (LLMs) like ChatGPT have shown extraordinary writing abilities. While impressive at first glance, LLMs are not perfect and often make mistakes that humans would not. The core architecture behind ChatGPT differs little from that of early neural networks and, as a consequence, carries some of the same limitations. My work combines neural networks like those behind ChatGPT with symbolic methods from early AI, exploring how these two families of methods can be joined to create more robust AI. I will discuss some of the neurosymbolic methods I have used for applications in story generation and understanding, with the eventual goal of creating AI that can play Dungeons & Dragons. I will also discuss pain points I have found in accessible communication and show how large language models can supplement such communication.
Biography
Dr. Lara J. Martin (she/they) is an assistant professor in the CSEE department at the University of Maryland, Baltimore County, researching human-centered artificial intelligence with a focus on natural language processing applications. They have worked in the areas of automated story generation, augmentative and alternative communication (AAC) tools, AI for tabletop roleplaying games, speech processing, and affective computing, publishing in top-tier conferences such as AAAI, ACL, EMNLP, and IJCAI. They have also been featured in Wired and BBC Science Focus magazine. Previously, Dr. Martin was a 2020 Computing Innovation Fellow (CIFellow) postdoctoral researcher at the University of Pennsylvania, working with Dr. Chris Callison-Burch. She earned her PhD in Human-Centered Computing from the Georgia Institute of Technology, where she worked with Dr. Mark Riedl. She also holds an MS in Language Technologies from Carnegie Mellon University and a BS in Computer Science & Linguistics from Rutgers University—New Brunswick.
Hackerman Hall B17, 3400 N. Charles Street, Baltimore, MD 21218