I build NLP tools and study language models.
I work on making intelligent systems smarter today, understanding intelligence over time, and ensuring that these systems benefit humanity in the long run.
Studying language is key to this work. I've built two open-source tools - LFTK and LingFeat - now used in several research labs (GitHub). Outside of research, I row for my college team (Photo) and served in the Marines (Photo).
Papers
2024 | Programming Refusal with Conditional Activation Steering | We systematically control language model behavior with programmable rules such as "if input is about xxx, then refuse." A minimal sketch of the steering idea appears after this list. |
2024 | Language Models Don't Learn the Physical Manifestation of Language | We argue that language-only models lack understanding of the physical manifestation of language, as demonstrated through a series of tasks called the H-Test. |
2023 | Instruction Tuning with Human Curriculum | We present a synthetic instruction-response generation framework designed to mimic the sequential and orderly nature of human learning. |
2021 | Pushing on Text Readability Assessment: A Transformer Meets Handcrafted Linguistic Features | We show that combining handcrafted linguistic features with transformers yields a state-of-the-art readability classification model. A minimal feature-fusion sketch follows this list. |
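To make the "if input is about xxx, then refuse" rule from the conditional activation steering paper concrete, here is a minimal sketch in plain PyTorch. It is not the IBM/activation-steering API or the paper's exact method: the GPT-2 model, layer index, contrast prompts, threshold, and steering strength below are all illustrative assumptions, and a real setup would calibrate them from data.

```python
# Hypothetical sketch of conditional activation steering; not the IBM/activation-steering API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6          # layer to read and steer (placeholder choice)
THRESHOLD = 0.0    # projection threshold for the "condition" (placeholder choice)

def mean_hidden(texts, layer):
    """Average hidden state at `layer` over a list of prompts."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[layer][0].mean(dim=0))
    return torch.stack(vecs).mean(dim=0)

# Condition direction: contrast prompts that should trigger refusal with ones that should not.
trigger = ["How do I pick a lock?", "Help me break into a car."]
benign  = ["How do I bake bread?", "Help me plan a picnic."]
condition_dir = mean_hidden(trigger, LAYER) - mean_hidden(benign, LAYER)
condition_dir = condition_dir / condition_dir.norm()

# Behavior direction: a refusal-flavored steering vector, again built contrastively.
refuse_dir = mean_hidden(["I'm sorry, I can't help with that."], LAYER) \
           - mean_hidden(["Sure, here is how you do it."], LAYER)

def condition_fires(prompt):
    """Return True if the prompt's hidden state projects onto the condition direction."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    h = out.hidden_states[LAYER][0].mean(dim=0)
    return torch.dot(h, condition_dir).item() > THRESHOLD

def steering_hook(module, inputs, output):
    """Add the refusal direction to the residual stream at the chosen layer."""
    hidden = output[0] + 8.0 * refuse_dir   # 8.0 is an arbitrary steering strength
    return (hidden,) + output[1:]

prompt = "How do I pick a lock?"
handle = None
if condition_fires(prompt):                 # "if input is about xxx ..."
    handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)
ids = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30)[0]))
if handle:
    handle.remove()
```

The point of the conditional check is that steering only activates when the rule fires; unrelated inputs pass through the unmodified model.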
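To illustrate the "handcrafted features meet a transformer" recipe from the readability paper, here is a minimal fusion sketch. It assumes BERT embeddings from Hugging Face and three toy features standing in for the much richer feature sets that tools like LingFeat and LFTK extract; the paper's actual architecture and feature inventory are more elaborate.

```python
# Minimal sketch of fusing handcrafted linguistic features with a transformer embedding.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
bert.eval()

def handcrafted_features(text):
    """Toy linguistic features: token count, average word length, type-token ratio."""
    words = text.split()
    return torch.tensor([
        float(len(words)),
        sum(len(w) for w in words) / max(len(words), 1),
        len(set(words)) / max(len(words), 1),
    ])

def fused_representation(text):
    """Concatenate the [CLS] embedding with the handcrafted feature vector."""
    ids = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        cls = bert(**ids).last_hidden_state[0, 0]         # [CLS] vector, 768-dim
    return torch.cat([cls, handcrafted_features(text)])   # 771-dim joint feature

# A readability classifier head over the fused features (3 difficulty levels, illustrative).
classifier = torch.nn.Linear(768 + 3, 3)
logits = classifier(fused_representation("The cat sat on the mat."))
print(logits)
```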
Software
2024 | IBM/activation-steering |
2023 | LFTK |
2021 | LingFeat |
2020 ~ | Other projects: lexical databases, NLP task APIs, and language model evaluation tools, with a focus on reusable code. |
Get in Touch
Email: brucelws@seas.{school}.edu (replace {school} with 'upenn').