Research

If you're interested in learning more about my research, check out this talk!

Google Scholar

Publications

Kazemnejad, A., M. Aghajohari, E. Portelance, A. Sordoni, S. Reddy, A. Courville, N. Le Roux. (2024). VinePPO: Unlocking RL potential for LLM reasoning through refined credit assignment. arXiv:2410.01679. (Manuscript under review).

Portelance, E., S. Reddy, T.J. O'Donnell. (2024). Reframing linguistic bootstrapping as joint inference using visually-grounded grammar induction models. arXiv:2406.11977. (Manuscript under review).

Krojer, B., D. Vattikonda, L. Lara, V. Jampani, E. Portelance, C. Pal, S. Reddy. (2024). Learning action and reasoning-centric image editing from videos and simulations. arXiv:2407.03471. (Accepted at the NeurIPS dataset track).

Portelance, E., M. Jasbi. (2024). The roles of neural networks in language acquisition. Language and Linguistics Compass.

Portelance, E., M.C. Frank, D. Jurafsky. (2024). Learning the meanings of function words from grounded language using a Visual Question Answering model. Cognitive Science.

Chen, X. and E. Portelance. (2023). Grammar induction pretraining for language modeling in low resource contexts. Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (CoNLL).

Portelance, E., Y. Duan, M.C. Frank, G. Lupyan. (2023). Predicting age of acquisition for children's early vocabulary in five languages using language model surprisal. Cognitive Science.

Portelance, E. (2022). Neural Network Approaches to the Study of Word Learning. [Doctoral dissertation, Stanford University]. Stanford Digital Repository.

Portelance, E., M. C. Frank, D. Jurafsky, A. Sordoni, R. Laroche. (2021). The Emergence of the Shape Bias Results from Communicative Efficiency. Proceedings of the 25th Conference on Computational Natural Language Learning (CoNLL).

Potts, C., T. Icard, E. Portelance, D. Card, K. Zhou, J. Etchemendy. (2021). Philosophy of Understanding. In On the Opportunities and Risks of Foundation Models, ed. by the Center for Research on Foundation Models (CRFM) at Stanford University. arXiv:2108.07258.

Portelance, E., J. Degen, M. C. Frank. (2020). Predicting Age of Acquisition in Early Word Learning Using Recurrent Neural Networks. Proceedings of CogSci 2020.

Portelance, E. (2020). Genuine Verb stranding VP-ellipsis in Lithuanian. Proceedings of the 50th meeting of the North East Linguistic Society (NELS 50).

Portelance, E., A. Bruno, D. Harasim, L. Bergen, T. J. O'Donnell. (2019). Grammar Induction for Minimalist Grammars using Variational Bayesian Inference. arXiv:1710.11350.

Harasim, D., A. Bruno, E. Portelance, M. Rohrmeier, T. J. O'Donnell. (2018). A generalised parsing framework for Abstract Grammars. arXiv:1710.11301.

Portelance, E. and A. Piper. (2016). How Cultural Capital Works: Prizewinners, Bestsellers, and the Time of Reading. Post-45.

Available Posters

Portelance, E., G. Kachergis, M.C. Frank. (2019). Comparing memory-based and neural network models of early syntactic development. Poster presentation at BUCLD, Boston, MA.

Portelance, E., A. Bruno, D. Harasim, L. Bergen, T. J. O’Donnell. (2018). A Framework for Lexicalized Grammar Induction Using Variational Bayesian Inference. Poster presentation at the Learning Language in Humans and Machines conference, Paris, France.

Portelance, E. and A. Piper. (2017). Understanding Narrative: Computational approaches to detecting narrative frames. In Proceedings of the Digital Humanities Conference 2017, Montreal, Canada.