Computer Science Graduate Student

Bar-Ilan University

Hi! I am a first-year PhD student in the Natural Language Processing Lab at Bar-Ilan University, supervised by Prof. Yoav Goldberg. I am also a research intern at AI2 Israel.

I am interested in representation learning, analysis and interpretability of neural models, and the syntactic abilities of NNs. Specifically, I am interested in how neural models learn distributed representations that encode structured information, how they utilize those representations to solve tasks, and in our ability to control their content and map them back to interpretable concepts.

Interests

  • NLP
  • Representation Learning
  • Interpretability

Education

  • MSc in Computer Science

    Bar-Ilan University

  • BSc in Computer Science

    Bar-Ilan University

  • BSc in Chemistry

    Bar-Ilan University

Recent Activities

  • August-September 2021: Visiting student at Prof. Ryan Cotterell’s group, ETH Zurich.
  • January 2020: Invited talk at the NLPhD speaker series, Saarland University.
  • December 2020: Invited talk at Prof. Roi Reichart’s group, Technion (slides).
  • July 2020: Presented our paper (virtually) at ACL 2020.
  • February-March 2020: Visiting student at Prof. Tal Linzen’s research group, Johns Hopkins University.
  • March 2020: Visited Prof. Bob Frank’s research group, Yale University.
  • January 2020: Started an internship at AI2 Israel.

Recent Publications

Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals

We propose a complementary probing technique which relies on behavioral interventions, focused on concepts we identify with Iterative …

It’s not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT

We propose a way to derive word-level translation from multilingual BERT, and explicitly decompose its representations to a language-dependent component and a lexical, language-invariant component.

Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection

We propose a data-driven projection method to selectively remove information from neural representations.

Ab Antiquo: Proto-language Reconstruction with RNNs

We study whether neural models can learn the systematic patterns of language evolution, and reconstruct proto-forms based on words in existing languages.

Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages

Languages differ in multiple ways, such as word order and morphological complexity. We study how this complexity interacts with the ability of neural models to learn the syntax of the language.

Recent Posts

Iterative Nullspace Projection (INLP)

This post describes INLP, an algorithm we’ve proposed for removing information from representations, as an alternative to adversarial removal methods. It uses linear algebra to “edit” the representation and control its content, and was found effective in mitigating gender bias.
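For a rough feel of the procedure, here is a minimal, illustrative sketch of the iterative loop (train a linear classifier for the attribute, project the representations onto the nullspace of its weights, repeat). This is not the implementation released with the paper; the function names, the scikit-learn classifier, and the fixed iteration count are placeholder choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def nullspace_projection(W):
    """Projection matrix onto the nullspace of the rows of W (d x d)."""
    # Orthonormal basis of W's row space via SVD; P = I - B^T B removes it.
    _, s, vt = np.linalg.svd(W, full_matrices=False)
    basis = vt[s > 1e-10]            # rows spanning the row space of W
    return np.eye(W.shape[1]) - basis.T @ basis


def inlp(X, y, n_iters=10):
    """Iteratively remove linearly decodable information about y from X."""
    d = X.shape[1]
    P = np.eye(d)                    # accumulated guarding projection
    X_proj = X.copy()
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X_proj, y)
        # Neutralize the directions the current classifier relies on.
        P_null = nullspace_projection(clf.coef_)
        P = P_null @ P
        X_proj = X @ P.T             # re-project the original representations
    return P, X_proj
```

In practice, one would typically stop iterating once a freshly trained classifier can no longer predict the attribute from the projected representations much better than chance.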

Contact