no code implementations • WMT (EMNLP) 2020 • Lucia Specia, Zhenhao Li, Juan Pino, Vishrav Chaudhary, Francisco Guzmán, Graham Neubig, Nadir Durrani, Yonatan Belinkov, Philipp Koehn, Hassan Sajjad, Paul Michel, Xian Li
We report the findings of the second edition of the shared task on improving robustness in Machine Translation (MT).
no code implementations • IWSLT 2016 • Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Stephan Vogel
This paper describes QCRI’s machine translation systems for the IWSLT 2016 evaluation campaign.
1 code implementation • IWCS (ACL) 2021 • Esther Seyffarth, Younes Samih, Laura Kallmeyer, Hassan Sajjad
This paper addresses the extent to which neural contextual language models such as BERT implicitly represent complex semantic properties.
no code implementations • 27 May 2024 • Enes Altinisik, Safa Messaoud, Husrev Taha Sencar, Hassan Sajjad, Sanjay Chawla
Despite being a heavily researched topic, Adversarial Training (AT) is rarely, if ever, deployed in practical AI systems for two primary reasons: (i) the gained robustness is frequently accompanied by a drop in generalization and (ii) generating adversarial examples (AEs) is computationally prohibitively expensive.
no code implementations • 23 May 2024 • Domenic Rosati, Jan Wehner, Kai Williams, Łukasz Bartoszcze, David Atanasov, Robie Gonzales, Subhabrata Majumdar, Carsten Maple, Hassan Sajjad, Frank Rudzicz
We provide empirical evidence that the effectiveness of our defence lies in its "depth": the degree to which information about harmful representations is removed across all layers of the LLM.
no code implementations • 6 May 2024 • Sher Badshah, Hassan Sajjad
Scale is often cited as one of the factors behind the increased performance of LLMs, resulting in models with billions and even trillions of parameters.
no code implementations • 25 Apr 2024 • Sri Harsha Dumpala, Aman Jaiswal, Chandramouli Sastry, Evangelos Milios, Sageev Oore, Hassan Sajjad
This paper introduces the VISLA (Variance and Invariance to Semantic and Lexical Alterations) benchmark, designed to evaluate the semantic and lexical understanding of language models.
no code implementations • 18 Apr 2024 • Xuemin Yu, Fahim Dalvi, Nadir Durrani, Hassan Sajjad
Therefore, given a word in context, the latent space derived from our training process reflects a specific facet of that word.
no code implementations • 22 Mar 2024 • Mahtab Sarvmaili, Hassan Sajjad, Ga Wu
Existing example-based prediction explanation methods often bridge test and training data points through the model's parameters or latent representations.
no code implementations • 26 Feb 2024 • Domenic Rosati, Jan Wehner, Kai Williams, Łukasz Bartoszcze, Jan Batzner, Hassan Sajjad, Frank Rudzicz
Approaches to aligning large language models (LLMs) with human values have focused on correcting misalignment that emerges from pretraining.
1 code implementation • 14 Feb 2024 • Domenic Rosati, Robie Gonzales, Jinkun Chen, Xuemin Yu, Melis Erkan, Yahya Kayani, Satya Deepika Chavatapalli, Frank Rudzicz, Hassan Sajjad
Evaluations of model editing currently only use the "next few token" completions after a prompt.
no code implementations • 13 Nov 2023 • David Arps, Laura Kallmeyer, Younes Samih, Hassan Sajjad
We replicate the findings of Müller-Eberstein et al. (2022) on nonce test data and show that the performance declines on both MLMs and ALMs w.r.t.
1 code implementation • 26 May 2023 • Fahim Dalvi, Hassan Sajjad, Nadir Durrani
The Python toolkit is available at https://www.github.com/fdalvi/NeuroX.
no code implementations • 6 Mar 2023 • Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Tamim Jaban, Musab Husaini, Ummar Abbas
NxPlain discovers latent concepts learned in a deep NLP model, provides an interpretation of the knowledge learned in the model, and explains its predictions based on the used concepts.
no code implementations • 12 Nov 2022 • Firoj Alam, Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Abdul Rafae Khan, Jia Xu
We use an unsupervised method to discover concepts learned in these models and enable a graphical interface for humans to generate explanations for the concepts.
no code implementations • 10 Nov 2022 • Enes Altinisik, Hassan Sajjad, Husrev Taha Sencar, Safa Messaoud, Sanjay Chawla
Specifically, we study the effect of pre-training data augmentation as well as training time input perturbations vs. embedding space perturbations on the robustness and generalization of transformer-based language models.
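As a concrete illustration of embedding-space perturbation, here is a minimal sketch of an FGSM-style adversarial step applied to the embedding layer rather than the discrete tokens. It assumes a HuggingFace-style model that accepts `inputs_embeds` and `labels`; the step size `epsilon` is an illustrative choice, not the paper's setting:

```python
def embedding_space_adv_loss(model, input_ids, labels, epsilon=1e-2):
    """One FGSM-style step in embedding space (illustrative sketch)."""
    embeds = model.get_input_embeddings()(input_ids).detach()
    embeds.requires_grad_(True)
    loss = model(inputs_embeds=embeds, labels=labels).loss
    loss.backward()
    # Perturb along the gradient sign; the perturbation stays in the
    # continuous embedding space, unlike input-level token edits.
    adv_embeds = (embeds + epsilon * embeds.grad.sign()).detach()
    return model(inputs_embeds=adv_embeds, labels=labels).loss
```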
no code implementations • 23 Oct 2022 • Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Firoj Alam
We study the evolution of latent space in fine-tuned NLP models.
no code implementations • 18 Oct 2022 • Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad
Given the success of pre-trained language models, many transformer models trained on Arabic and its dialects have surfaced.
no code implementations • 27 Jun 2022 • Nadir Durrani, Fahim Dalvi, Hassan Sajjad
Our data-driven, quantitative analysis illuminates interesting findings: (i) we found small subsets of neurons that can predict different linguistic tasks; (ii) neurons capturing basic lexical information (such as suffixation) are localized in the lowermost layers; (iii) those learning complex concepts (such as syntactic role) reside predominantly in the middle and higher layers; (iv) salient linguistic neurons are relocated from higher to lower layers during transfer learning, as the network preserves the higher layers for task-specific information; (v) we found interesting differences across pre-trained models with respect to how linguistic information is preserved within them; and (vi) concepts exhibit a similar neuron distribution across different languages in multilingual transformer models.
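A common recipe for locating such task-predictive neuron subsets is to fit a sparse linear probe on a layer's activations and rank neurons by weight magnitude. This sketch is in that spirit; the probe configuration and `top_k` cutoff are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_neurons(activations, labels, top_k=50):
    """activations: (n_tokens, n_neurons) layer activations;
    labels: per-token linguistic tags (e.g. POS). Returns the indices
    of the top_k neurons ranked by probe weight magnitude."""
    probe = LogisticRegression(penalty="l1", solver="saga", C=0.1,
                               max_iter=1000).fit(activations, labels)
    importance = np.abs(probe.coef_).max(axis=0)  # max over classes
    return np.argsort(importance)[::-1][:top_k]
```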
1 code implementation • NAACL 2022 • Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Firoj Alam, Abdul Rafae Khan, Jia Xu
We propose a novel framework ConceptX, to analyze how latent concepts are encoded in representations learned within pre-trained language models.
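A minimal sketch of the clustering idea behind this kind of latent-concept discovery: group contextual token vectors and treat each cluster as a candidate concept. The clustering algorithm and cluster count here are illustrative assumptions rather than ConceptX's exact configuration:

```python
from sklearn.cluster import AgglomerativeClustering

def discover_concepts(token_vectors, tokens, n_concepts=100):
    """token_vectors: (n_tokens, dim) contextual representations;
    tokens: the corresponding surface strings."""
    labels = AgglomerativeClustering(n_clusters=n_concepts).fit_predict(token_vectors)
    concepts = {}
    for token, cluster_id in zip(tokens, labels):
        concepts.setdefault(cluster_id, []).append(token)
    return concepts  # each cluster is inspected as a candidate concept
```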
no code implementations • ICLR 2022 • Fahim Dalvi, Abdul Rafae Khan, Firoj Alam, Nadir Durrani, Jia Xu, Hassan Sajjad
We address this limitation by discovering and analyzing latent concepts learned in neural network models in an unsupervised fashion and provide interpretations from the model's perspective.
1 code implementation • 13 Apr 2022 • David Arps, Younes Samih, Laura Kallmeyer, Hassan Sajjad
We find that 4 pretrained transformer LMs obtain high performance on our probing tasks even on manipulated data, suggesting that semantic and syntactic knowledge in their representations can be separated and that constituency information is in fact learned by the LM.
no code implementations • 19 Jan 2022 • Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad
Arabic is a widely spoken Semitic language with many dialects.
no code implementations • 30 Aug 2021 • Hassan Sajjad, Nadir Durrani, Fahim Dalvi
The proliferation of deep neural networks in various domains has seen an increased need for interpretability of these models.
no code implementations • Findings (ACL) 2021 • Nadir Durrani, Hassan Sajjad, Fahim Dalvi
The pattern varies across architectures, with BERT retaining linguistic information relatively deeper in the network compared to RoBERTa and XLNet, where it is predominantly delegated to the lower layers.
no code implementations • NAACL 2021 • Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani
This paper is a write-up for the tutorial on "Fine-grained Interpretation and Causation Analysis in Deep NLP Models" that we are presenting at NAACL 2021.
no code implementations • COLING 2022 • Hassan Sajjad, Firoj Alam, Fahim Dalvi, Nadir Durrani
However, post-processing for contextualized embeddings is an under-studied problem.
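For reference, a widely used post-processing recipe for embeddings, in the spirit of "all-but-the-top": mean removal followed by removing the top principal components. Whether this matches the post-processing studied in the paper is an assumption:

```python
import numpy as np

def postprocess_embeddings(X, n_components=3):
    """Center the embedding matrix X (n_vectors, dim) and project out
    its dominant principal directions."""
    X = X - X.mean(axis=0)                  # mean subtraction
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    top = Vt[:n_components]                 # top principal directions
    return X - X @ top.T @ top              # remove their contribution
```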
no code implementations • COLING 2020 • Hassan Sajjad, Ahmed Abdelali, Nadir Durrani, Fahim Dalvi
The evaluation suite and the dialectal system are publicly available for research purposes.
no code implementations • COLING 2020 • Reem Suwaileh, Muhammad Imran, Tamer Elsayed, Hassan Sajjad
For example, results show that, for training a location mention recognition model, Twitter-based data is preferred over general-purpose data; and crisis-related data is preferred over general-purpose Twitter data.
1 code implementation • EMNLP 2020 • Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Yonatan Belinkov
We found small subsets of neurons that predict linguistic tasks, with lower-level tasks (such as morphology) localized in fewer neurons than the higher-level task of predicting syntax.
1 code implementation • 15 Jul 2020 • Firoj Alam, Fahim Dalvi, Shaden Shaar, Nadir Durrani, Hamdy Mubarak, Alex Nikolov, Giovanni Da San Martino, Ahmed Abdelali, Hassan Sajjad, Kareem Darwish, Preslav Nakov
With the outbreak of the COVID-19 pandemic, people turned to social media to read and to share timely information including statistics, warnings, advice, and inspirational stories.
1 code implementation • ACL 2020 • John M. Wu, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass
We use existing and novel similarity measures that aim to gauge the level of localization of information in the deep models, and facilitate the investigation of which design factors affect model similarity, without requiring any external linguistic annotation.
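One simple neuron-level similarity measure in this spirit scores every neuron of one model by its best-correlated neuron in the other and averages the scores. This sketch is an illustrative stand-in, not necessarily one of the paper's exact measures:

```python
import numpy as np

def max_correlation_similarity(acts_a, acts_b):
    """acts_a: (n_tokens, n_neurons_a) and acts_b: (n_tokens, n_neurons_b)
    are activations of two models over the same tokens."""
    a = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)  # z-score
    b = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    corr = np.abs(a.T @ b) / a.shape[0]   # (n_a, n_b) Pearson correlations
    return corr.max(axis=1).mean()        # average best-match correlation
```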
2 code implementations • Findings (EMNLP) 2021 • Firoj Alam, Shaden Shaar, Fahim Dalvi, Hassan Sajjad, Alex Nikolov, Hamdy Mubarak, Giovanni Da San Martino, Ahmed Abdelali, Nadir Durrani, Kareem Darwish, Abdulaziz Al-Homaid, Wajdi Zaghouani, Tommaso Caselli, Gijs Danoe, Friso Stolk, Britt Bruntink, Preslav Nakov
With the emergence of the COVID-19 pandemic, the political and the medical aspects of disinformation merged, and the problem escalated to a whole new level, becoming the first global infodemic.
no code implementations • 14 Apr 2020 • Firoj Alam, Hassan Sajjad, Muhammad Imran, Ferda Ofli
Time-critical analysis of social media streams is important for humanitarian organizations for planning rapid response during disasters.
1 code implementation • EMNLP 2020 • Fahim Dalvi, Hassan Sajjad, Nadir Durrani, Yonatan Belinkov
Transformer-based deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments.
4 code implementations • 8 Apr 2020 • Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov
Transformer-based NLP models are trained using hundreds of millions or even billions of parameters, limiting their applicability in computationally constrained environments.
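One strategy in this line of work is to drop the top encoder layers of a pre-trained model before fine-tuning. A minimal sketch with the HuggingFace transformers API; the model name and the number of retained layers are illustrative choices:

```python
import torch.nn as nn
from transformers import BertModel

# Load a 12-layer BERT and keep only the bottom 6 encoder layers.
model = BertModel.from_pretrained("bert-base-uncased")
model.encoder.layer = nn.ModuleList(model.encoder.layer[:6])
model.config.num_hidden_layers = 6
# Fine-tune the reduced model on the downstream task as usual.
```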
1 code implementation • 31 Mar 2020 • Abdul Rafae Khan, Asim Karim, Hassan Sajjad, Faisal Kamiran, Jia Xu
Roman Urdu is an informal form of the Urdu language written in Roman script, which is widely used in South Asia for online textual content.
no code implementations • 27 Feb 2020 • Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Hassan Sajjad, Preslav Nakov, Deming Chen, Marianne Winslett
Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks.
no code implementations • IJCNLP 2019 • Hamdy Mubarak, Ahmed Abdelali, Kareem Darwish, Mohamed Eldesouki, Younes Samih, Hassan Sajjad
Short vowels, aka diacritics, are more often omitted when writing different varieties of Arabic including Modern Standard Arabic (MSA), Classical Arabic (CA), and Dialectal Arabic (DA).
no code implementations • CL 2020 • Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass
(iii) Do the representations capture lexical semantics?
1 code implementation • WS 2019 • Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad
We share the findings of the first shared task on improving robustness of Machine Translation (MT).
no code implementations • NAACL 2019 • Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, Preslav Nakov
Recent work has shown that contextualized word representations derived from neural machine translation are a viable alternative to those derived from simple word prediction tasks.
no code implementations • NAACL 2019 • Hamdy Mubarak, Ahmed Abdelali, Hassan Sajjad, Younes Samih, Kareem Darwish
Arabic text is typically written without short vowels (or diacritics).
2 code implementations • 21 Dec 2018 • Fahim Dalvi, Avery Nortonsmith, D. Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, James Glass
We present a toolkit to facilitate the interpretation and understanding of neural network models.
1 code implementation • 21 Dec 2018 • Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, James Glass
We further present a comprehensive analysis of neurons with the aim to address the following questions: i) how localized or distributed are different linguistic properties in the models?
no code implementations • ICLR 2019 • Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass
Neural machine translation (NMT) models learn representations containing substantial linguistic information.
no code implementations • NAACL 2018 • Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Stephan Vogel
We address the problem of simultaneous translation by modifying the Neural MT decoder to operate with dynamically built encoder and attention.
1 code implementation • IJCNLP 2017 • Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass
In this paper, we investigate the representations learned at different layers of NMT encoders.
no code implementations • IJCNLP 2017 • Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Stephan Vogel
End-to-end training makes the neural machine translation (NMT) architecture simpler and more elegant than traditional statistical machine translation (SMT).
no code implementations • ACL 2017 • Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Ahmed Abdelali, Yonatan Belinkov, Stephan Vogel
Word segmentation plays a pivotal role in improving any Arabic NLP application.
no code implementations • IWSLT 2017 • Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Yonatan Belinkov, Stephan Vogel
Model stacking works best when training begins with the furthest out-of-domain data and the model is incrementally fine-tuned with the next furthest domain and so on.
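A hedged sketch of that incremental fine-tuning schedule; the domain names, model constructor, and training helpers below are hypothetical placeholders, not the paper's toolkit:

```python
# Fine-tune incrementally, from the furthest out-of-domain corpus
# toward the in-domain one (illustrative sketch with placeholders).
domains_far_to_near = ["un", "commoncrawl", "news", "ted"]  # hypothetical order

model = build_nmt_model()            # hypothetical model constructor
for domain in domains_far_to_near:
    loader = make_loader(domain)     # hypothetical data loader
    train(model, loader, epochs=2)   # standard supervised fine-tuning
# The in-domain data is seen last, so the model ends closest to it.
```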
no code implementations • CL 2017 • Hassan Sajjad, Helmut Schmid, Alexander Fraser, Hinrich Schütze
After training, the unlabeled data is disambiguated based on the posterior probabilities of the two sub-models.
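As a worked example of posterior-based disambiguation between two sub-models, a minimal sketch (priors and likelihoods are made-up numbers, not the paper's estimates):

```python
def assign_by_posterior(likelihood_a, likelihood_b, prior_a=0.5):
    """Assign an unlabeled item to sub-model A or B by comparing
    posterior probabilities p(sub-model | x)."""
    prior_b = 1.0 - prior_a
    evidence = likelihood_a * prior_a + likelihood_b * prior_b
    posterior_a = likelihood_a * prior_a / evidence
    return ("A", posterior_a) if posterior_a >= 0.5 else ("B", 1.0 - posterior_a)

# e.g. assign_by_posterior(2e-6, 5e-7) -> ("A", 0.8)
```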
1 code implementation • ACL 2017 • Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass
Neural machine translation (NMT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture.
no code implementations • EACL 2017 • Fahim Dalvi, Yifan Zhang, Sameer Khurana, Nadir Durrani, Hassan Sajjad, Ahmed Abdelali, Hamdy Mubarak, Ahmed Ali, Stephan Vogel
This paper presents QCRI's Arabic-to-English live speech translation system.
no code implementations • EACL 2017 • Renars Liepins, Ulrich Germann, Guntis Barzdins, Alexandra Birch, Steve Renals, Susanne Weber, Peggy van der Kreeft, Hervé Bourlard, João Prieto, Ondřej Klejch, Peter Bell, Alexandros Lazaridis, Alfonso Mendes, Sebastian Riedel, Mariana S. C. Almeida, Pedro Balage, Shay B. Cohen, Tomasz Dwojak, Philip N. Garner, Andreas Giefer, Marcin Junczys-Dowmunt, Hina Imran, David Nogueira, Ahmed Ali, Sebastião Miranda, Andrei Popescu-Belis, Lesly Miculicich Werlen, Nikos Papasarantopoulos, Abiola Obamuyide, Clive Jones, Fahim Dalvi, Andreas Vlachos, Yang Wang, Sibo Tong, Rico Sennrich, Nikolaos Pappas, Shashi Narayan, Marco Damonte, Nadir Durrani, Sameer Khurana, Ahmed Abdelali, Hassan Sajjad, Stephan Vogel, David Sheppey, Chris Hernon, Jeff Mitchell
We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring.
no code implementations • 14 Jan 2017 • Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Stephan Vogel
This paper describes QCRI's machine translation systems for the IWSLT 2016 evaluation campaign.
no code implementations • WS 2016 • Mohamed Eldesouki, Fahim Dalvi, Hassan Sajjad, Kareem Darwish
We submitted four runs to the Arabic sub-task.
no code implementations • COLING 2016 • Nadir Durrani, Hassan Sajjad, Shafiq Joty, Ahmed Abdelali
We present a novel fusion model for domain adaptation in Statistical Machine Translation.
no code implementations • 4 Oct 2016 • Dat Tien Nguyen, Shafiq Joty, Muhammad Imran, Hassan Sajjad, Prasenjit Mitra
During natural or man-made disasters, humanitarian response organizations look for useful information to support their decision-making processes.
no code implementations • 12 Aug 2016 • Dat Tien Nguyen, Kamela Ali Al Mannai, Shafiq Joty, Hassan Sajjad, Muhammad Imran, Prasenjit Mitra
The current state-of-the-art classification methods require a significant amount of labeled data specific to a particular event for training, plus a lot of feature engineering, to achieve the best results.
no code implementations • 18 Jun 2016 • Hassan Sajjad, Nadir Durrani, Francisco Guzman, Preslav Nakov, Ahmed Abdelali, Stephan Vogel, Wael Salloum, Ahmed El Kholy, Nizar Habash
The competition focused on informal dialectal Arabic, as used in SMS, chat, and speech.
no code implementations • LREC 2014 • Ahmed Abdelali, Francisco Guzman, Hassan Sajjad, Stephan Vogel
This paper presents the AMARA corpus of on-line educational content: a new parallel corpus of educational video subtitles, multilingually aligned for 20 languages, i.e., 20 monolingual corpora and 190 parallel corpora.