CS5740: Assignment 3
Due: April 23, 11:59pm
Milestone due: April 13, 11:59pm
The report for this assignment is structured. Please fill in content only in the appropriate places. Other submitted content cannot be evaluated or graded. Do not add a cover page.
This assignment focuses on building vector representations (or embeddings) of words. You will explore two strategies for this task, one where you train vector representations from scratch given a large corpus of text and another where you extract representations from large pretrained language models. In either case, your goal is to produce vector representations of each of the words in a given text in such a way that the dot product of these vectors accurately represents the semantic similarity of the words they represent. Your representations will be evaluated on two separate datasets, one where word pairs are provided to you in isolation and another where the word pairs occur in a shared context. The evaluation code calculates the similarity of a word pair by taking the dot product of the two words’ vectors. The entire dataset is then scored using Spearman’s rank correlation coefficient between these dot products and human annotations, separately for each dataset.
This assignment includes an early milestone (6pt). To complete the milestone and receive the relevant points, you will need a word2vec entry on the leaderboard for the context-independent test case (i.e., isolated) with a correlation of at least 0.1.
GitHub Classroom Setup Follow the starter repository invitation link. Enter your Cornell NetID and continue. GitHub Classroom will provide a link to a private repository for you to clone to your computer. All code should exist in the repository. Only the code that is present in the master branch of your repository by the due date will be evaluated as part of your submission!
Submission Please see the submission guidelines at the end of this document.
Starter Repository:
https://classroom.github.com/a/oJMvxj0V (GitHub Classroom invitation link)
Leaderboard (Isolated Pairs):
https://github.com/cornell-cs5740-sp24/leaderboards/blob/main/a3/leaderboard_isol.csv
Leaderboard (Contextualized Pairs):
https://github.com/cornell-cs5740-sp24/leaderboards/blob/main/a3/leaderboard_cont.csv
Report Template:
https://www.overleaf.com/read/rvmkkqskpwwh#212a19
Task 1: word2vec For this task, you will be training word2vec representations on a large corpus of text. You will be using the billion-word language model benchmark dataset. It contains the first 1M sentences, both as raw text and analyzed with a part-of-speech tagger and dependency parser. The parsed data is in CoNLL format. These first 1M sentences are available to download here:
http://www.cs.cornell.edu/courses/cs5740/2024sp/r/a3-data.zip
If you want to use more data, you can download the entire corpus from here:
http://www.statmt.org/lm-benchmark/
You will be training your embeddings using the skip-gram word2vec model covered in class. You will have to experiment with and report results for different choices for at least three of the following (a minimal training sketch follows the list):
. amounts of training data
. dimensions for the embeddings (with a maximum size of 1024)
. how wide you set your context window
. how many negative examples you sample per context-word pair
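To make the objective concrete, here is a minimal sketch of skip-gram with negative sampling. It assumes PyTorch is among the packages allowed in requirements.txt; the class name, the hyperparameter values, and the pair generator are illustrative, not prescribed.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipGram(nn.Module):
    # Two embedding tables: one for center ("input") words, one for context
    # ("output") words. The final word vectors are typically the center table.
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.center_emb = nn.Embedding(vocab_size, embed_dim)
        self.context_emb = nn.Embedding(vocab_size, embed_dim)

    def forward(self, center, pos_ctx, neg_ctx):
        # center: (B,), pos_ctx: (B,), neg_ctx: (B, K) sampled negatives
        v = self.center_emb(center)                                 # (B, D)
        u_pos = self.context_emb(pos_ctx)                           # (B, D)
        u_neg = self.context_emb(neg_ctx)                           # (B, K, D)
        pos_score = (v * u_pos).sum(-1)                             # (B,)
        neg_score = torch.bmm(u_neg, v.unsqueeze(-1)).squeeze(-1)   # (B, K)
        # Negative-sampling loss: push v toward observed contexts and away
        # from sampled negatives.
        return -(F.logsigmoid(pos_score).mean()
                 + F.logsigmoid(-neg_score).sum(-1).mean())

model = SkipGram(vocab_size=50_000, embed_dim=300)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# for center, pos_ctx, neg_ctx in your window-based batch generator:
#     loss = model(center, pos_ctx, neg_ctx)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()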
You may additionally experiment with constructing contexts using the syntactic information we provide you. You should follow the method in the following paper if you are interested in trying this:
https://www.aclweb.org/anthology/P14-2050/
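If you do try this, the core idea is to replace linear window contexts with (word, head/relation) pairs read off the parse. The sketch below assumes a CoNLL-X-style column layout (0 = token index, 1 = word form, 6 = head index, 7 = relation); check the provided files and adjust the indices. The paper additionally collapses prepositions, which this sketch omits.

def dependency_contexts(sentence_rows):
    # sentence_rows: one parsed sentence, each row a list of column strings.
    contexts = []
    for row in sentence_rows:
        form = row[1].lower()
        head, rel = int(row[6]), row[7]
        if head == 0:
            continue  # the root token has no head context
        head_form = sentence_rows[head - 1][1].lower()
        contexts.append((form, f"{head_form}/{rel}"))    # word -> its head
        contexts.append((head_form, f"{form}/{rel}-1"))  # inverse relation
    return contexts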
You are required to have some reasonable handling of unknown words. Simply ignoring them will result in low scores on the test data.
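One common strategy, sketched below with illustrative names and an arbitrary frequency cutoff, is to map rare training tokens to a special <unk> symbol so that an <unk> vector is learned, and to back off to that vector for out-of-vocabulary test words. This is only one reasonable option.

from collections import Counter

UNK = "<unk>"
MIN_COUNT = 5  # illustrative cutoff; treat it as a hyperparameter

def build_vocab(tokenized_sentences):
    # Words seen fewer than MIN_COUNT times are folded into <unk>.
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    vocab = {UNK: 0}
    for tok, c in counts.items():
        if c >= MIN_COUNT:
            vocab[tok] = len(vocab)
    return vocab

def word_id(word, vocab):
    # At evaluation time, unseen words fall back to the learned <unk> vector.
    return vocab.get(word, vocab[UNK])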
Task 2: Extracting representations from BERT and GPT-2 Although models like BERT and GPT-2 are trained on masked language modeling and language modeling objectives respectively, the representations they learn during training tend to be useful for a wide range of downstream tasks. In this section, you will be asked to extract representations for the word pairs provided in the evaluation datasets using pretrained BERT and GPT-2 models.
When constructing representations using either model, you have to experiment with and report results for different choices for all of:
. which layer(s) you extract representations from
. how you combine representations from multiple layers, if you do so
. whether your extracted representations are normalized to have unit norm
Additionally, the models’ tokenizers may tokenize words in the evaluation set into multiple subwords. You must determine how to construct a representation for the entire word when this occurs. You should similarly report results for different choices of combining subwords in the results section.
You must perform this task for both BERT and GPT-2 separately. You may only use the "base" size for both models. Use the Hugging Face transformers library to download each model and its associated tokenizer.
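As a starting point, here is a sketch of extracting a contextual vector for a single word with the Hugging Face transformers API, mean-pooling over its subword pieces. The layer index, the pooling method, and the unit-norm step are exactly the choices you are asked to vary and report; the function name is illustrative.

import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"  # for GPT-2, use "gpt2" and pass add_prefix_space=True
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)
model.eval()

def word_vector(words, word_index, layer=-1, unit_norm=True):
    # words: the context pre-split on whitespace; word_index: target position.
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]  # (seq_len, hidden_dim)
    # word_ids() maps each subword position back to the word it came from.
    rows = [i for i, wid in enumerate(enc.word_ids()) if wid == word_index]
    vec = hidden[rows].mean(dim=0)  # mean-pool the word's subword vectors
    return vec / vec.norm() if unit_norm else vec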
Evaluation Details We will be evaluating your word representations on their ability to capture word similarity using two datasets. Each input datapoint in the first dataset contains word pairs in isolation such as (book, library). Each data point in the second dataset, on the other hand, contains word pairs in context. Specifically, each datapoint specifies both a word pair, such as (church, choir), alongside a context in which both occur, like “A campaign was also started to purchase stained-glass windows for the church and to date all but the largest windows in the choir loft have been installed.” These contexts can both disambiguate word senses and also add further shades of meaning. For instance, if someone is pouring butter in a sentence, we can infer that the butter is in liquid form. This is where models like BERT and GPT-2 may excel over approaches like word2vec which learn single, static embeddings for each word.
For both datasets, you will be tasked with outputting word embeddings for each word in a word pair. We will then compute similarity scores between word pairs using dot products between embeddings. We will score performance for a given dataset using the Spearman’s rank correlation coefficient between these dot products and human annotations of similarity.
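Conceptually, the scoring reduces to the few lines below; the provided similarity.py and evaluate.py scripts are the authoritative implementation, and the availability of numpy and scipy here is an assumption.

import numpy as np
from scipy.stats import spearmanr

def score(emb1, emb2, human_ratings):
    # emb1, emb2: (N, D) arrays of paired word vectors; human_ratings: (N,)
    predicted = np.einsum("nd,nd->n", emb1, emb2)  # row-wise dot products
    rho, _ = spearmanr(predicted, human_ratings)   # rank correlation
    return rho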
Output Format For each dataset-model pair, you will be outputting two embedding files, one for the first words and another for the second words in the dataset's word pairs. Each output file should contain one word vector per line, with the word and each embedding dimension separated by a single whitespace.
For example, if we have three isolated word pairs (cat, dog), (book, library), (church, choir) and 2D embeddings, the contents of the first file may look like:
cat 0.8 0.0
book 0.7 0.1
church 0.5 0.5
If the embeddings are not all of the stated dimension, the dot-product scoring will not work! Please submit embeddings only for the words that appear in the test data, in the order in which they appear. If a word repeats, its context-independent representation should repeat as well; its context-dependent representations will likely differ.
The prediction embedding files must be put under the results/ folder. The files must be named {word2vec,bert,gpt2}_{cont,isol}_test_words{1,2}_embeddings.txt. The results/ folder in your submission should contain only these 12 predicted test embedding .txt files.
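A small sketch of writing one of these files in the required format is below; the file name in the usage comment is one of the twelve required names, and the formatting precision is an arbitrary choice.

def write_embeddings(path, words, vectors):
    # One line per word: the word, then its embedding values, all separated
    # by single spaces, in the same order as the test data.
    with open(path, "w", encoding="utf-8") as f:
        for word, vec in zip(words, vectors):
            f.write(word + " " + " ".join(f"{x:.6f}" for x in vec) + "\n")

# write_embeddings("results/word2vec_isol_test_words1_embeddings.txt",
#                  words1, vectors1)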
Evaluation Scripts Also included in the starter repository are two scripts to help with evaluation. For instance, let’s assume we have saved word embeddings for contextualized word pairs to the results folder. We can generate word pair similarity scores with the following command:
python similarity.py \
    --embedding1 results/embedding_file_words1.txt \
    --embedding2 results/embedding_file_words2.txt \
    --words data/contextual_similarity/contextual_dev_x.csv \
    > prediction.csv
To evaluate the word pair similarity scores against human ratings, we can then run:
python evaluate.py \
    --predicted prediction.csv \
    --development data/contextual_similarity/contextual_dev_y.csv
Leaderboard Instructions We will maintain a leaderboard for both datasets. Please refer to the README.md in the code skeleton for details on formatting. Use the evaluation scripts as described above to develop and evaluate your embeddings. The leaderboard uses Spearman’s rank correlation coefficient to evaluate the word similarity scores generated by your embedding, similar to evaluate.py. You should expect a score between zero (embedding captures no relationship between similar words) and one (embedding similarity is perfectly correlated with human-rated word similarity).
Performance Grading Performance will be graded as follows:
min((I_w2v + I_BERT + I_GPT) × 22, 25) + min((C_w2v + C_BERT/2 + C_GPT/2) × 22, 25),
where I_w2v, I_BERT, and I_GPT are the correlations on the context-independent (i.e., isolated) set and C_w2v, C_BERT, and C_GPT are the correlations on the context-dependent set.
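For example, with hypothetical correlations I_w2v = 0.4, I_BERT = 0.5, I_GPT = 0.5, C_w2v = 0.2, C_BERT = 0.6, and C_GPT = 0.6, the performance score would be min(1.4 × 22, 25) + min((0.2 + 0.3 + 0.3) × 22, 25) = min(30.8, 25) + min(17.6, 25) = 25 + 17.6 = 42.6.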
Development Environment and Third-party Tools All allowed third-party tools are specified in the requirements.txt file in the assignment repository. The goal of the assignment is to gain experience with specific methods, and therefore using third-party tools and frameworks beyond those specified is not allowed. You may only import packages that are specified in the requirements.txt file or that come with Python's Standard Library. The version of Python allowed for use is 3.10.x. Do not use an older or newer version. We strongly recommend working within a fresh virtual environment for each assignment. For example, you can create a virtual environment using conda and install the required packages:
conda create -n cs5740a3 python=3.10
conda activate cs5740a3
pip install -r requirements.txt
Leaderboard We will consider the most recent leaderboard result, and it must match what you provide in your report. Please be careful with the results you submit, and validate them as much as possible on the development set before submitting. If your results go down, they go down. This aims to approximate testing in the wild (and in good research). The leaderboard refresh schedule is: every 48 hours at 8pm, then eight and four hours before the deadline. Each refresh will use what your repository contains at that point. Because our scripts take time to run, the exact time we pull your repository might be a bit after the refresh time. So please avoid pushing results that you do not wish to submit.
Submission, Grading, and Writeup Guidelines Your submission on Gradescope is a writeup in PDF format. The writeup must follow the template provided. Do not modify, add, or remove section, subsection, and paragraph headers. Do not modify the spacing and margins. The writeup must include at the top of the first page: the student's name, NetID, and the URL of the GitHub repository. We have access to your repository, and will look at it. Your repository must contain the code in a form that allows it to be run from the command line (i.e., Jupyter notebooks are not accepted).
The following factors will be considered: your technical solution, your development and learning methodology, and the quality of your code. If this assignment includes a leaderboard, we will also consider your performance on the leaderboard. Our main focus in grading is the quality of your empirical work and implementation, not fractional differences on the leaderboard. We value solid empirical work, well-written reports, and well-documented implementations. Of course, we do consider your performance as well. The assignment details how a portion of your grade is calculated based on your empirical performance.
Some general guidelines (not only specific to this assignment) to consider when writing your report and submission:
. Your code must be in a runnable form. We must be able to run your code from a vanilla Python command-line interpreter. You may assume the allowed libraries are installed. Make sure to document your code properly.
. Your submitted code must include a README.md file with execution instructions.
. Please use tables and plots to report results. If you embed plots, make sure they are high resolution so we can zoom in and see the details. However, they must be readable to the naked eye (i.e., without zooming in). Specify exactly what the numbers and axes mean (e.g., F1, precision, etc.).
. It should be made clear what data is used for each result computed.
. Please support all your claims with empirical results.
. All the analysis must be performed on the development data. It is OK to use tuning data. Only the final results of your best models should be reported on the test data.
. All features and key decisions must be ablated and analyzed.
. All analysis must be accompanied with examples and error analysis.
. Analysis of major parameters (e.g., embedding size, amount of data) must include sensitivity analysis. Plots are a great way to present sensitivity analysis for numerical hyperparameters, but tables sometimes work better. Think of the best way to present your data.
. If you are asked to experiment with multiple models and multiple tasks, you must experiment and report on all combinations. It should be clear what results come from which model and task.
. Clearly specify the conclusions from your experiments. This includes what can be learned about the tasks, models, data, and algorithms.
. Make figures clear in isolation using informative captions.