"Search In Canvas, undo extra time for Search reply to review?\n",
"\n",
"Search type annotation bug with dictionary.get in sorted key using mypy.\n",
"\n",
"Search require students to write out math expressions for test cases.\n",
"\n",
"Search Document how %%ipytest causes nb_mypy line numbers to be off by one. You need to look at the line below the documented one. Also, mypy can get confused by line comments?\n",
"\n",
"Search Remove or inline the mypy type hints cheat sheet as there are unnecessary things like capitalized types."
]
},
{
"cell_type": "markdown",
"id": "26e9ec12-815a-4073-8891-cb6888c41cdb",
...
...
%% Cell type:markdown id:e3878ca3 tags:
In this assessment, you'll implement a basic search engine by defining your own Python classes. A **search engine** is an algorithm that takes a query and retrieves the most relevant documents for that query. In order to identify the most relevant documents, our search engine will use **term frequency–inverse document frequency** ([tf–idf](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)), an information statistic for determining the relevance of a term to each document from a corpus consisting of many documents.
The **tf–idf statistic** is the product of two values: term frequency and inverse document frequency. **Term frequency** measures how often a term appears in a **document** (such as a single Wikipedia page). If we were to use only the term frequency to determine the relevance of a term to each document, then our search results might not be helpful, since most documents contain many common words such as "the" or "a". To downweight these common terms, the **inverse document frequency** measures how rare a term is across the **corpus** of all documents: terms that appear in fewer documents receive a higher value.
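Putting the two together: for a term $t$ and a document $d$ in a corpus of $N$ documents,

$$
\operatorname{tfidf}(t, d) = \operatorname{tf}(t, d) \times \operatorname{idf}(t), \qquad \operatorname{idf}(t) = \log\frac{N}{\operatorname{df}(t)}
$$

where $\operatorname{tf}(t, d)$ is the count of $t$ in $d$ divided by the total number of words in $d$, and $\operatorname{df}(t)$ is the number of documents in the corpus that contain $t$. These are exactly the quantities defined in the tasks below.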
Students are expected to follow Washington state law on the [Student Conduct Code for the University of Washington](https://www.washington.edu/admin/rules/policies/WAC/478-121TOC.html). In this course, students must:
- Indicate on your submission any assistance received, including materials distributed in this course.
- Not receive, generate, or otherwise acquire any substantial portion of, or walkthrough for, an assessment.
- Not aid, assist, attempt, or tolerate prohibited academic conduct in others.
Update the following code cell to include your name and list your sources. If you used any kind of computer technology to help prepare your assessment submission, include the queries and/or prompts. Submitted work that is not consistent with sources may be subject to the student conduct process.
Write and test a `Document` class in the code cell below that can be used to represent the text in a web page and includes methods to compute term frequency (but not document frequency, since that would require access to all the documents in the corpus).
The `Document` class should include:
1. An initializer `__init__` that takes a `str` path and initializes the document data based on the text in the specified file. Assume that the file exists, but that it could be empty. In order to implement `term_frequency` later, we'll need to precompute and save the term frequency for each term in the document: in the initializer, construct a field holding a dictionary that maps each `str` term to its `float` term frequency. Term frequency is defined as *the count of the given term* divided by *the total number of words in the text*.
> Consider the term frequencies for this document containing 4 total words: "the cutest cutest dog".
>
> - "the" appears 1 time out of 4 total words, so its term frequency is 0.25.
> - "cutest" appears 2 times out of 4 total words, so its term frequency is 0.5.
> - "dog" appears 1 time out of 4 total words, so its term frequency is 0.25.
When constructing this dictionary, call the `clean` function to convert each input token to lowercase and ignore non-letter characters so that "corgi", "CoRgi", and "corgi!!" are all treated as the same string "corgi" by the search algorithm. (See the sketch after this list for one way this computation could look.)
1. A method `term_frequency` that takes a given `str` term and returns its term frequency by looking it up in the precomputed dictionary. Remember to clean the term before looking it up so that it matches the keys in the dictionary. If the term does not occur, return 0.
1. A method `get_path` that returns the `str` path of the file that this document represents.
1. A method `get_words` that returns a `set` of the unique, cleaned words in this document.
1. A method `__repr__` that returns a string representation of this document in the format `Document('{path}')` (with literal single quotes in the output) where `{path}` is the path to the document from the initializer. The `__repr__` method is called when Jupyter Notebook needs to display a `Document` as output, so we should be able to copy the string contents into a new code cell and immediately run it to create a new `Document`.
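Here is a minimal sketch of the term-frequency computation, assuming a `clean` function along these lines (the provided `clean` may differ in its details):

```python
import re
from collections import Counter

def clean(token: str) -> str:
    # Stand-in for the provided clean function: lowercase the token
    # and drop any non-letter characters.
    return re.sub(r"[^a-z]", "", token.lower())

text = "the cutest cutest dog"
words = [clean(token) for token in text.split()]
frequencies = {term: count / len(words) for term, count in Counter(words).items()}
print(frequencies)  # {'the': 0.25, 'cutest': 0.5, 'dog': 0.25}
```

Your initializer would store a dictionary like `frequencies` as a field for `term_frequency` to look up later.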
**For each of the 4 methods (excluding the initializer) in the `Document` class, write a testing function that contains at least 3 `pytest`-style assertions based on your own testing corpus.** As always, your test cases should span the domain and range. Documentation strings are optional for testing functions.
We've provided some example corpuses in the `doggos` directory and the `small_wiki` directory. For this task, create your own testing corpus by creating a **New Folder** and adding your own text files to it.
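For instance, if your testing corpus contained a hypothetical file `my_corpus/doggos.txt` with the text `the cutest cutest dog`, a testing function for `term_frequency` might look like this sketch:

```python
import math

def test_term_frequency() -> None:
    doc = Document("my_corpus/doggos.txt")  # hypothetical test file
    assert math.isclose(doc.term_frequency("cutest"), 0.5)
    assert math.isclose(doc.term_frequency("DOG!!"), 0.25)  # cleaned before lookup
    assert doc.term_frequency("corgi") == 0  # absent terms return 0
```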
> Be sure to exhaustively test your `Document` class before moving on: bugs in `Document` will make implementing the following `SearchEngine` task much more difficult.
Write and test a `SearchEngine` class in the code cell below that represents a corpus of `Document` objects and includes methods to compute the tf–idf statistic between a given query and every document in the corpus. The `SearchEngine` begins by constructing an **inverted index** that associates each term in the corpus to the list of `Document` objects that contain the term.
To iterate over all the files in a directory, call `os.listdir` to list all the file names, then join the directory to each file name with `os.path.join`. The example below prints only the `.txt` files in the `doggos` directory.
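A minimal sketch of that loop:

```python
import os

directory = "doggos"
for file_name in os.listdir(directory):
    if file_name.endswith(".txt"):
        # Join the directory and file name, e.g., "doggos/doc1.txt".
        print(os.path.join(directory, file_name))
```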
1. An initializer that takes a `str` path to a directory such as `"small_wiki"` and a `str` file extension and constructs an inverted index from the files in the specified directory matching the given extension. By default, the extension should be `".txt"`. Assume the string represents a valid directory, and that the directory contains only valid files. Do not recreate any behavior that is already done in the `Document` class—call the `get_words()` method! Create at most one `Document` per file.
1. A method `_calculate_idf` that takes a `str` term and returns the inverse document frequency of that term. If the term is not in the corpus, return 0. Inverse document frequency is defined by calling `math.log` on *the total number of documents in the corpus* divided by *the number of documents containing the given term*. (See the sketch after this list.)
1. A method `__repr__` that returns a string representation of this search engine in the format `SearchEngine('{path}')` (with literal single quotes in the output) where `{path}` is the directory path from the initializer. The `__repr__` method is called when Jupyter Notebook needs to display a `SearchEngine` as output, so we should be able to copy the string contents into a new code cell and immediately run it to create a new `SearchEngine`.
1. A method `search` that takes a `str` **query** consisting of one or more terms and returns a `list` of relevant document paths that match at least one of the cleaned terms, sorted by descending tf–idf statistic: the product of the term frequency and inverse document frequency. If there are no matching documents, return an empty list.
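Here is a sketch of `_calculate_idf`, assuming hypothetical field names `self._documents` (the list of all `Document` objects) and `self._index` (the inverted index):

```python
import math

class SearchEngine:
    ...

    def _calculate_idf(self, term: str) -> float:
        # Terms absent from the corpus have an inverse document frequency of 0.
        if term not in self._index:
            return 0
        return math.log(len(self._documents) / len(self._index[term]))
```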
**For each of the 3 methods (excluding the initializer) in the `SearchEngine` class, write a testing function that contains at least 3 `pytest`-style assertions based on your own testing corpus**, except for `SearchEngine.__repr__`, which may use the given corpuses. Documentation strings are optional for testing functions.
We recommend the following iterative software development approach to implement the `search` method.
1. Write code to handle queries that contain only a single term by collecting all the documents that contain the given term, computing the tf–idf statistic for each document, and returning the list of document paths sorted by descending tf–idf statistic.
1. Write tests to ensure that your program works on single-term queries.
1. Write code to handle queries that contain more than one term by returning all the documents that match any of the terms in the query, sorted by descending tf–idf statistic (see the sorting sketch after these steps). The tf–idf statistic for a document that matches more than one term is defined as the sum of its constituent single-term tf–idf statistics.
1. Write tests to ensure that your program works on multi-term queries.
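The descending sort itself can be done with the built-in `sorted` and a key function. A sketch with hard-coded scores (the values from the walkthrough below):

```python
# scores maps each matching document path to its summed tf-idf statistic.
scores = {"doggos/doc1.txt": 0.081, "doggos/doc3.txt": 0.501}
ranked = sorted(scores, key=lambda path: scores[path], reverse=True)
print(ranked)  # ['doggos/doc3.txt', 'doggos/doc1.txt']
```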
Here's a walkthrough of the `search` method from beginning to end. Say we have a corpus in a directory called `"doggos"` containing 3 documents with the following contents:
- `doggos/doc1.txt` with the text `Dogs are the greatest pets.`
- `doggos/doc2.txt` with the text `Cats seem pretty okay`
- `doggos/doc3.txt` with the text `I love dogs!`
The initializer should construct the following inverted index.
```python
{"dogs":[doc1,doc3],
"are":[doc1],
"the":[doc1],
"greatest":[doc1],
"pets":[doc1],
"cats":[doc2],
"seem":[doc2],
"pretty":[doc2],
"okay":[doc2],
"i":[doc3],
"love":[doc3]}
```
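One way the initializer could build such an index, assuming a hypothetical `documents` list:

```python
documents = [Document("doggos/doc1.txt"),
             Document("doggos/doc2.txt"),
             Document("doggos/doc3.txt")]

# Map each cleaned word to the list of documents that contain it.
index: dict[str, list[Document]] = {}
for document in documents:
    for word in document.get_words():
        if word not in index:
            index[word] = []
        index[word].append(document)
```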
Searching this corpus for the multi-term query `"love dogs"` should return a list `["doggos/doc3.txt", "doggos/doc1.txt"]` by:
1. Finding all documents that match at least one query term. The word `"love"` is found in `doc3` while the word `"dogs"` is found in `doc1` and `doc3`.
1. Computing the tf–idf statistic for each matching document. For the multi-word query `"love dogs"`, each document's statistic is the sum of the tf–idf statistics for `"love"` and `"dogs"` individually.
   - For `doc1`, the sum is 0 + 0.081 = 0.081. The tf–idf statistic for `"love"` is 0 because the term does not appear in `doc1`.
   - For `doc3`, the sum is 0.366 + 0.135 = 0.501.
1. Returning the matching document paths sorted by descending tf–idf statistic.
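You can reproduce these numbers directly; `math.log` computes the natural logarithm by default:

```python
import math

print(round(1 / 5 * math.log(3 / 2), 3))  # "dogs" in doc1: 0.081
print(round(1 / 3 * math.log(3 / 1), 3))  # "love" in doc3: 0.366
print(round(1 / 3 * math.log(3 / 2), 3))  # "dogs" in doc3: 0.135
```

Here, `doc1` has 5 words, one of which is `"dogs"`; `doc3` has 3 words; and `"dogs"` appears in 2 of the 3 documents while `"love"` appears in 1.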
After completing your `SearchEngine`, run the following cell to search our small Wikipedia corpus for the query "data". Try some other search queries too!