%% Cell type:markdown id:26e9ec12-815a-4073-8891-cb6888c41cdb tags:
# Search
In this assessment, you'll implement a basic search engine by defining your own Python classes. A **search engine** is an algorithm that takes a query and retrieves the most relevant documents for that query. In order to identify the most relevant documents, our search engine will use **term frequency–inverse document frequency** ([tf–idf](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)), a statistic that measures the relevance of a term to each document in a corpus consisting of many documents.
The **tf–idf statistic** is the product of two values: term frequency and inverse document frequency. **Term frequency** computes the number of times that a term appears in a **document** (such as a single Wikipedia page). If we were to use only the term frequency in determining the relevance of a term to each document, then our search results might not be helpful since most documents contain many common words such as "the" or "a". In order to downweight these common terms, the **document frequency** computes the number of documents across the entire **corpus** that contain the term; taking its inverse downweights terms that appear in many documents.
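To make these definitions concrete, here is a minimal sketch of both statistics as plain functions. The toy corpus and the function names `tf` and `idf` are illustrative only, not part of the assessment:
```python
import math

# Toy corpus: each "document" is a list of words.
corpus = [
    "dogs are the greatest pets".split(),
    "cats seem pretty okay".split(),
    "i love dogs".split(),
]

def tf(term: str, document: list[str]) -> float:
    # Count of the term divided by the total number of words in the document.
    return document.count(term) / len(document)

def idf(term: str, corpus: list[list[str]]) -> float:
    # Log of (total documents / number of documents containing the term).
    matches = sum(1 for document in corpus if term in document)
    return math.log(len(corpus) / matches) if matches else 0.0

print(tf("dogs", corpus[0]) * idf("dogs", corpus))  # tf-idf of "dogs" in doc 1: ~0.081
```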
%% Cell type:code id:5ed02adc-9d59-4e97-bf14-ef02865ea17a tags:
``` python
!pip install -q nb_mypy pytest ipytest
%reload_ext nb_mypy
%nb_mypy mypy-options --strict
```
%% Cell type:code id:96ac5cd5-dd9f-4c73-81f2-e2beb91c8952 tags:
``` python
import os
import math
import re
import pytest
import ipytest
ipytest.autoconfig()

def clean(token: str, pattern: re.Pattern[str] = re.compile(r"\W+")) -> str:
    """
    Returns all the characters in the token lowercased and without matches to the given pattern.

    >>> clean("Hello!")
    'hello'
    """
    return pattern.sub("", token.lower())
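
# Note that clean("corgi"), clean("CoRgi"), and clean("corgi!!") all return
# 'corgi', so the search engine treats them as the same term.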
```
%% Cell type:markdown id:df9ac9b7-af68-4a0e-930b-1d8b759e9978 tags:
## Collaboration and Conduct
Students are expected to follow Washington state law on the [Student Conduct Code for the University of Washington](https://www.washington.edu/admin/rules/policies/WAC/478-121TOC.html). In this course, students must:
- Indicate on your submission any assistance received, including materials distributed in this course.
- Not receive, generate, or otherwise acquire any substantial portion of, or walkthrough for, an assessment.
- Not aid, assist, attempt, or tolerate prohibited academic conduct in others.
Update the following code cell to include your name and list your sources. If you used any kind of computer technology to help prepare your assessment submission, include the queries and/or prompts. Submitted work that is not consistent with sources may be subject to the student conduct process.
%% Cell type:code id:67f60799-2496-4886-bbbb-65f6d29be57d tags:
``` python
your_name = ""
sources = [
    ...
]
assert your_name != "", "your_name cannot be empty"
assert ... not in sources, "sources should not include the placeholder ellipsis"
assert len(sources) >= 2, "must include at least 2 sources, inclusive of lectures and sections"
```
%% Cell type:markdown id:30f3902f-401a-4637-9b30-5d6fb050f40e tags:
## Task: `Document`
Write and test a `Document` class in the code cell below that can be used to represent the text in a web page and includes methods to compute term frequency (but not document frequency, since that would require access to all the documents in the corpus).
The `Document` class should include:
1. An initializer `__init__` that takes a `str` path and initializes the document data based on the text in the specified file. Assume that the file exists, but that it could be empty. In order to implement `term_frequency` later, we'll need to precompute and save the term frequency for each term in the document in the initializer as a field by constructing a dictionary that maps each `str` term to its `float` term frequency. Term frequency is defined as *the count of the given term* divided by *the total number of words in the text*. (A small sketch of this computation appears after this task description.)
> Consider the term frequencies for this document containing 4 total words: "the cutest cutest dog".
>
> - "the" appears 1 time out of 4 total words, so its term frequency is 0.25.
> - "cutest" appears 2 times out of 4 total words, so its term frequency is 0.5.
> - "dog" appears 1 time out of 4 total words, so its term frequency is 0.25.
When constructing this dictionary, call the `clean` function to convert the input token to lowercase and ignore non-letter characters so that "corgi", "CoRgi", and "corgi!!" are all considered the same string "corgi" to the search algorithm.
1. A method `term_frequency` that takes a given `str` term and returns its term frequency by looking it up in the precomputed dictionary. Remember to normalize the term before looking it up to find the corresponding match. If the term does not occur, return 0.
1. A method `get_path` that returns the `str` path of the file that this document represents.
1. A method `get_words` that returns a `set` of the unique, cleaned words in this document.
1. A method `__repr__` that returns a string representation of this document in the format `Document('{path}')` (with literal single quotes in the output) where `{path}` is the path to the document from the initializer. The `__repr__` method is called when Jupyter Notebook needs to display a `Document` as output, so we should be able to copy the string contents into a new code cell and immediately run it to create a new `Document`.
**For each of the 4 methods (excluding the initializer) in the `Document` class, write a testing function that contains at least 3 `pytest`-style assertions based on your own testing corpus**. As always, your test cases should cover the domain and range. Documentation strings are optional for testing functions.
We've provided some example corpuses in the `doggos` directory and the `small_wiki` directory. For this task, create your own testing corpus by creating a **New Folder** and adding your own text files to it.
> Be sure to exhaustively test your `Document` class before moving on: bugs in `Document` will make implementing the following `SearchEngine` task much more difficult.
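Here's a minimal sketch of just the term-frequency computation from step 1, using a hard-coded string in place of a file's text. The names `tokens` and `term_frequencies` are illustrative, and your initializer will need to read the text from the given path instead:
```python
from collections import Counter

tokens = [clean(token) for token in "the cutest cutest dog".split()]
term_frequencies = {term: count / len(tokens) for term, count in Counter(tokens).items()}
print(term_frequencies)  # {'the': 0.25, 'cutest': 0.5, 'dog': 0.25}
```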
%% Cell type:code id:d7e58631-86a0-4dde-a43c-e8bc0cd9e6fa tags:
``` python
%%ipytest

class Document:
    ...


class TestDocument:
    doc1 = Document("doggos/doc1.txt")
    euro = Document("small_wiki/Euro - Wikipedia.html")
    ...

    def test_term_frequency(self) -> None:
        assert self.doc1.term_frequency("dogs") == pytest.approx(1 / 5)
        assert self.euro.term_frequency("Euro") == pytest.approx(0.0086340569495348)
        ...

    def test_get_words(self) -> None:
        assert self.doc1.get_words() == set("dogs are the greatest pets".split())
        assert set(w for w in self.euro.get_words() if len(w) == 1) == set(
            "0123456789acefghijklmnopqrstuvxyz"  # All one-letter words in Euro
        )
        ...

    ...
```
%% Cell type:markdown id:793ae51f-db9c-4859-97a2-a58267ec4627 tags:
## Task: `SearchEngine`
Write and test a `SearchEngine` class in the code cell below that represents a corpus of `Document` objects and includes methods to compute the tf–idf statistic between a given query and every document in the corpus. The `SearchEngine` begins by constructing an **inverted index** that associates each term in the corpus to the list of `Document` objects that contain the term.
To iterate over all the files in a directory, call `os.listdir` to list all the file names and join the directory to the filename with `os.path.join`. The example below will print only the `.txt` files in the `doggos` directory.
%% Cell type:code id:9893aeb0-1048-4527-851f-78b77df8c0c1 tags:
``` python
path = "doggos"
extension = ".txt"
for filename in os.listdir(path):
    if filename.endswith(extension):
        print(os.path.join(path, filename))
```
%% Cell type:markdown id:69b7ff12-8c72-49bf-9e5f-45024044340f tags:
The `SearchEngine` class should include:
1. An initializer that takes a `str` path to a directory such as `"small_wiki"` and a `str` file extension and constructs an inverted index from the files in the specified directory matching the given extension. By default, the extension should be `".txt"`. Assume the string represents a valid directory, and that the directory contains only valid files. Do not recreate any behavior that is already done in the `Document` class—call the `get_words()` method! Create at most one `Document` per file.
1. A method `_calculate_idf` that takes a `str` term and returns the inverse document frequency of that term. If the term is not in the corpus, return 0. Inverse document frequency is defined by calling `math.log` on *the total number of documents in the corpus* divided by *the number of documents containing the given term* (see the sketch after this list).
1. A method `__repr__` that returns a string representation of this search engine in the format `SearchEngine('{path}')` (with literal single quotes in the output) where `{path}` is the directory path from the initializer. The `__repr__` method is called when Jupyter Notebook needs to display a `SearchEngine` as output, so we should be able to copy the string contents into a new code cell and immediately run it to create a new `SearchEngine`.
1. A method `search` that takes a `str` **query** consisting of one or more terms and returns a `list` of relevant document paths that match at least one of the cleaned terms, sorted by descending tf–idf statistic: the product of the term frequency and inverse document frequency. If there are no matching documents, return an empty list.
**For each of the 3 methods (excluding the initializer) in the `SearchEngine` class, write a testing function that contains at least 3 `pytest`-style assertions based on your own testing corpus** except for `SearchEngine.__repr__`, which may use the given corpuses. Documentation strings are optional for testing functions.
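For reference, here's a hedged sketch of the inverse document frequency computation from step 2, written as a standalone function; `inverted_index` and `num_documents` stand in for whatever fields your initializer actually stores:
```python
def calculate_idf(term: str, inverted_index: dict[str, list[Document]],
                  num_documents: int) -> float:
    if term not in inverted_index:
        return 0.0  # term does not appear anywhere in the corpus
    # Log of (total documents / number of documents containing the term).
    return math.log(num_documents / len(inverted_index[term]))
```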
%% Cell type:code id:ac19da1b-da24-46da-add8-7065e09b5789 tags:
``` python
%%ipytest

class SearchEngine:
    ...


class TestSearchEngine:
    doggos = SearchEngine("doggos")
    small_wiki = SearchEngine("small_wiki", ".html")
    ...

    def test_search(self) -> None:
        assert self.doggos.search("love") == ["doggos/doc3.txt"]
        assert self.doggos.search("dogs") == ["doggos/doc3.txt", "doggos/doc1.txt"]
        assert self.doggos.search("cats") == ["doggos/doc2.txt"]
        assert self.doggos.search("love dogs") == ["doggos/doc3.txt", "doggos/doc1.txt"]
        assert self.small_wiki.search("data")[:10] == [
            "small_wiki/Internet privacy - Wikipedia.html",
            "small_wiki/Machine learning - Wikipedia.html",
            "small_wiki/Bloomberg L.P. - Wikipedia.html",
            "small_wiki/Waze - Wikipedia.html",
            "small_wiki/Digital object identifier - Wikipedia.html",
            "small_wiki/Chief financial officer - Wikipedia.html",
            "small_wiki/UNCF - Wikipedia.html",
            "small_wiki/Jackson 5 Christmas Album - Wikipedia.html",
            "small_wiki/KING-FM - Wikipedia.html",
            "small_wiki/The News-Times - Wikipedia.html",
        ]
        ...

    ...
```
%% Cell type:markdown id:b63229ee-a94e-4b2d-ba34-4a87cccb2eec tags:
We recommend the following iterative software development approach to implement the `search` method.
1. Write code to handle queries that contain only a single term by collecting all the documents that contain the given term, computing the tf–idf statistic for each document, and returning the list of document paths sorted by descending tf–idf statistic.
1. Write tests to ensure that your program works on single-term queries.
1. Write code to handle queries that contain more than one term by returning all the documents that match any of the terms in the query sorted by descending tf–idf statistic. The tf–idf statistic for a document that matches more than one term is defined as the sum of its constituent single-term tf–idf statistics.
1. Write tests to ensure that your program works on multi-term queries.
Here's a walkthrough of the `search` function from beginning to end. Say we have a corpus in a directory called `"doggos"` containing 3 documents with the following contents:
- `doggos/doc1.txt` with the text `Dogs are the greatest pets.`
- `doggos/doc2.txt` with the text `Cats seem pretty okay`
- `doggos/doc3.txt` with the text `I love dogs!`
The initializer should construct the following inverted index.
```python
{"dogs": [doc1, doc3],
"are": [doc1],
"the": [doc1],
"greatest": [doc1],
"pets": [doc1],
"cats": [doc2],
"seem": [doc2],
"pretty": [doc2],
"okay": [doc2],
"i": [doc3],
"love": [doc3]}
```
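One way the initializer might build such an index (a sketch, where `documents` is a hypothetical name for the list of `Document` objects created from the directory):
```python
# documents: the Document objects built in the initializer (hypothetical name).
inverted_index: dict[str, list[Document]] = {}
for document in documents:
    for word in document.get_words():
        if word not in inverted_index:
            inverted_index[word] = []
        inverted_index[word].append(document)
```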
Searching this corpus for the multi-term query `"love dogs"` should return a list `["doggos/doc3.txt", "doggos/doc1.txt"]` by:
1. Finding all documents that match at least one query term. The word `"love"` is found in `doc3`, while the word `"dogs"` is found in `doc1` and `doc3`.
1. Computing the tf–idf statistic for each matching document. For each matching document, the tf–idf statistic for the multi-word query `"love dogs"` is the sum of the tf–idf statistics for `"love"` and `"dogs"` individually.
   1. For `doc1`, the sum is 0 + 0.081 = 0.081. The tf–idf statistic for `"love"` is 0 because the term is not in `doc1`.
   1. For `doc3`, the sum is 0.366 + 0.135 = 0.501.
1. Returning the matching document paths sorted by descending tf–idf statistic.
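These numbers can be reproduced directly with `math.log`, which computes the natural logarithm:
```python
print((1 / 5) * math.log(3 / 2))                          # "dogs" in doc1: ~0.081
print((1 / 3) * math.log(3) + (1 / 3) * math.log(3 / 2))  # doc3: 0.366 + 0.135 = ~0.501
```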
After completing your `SearchEngine`, run the following cell to search our small Wikipedia corpus for the query "data". Try some other search queries too!
%% Cell type:code id:0555e6ef-ea88-427b-957d-8cc0bfcc6e0b tags:
``` python
SearchEngine("small_wiki", ".html").search("data")
```