From b56fd137d81b17c29bb36d70fd3cbc872de53889 Mon Sep 17 00:00:00 2001
From: varkon256 <kondavarsha@hotmail.com>
Date: Sun, 25 Apr 2021 20:28:52 -0700
Subject: [PATCH] fixed typos in as4

---
 assignments/hw2-soundRecognition.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/assignments/hw2-soundRecognition.md b/assignments/hw2-soundRecognition.md
index 8541472a..d3700a58 100644
--- a/assignments/hw2-soundRecognition.md
+++ b/assignments/hw2-soundRecognition.md
@@ -11,22 +11,22 @@ due: May 5, 2021
 revised: April 25, 2021
-objective: Teach machines to recognise non-speech sounds that occur around you and visualize these recognised sounds.
+Objective: Teach machines to recognise non-speech sounds that occur around you and visualize these recognised sounds.
 ---
 
 ## Timeline (tentative)
 
-- <strong>in-class check-in</strong> on May 3, 2021.
-  - <strong> final deliverable</strong> in class on May 5, 2021
+- <strong>In-class check-in</strong> on May 3, 2021.
+  - <strong>Final deliverable</strong> in class on May 5, 2021.
 
 In this homework, you will do two things:
 
-1. Use a Jupytor notebook (provided to you on canvas) that contains code to train a machine learning model to recognise sounds and build visualizations to display the recognised sounds. We will post the notebook on canvas, and you are strongly encouraged to host the notebook on Google Collab. login to your UW CSE provided google account, upload the notebook on your drive, and open the notebook in google colab on your browser. [Here is a getting started tutorial on colab](https://colab.research.google.com/notebooks/intro.ipynb#).
-2. caption a video, read related papers, and reflect on your experience captioning these videos. You should submit a caption file in a standard format like wsb or srt. Please review some helpful links in the deliverables section of this page to create a caption file in these standard formats. You are allowed to start with an AI-based service to generate captions, however, you should actively fixe eronious captions if you are using the AI-based service.
+1. Use a Jupyter notebook (provided to you on Canvas) that contains code to train a machine learning model to recognise sounds and build visualizations to display the recognised sounds. We will post the notebook on Canvas, and you are strongly encouraged to host the notebook on Google Colab. Log in to your UW CSE-provided Google account, upload the notebook to your Drive, and open the notebook in Google Colab in your browser. [Here is a getting started tutorial on Colab](https://colab.research.google.com/notebooks/intro.ipynb#).
+2. Caption a video, read related papers, and reflect on your experience captioning these videos. You should submit a caption file in a standard format like SBV or SRT. Please review the helpful links in the deliverables section of this page to create a caption file in these standard formats. You are allowed to start with an AI-based service to generate captions; however, you should actively fix erroneous captions if you use such a service.
 
 ## Deliverables
 
 You will submit the following
-* a completed notebook or a link to the completed notebook with sufficient permissions to the staff to access, run and evaluate it.
-* link to, or the original file of the video you chose to caption.
-* a link to, or the original caption file in a standard format like SBV or SRT. [this blogpost from rev.com](https://www.rev.com/blog/close-caption-file-format-guide-for-youtube-vimeo-netflix-and-more) is a good summary of different formats. once you have captions, making a caption file in one of these standard formats should not be hard. For example, here is a [tutorial on using youtube to add captions to your videos](https://support.google.com/youtube/answer/2734796?hl=en).
+* A completed notebook, or a link to the completed notebook with sufficient permissions for the staff to access, run, and evaluate it.
+* A link to, or the original file of, the video you chose to caption.
+* A link to, or the original file of, your caption file in a standard format like SBV or SRT. [This blogpost from rev.com](https://www.rev.com/blog/close-caption-file-format-guide-for-youtube-vimeo-netflix-and-more) is a good summary of different formats. Once you have captions, making a caption file in one of these standard formats should not be hard. For example, here is a [tutorial on using YouTube to add captions to your videos](https://support.google.com/youtube/answer/2734796?hl=en).
 
 We have allowed maximum flexibility to upload these deliverables to canvas. Please upload multiple files, or use the comments section to submit links. Please indicate how exactly have you submitted the deliverables using the comment functionality of canvas once you submit your work.
\ No newline at end of file
--
GitLab
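
For the captioning deliverable, the patched text asks for a caption file in a standard format like SBV or SRT. As a reference, here is a minimal Python sketch of writing an SRT file by hand, using the standard SRT layout of a sequence number, an `HH:MM:SS,mmm --> HH:MM:SS,mmm` time range, and the caption text separated by blank lines. The caption text, timestamps, and output file name are hypothetical, and this is not code from the assignment notebook.

```python
# Illustrative sketch only: writes a tiny SRT caption file by hand.
# The captions, timestamps, and output file name below are made up.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,500."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

# (start_seconds, end_seconds, caption text) -- hypothetical example data
captions = [
    (0.0, 2.5, "[door knock]"),
    (2.5, 6.0, "Speaker: Come in, the door is open."),
]

with open("captions.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(captions, start=1):
        f.write(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")
```

Caption editors such as YouTube's can typically export an equivalent file directly, so hand-assembling one like this is only needed if you produce the captions outside such a tool.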