|
|
# MapReduce
|
|
|
**Due: Wed Jan 13, 9:00pm**
|
|
|
|
|
|
*Modified from the [MIT 6.824 Labs](http://css.csail.mit.edu/6.824/2014/labs/lab-1.html)*
|
|
|
|
|
|
## Introduction
|
|
|
In this lab you'll build a MapReduce library as a way to learn the Go programming language and as a way to learn about fault tolerance in distributed systems. In the first part you will write a simple MapReduce program. In the second part you will write a Master that hands out jobs to workers, and handles failures of workers. The interface to the library and the approach to fault tolerance is similar to the one described in the original [MapReduce paper](http://research.google.com/archive/mapreduce-osdi04.pdf).
|
|
|
|
|
|
### Collaboration Policy
|
|
|
You must write all the code you hand in for 452, except for code that we give you as part of the assignment. You are not allowed to look at anyone else's solution, and you are not allowed to look at code from previous years. You may discuss the assignments with other students, but you may not look at or copy each others' code. Please do not publish your code or make it available to future 452 students -- for example, please do not make your code visible on github.
|
|
|
|
|
|
Undergrads taking 452 may do the labs with a partner. Masters students should complete the labs individually.
|
|
|
|
|
|
### Software
|
|
|
You'll implement this lab (and all the labs) in [Go 1.3](http://www.golang.org/) (the latest released version). This version is available as packages for Linux and Mac OS X through [MacPorts](https://www.macports.org/). You can also download binaries from the Go web site.
|
|
|
|
|
|
The Go web site contains lots of tutorial information which you may want to look at. We supply you with a non-distributed MapReduce implementation, and a partial distributed implementation (just the boring bits).
|
|
|
|
|
|
You'll fetch the initial lab software with git (a version control system). To learn more about git, take a look at the [git user's manual](http://www.kernel.org/pub/software/scm/git/docs/user-manual.html), or, if you are already familiar with other version control systems, you may find this [CS-oriented overview of git](http://eagain.net/articles/git-for-computer-scientists/) useful.
|
|
|
|
|
|
We'll be using our new department git server, which is hosted at `gitlab.cs.washington.edu`. This is basically a local deployment of GitHub, so if you have used GitHub before, you are all set! Before downloading the software, you'll need to sign in and upload your ssh key. A handy tutorial on how to generate and upload your ssh key is available from [GitHub](https://help.github.com/articles/generating-ssh-keys/).
|
|
|
|
|
|
```
|
|
|
$ git clone git@gitlab.cs.washington.edu:iyzhang/452-labs.git 452-labs
|
|
|
$ cd 452-labs
|
|
|
$ ls
|
|
|
src
|
|
|
$
|
|
|
```
|
|
|
Git allows you to keep track of the changes you make to the code. For example, if you want to checkpoint your progress, you can commit your changes by running:
|
|
|
|
|
|
```
|
|
|
$ git commit -am 'partial solution to lab 1'
|
|
|
$
|
|
|
```
|
|
|
|
|
|
If you would like to host your code on gitlab as well, the easiest way is to fork the 452-labs repo and make your edits to your fork. [Instructions on how to fork a repo.](https://help.github.com/articles/fork-a-repo/) If you do fork the repo, do not forget to keep your fork synced! We may be adding hints to the code throughout the quarter.
|
|
|
|
|
|
### Getting started
|
|
|
|
|
|
There is an input file `kjv12.txt` in `~/452-labs/src/main`, which was downloaded from [here](https://web.archive.org/web/20130530223318/http://patriot.net/~bmcgin/kjv12.txt). Compile the initial software we provide you and run it with the downloaded input file:
|
|
|
|
|
|
```sh
|
|
|
$ export GOPATH=$HOME/452-labs
|
|
|
$ cd ~/452-labs/src/main
|
|
|
$ go run wc.go master kjv12.txt sequential
|
|
|
# command-line-arguments
|
|
|
./wc.go:11: missing return at end of function
|
|
|
./wc.go:15: missing return at end of function
|
|
|
```
|
|
|
|
|
|
The compiler produces two errors, because the implementation of the `Map` and `Reduce` functions is incomplete.
|
|
|
|
|
|
## Part I: Word count
|
|
|
Modify `Map` and `Reduce` so that `wc.go` reports the number of occurrences of each word in alphabetical order.
|
|
|
|
|
|
```
|
|
|
$ go run wc.go master kjv12.txt sequential
|
|
|
Split kjv12.txt
|
|
|
Split read 4834757
|
|
|
DoMap: read split mrtmp.kjv12.txt-0 966954
|
|
|
DoMap: read split mrtmp.kjv12.txt-1 966953
|
|
|
DoMap: read split mrtmp.kjv12.txt-2 966951
|
|
|
DoMap: read split mrtmp.kjv12.txt-3 966955
|
|
|
DoMap: read split mrtmp.kjv12.txt-4 966944
|
|
|
DoReduce: read mrtmp.kjv12.txt-0-0
|
|
|
DoReduce: read mrtmp.kjv12.txt-1-0
|
|
|
DoReduce: read mrtmp.kjv12.txt-2-0
|
|
|
DoReduce: read mrtmp.kjv12.txt-3-0
|
|
|
DoReduce: read mrtmp.kjv12.txt-4-0
|
|
|
DoReduce: read mrtmp.kjv12.txt-0-1
|
|
|
DoReduce: read mrtmp.kjv12.txt-1-1
|
|
|
DoReduce: read mrtmp.kjv12.txt-2-1
|
|
|
DoReduce: read mrtmp.kjv12.txt-3-1
|
|
|
DoReduce: read mrtmp.kjv12.txt-4-1
|
|
|
DoReduce: read mrtmp.kjv12.txt-0-2
|
|
|
DoReduce: read mrtmp.kjv12.txt-1-2
|
|
|
DoReduce: read mrtmp.kjv12.txt-2-2
|
|
|
DoReduce: read mrtmp.kjv12.txt-3-2
|
|
|
DoReduce: read mrtmp.kjv12.txt-4-2
|
|
|
Merge phase
Merge: read mrtmp.kjv12.txt-res-0
|
|
|
Merge: read mrtmp.kjv12.txt-res-1
|
|
|
Merge: read mrtmp.kjv12.txt-res-2
|
|
|
```
|
|
|
|
|
|
The output will be in the file `mrtmp.kjv12.txt`. Your implementation is correct if the following command produces this top-10 list:
|
|
|
|
|
|
```
|
|
|
$ sort -n -k2 mrtmp.kjv12.txt | tail -10
|
|
|
unto: 8940
|
|
|
he: 9666
|
|
|
shall: 9760
|
|
|
in: 12334
|
|
|
that: 12577
|
|
|
And: 12846
|
|
|
to: 13384
|
|
|
of: 34434
|
|
|
and: 38850
|
|
|
the: 62075
|
|
|
```
|
|
|
To make testing easy for you, run:
|
|
|
|
|
|
```
|
|
|
$ ./test-wc.sh
|
|
|
```
|
|
|
and it will report if your solution is correct or not.
|
|
|
|
|
|
Before you start coding, read Section 2 of the [MapReduce paper](http://research.google.com/archive/mapreduce-osdi04.pdf) and our MapReduce code, which is in `mapreduce.go` in the `mapreduce` package. In particular, you want to read the code of the `RunSingle` function and the functions it calls. This will help you understand what MapReduce does and learn Go by example.
|
|
|
|
|
|
Once you understand this code, implement `Map` and `Reduce` in `wc.go`.
|
|
|
|
|
|
**Hint:** you can use [strings.FieldsFunc](http://golang.org/pkg/strings/#FieldsFunc) to split a string into components. The following function literal will be useful for checking whether a character is a letter:
|
|
|
|
|
|
```go
|
|
|
func(r rune) bool { return !unicode.IsLetter(r) }
|
|
|
```
|
|
|
|
|
|
**Hint:** for the purposes of this exercise, you can consider a word to be any contiguous sequence of letters, as determined by [unicode.IsLetter](http://golang.org/pkg/unicode/#IsLetter). A good read on what strings are in Go is the [Go Blog on strings](http://blog.golang.org/strings).
|
|
|
|
|
|
**Hint:** The [strconv package](http://golang.org/pkg/strconv/) is handy to convert strings to integers etc.
|
|
|
|
|
|
**Hint:** The solution to this part of the lab should take you about 10 lines of code. If your code is much longer than that (say, approaching 100 lines), you might want to discuss your design with a TA.
|
|
|
|
|
|
You can remove the output file and all intermediate files with:
|
|
|
|
|
|
```
|
|
|
$ rm mrtmp.*
|
|
|
```
|
|
|
## Part II: Distributing MapReduce jobs
|
|
|
|
|
|
In this part you will design and implement a master that distributes jobs to a set of workers. We give you the code for the RPC messages (see `common.go` in the `mapreduce` package) and the code for a worker (see `worker.go` in the `mapreduce` package).
|
|
|
|
|
|
Your job is to complete `master.go` in the `mapreduce` package. In particular, the `RunMaster()` function in `master.go` should return only when all of the map and reduce tasks have been executed. This function will be invoked from the `Run()` function in `mapreduce.go`.
|
|
|
|
|
|
The code in `mapreduce.go` already implements the `MapReduce.Register` RPC function for you, and passes the new worker's information to `mr.registerChannel`. You should process new worker registrations by reading from this channel.
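As a sketch of the channel pattern (the channel contents are simulated here; in the lab, the `Register` RPC handler is what sends each new worker's address on `mr.registerChannel`, and that channel never closes):

```go
package main

import "fmt"

// collectRegistrations drains worker addresses from ch into a slice.
// In the lab you would instead read from the channel inside your
// scheduling loop, since registrations keep arriving.
func collectRegistrations(ch chan string) []string {
	var workers []string
	for w := range ch {
		workers = append(workers, w)
	}
	return workers
}

func main() {
	registerChannel := make(chan string)

	// Simulated workers registering themselves.
	go func() {
		registerChannel <- "worker-0"
		registerChannel <- "worker-1"
		close(registerChannel) // for this sketch only
	}()

	fmt.Println(collectRegistrations(registerChannel))
}
```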
|
|
|
|
|
|
Information about the MapReduce job is in the `MapReduce` struct, defined in `mapreduce.go`. Modify the `MapReduce` struct to keep track of any additional state (e.g., the set of available workers), and initialize this additional state in the `InitMapReduce()` function. The master does not need to know which `Map` or `Reduce` functions are being used for the job; the workers will take care of executing the right code for `Map` or `Reduce`.
|
|
|
|
|
|
In Part II, you don't have to worry about failures of workers. You are done with Part II when your implementation passes the first test set in `test_test.go` in the `mapreduce` package.
|
|
|
|
|
|
`test_test.go` uses Go's unit-testing framework. From now on, all exercises (including subsequent labs) will use it, but you can always run the actual programs from the `main` directory. You run unit tests in a package directory as follows:
|
|
|
|
|
|
```
|
|
|
$ go test
|
|
|
```
|
|
|
|
|
|
The master should send RPCs to the workers in parallel so that the workers can work on jobs concurrently. You will find the `go` statement and the [Go RPC documentation](http://golang.org/pkg/net/rpc/) useful for this purpose.
|
|
|
|
|
|
The master may have to wait for a worker to finish before it can hand out more jobs. You may find channels useful for synchronizing the master with threads that are waiting for an RPC reply. Channels are explained in the document on [Concurrency in Go](http://golang.org/doc/effective_go.html#concurrency).
|
|
|
|
|
|
We've given you code that sends RPCs via UNIX-domain sockets, which means that RPCs only work between processes on the same machine. It would be easy to convert the code to use TCP/IP-based RPC instead, so that it could communicate between machines; you'd have to change the first argument in the calls to `Listen()` and `Dial()` to `"tcp"` instead of `"unix"`, and the second argument to a port number like `":5100"`. You would also need a shared distributed file system.
|
|
|
|
|
|
The easiest way to track down bugs is to insert `log.Printf()` statements, collect the output in a file with `go test > out`, and then think about whether the output matches your understanding of how your code should behave. The last step is the most important.
|
|
|
|
|
|
**Hint:** Use a `select` to check for new worker registrations as well as checking for finished workers that need new jobs.
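The hint above can be sketched as follows; the channel names and the two simulated events are hypothetical, but the shape is the one the master's loop needs: one `select` watching both event sources.

```go
package main

import "fmt"

func main() {
	register := make(chan string)
	finished := make(chan string)

	// Simulated events: one new registration, one completed job.
	go func() { register <- "worker-1" }()
	go func() { finished <- "worker-0" }()

	// The master hears about usable workers from two places: brand-new
	// registrations, and workers returning from a completed job.
	for i := 0; i < 2; i++ {
		select {
		case w := <-register:
			fmt.Println("new worker:", w)
		case w := <-finished:
			fmt.Println("worker free again:", w)
		}
	}
}
```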
|
|
|
|
|
|
**Hint:** When designing, think about how you might handle jobs that do not finish and have to be restarted. This will limit how much re-design you will have to do in the next section when you have to handle worker failures.
|
|
|
|
|
|
## Part III: Handling worker failures
|
|
|
|
|
|
In this part you will make the master handle worker failures while computing a job. In MapReduce, handling failures of workers is relatively straightforward, because the workers don't have persistent state. If a worker fails, any RPCs that the master issued to that worker will fail (e.g., due to a timeout). Thus, if the master's RPC to a worker fails, the master should re-assign the failed worker's job to another worker.
|
|
|
|
|
|
An RPC failure doesn't necessarily mean that the worker failed; the worker may just be unreachable but still computing. Thus, it may happen that two workers receive and compute the same job. However, because jobs are idempotent, it doesn't matter if the same job is computed twice---both times it will generate the same output. So, you don't have to do anything special for this case. (Our tests never fail workers in the middle of a job, so you don't even have to worry about several workers writing to the same output file.)
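The re-assignment logic can be sketched like this (everything here is hypothetical scaffolding: `call` stands in for the DoJob RPC, where `false` means the RPC failed, and the worker names are made up):

```go
package main

import "fmt"

// assignWithRetry keeps re-assigning a job until some worker's RPC
// succeeds, and returns the worker that ran it.
func assignWithRetry(workers []string, job int, call func(worker string, job int) bool) string {
	for i := 0; ; i++ {
		w := workers[i%len(workers)]
		if call(w, job) {
			return w
		}
		fmt.Printf("RPC to %s failed; re-assigning job %d\n", w, job)
	}
}

func main() {
	// Hypothetical behavior: "flaky" always fails, "ok" always succeeds.
	call := func(w string, job int) bool { return w != "flaky" }
	w := assignWithRetry([]string{"flaky", "ok"}, 3, call)
	fmt.Printf("job 3 done by %s\n", w) // prints: job 3 done by ok
}
```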
|
|
|
|
|
|
You don't have to handle failures of the master; we will assume it won't fail. Making the master fault-tolerant is more difficult because it keeps persistent state that must be replicated to make the master fault tolerant. Keeping replicated state consistent in the presence of failures is challenging. Much of the later labs are devoted to this challenge.
|
|
|
|
|
|
Your implementation must pass the two remaining test cases in `test_test.go`. The first tests the failure of one worker; the second tests handling of many worker failures. Periodically, the test cases start new workers that the master can use to make forward progress, but these workers fail after handling a few jobs.
|
|
|
|
|
|
**Hint:** The solution to Parts II and III of the lab should be around 60 lines of code.
|
|
|
|
|
|
## Submission Instructions
|
|
|
Make sure that you have done the following:
|
|
|
* COMMENT your code! We should be able to understand what your code is doing.
|
|
|
* Make sure all of your code passes the test cases. Do not modify them, as we will be replacing all `test_test.go` files before grading.
|
|
|
* Add a README.lab1 in 452-labs/src with:
|
|
|
* Your name
|
|
|
* Your partner's name (if you had one)
|
|
|
* How many hours you each spent on the lab.
|
|
|
* A high-level description of your design.
|
|
|
* See [Submission Requirements](submission) for specific format requirements for the file you will upload to the Catalyst dropbox.
|
|