h1. A simple Hadoop with Z2 sample

This sample is an adaptation of the classical WordCount sample in the Z2 context. It shows how Hadoop can be used from within Z2 and, in particular, how to write Map/Reduce jobs in that context.

*Note #1:* This sample is made to be run on Linux or Mac OS. It is supposedly possible to run Hadoop on Windows, but we have not been able to adapt the sample yet. A machine with 8 GB of RAM should be sufficient.

*Note #2:* For your convenience, everything in this sample assumes you use Eclipse. Eclipse is of course not a prerequisite for running the software, it just makes everything much more integrated for now. Please have Eclipse ready and the Eclipsoid plug-in installed. See [[How to install Eclipsoid]].

This sample is provided via the repository "z2-samples-hadoop-basic":http://redmine.z2-environment.net/projects/z2-samples/repository/z2-samples-hadoop-basic.

h2. Prerequisites

This sample makes use of the [[Hadoop add-on]], which is based on Cloudera's CDH4 distribution of Hadoop. As client access is version dependent, so is the sample. To simplify this for you, a pre-configured CDH4 distribution is available from this site. Apart from its development-style configuration (i.e. no security), this is the way we prefer to install Hadoop and friends anyway: just one root installation folder, one OS user, one log folder, etc.

Please follow the procedure described here: [[Install prepacked CDH4]].

For use with this sample, it is most convenient if you clone and configure the CDH4 install next to your Eclipse workspace and the sample repository clone.

h2. Setting up the sample

From here on, the sample is run like all samples, that is, following [[How to run a sample]].

Assuming everything (including the z2 core and the CDH4 setup) is under *install* and your workspace is in *install/workspace*, please clone "z2-samples-hadoop-basic":http://redmine.z2-environment.net/projects/z2-samples/repository/z2-samples-hadoop-basic under *install* as well. Either from the command line:

<pre><code class="ruby">
cd install
git clone -b master http://git.z2-environment.net/z2-samples.hadoop-basic
</code></pre>

or from within Eclipse using the Git Repositories view (but make sure the folder ends up right next to your z2-base.core clone).

You should now have an Eclipse workspace and, next to it, *z2-samples.hadoop-basic*, *z2-samples.cdh4-base*, and *z2-base.core*. Import all projects into your workspace.

We assume that you followed the steps in [[Install prepacked CDH4]] and Hadoop is running (we do not need HBase in this case).
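
If you want to double-check that HDFS is reachable before continuing, a quick listing from a shell in the CDH4 install will do. This is only a sketch, assuming the *env.sh* helper of the CDH4 install (also used below); adjust paths to your setup:

<pre><code class="ruby">
. ./env.sh
# should list the HDFS root without connection errors
hadoop fs -ls /
</code></pre>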

h2. Running the sample

h3. Starting Z2

Use the Eclipse launcher or start from the command line. The first time, this will take a short moment. Once Z2 is up, we first want to write a file into the Hadoop file system that we will later split into words so we can count their occurrences.

h3. Loading data

If you want to load a file you already have at hand, use the "copyFromLocal" operation to copy it into */hadoop-wordcount/input*. E.g. if the file is called *myfile.txt*, go into the CDH4 install and run

<pre><code class="ruby">
. ./env.sh
hadoop fs -mkdir /hadoop-wordcount
hadoop fs -copyFromLocal myfile.txt /hadoop-wordcount/input
</code></pre>

(The env.sh call is only required once per shell session.)
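
To verify that the upload worked, you can list the input folder (just a sketch, assuming the paths used above):

<pre><code class="ruby">
hadoop fs -ls /hadoop-wordcount
</code></pre>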

Alternatively, there is a z2Unit test (see [[How to z2Unit]]) that you can invoke to generate some input. As that is interesting in its own right, here is how it is done.

You should have all the projects, in particular *com.zfabrik.samples.hadoop-basic.wordcount*, in your workspace already. Otherwise import them from the repository you cloned previously.

Use Eclipsoid to resolve all required compile dependencies (Alt-R or click on the right Z in the toolbar), if you have not done so already.

Look for the type *WriteWordsFile* (Ctrl+Shift+T).

The method *writeWordsFile* will write a file of 100 million words in lines containing between 1 and 9 words each (but you can change that, of course). Invoke it by right-clicking and choosing "Run as / JUnit test". If you want to play around with the settings, simply change the code, synchronize Z2 (Alt-Y or click on the left Z in the toolbar), and rerun.

The interesting piece about this code is how it connects to HDFS:

<pre><code class="java">
...
@Test
public void writeWordsFile() throws Exception {
    FileSystem fs = FileSystem.get(IComponentsLookup.INSTANCE.lookup(WordCountMRJob.CONFIG, Configuration.class));
    fs.delete(WordCountMRJob.INPUT_PATH, true);
    fs.mkdirs(WordCountMRJob.INPUT_PATH.getParent());
    ...
</code></pre>

Here, the actual connection configuration, one of Hadoop's XML configuration files, is looked up from a Z2 component called "com.zfabrik.samples.hadoop-basic.wordcount/nosql":http://redmine.z2-environment.net/projects/z2-samples/repository/z2-samples-hadoop-basic/revisions/master/show/com.zfabrik.samples.hadoop-basic.wordcount/nosql. The component type for that is defined by the Hadoop integration module *com.zfabrik.hadoop* of the [[Hadoop add-on]].

The purpose of this is to separate the client configuration information from the implementation that uses it. We will see another application of that below.

So now we assume you have the input file uploaded or generated in HDFS, and we turn to a Map/Reduce job that counts the number of occurrences of single words.

h3. Running the WordCount Map/Reduce job

There are two ways of doing that.

The generic, interactive method is to open http://localhost:8080/z_hadoop (credentials z*/z by default) and schedule or run the job *com.zfabrik.samples.hadoop-basic.wordcount/wordcount* with the remote connectivity configuration above. If you choose "schedule", the web page will not wait for the job to complete; otherwise it will wait for the job and keep displaying its progress. Alternatively to watching the job's progress from there, you can go to the YARN resource manager web UI at http://localhost:8088.

Once the job has completed, the results are in HDFS at */hadoop-wordcount/output*. On the shell where CDH4 was installed, run

<pre><code class="ruby">
hadoop fs -cat /hadoop-wordcount/output/*
</code></pre>

To make things more interesting, there is another way to run the job: programmatically, from a z2Unit test. Look for the type "CountWords":http://redmine.z2-environment.net/projects/z2-samples/repository/z2-samples-hadoop-basic/revisions/master/entry/com.zfabrik.samples.hadoop-basic.wordcount/java/src.test/com/zfabrik/samples/hadoop_basic/test/CountWords.java (Ctrl+Shift+T) and choose "Run as / JUnit test". This will wait for the job, log its progress, and finally write its results to the Z2 console.

Here are the relevant code fragments:

<pre><code class="java">
@Test
public void countWords() throws Exception {
    // get the config
    Configuration c = getConfiguration();

    // prepare the fs
    // <taken out>

    // get the job configurator and configure it
    IJobConfigurator jc = IComponentsLookup.INSTANCE.lookup("com.zfabrik.samples.hadoop-basic.wordcount/wordcount", IJobConfigurator.class);
    jc.configure(c);

    // submit the job
    Job j = jc.submit();

    // wait for it to complete and log progress
    // <taken out>
}
</code></pre>

The general principle is the following: when you need to run a Map/Reduce job from your application, which in our experience is actually the typical case, you proceed as follows:

# Do anything you need to prepare before the execution.
# Get the client config.
# Retrieve the "Job Main class" (see [[Hadoop add-on]], "IMapReduceJob":http://www.z2-environment.net/javadoc/com.zfabrik.hadoop!2Fjava/api/com/zfabrik/hadoop/job/IMapReduceJob.html).
# Call configure to retrieve a configured Job object.
# Submit the job.
# If you need to, wait for the job to finish (a sketch of this last step follows below).
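
The waiting and progress logging is elided ("taken out") in the test fragment above. Purely as an illustration of the last step, and not the actual code of the CountWords test, it could be done with the standard Hadoop <code>Job</code> API roughly like this; the class and method names here are made up:

<pre><code class="java">
import java.util.logging.Logger;

import org.apache.hadoop.mapreduce.Job;

// Hypothetical helper (not part of the sample): waits for a submitted job
// and logs map/reduce progress until it has completed.
public class JobWaiter {
    private static final Logger LOG = Logger.getLogger(JobWaiter.class.getName());

    public static void waitAndLog(Job job) throws Exception {
        while (!job.isComplete()) {
            LOG.info(String.format("map %.0f%%, reduce %.0f%%",
                job.mapProgress() * 100, job.reduceProgress() * 100));
            Thread.sleep(5000);
        }
        if (!job.isSuccessful()) {
            throw new IllegalStateException("Job " + job.getJobName() + " failed");
        }
    }
}
</code></pre>

With something like this, the last step of the list above becomes a single call right after <code>jc.submit()</code>.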

Let's have a look at the job's main class.

h2. The job implementation

The WordCount M/R job is implemented in "WordCountMRJob":http://redmine.z2-environment.net/projects/z2-samples/repository/z2-samples-hadoop-basic/revisions/master/entry/com.zfabrik.samples.hadoop-basic.wordcount/java/src.impl/com/zfabrik/samples/hadoop_basic/impl/WordCountMRJob.java.

Here are the relevant code fragments:

In its <code>configure</code> method, the job sets all the relevant job config given a client configuration. This is pretty much as always in Hadoop, with the difference that you do not specify task classes (map, combine, reduce). Instead, Z2 will set those to generic implementations that make sure the actual implementations run in the right context.

<pre><code class="java">
public Job configure(Configuration configuration) throws Exception {
    // create the job instance
    Job job = Job.getInstance(configuration, name);

    // configure all the input and output types
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    // and where stuff is coming from and where it is going in the end
    FileInputFormat.setInputPaths(job, INPUT_PATH);
    FileOutputFormat.setOutputPath(job, OUTPUT_PATH);

    // if the output already exists, delete it
    FileSystem fs = FileSystem.get(configuration);
    fs.delete(WordCountMRJob.OUTPUT_PATH, true);

    // do not set mapper, reducer, or combiner classes, as that is done by the Hadoop integration
    return job;
}
</code></pre>

All the rest is really just plumbing.

<pre><code class="java">
/*
 * During the life cycle of the job, these questions will be asked:
 */
@Override
public Reducer<Text, IntWritable, Text, IntWritable> getCombiner(Configuration configuration) { return new WordCountReducer(); }
@Override
public Mapper<LongWritable, Text, Text, IntWritable> getMapper(Configuration configuration) { return new WordCountMapper(); }
@Override
public Reducer<Text, IntWritable, Text, IntWritable> getReducer(Configuration configuration) { return new WordCountReducer(); }
@Override
public boolean hasCombiner() { return true; }
@Override
public boolean hasMapper() { return true; }
@Override
public boolean hasReducer() { return true; }
</code></pre>

The mapper implementation WordCountMapper and the reducer implementation WordCountReducer just do what the word count sample always does (a sketch follows below):

# When reading a line of text, the mapper splits it into words and emits (<word>, 1) for every word.
# The combiner and reducer get a sequence of counts per word, (<word>, (<count_i>)_i), and emit (<word>, sum_i(<count_i>)).
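
For illustration, this is roughly what such a pair of classes looks like when written against the standard Hadoop API. It is a generic sketch of the classic WordCount mapper and reducer, not a copy of the classes in the repository, and the class names are made up:

<pre><code class="java">
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch of a word count mapper: emits (word, 1) for every word in a line.
class SketchWordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        for (String w : value.toString().split("\\s+")) {
            if (!w.isEmpty()) {
                word.set(w);
                context.write(word, ONE);
            }
        }
    }
}

// Sketch of a word count reducer (also usable as combiner): sums the counts per word.
class SketchWordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
</code></pre>

The only Z2-specific difference is that such classes are handed out via <code>getMapper</code>, <code>getCombiner</code>, and <code>getReducer</code> as shown above, rather than being registered on the <code>Job</code> directly in <code>configure</code>.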

The whole example is of course not practically usable, and in terms of how the data structures are used it tells very little. The sample [[Sample-hbase-mail-digester]] is much more interesting in those respects.

h2. Summary

In real-world applications, Map/Reduce jobs are just one part of the application scenario. In particular, they usually need access to domain types, if not to other application services and even other databases. That is one element of the Hadoop integration: provide first-class application component support.

Secondly, jobs may get triggered based on application state changes or, for example, time-based events that are evaluated by an application. That is why it is so important to be able to trigger jobs programmatically, much more so than manually from the command line (as useful as that may be for demos and testing). That is the other part of the Hadoop integration: provide an abstraction for programmatic job execution that respects modularity and keeps connectivity configuration separate.

Please check out the [[Hadoop add-on]] to learn more about what is happening behind the scenes.