h2. Install CDH4 from a preconfigured repository

This site provides a pre-configured, one-checkout, user-space installation of Cloudera's CDH4 Hadoop and HBase distributions. This page explains how to install it on your machine - which is really, really simple compared to the normally suggested Hadoop installation procedures.

*Note #1:* This will only work on Linux or Mac OS.

*Note #2:* The repository also contains an Eclipse project file and Eclipse launchers for most of the functions required.

In short, these are the steps:

# Clone the repository
# Adapt your local environment
# Format HDFS
# Start and stop

h3. Clone the repository

The pre-configured distribution is stored in the repository "z2-samples-cdh4-base":http://redmine.z2-environment.net/projects/z2-samples/repository/z2-samples-cdh4-base. We assume you install everything (including an Eclipse workspace, if you run the samples) in *install*.

<pre><code class="ruby">
cd install
git clone http://git.z2-environment.net/z2-samples.cdh4-base
</code></pre>

h3. Adapt your environment

Before you can run anything, two customizations are needed:

h4. Set important environment variables

There is a shell script "env.sh":http://redmine.z2-environment.net/projects/z2-samples/repository/z2-samples-cdh4-base/revisions/master/entry/env.sh that you should open and adapt. At the time of this writing you need to define JAVA_HOME (please do so, even if it is set elsewhere already) and NOSQL_HOME, which is the absolute path of the folder containing the *env.sh* file. This script is called from many places.

h4. Enable password-less SSH

Currently this is still required for the start / stop scripts to work. This requirement may be dropped in the future.

If you have not created a unique key for SSH, or have no idea what that is, run

<pre><code class="ruby">
ssh-keygen
</code></pre>

(just keep hitting enter). Next, copy that key over to the machine you want to log on to without a password, i.e. localhost in this case:

<pre><code class="ruby">
ssh-copy-id <your user name>@localhost
</code></pre>

If this fails because your SSH setup works differently, or ssh still refuses to log on without a password, please "ask the internet". Sorry. All that matters is that in the end

<pre><code class="ruby">
ssh <your user name>@localhost
</code></pre>

(substituting <your user name> with your actual user name, of course) works without asking for a password.

h3. Formatting HDFS

Finally, the last step before you can start up is to prepare the local node to store data. This is done by running the *format_dfs.sh* script. Alternatively, you can use the Eclipse launcher of the same name. This should complete without any questions or errors. Otherwise, please verify your settings above.

h3. Start and Stop

Depending on your sample requirements, you can start Hadoop (HDFS, Yarn, the History Server) or HBase (including all the Hadoop services) using the *start_hadoop.sh* script (or launcher) or the *start_hbase.sh* script (or launcher), respectively. Similarly, you can stop everything with the stop scripts.
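For orientation, here is a minimal sketch of a full start/stop round trip from a shell, assuming the checkout lives in *install/z2-samples.cdh4-base* and that the stop scripts are named analogously to the start scripts (please check the actual file names in your checkout):

<pre><code class="ruby">
cd install/z2-samples.cdh4-base
./start_hbase.sh    # starts HBase including all Hadoop services
# ... run your samples ...
./stop_hbase.sh     # assumed name; stops HBase and the Hadoop services again
</code></pre>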
After you have started, wait a short while and run *jps* on the command line. You should see the following Java processes (and possibly others, of course):

<pre><code class="ruby">
HRegionServer
HQuorumPeer
DataNode
NodeManager
HMaster
NameNode
SecondaryNameNode
JobHistoryServer
ResourceManager
</code></pre>

There are lots of other scripts in the distribution that you can use to start or stop single components. If you do, however, please first run (in the shell):

<pre><code class="ruby">
. ./env.sh
</code></pre>

(note the leading period).

If you ran the start script and it returned, here are some URLs you should check to verify everything is looking good:

* Try to reach the NameNode web interface at http://localhost:50070
* Try to reach the YARN ResourceManager at http://localhost:8088

and, if you are running HBase:

* Try to reach the HBase Master at http://localhost:60010

*Note:* If you notice that you cannot restart or that HBase does not seem to stop correctly, that is most likely exactly what happened: sometimes HBase processes do not stop. To make sure no process is left over, use *jps* from the command line and kill any remaining processes.
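As a minimal sketch of that cleanup (the process IDs below are made up; use whatever *jps* actually reports on your machine):

<pre><code class="ruby">
jps
# 4711 HMaster
# 4712 HRegionServer
# 4713 HQuorumPeer
kill 4711 4712 4713
</code></pre>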