Installing HBase
We can install HBase in any of the three modes: Standalone mode, Pseudo-Distributed mode, and Fully Distributed mode.
Installing HBase in Standalone Mode
$cd /usr/local/
$wget http://www.interior-dsgn.com/apache/hbase/stable/hbase-0.98.8-hadoop2-bin.tar.gz
$tar -zxvf hbase-0.98.8-hadoop2-bin.tar.gz
Shift to super user mode and move the extracted HBase folder to /usr/local/HBase as shown below.
$su
$password: enter your password here
mkdir HBase
mv hbase-0.98.8-hadoop2/* HBase/
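Optionally (an extra step, not in the original guide), if you intend to run HBase as a regular user rather than root, hand the installation over to that user. A user named hadoop is assumed here only because the data paths later in this guide live under /home/hadoop.
chown -R hadoop:hadoop /usr/local/HBase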
Configuring HBase in Standalone Mode
Before proceeding with HBase, you have to edit the following files to configure it.
hbase-env.sh
Set the Java home for HBase by opening the hbase-env.sh file from the conf folder. Edit the JAVA_HOME environment variable and change the existing path to your current JAVA_HOME value as shown below.
cd /usr/local/HBase/conf
gedit hbase-env.sh
This will open the hbase-env.sh file of HBase. Now replace the existing JAVA_HOME value with your current value as shown below.
export JAVA_HOME=/usr/lib/jvm/java-1.7.0
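If you are unsure where Java is installed, the following one-liner (an extra hint, not part of the original steps; it assumes a typical Linux layout where java is a symlink into the JDK) resolves the real location of the java binary and strips the trailing /bin/java, giving a value suitable for JAVA_HOME.
$readlink -f $(which java) | sed 's|/bin/java$||'
On some JDK 7 layouts the result ends in /jre; that path also works for running HBase, since only bin/java is needed.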
hbase-site.xml
This is the main configuration file of HBase. Set the data directory to an appropriate location by opening the HBase home folder in /usr/local/HBase. Inside the conf folder you will find several files; open the hbase-site.xml file as shown below.
#cd /usr/local/HBase/
#cd conf
#gedit hbase-site.xml
Inside the hbase-site.xml file, you will find the <configuration> and </configuration> tags. Within them, set the HBase data directory under the property named "hbase.rootdir" as shown below.
<configuration>
   <!-- Here you have to set the path where you want HBase to store its files. -->
   <property>
      <name>hbase.rootdir</name>
      <value>file:/home/hadoop/HBase/HFiles</value>
   </property>

   <!-- Here you have to set the path where you want HBase to store its built-in ZooKeeper files. -->
   <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>/home/hadoop/zookeeper</value>
   </property>
</configuration>
With this, the HBase installation and configuration is complete. We can start HBase using the start-hbase.sh script provided in the bin folder of HBase. For that, open the HBase home folder and run the HBase start script as shown below.
$cd /usr/local/HBase/bin
$./start-hbase.sh
If everything goes well, when you run the HBase start script, it will print a message saying that HBase has started.
starting master, logging to /usr/local/HBase/bin/../logs/hbase-tpmaster-localhost.localdomain.out
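As a quick sanity check (an extra step, not in the original guide), the jps utility bundled with the JDK lists the running Java processes. After a successful standalone start it should show an HMaster process; in standalone mode the master, a region server, and ZooKeeper all run inside this single JVM.
$jps
13889 HMaster     (process ids are illustrative; yours will differ)
14012 Jps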
Installing HBase in Pseudo-Distributed Mode
Let us now check how HBase is installed in pseudo-distributed mode.
Configuring HBase
Before proceeding with HBase, configure Hadoop and HDFS on your local system or on a remote system and make sure they are running. Stop HBase if it is running.
hbase-site.xml
Edit the hbase-site.xml file to add the following properties.
<property>
   <name>hbase.cluster.distributed</name>
   <value>true</value>
</property>
This property specifies the mode in which HBase should run. In the same file, change hbase.rootdir from the local file system to your HDFS instance address using the hdfs:// URI syntax. We are running HDFS on the localhost at port 8030.
<property>
   <name>hbase.rootdir</name>
   <value>hdfs://localhost:8030/hbase</value>
</property>
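The host and port in hbase.rootdir must match the NameNode address that Hadoop itself is configured with. Below is a minimal sketch of the corresponding entry in Hadoop's core-site.xml, assuming the same localhost:8030 address used above (adjust it if your HDFS listens on another port such as 8020 or 9000).
<!-- core-site.xml: must agree with the hbase.rootdir URI above -->
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://localhost:8030</value>
</property>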
Starting HBase
After configuration is over, browse to the HBase home folder and start HBase using the following command.
$cd /usr/local/HBase
$bin/start-hbase.sh
Note: Before starting HBase, make sure Hadoop is running.
Checking the HBase Directory in HDFS
HBase creates its directory in HDFS. To see the created directory, browse to the Hadoop bin folder and type the following command.
$ ./bin/hadoop fs -ls /hbase
If everything goes well, it will give you the following output.
Found 7 items
drwxr-xr-x   - hbase users          0 2014-06-25 18:58 /hbase/.tmp
drwxr-xr-x   - hbase users          0 2014-06-25 21:49 /hbase/WALs
drwxr-xr-x   - hbase users          0 2014-06-25 18:48 /hbase/corrupt
drwxr-xr-x   - hbase users          0 2014-06-25 18:58 /hbase/data
-rw-r--r--   3 hbase users         42 2014-06-25 18:41 /hbase/hbase.id
-rw-r--r--   3 hbase users          7 2014-06-25 18:41 /hbase/hbase.version
drwxr-xr-x   - hbase users          0 2014-06-25 21:49 /hbase/oldWALs
Starting and Stopping a Master
Using the "local-master-backup.sh" script, you can start up to 10 backup master servers. Open the HBase home folder and execute the following command to start them.
$ ./bin/local-master-backup.sh 2 4
To kill a backup master, you need its process id, which is stored in a file named "/tmp/hbase-USER-X-master.pid". You can kill the backup master using the following command.
$ cat /tmp/hbase-user-1-master.pid | xargs kill -9
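To check which backup masters are currently running, you can list their pid files (an illustrative check based on the naming pattern above; the user name embedded in the file names will be your own).
$ ls /tmp/hbase-*-master.pid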
Starting and Stopping RegionServers
You can run multiple region servers from a single system using the following command.
$ ./bin/local-regionservers.sh start 2 3
To stop a region server, use the following command.
$ ./bin/local-regionservers.sh stop 3
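If you started several region servers, the script accepts more than one offset in a single call (a usage pattern mirroring the start command above; verify it against your HBase version).
$ ./bin/local-regionservers.sh stop 2 3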
Starting HBase Shell
After installing HBase successfully, you can start the HBase shell. Given below is the sequence of steps to be followed to start the HBase shell. Open the terminal and log in as super user.
Start Hadoop File System
Browse through the Hadoop home sbin folder and start the Hadoop file system as shown below.
$cd $HADOOP_HOME/sbin
$./start-all.sh
Start HBase
Browse through the HBase root directory bin folder and start HBase.
$cd /usr/local/HBase
$./bin/start-hbase.sh
Start HBase Master Server
This is done from the same directory. Start the master backup server as shown below.
$./bin/local-master-backup.sh start 2 (the number signifies the specific server)
Start Region Server
Start the region server as shown below.
$./bin/local-regionservers.sh start 3
Start HBase Shell
You can start the HBase shell using the following command.
$cd bin
$./hbase shell
This will give you the HBase shell prompt as shown below.
2016-12-09 14:24:27,526 INFO [main] Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.98.8-hadoop2, r6cfc8d064754251365e070a10a82eb169956d5fe, Fri Nov 14 18:26:29 PST 2016
hbase(main):001:0>
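To verify the installation end to end, you can run a few basic commands at this prompt (an optional check, not part of the original steps); they create a small table, write and read one row, and then drop the table.
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
get 'test', 'row1'
disable 'test'
drop 'test'
exit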