Hadoop Getting-Started Configuration Series: Table of Contents

1. Developing with Hadoop in pseudo-distributed mode in Eclipse: configuration and a simple example program (on Linux)

2. Running jar packages from the Hadoop command line in detail (building the jar, uploading files to DFS, running commands, downloading DFS files to the local machine)

3. Installing and configuring a fully distributed Hadoop cluster (on virtual machines)

4. Developing against a Hadoop cluster in Eclipse: configuration and a simple example program (on Windows)

5. Installing and configuring Zookeeper 3.4.9, HBase 1.3.1, and Pig 0.16.0 (on a Hadoop 2.7.3 cluster)

6. Installing MySQL 5.7.18, and installing and configuring Hive 2.1.1 (on a Hadoop 2.7.3 cluster)

7. Installing and configuring Sqoop 1.4.6, and importing data along MySQL -> HDFS -> Hive (on Hadoop 2.7.3)

8. Practical optimizations for a fully distributed Hadoop deployment

9. Hive: connecting with Beeline and from Eclipse

10. Installing and configuring Scala 2.12.2 and Spark 2.1.0 (on a Hadoop 2.7.3 cluster)

11. Developing Scala programs in Eclipse on Windows: configuration (on a Hadoop 2.7.3 cluster)

12. Connecting to HBase remotely from Eclipse on Windows: configuration and example programs (create, insert, get, delete)

Simple introductory Hadoop examples can be found on my GitHub: https://github.com/Nana0606/hadoop_example

This post covers item 12: connecting to HBase remotely from Eclipse on Windows, including the configuration and example programs (create, insert, get, delete).

 

Before You Begin

Ubuntu version: Ubuntu 16.04

Hadoop version: Hadoop 2.7.3

HBase version: HBase 1.3.1

ZooKeeper version: ZooKeeper 3.4.9

Part 1: Configuration

1. Create a new Java project

In Eclipse, click File --> New --> Other --> Java Project and enter a project name (here, HBaseBasic).

2. Import the jar packages

(1) Create a folder named lib: right-click the project --> New --> Folder --> enter the folder name "lib".

(2) Copy all jar files under hbase-1.3.1-bin\hbase-1.3.1\lib into lib (only a subset is actually used at runtime, but copying everything avoids missing-dependency problems). Then select all jars under lib and right-click --> Build Path --> Add to Build Path.

3. Add the hbase-site.xml file

(1) Right-click the project name --> New --> Source Folder --> enter a folder name; here it is named conf.

(2) Copy the hbase-site.xml configured on the virtual-machine cluster into the conf directory.
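If you do not have the cluster's file at hand, the client mainly needs to know where ZooKeeper runs. A minimal sketch of such an hbase-site.xml is shown below; the hostnames (master, slave1, slave2) are placeholders I made up, so substitute the values from your own cluster's configuration:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- ZooKeeper quorum: these hostnames are placeholders, use your cluster's nodes -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <!-- ZooKeeper client port (2181 is the default) -->
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```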

4. Edit the hosts file

Because hbase-site.xml refers to the cluster nodes by hostname, and here we connect to HBase remotely from Eclipse on Windows 10, the Windows 10 hosts file (located at C:\Windows\System32\drivers\etc) must be updated with the cluster's IP-to-hostname mappings. (Note: adjust the IPs and hostnames to match your cluster's configuration.)
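The original post shows the entries as a screenshot. Purely as an illustration, the mappings take this shape (these IPs and hostnames are invented; use your cluster's actual values):

```
192.168.1.100    master
192.168.1.101    slave1
192.168.1.102    slave2
```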

Part 2: Example Program

 

package com.hbase.test;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HbaseBean {

	static Configuration configuration = HBaseConfiguration.create();
	public static Connection connection;
	public static Admin admin;

	// Create a new table; columnFamily is the name of its column family.
	public static void create(TableName tablename, String columnFamily) throws Exception {
		if (connection == null)
			connection = ConnectionFactory.createConnection(configuration);
		admin = connection.getAdmin();
		if (admin.tableExists(tablename)) {
			System.out.println("table already exists!");
			return;    // bail out without terminating the whole JVM
		} else {
			//can use HTableDescriptor and HColumnDescriptor to modify table pattern.
			HTableDescriptor tableDesc = new HTableDescriptor(tablename);
			tableDesc.addFamily(new HColumnDescriptor(columnFamily));
			admin.createTable(tableDesc);
			System.out.println("create table successfully!");
		}
	}

	// insert a record.
	public static void put(TableName tablename, String row, String columnFamily, String column, String data)
			throws Exception {
		if (connection == null)
			connection = ConnectionFactory.createConnection(configuration);
		Table table = connection.getTable(tablename);
		Put p = new Put(Bytes.toBytes(row));
		p.addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes(column), Bytes.toBytes(data));
		table.put(p);
		System.out.println("put '" + row + "','" + columnFamily + ":" + column + "','" + data + "'");
	}

	// get data of some row for one table.
	// which equals to hbase shell command of " get 'tablename','rowname' "
	public static void get(TableName tablename, String row) throws IOException {
		if (connection == null)
			connection = ConnectionFactory.createConnection(configuration);
		Table table = connection.getTable(tablename);
		Get g = new Get(Bytes.toBytes(row));
		Result result = table.get(g);
		System.out.println("Get Info: " + result);
	}

	// get all data of this table, using "Scan" to operate.
	public static void scan(TableName tablename) throws Exception {
		if (connection == null)
			connection = ConnectionFactory.createConnection(configuration);
		Table table = connection.getTable(tablename);
		Scan s = new Scan();
		ResultScanner rs = table.getScanner(s);
		for (Result r : rs) {
			System.out.println("Scan info: " + r);
		}
	}

	// delete a table, this operation needs to disable table firstly and then delete it.
	public static boolean delete(TableName tablename) throws IOException {
		if (connection == null)
			connection = ConnectionFactory.createConnection(configuration);
		admin = connection.getAdmin();
		if (admin.tableExists(tablename)) {
			try {
				admin.disableTable(tablename);
				admin.deleteTable(tablename);
			} catch (Exception ex) {
				ex.printStackTrace();
				return false;
			}

		}
		return true;
	}

	public static void main(String[] args) {
		TableName tablename = TableName.valueOf("hbase_test");
		String columnFamily = "columnVal";

		try {
            //Step1: create a new table named "hbase_test".
			HbaseBean.create(tablename, columnFamily);
			//Step2: insert 3 records.
			HbaseBean.put(tablename, "row1", columnFamily, "1", "value1");
			HbaseBean.put(tablename, "row2", columnFamily, "2", "value2");
			HbaseBean.put(tablename, "row3", columnFamily, "3", "value3");
			//Step3: get value of row1.
			HbaseBean.get(tablename, "row1");
			//Step4: scan the full table.
			HbaseBean.scan(tablename);
			//Step5: delete this table.
			if (HbaseBean.delete(tablename))
				System.out.println("Delete table:" + tablename + " success!");

		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			// release the shared connection once the demo is done
			try {
				if (connection != null)
					connection.close();
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
}
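After running the program, you can cross-check the intermediate results from the HBase shell on the cluster (for example, by temporarily commenting out the delete step). This is a sketch; the table and row names match the example above:

```
hbase shell
list                        # the table hbase_test should appear
scan 'hbase_test'           # shows row1..row3 under the columnVal family
get 'hbase_test', 'row1'    # shows value1
```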

 

Part 3: Execution Results

(The original post shows a screenshot of the console output here.)
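A rough sketch of the console output the code above produces (the timestamps in the Result lines will differ on each run):

```
create table successfully!
put 'row1','columnVal:1','value1'
put 'row2','columnVal:2','value2'
put 'row3','columnVal:3','value3'
Get Info: keyvalues={row1/columnVal:1/<timestamp>/Put/vlen=6/seqid=0}
Scan info: keyvalues={row1/columnVal:1/<timestamp>/Put/vlen=6/seqid=0}
Scan info: keyvalues={row2/columnVal:2/<timestamp>/Put/vlen=6/seqid=0}
Scan info: keyvalues={row3/columnVal:3/<timestamp>/Put/vlen=6/seqid=0}
Delete table:hbase_test success!
```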
