
Product code: S20151116182115252

Learning HBase (Chinese Edition)

A practical introduction to NoSQL, BigTable, and big data through real cases, covering both administration and development; translated by the HBase team leader at Qunar (qunar.com).

Product details

Easy to get started, combining hands-on practice with underlying principles

Covers both administration and development, suitable for operations engineers and programmers alike

Real scenarios and real cases, closely connected to frontline practice

Translated by a senior engineer and HBase team leader at Qunar (qunar.com)

What you can learn from this book:

Understand the basic principles of HBase

Understand the prerequisites for building HBase

Install and configure the new HBase cluster

Optimize Hadoop and HBase cluster parameters

Use a variety of troubleshooting and maintenance techniques to keep the cluster highly available

Master the HBase data model and its operations

Learn about the benefits of using the Hadoop Toolkit


Learning HBase (Chinese Edition) is a professional book on HBase. It introduces the basic concepts of HBase, how its features and characteristics differ from those of traditional relational databases, and its installation and configuration, and then covers HBase administration and troubleshooting. It also describes Java-based HBase programming, as well as the use of HBase with various big data tools. Together, these topics help readers better understand the HBase architecture and use HBase more smoothly in their own projects.

Learning HBase (Chinese Edition) is suitable not only for HBase beginners, but also as a reference for developers with HBase experience. It is a fairly complete, general-purpose HBase reference, and we hope it will help readers in their day-to-day work.

Shashwat Shriparv was born in Muzaffarpur, Bihar, India, and studied in Muzaffarpur and in Shillong, Meghalaya. He received his Bachelor of Computer Applications (BCA) degree from the Indira Gandhi National Open University, Delhi, and his Master of Computer Applications (MCA) degree from C-DAC, Trivandrum. He began studying big data technologies in early 2010, when he needed to build a proof of concept (POC) for log storage and processing. He also had another project that required storing files of many different types and processing them. At that time he began configuring, building, and testing Hadoop/HBase clusters, and writing code for them. After a successful POC, he used Java REST and SOAP web services to develop and build a system that used Hadoop to store and process logs, stored them in a custom HBase table, and read the data back through the HBase API, Hive, and the Web. After implementing this project successfully, Shashwat went on to work on processing massive binary files of 1 TB to 3 TB, storing the file metadata in HBase and the files themselves in HDFS.

Shashwat began his software development career at the network forensics center of C-DAC, Trivandrum, developing mobile-related software for forensic analysis. He then moved to Genilok Computer Solutions, where his work included cluster computing, HPC technologies, and web technologies.

After that, he moved from Trivandrum to Bangalore and joined PointCross, where he started working with big data technologies, developing software using Java, web services, and big data platforms. At PointCross, many of his projects revolved around big data technologies such as Hadoop, Hive, HBase, Pig, Sqoop, and Flume. From there he moved to HCL Infosystems to work on the UIDAI project, a very prestigious project in India that provides a unique identification number to every Indian resident. There he worked with technologies such as Hadoop, Hive, Pig, Linux, and HBase, managing Hadoop clusters, writing scripts, automating tasks and processes, and building dashboards for cluster monitoring.

Shashwat now works at Cognilytics, focusing on big data technologies, HANA, and other high-performance technologies. You can learn more about him at https://github.com/shriparv and http://helpmetocode.blogspot.com. You can contact him on LinkedIn at http://www.linkedin.com/pub/shashwat-shriparv/19/214/2a9, or e-mail him at dwivedishashwat@gmail.com.

Shashwat also served as a proofreader of Pig Design Patterns by Pradeep Pasupuleti, published by Packt Publishing. He has also served as an editor of his university's journal, InfinityTech.

Chapter 1  The HBase Ecosystem  1

Chapter 2  The Journey to HBase  26

Chapter 3  Building HBase  46

Chapter 4  Optimizing the HBase/Hadoop Cluster  82

Chapter 5  Storage, Structure, and the HBase Data Model  99

Chapter 6  HBase Cluster Operations, Maintenance, and Troubleshooting  120

Chapter 7  HBase Scripting  176

Chapter 8  HBase Programming in Java  191

Chapter 9  Advanced HBase Programming in Java  216

Chapter 10  HBase Use Cases  240