Index
A
- add() option / Writing data
- addColumn(byte[] family, byte[] qualifier) method / Reading data, HBase table scans
- addFamily(byte[] family) method / HBase table scans
- administrative API
- about / The administrative API
- data definition API / The data definition API
- HBaseAdmin API / The HBaseAdmin API
- alter command / Data definition commands
- Apache Hadoop software library / The world of Big Data
- Apache Thrift
- URL / The Thrift client
- application-managed approach / Understanding keys
- assign command / Data-handling tools
- authentication
- enabling / Enabling authentication
- authorization
- enabling / Enabling authorization
B
- balancer / Load balancing
- balancer command / Data-handling tools
- Big Data
- about / The world of Big Data
- BigTable / The origin of HBase
- block cache metrics
- count metric / Region server metrics
- size metric / Region server metrics
- free metric / Region server metrics
- evicted metric / Region server metrics
- boolean hasFamily(byte[] c) method / Column family methods
- boolean isBlockCacheEnabled() method / Other methods
- boolean isInMemory() method / Other methods
- boolean isMasterRunning() method / The HBaseAdmin API
- byte getVersion() method / The HBaseAdmin API
C
- close() method / HBase table scans, The HBaseAdmin API
- cluster
- upgrading / Upgrading a cluster
- cluster consistency, HBase
- consistency check / HBase cluster consistency
- integrity check / HBase cluster consistency
- fixing, flags used / HBase cluster consistency
- cluster management
- about / Cluster management
- HBase cluster, starting / The Start/stop HBase cluster
- HBase cluster, stopping / The Start/stop HBase cluster
- nodes, adding / Adding nodes
- node, decommissioning / Decommissioning a node
- cluster, upgrading / Upgrading a cluster
- HBase cluster consistency / HBase cluster consistency
- HBase data import tools / HBase data import/export tools
- data export tools / HBase data import/export tools
- CopyTable MapReduce job / Copy table
- cluster monitoring
- about / Cluster monitoring
- HBase metrics framework / The HBase metrics framework
- ColumnCountGetFilter / Utility filters
- column key / Understanding keys
- ColumnPaginationFilter / Utility filters
- commands, HBase
- status / Start playing
- create '<table_name>', '<column_family_name>' / Start playing
- list / Start playing
- put '<table_name>', '<row_num>', 'column_family:key', 'value' / Start playing
- get '<table_name>', '<row_num>' / Start playing
- scan '<table_name>' / Start playing
- delete '<table_name>', '<row_num>', 'column_family:key' / Start playing
- describe '<table_name>' / Start playing
- drop '<table_name>' / Start playing
- URL / Start playing
- compact command / Data-handling tools
- compaction metrics
- compaction size metric / Region server metrics
- compaction time metric / Region server metrics
- compaction queue size metric / Region server metrics
- comparison filters
- about / Comparison filters
- RowFilter / Comparison filters
- ValueFilter / Comparison filters
- FamilyFilter / Comparison filters
- DependentColumnFilter / Comparison filters
- QualifierFilter / Comparison filters
- compression algorithms, HBase
- about / Compression
- Lempel-Ziv-Oberhumer (LZO) / Available codecs
- Snappy / Available codecs
- GZIP / Available codecs
- Concurrent-Mark-Sweep GC (CMS) / JVM tuning
- Configuration getConfiguration() method / The HBaseAdmin API
- connection, HBase
- establishing / Establishing a connection
- constructors, HColumnDescriptor class
- HColumnDescriptor(byte[] familyName) / Other methods
- HColumnDescriptor(String familyName) / Other methods
- constructors, Scan class
- Scan(byte[] startRow) / HBase table scans
- Scan(byte[] startRow, byte[] stopRow) / HBase table scans
- Scan(byte[] startRow, Filter filter) / HBase table scans
- Scan(Get get) / HBase table scans
- Scan(Scan scan) / HBase table scans
- coprocessor
- about / Coprocessors
- coprocessor, categories
- observer coprocessor / The observer coprocessor
- endpoint coprocessor / The endpoint coprocessor
- coprocessor, scopes
- system level / Coprocessors
- table level / Coprocessors
- coprocessor package
- URL / The endpoint coprocessor
- coprocessor package, categories
- RegionObserver / The observer coprocessor
- RegionServerObserver / The observer coprocessor
- WALObserver / The observer coprocessor
- MasterObserver / The observer coprocessor
- CopyTable MapReduce job / Copy table
- count command / Data manipulation commands
- counters
- about / Counters
- single counters / Single counters
- multiple counters / Multiple counters
- create '<table_name>', '<column_family_name>' command / Start playing
- create command / Data definition commands
- create operation / CRUD using Kundera
- CRUD operations
- performing, with Kundera / CRUD using Kundera
- create operation / CRUD using Kundera
- read operation / CRUD using Kundera
- update operation / CRUD using Kundera
- delete operation / CRUD using Kundera
- CRUD operations, HBase
- about / CRUD operations
- data, writing / Writing data
- data, reading / Reading data
- data, updating / Updating data
- data, deleting / Deleting data
- custom endpoint coprocessor
- building / The endpoint coprocessor
- customer / Data modeling in HBase
- custom filters
- wrapper filters / Custom filters
- pure custom filters / Custom filters
- cyclic replication / Data replication
D
- data
- writing / Writing data
- reading / Reading data
- updating / Updating data
- deleting / Deleting data
- data-handling tools
- assign / Data-handling tools
- balancer / Data-handling tools
- compact / Data-handling tools
- flush / Data-handling tools
- move / Data-handling tools
- split / Data-handling tools
- data definition API
- about / The data definition API
- table name methods / Table name methods
- column family methods / Column family methods
- data definition commands
- about / Data definition commands
- create / Data definition commands
- alter / Data definition commands
- disable / Data definition commands
- drop / Data definition commands
- enable / Data definition commands
- describe / Data definition commands
- exists / Data definition commands
- list / Data definition commands
- data export tools, HBase / HBase data import/export tools
- data import tools, HBase / HBase data import/export tools
- data manipulation commands
- put / Data manipulation commands
- scan / Data manipulation commands
- get / Data manipulation commands
- truncate / Data manipulation commands
- delete / Data manipulation commands
- deleteall / Data manipulation commands
- count / Data manipulation commands
- incr / Data manipulation commands
- data modeling, in HBase / Data modeling in HBase
- data replication
- about / Data replication
- data storage
- about / Data storage
- files / Data storage
- delete '<table_name>', '<row_num>', 'column_family:key' command / Start playing
- deleteall command / Data manipulation commands
- Delete class
- deleteColumn(byte[] family, byte[] qualifier) method / Deleting data
- deleteColumn(byte[] family, byte[] qualifier, long timestamp) method / Deleting data
- deleteColumns(byte[] family, byte[] qualifier) method / Deleting data
- deleteFamily(byte[] family) method / Deleting data
- deleteFamily(byte[] family, long timestamp) method / Deleting data
- deleteFamilyVersion(byte[] family, long timestamp) method / Deleting data
- deleteColumn(byte[] family, byte[] qualifier) method, Delete class / Deleting data
- deleteColumn(byte[] family, byte[] qualifier, long timestamp) method, Delete class / Deleting data
- deleteColumns(byte[] family, byte[] qualifier) method, Delete class / Deleting data
- delete command / Data manipulation commands
- deleteFamily(byte[] family) method, Delete class / Deleting data
- deleteFamily(byte[] family, long timestamp) method, Delete class / Deleting data
- deleteFamilyVersion(byte[] family, long timestamp) method, Delete class / Deleting data
- delete operation / CRUD using Kundera
- DependentColumnFilter / Comparison filters
- describe '<table_name>' command / Start playing
- describe command / Data definition commands
- disable command / Data definition commands
- drop '<table_name>' command / Start playing
- drop command / Data definition commands
E
- enable command / Data definition commands
- endpoint coprocessor
- about / The endpoint coprocessor
- Entity Transaction / Kundera – object mapper
- exists command / Data definition commands
F
- Facebook
- URL / Use cases of HBase
- about / Use cases of HBase
- FamilyFilter / Comparison filters
- file-based monitoring
- about / File-based monitoring
- file types, for data storage
- filters
- implementing / Implementing filters, Utility filters, Comparison filters
- utility filters / Utility filters
- comparison filters / Comparison filters
- custom filters / Custom filters
- using, with Kundera / Using filters within query
- flush command / Data-handling tools
- fully distributed mode, HBase installation / The fully distributed mode
G
- Ganglia
- URL / Cluster monitoring, Ganglia
- Ganglia, components
- get '<table_name>', '<row_num>' command / Start playing
- get(byte[] family, byte[] qualifier) method, Put class / Writing data
- getCacheBlocks() method, get class / Reading data
- Get class
- getFamilyMap() method / Reading data
- getMaxResultsPerColumnFamily() method / Reading data
- getCacheBlocks() method / Reading data
- setTimestamp(long timestamp) method / Deleting data
- get command / Data manipulation commands
- getDelegate method / Using filters within query
- getFamilyMap() method, get class / Reading data
- getMaxResultsPerColumnFamily() method, get class / Reading data
- getScanner(byte[] family) method / HBase table scans
- getScanner(byte[] family, byte[] qualifier) method / HBase table scans
- getScanner(Scan scan) method / HBase table scans
- GZIP / Available codecs
H
- Hadoop 2.x / Installing HBase
- Hadoop Distributed File System (HDFS) / HBase and MapReduce
- Hadoop ecosystem client
- about / The Hadoop ecosystem client
- Hive / Hive
- Hadoop MapReduce
- about / Hadoop MapReduce
- Hadoop MapReduce framework / The Hadoop ecosystem client
- has(byte[] family, byte[] qualifier) method / Reading data
- has(byte[] family, byte[] qualifier, byte[] value) method, Put class / Writing data
- hashing approach / Understanding keys
- HBase
- origin / The origin of HBase
- URL / The origin of HBase, The local mode
- about / The origin of HBase, Installing HBase
- use cases / Use cases of HBase
- use cases, companies / Use cases of HBase
- installing / Installing HBase
- data modeling / Data modeling in HBase
- tables, designing / Designing tables, Understanding keys
- accessing / Accessing HBase
- connection, establishing / Establishing a connection
- CRUD operations / CRUD operations
- table scans / HBase table scans
- securing / Securing HBase
- integrating, with MapReduce / HBase and MapReduce
- MapReduce, running over / Running MapReduce over HBase
- as data source / HBase as a data source
- as data sink / HBase as a data sink
- as data source and sink / HBase as a data source and sink
- API documentation, URL / The HBaseAdmin API
- querying, with Kundera / Query HBase using Kundera
- cluster consistency / HBase cluster consistency
- HBase, securing
- about / Securing HBase
- authentication, enabling / Enabling authentication
- authorization, enabling / Enabling authorization
- REST Clients, configuring / Configuring REST clients
- HBaseAdmin API
- about / The HBaseAdmin API
- HBase administration concepts
- cluster management / Cluster management
- cluster monitoring / Cluster monitoring
- performance tuning / Performance tuning
- HBase architecture
- data storage / Data storage
- data replication / Data replication
- HBase, securing / Securing HBase
- HBase cluster
- components / Understanding HBase cluster components
- troubleshooting / Troubleshooting
- HBase cluster components
- HBase Master / Understanding HBase cluster components
- ZooKeeper / Understanding HBase cluster components
- RegionServers / Understanding HBase cluster components
- HBase data storage system / Understanding HBase cluster components
- commands, trying / Start playing
- HBase data storage system / Understanding HBase cluster components
- HBase Master / Understanding HBase cluster components
- HBase metrics framework
- about / The HBase metrics framework
- implementations / The HBase metrics framework
- master server metrics / Master server metrics
- region server metrics / Region server metrics
- JVM metrics / JVM metrics
- info metrics / Info metrics
- Ganglia / Ganglia
- Nagios / Nagios
- JMX / JMX
- file-based monitoring / File-based monitoring
- HBase replication
- URL / Data replication
- HBase shell
- about / The HBase shell
- data definition commands / Data definition commands
- data manipulation commands / Data manipulation commands
- data-handling administrative tools / Data-handling tools
- HBase Version 0.98.7 / Installing HBase
- HColumnDescriptor(byte[] familyName) method / Other methods
- HColumnDescriptor(String familyName) method / Other methods
- HColumnDescriptor class / Other methods
- HColumnDescriptor getFamily(byte[] column) method / Column family methods
- HColumnDescriptor removeFamily(byte[] column) method / Column family methods
- HColumnDescriptor setInMemory(boolean inMemory) method / Other methods
- HColumnDescriptor[] getColumnFamilies() method / Column family methods
- HConnection getConnection() method / The HBaseAdmin API
- HFile / Data modeling in HBase
- Hive / Hive
- HLog
- about / HLog (the write-ahead log – WAL)
- HTable class / Establishing a connection
- HTableDescriptor class / Other methods
- HTablePool class / Establishing a connection
I
- I/O metrics
- FS read latency / Region server metrics
- FS write latency / Region server metrics
- FS sync latency / Region server metrics
- incr command / Data manipulation commands
- indexing solutions for HBase approach / Understanding keys
- info metrics / Info metrics
- interface definition language (IDL) / The Thrift client
- int getRegionsCount() method / The HBaseAdmin API
- int getRequestsCount() method / The HBaseAdmin API
J
- Java 1.7
- installing / Installing Java 1.7
- Java 1.7 installation
- local mode / The local mode
- pseudo-distributed mode / The pseudo-distributed mode
- fully distributed mode / The fully distributed mode
- Java Transaction API (JTA) / Kundera – object mapper
- JMX / JMX
- JMXToolkit
- URL / JMX
- JSON format (key-value pair) / The JSON format (defined as a key-value pair)
- JVM metrics
- garbage collection / JVM metrics
- memory / JVM metrics
- thread / JVM metrics
- JVM tuning / JVM tuning
K
- Kerberos Key Distribution Center (KDC) / Securing HBase
- keys
- about / Understanding keys
- row key / Understanding keys
- column Key / Understanding keys
- key value / Understanding keys
- Kundera
- advantages / Kundera – object mapper
- using, ways / Kundera – object mapper
- binaries, using / Kundera – object mapper
- binaries, URL / Kundera – object mapper
- Maven dependency, using / Kundera – object mapper
- building, from source / Kundera – object mapper
- used, for performing CRUD operations / CRUD using Kundera
- used, for querying HBase / Query HBase using Kundera
- filters, using with / Using filters within query
- features / Using filters within query
L
- Lempel-Ziv-Oberhumer (LZO) / Available codecs
- Lily HBase indexer
- URL / Understanding keys
- list command / Start playing, Data definition commands
- local mode, HBase installation / The local mode
M
- Main class, implementing
- guidelines / Running MapReduce over HBase
- Map<String, RegionState> getRegionsInTransition() method / The HBaseAdmin API
- Mapper class, implementing
- guidelines / Running MapReduce over HBase
- MapReduce
- running, over HBase / Running MapReduce over HBase
- MapReduce (MR) / The world of Big Data
- master-master replication / Data replication
- master-push pattern
- about / Data replication
- master-push pattern, designing
- master-slave replication / Data replication
- master-master replication / Data replication
- cyclic replication / Data replication
- master-slave replication / Data replication
- MasterObserver type / The observer coprocessor
- master server metrics
- about / Master server metrics
- cluster requests / Master server metrics
- split time / Master server metrics
- split size / Master server metrics
- Meetup
- URL / Use cases of HBase
- about / Use cases of HBase
- MemStore-local allocation buffers (MSLABs) / MemStore-local allocation buffers
- MemStore metrics
- flush queue size metric / Region server metrics
- flush size metric / Region server metrics
- flush time metric / Region server metrics
- methods, ClusterStatus class
- int getRegionsCount() / The HBaseAdmin API
- int getRequestsCount() / The HBaseAdmin API
- String getHBaseVersion() / The HBaseAdmin API
- byte getVersion() / The HBaseAdmin API
- String getClusterId() / The HBaseAdmin API
- Map<String, RegionState> getRegionsInTransition() / The HBaseAdmin API
- methods, HBaseAdmin class
- boolean isMasterRunning() / The HBaseAdmin API
- HConnection getConnection() / The HBaseAdmin API
- Configuration getConfiguration() / The HBaseAdmin API
- close() / The HBaseAdmin API
- methods, HTableDescriptor class
- void addFamily(HColumnDescriptor family) / Column family methods
- boolean hasFamily(byte[] c) / Column family methods
- HColumnDescriptor[] getColumnFamilies() / Column family methods
- HColumnDescriptor getFamily(byte[] column) / Column family methods
- HColumnDescriptor removeFamily(byte[] column) / Column family methods
- move command / Data-handling tools
- multiple counters
- about / Multiple counters
N
- Nagios
- URL / Cluster monitoring, Nagios
- about / Nagios
- next() method / HBase table scans
- next(int nbRows) method / HBase table scans
- nodes
- adding / Adding nodes
- decommissioning / Decommissioning a node
- NoSQL database / The world of Big Data
O
- observer coprocessor
- about / The observer coprocessor
- Open Time Series Database (OpenTSDB) / Use cases of HBase
- orders / Data modeling in HBase
P
- PageFilter / Utility filters
- performance tuning
- about / Performance tuning
- compression algorithms / Compression
- load balancing / Load balancing
- regions, splitting / Splitting regions
- regions, merging / Merging regions
- MemStore-local allocation buffers (MSLABs) / MemStore-local allocation buffers
- JVM tuning / JVM tuning
- plain format
- using / The plain format
- protocol buffers (protobuf)
- about / The endpoint coprocessor
- URL / The endpoint coprocessor
- pseudo-distributed mode, HBase installation / The pseudo-distributed mode
- pure custom filters
- about / Custom filters
- put '<table_name>', '<row_num>', 'column_family:key', 'value' command / Start playing
- Put class
- get(byte[] family, byte[] qualifier) method / Writing data
- has(byte[] family, byte[] qualifier, byte[] value) method / Writing data
- Put class instance / Writing data
- put command / Data manipulation commands
Q
- QualifierFilter / Comparison filters
R
- RandomRowFilter / Utility filters
- read operation / CRUD using Kundera
- recommendations, performance tuning
- heavy sequential reads / Other recommendations
- heavy random reads / Other recommendations
- Reducer class, implementing
- guidelines / Running MapReduce over HBase
- RegionObserver type / The observer coprocessor
- regions, HBase
- splitting / Splitting regions
- merging / Merging regions
- region server / Data modeling in HBase
- region server metrics
- block cache metrics / Region server metrics
- compaction metrics / Region server metrics
- MemStore metrics / Region server metrics
- store metrics / Region server metrics
- I/O metrics / Region server metrics
- other metrics / Region server metrics
- RegionServerObserver type / The observer coprocessor
- RegionServers / Understanding HBase cluster components, Data storage
- REST client
- about / REST clients
- overview / Getting started
- plain format / The plain format
- XML format / The XML format
- JSON format (key-value pair) / The JSON format (defined as a key-value pair)
- REST Java client / The REST Java client
- REST Clients
- configuring / Configuring REST clients
- REST Java client / The REST Java client
- RowFilter / Comparison filters
- row key / Understanding keys
S
- sales region / Data modeling in HBase
- salting approach / Understanding keys
- scan '<table_name>' command / Start playing
- scan() operation / HBase table scans
- scan command / Data manipulation commands
- script files, $HBASE_HOME/bin directory
- hbase-daemon.sh / The Start/stop HBase cluster
- hbase-daemons.sh / The Start/stop HBase cluster
- start-hbase.sh / The Start/stop HBase cluster
- stop-hbase.sh / The Start/stop HBase cluster
- setFilter(Filter filter) method / Implementing filters
- setFilter(Filter filter) method / HBase table scans
- setMaxVersions(int maxVersions) method / HBase table scans
- setMaxVersions(int maxVersions) method / Reading data
- setScannerCaching(int scannerCaching) method / HBase table scans
- setStartRow(byte[] startRow) method / HBase table scans
- setStopRow(byte[] stopRow) method / HBase table scans
- setTimeRange(long minStamp, long maxStamp) method / HBase table scans
- setTimeRange(long minStamp, long maxStamp) method / Reading data
- setTimeStamp(long timestamp) method / Reading data, HBase table scans
- setTimestamp(long timestamp) method, Get class / Deleting data
- Simple Authentication and Security Layer (SASL) / Securing HBase
- SingleColumnValueExcludedFilter / Utility filters
- SingleColumnValueFilter / Utility filters
- single counters
- about / Single counters
- SkipFilter / Custom filters
- Snappy / Available codecs
- split command / Data-handling tools
- status command / Start playing
- store metrics
- stores metrics / Region server metrics
- file index size metric / Region server metrics
- String getClusterId() method / The HBaseAdmin API
- String getHBaseVersion() method / The HBaseAdmin API
T
- tables, HBase
- designing / Designing tables
- design practices / Designing tables
- table scans, HBase / HBase table scans
- Thrift client
- about / The Thrift client
- starting / Getting started
- TimestampsFilter / Utility filters
- troubleshooting tools, HBase cluster
- jps / Troubleshooting
- jmap / Troubleshooting
- ps / Troubleshooting
- jstat / Troubleshooting
- truncate command / Data manipulation commands
- Twitter
- about / Use cases of HBase
- URL / Use cases of HBase
- types, Hadoop metrics frameworks
- long value / The HBase metrics framework
- integer value / The HBase metrics framework
- rate / The HBase metrics framework
- string / The HBase metrics framework
- time-varying integer / The HBase metrics framework
- time-varying long / The HBase metrics framework
- time-varying rate / The HBase metrics framework
- persistent time-varying rate / The HBase metrics framework
- types, HBase metrics framework
- integer value / The HBase metrics framework
- long value / The HBase metrics framework
- rate / The HBase metrics framework
- string / The HBase metrics framework
- time-varying integer / The HBase metrics framework
- time-varying long / The HBase metrics framework
- time-varying rate / The HBase metrics framework
- persistent time-varying rate / The HBase metrics framework
U
- update operation / CRUD using Kundera
- use cases, HBase
- content, handling / Use cases of HBase
- incremental data, handling / Use cases of HBase
- utility filters
- TimestampsFilter / Utility filters
- SingleColumnValueFilter / Utility filters
- SingleColumnValueExcludedFilter / Utility filters
- PageFilter / Utility filters
- ColumnCountGetFilter / Utility filters
- ColumnPaginationFilter / Utility filters
- RandomRowFilter / Utility filters
V
- ValueFilter / Comparison filters
- void addFamily(HColumnDescriptor family) method / Column family methods
- void setBlockCacheEnabled(boolean blockCacheEnabled) method / Other methods
W
- WALObserver type / The observer coprocessor
- WhileMatchFilter / Custom filters
- wrapper filters
- SkipFilter / Custom filters
- WhileMatchFilter / Custom filters
- write-ahead log (WAL) / Data modeling in HBase
X
- XML format
- using / The XML format
Y
- Yahoo
- URL / Use cases of HBase
- about / Use cases of HBase
Z
- ZooKeeper / Understanding HBase cluster components