The basic cluster setup involves editing the blur-env.sh and blur-site.properties files in the $BLUR_HOME/conf directory. It is recommended that a standalone ZooKeeper be set up. Also, a modern version of Hadoop with append support is required for proper data management (the write-ahead log requires the sync operation).


If you set up a standalone ZooKeeper you will need to configure Blur to NOT manage the ZooKeeper. You will need to edit the blur-env.sh file:
export BLUR_MANAGE_ZK=false

# The ZooKeeper connection string; consider adding a root path to the string, as it
# can help when upgrading Blur.
# Example: zknode1:2181,zknode2:2181,zknode3:2181/blur-0.2.0
# NOTE: If you provide the root path "/blur-0.2.0", that will have to be manually
# created before Blur will start.
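For example, assuming a stock ZooKeeper install, the root path from the example above can be created with the zkCli.sh tool before Blur is started (zknode1 is just the host from the example connection string):

# From any machine with the ZooKeeper client scripts installed:
zkCli.sh -server zknode1:2181
# then, inside the zkCli shell:
create /blur-0.2.0 ""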


# If you are only going to run a single shard cluster then leave this as default.

# Sets the default table location in HDFS.  If left null or omitted, the table uri property in 
# the table descriptor will be required for all tables.



The current version of Blur has Hadoop 1.2.1 embedded in the "apache-blur-*/lib/hadoop-1.2.1" path. However, if you are using a different version of Hadoop, or want Blur to use the Hadoop configuration from your installed version, you will need to set the "HADOOP_HOME" environment variable in the "blur-env.sh" script found in "apache-blur-*/conf/".

# Edit the blur-env.sh script and set:
export HADOOP_HOME=<path to your Hadoop install directory>

These are the default settings for the controller server that can be overridden in the blur-site.properties file. Consider increasing the various thread pool counts (*.thread.count). The blur.controller.server.remote.thread.count is particularly important to increase for larger clusters; roughly one thread is used per shard server per query. Some production clusters have set this thread pool to 2000 or more threads.
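For example, a larger cluster might raise the remote thread pool in blur-site.properties (2000 is simply the figure mentioned above, not a universal recommendation):

blur.controller.server.remote.thread.count=2000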

# Sets the hostname for the controller, if blank the hostname is automatically detected

# The binding address of the controller

# The default binding port of the controller server

# The connection timeout, NOTE: this will be the maximum amount of time you can wait for a query.

# The number of threads used for thrift requests

# The number of threads used for remote thrift requests to
# the shard servers.  This should be a large number.

# The number of hits to fetch per request to the shard servers

# The max number of retries to the shard server when there
# is an error during fetch

# The max number of retries to the shard server when there
# is an error during mutate

# The max number of retries to the shard server when there
# is an error during all other requests

# The starting backoff delay for the first retry for
# fetch errors

# The starting backoff delay for the first retry for
# mutate errors

# The starting backoff delay for the first retry for
# all other request errors

# The ending backoff delay for the last retry for
# fetch errors

# The ending backoff delay for the last retry for
# mutate errors

# The ending backoff delay for the last retry for
# all other request errors

# The http status page port for the controller server

# JAVA JVM OPTIONS for the controller servers, jvm tuning parameters are placed here.
# Consider adding the -XX:OnOutOfMemoryError="kill -9 %p" option to kill JVMs that are failing due to memory issues.

# Time to sleep between controller server commands.

# The number of controller servers to spawn per machine.

Minimum Settings to Configure

It is highly recommended that the ulimits are increased on the servers, specifically:

  • open files
  • max user processes

In Hadoop, dfs.datanode.max.xcievers should be increased to at least 4096, if not more (see the sketch below).
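A sketch of both changes, assuming a dedicated blur user and illustrative limit values (tune these for your hardware). In /etc/security/limits.conf (or an equivalent limits.d file):

blur  soft  nofile  64000
blur  hard  nofile  64000
blur  soft  nproc   32000
blur  hard  nproc   32000

And in hdfs-site.xml on the data nodes, matching the minimum suggested above:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>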

In blur-env.sh, set the cache memory for the shard processes. DO NOT over-allocate; doing so will likely crash your server.


Swap can kill Java performance; you may want to consider disabling swap.
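One common approach on Linux is to turn swap off entirely, or at least to discourage swapping; for example:

# Disable swap immediately (remove swap entries from /etc/fstab to make it permanent)
sudo swapoff -a

# Or, as a softer option, lower swappiness in /etc/sysctl.conf
vm.swappiness = 1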

These are the default settings for the shard server that can be overridden in the blur-site.properties file. Consider increasing the various thread pool counts (*.thread.count). Also, blur.max.clause.count sets the BooleanQuery maximum clause count for Lucene queries.
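For example, in blur-site.properties (1024 mirrors Lucene's stock BooleanQuery limit and is shown only as an illustration; raise it if your queries need more clauses):

blur.max.clause.count=1024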

# The hostname for the shard, if blank the hostname is automatically detected

# The binding address of the shard

# The default binding port of the shard server

# The number of fetcher threads

# The number of thrift threads

# The number of threads that are used for opening indexes

# The number of cached queries

# The time to live for cached queries

# Default implementation of the blur cache filter, which is 
# a pass through filter that does nothing

# Default Blur index warmup class that warms the fields provided
# in the table descriptor

# Throttles the warmup to 30MB/s across all the warmup threads

# By default the block cache uses off-heap memory

# By default the experimental block cache is off

# The slabs in the block cache are automatically configured by 
# default (-1); otherwise 1 slab equals 128MB. The auto config
# is detected through the MaxDirectMemorySize provided to 
# the JVM

# The number of 1K byte buffers

# The number of 8K byte buffers

# The number of milliseconds to wait for the cluster to settle
# once changes have ceased

# The default time between index commits

# The default time between index refreshes

# The maximum number of clauses in a BooleanQuery

# The number of threads used for parallel searching in the index manager

# The number of threads used for parallel searching in the index searchers

# Number of threads used for warming up the index

# The fetch count per Lucene search, this fetches pointers to hits

# Heap limit on row fetch, once this limit has been reached the
# request will return

# The maximum number of records in a single row fetch

# The http status page port for the shard server

# JAVA JVM OPTIONS for the shard servers, jvm tuning parameters are placed here.
export BLUR_SHARD_JVM_OPTIONS="-Xmx1024m -XX:MaxDirectMemorySize=256m "

# Time to sleep between shard server commands.

# The number of shard servers to spawn per machine.

Block Cache Configuration


HDFS is a great filesystem for streaming large amounts of data across large-scale clusters. However, its random access latency is typically what you would get reading from a local drive when the data you are trying to access is not in the operating system's file cache. In other words, every access to HDFS is similar to a local read with a cache miss. There have been great performance boosts in HDFS over the past few years, but it still can't perform at the level that a search engine needs.

Now you might be thinking that Lucene reads from the local hard drive and performs great, so why wouldn't HDFS perform fairly well on its own? The difference is that most of the time the Lucene index files are cached by the operating system's file system cache. So Blur has its own file system cache that allows it to perform low-latency data look-ups against HDFS.


On shard server start-up Blur creates 1 or more block cache slabs (blur.shard.blockcache.slab.count) that are each 128 MB in size. These slabs can be allocated on or off the heap. Each slab is broken up into 16,384 blocks, with each block being 8K in size. On the heap there is a concurrent LRU cache that tracks which blocks of which files are in which slab(s) at what offset. So the more slabs of cache you create, the more entries there will be in the LRU and thus the more heap is used.
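As a quick sanity check on those numbers:

16,384 blocks x 8 KB per block = 131,072 KB = 128 MB per slab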


Scenario: Say the shard server(s) that you are planning to run Blur on have 32G of RAM. These machines are probably also running HDFS data nodes with a very high xcievers setting (dfs.datanode.max.xcievers in hdfs-site.xml), say 8K. If the data nodes are configured with 1G of heap they may still consume up to 4G of memory due to the high thread count caused by the xcievers. Next, let's say you configure Blur to 4G of heap as well, and you want to use 12G of off-heap cache.
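A rough breakdown of that memory budget (the 96-slab figure follows from 12 GB / 128 MB per slab and is reused in the custom configuration below):

32 GB total RAM
 -  4 GB  HDFS data node (1 GB heap plus xciever thread overhead)
 -  4 GB  Blur shard JVM heap
 - 12 GB  off-heap block cache  (12 GB / 128 MB per slab = 96 slabs)
 = ~12 GB remaining for the OS and its file system cache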

Auto Configuration

In the blur-env.sh file you would need to change BLUR_SHARD_JVM_OPTIONS to include "-XX:MaxDirectMemorySize=12g" and possibly "-XX:+UseLargePages", depending on your Linux setup. If you leave blur.shard.blockcache.slab.count at the default of -1, the shard server start-up will automatically detect the -XX:MaxDirectMemorySize value and use almost all of that memory. The JVM keeps 64m of direct memory in reserve by default, so Blur leaves at least that amount available to the JVM.
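A minimal sketch of that blur-env.sh change for this scenario (heap sizes taken from the example above; adjust for your hardware):

export BLUR_SHARD_JVM_OPTIONS="-Xmx4g -Xms4g -XX:MaxDirectMemorySize=12g -XX:+UseLargePages"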

Custom Configuration

Again, in the blur-env.sh file you would need to change BLUR_SHARD_JVM_OPTIONS to include "-XX:MaxDirectMemorySize=13g" and possibly "-XX:+UseLargePages", depending on your Linux setup. The MaxDirectMemorySize is set to more than 12G to make sure we don't hit the maximum limit and cause an OOM exception; this does not reserve 13G, it is simply a cap that disallows anything more. Below is a working example that also contains GC logging and GC configuration:

export BLUR_SHARD_JVM_OPTIONS="-XX:MaxDirectMemorySize=13g \
            -XX:+UseLargePages \
            -Xms4g \
            -Xmx4g \
            -Xmn512m \
            -XX:+UseCompressedOops \
            -XX:+UseConcMarkSweepGC \
            -XX:+CMSIncrementalMode \
            -XX:CMSIncrementalDutyCycleMin=10 \
            -XX:CMSIncrementalDutyCycle=50 \
            -XX:ParallelGCThreads=8 \
            -XX:+UseParNewGC \
            -XX:MaxGCPauseMillis=200 \
            -XX:GCTimeRatio=10 \
            -XX:+DisableExplicitGC \
            -verbose:gc \
            -XX:+PrintGCDetails \
            -XX:+PrintGCDateStamps \
            -Xloggc:$BLUR_HOME/logs/gc-blur-shard-server_`date +%Y%m%d_%H%M%S`.log"

Next you will need to edit blur-site.properties, changing blur.shard.blockcache.slab.count to 96. This tells Blur to allocate 96 128MB slabs of memory at shard server start-up. Note that the first time you do this the shard servers may take a long time to allocate the memory. This is because the OS could be using most of that memory for its own file system caching and will need to unload it, which may cause some IO as the cache syncs to disk.
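For this scenario the relevant line in blur-site.properties would be:

blur.shard.blockcache.slab.count=96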

Also, the direct memory allocation setting is true by default; this tells the JVM to try to allocate the slab memory off heap. If you want to run the slabs in the heap (which is not recommended), set this value to false.
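Assuming the property name blur.shard.blockcache.direct.memory.allocation (verify against the defaults shipped in your blur-site.properties), on-heap slabs would be selected like this:

# Not recommended; assumed property name, check your blur-site.properties defaults
blur.shard.blockcache.direct.memory.allocation=false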

Internally Blur uses the Metrics library from Coda Hale, so by default all metrics are available through JMX. Here is a screenshot of what is available in the shard server.

Shard Server - MBean Screenshot

Configuring Other Reporters

New reporters can be added and configured in the configuration file. Multiple reporters can be configured.



Reporters to Enable