The basic cluster setup involves editing the blur-site.properties and blur-env.sh files in the $BLUR_HOME/conf directory. It is recommended that a standalone ZooKeeper be set up. A modern version of Hadoop with append support is also required for proper data management (the write-ahead log requires the sync operation).

Caution

If you set up a standalone ZooKeeper you will need to configure Blur to NOT manage the ZooKeeper. To do so, edit the blur-env.sh file:
export BLUR_MANAGE_ZK=false

blur-site.properties

# The ZooKeeper connection string; consider adding a root path to the string, as it
# can help when upgrading Blur.
# Example: zknode1:2181,zknode2:2181,zknode3:2181/blur-0.2.0
#
# NOTE: If you provide the root path "/blur-0.2.0", that will have to be manually
# created before Blur will start.

blur.zookeeper.connection=127.0.0.1

# If you are only going to run a single shard cluster then leave this as default.

blur.cluster.name=default

# Sets the default table location in HDFS.  If left null or omitted, the table uri property in 
# the table descriptor will be required for all tables.

blur.cluster.default.table.uri=hdfs://namenode/blur/tables
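Note that if you use a root path such as "/blur-0.2.0" in the connection string, it must exist in ZooKeeper before Blur will start. A minimal sketch using the standard ZooKeeper CLI, assuming ZooKeeper is installed at $ZOOKEEPER_HOME and using the example host from above (adjust both to your environment):

# Create the root path once; the empty string is the znode's (unused) data
$ZOOKEEPER_HOME/bin/zkCli.sh -server zknode1:2181 create /blur-0.2.0 ""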

Hadoop

The current version of Blur has Hadoop 1.2.1 embedded in the "apache-blur-*/lib/hadoop-1.2.1" path. However, if you are using a different version of Hadoop, or want Blur to use the Hadoop configuration from your installed version, you will need to set the "HADOOP_HOME" environment variable in the "blur-env.sh" script found in "apache-blur-*/conf/".

# Edit the blur-env.sh
export HADOOP_HOME=<path to your Hadoop install directory>
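For example, if Hadoop is installed under /usr/lib/hadoop (a common location, though this path is only illustrative):

export HADOOP_HOME=/usr/lib/hadoop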

blur-site.properties

These are the default settings for the controller server that can be overridden in the blur-site.properties file. Consider increasing the various thread pool counts (*.thread.count). The blur.controller.server.remote.thread.count property is especially important to increase for larger clusters: roughly one thread is used per shard server per query. Some production clusters have set this thread pool to 2000 or more threads.
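For example, on a large cluster you might override the defaults shown below with something like the following (the thrift thread count here is purely illustrative; size both pools to your cluster and query load):

blur.controller.server.remote.thread.count=2000
blur.controller.server.thrift.thread.count=128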

# Sets the hostname for the controller, if blank the hostname is automatically detected
blur.controller.hostname=

# The binding address of the controller
blur.controller.bind.address=0.0.0.0

# The default binding port of the controller server
blur.controller.bind.port=40010

# The connection timeout in milliseconds. NOTE: this will be the maximum amount of time you can wait for a query.
blur.controller.shard.connection.timeout=60000

# The number of threads used for thrift requests
blur.controller.server.thrift.thread.count=32

# The number of threads used for remote thrift requests to
# the shard servers.  This should be a large number.
blur.controller.server.remote.thread.count=64

# The number of hits to fetch per request to the shard servers
blur.controller.remote.fetch.count=100

# The max number of retries to the shard server when there
# is an error during fetch
blur.controller.retry.max.fetch.retries=3

# The max number of retries to the shard server when there
# is an error during mutate
blur.controller.retry.max.mutate.retries=3

# The max number of retries to the shard server when there
# is an error during all other requests
blur.controller.retry.max.default.retries=3

# The starting backoff delay for the first retry after a
# fetch error
blur.controller.retry.fetch.delay=500

# The starting backoff delay for the first retry after a
# mutate error
blur.controller.retry.mutate.delay=500

# The starting backoff delay for the first retry after
# any other request error
blur.controller.retry.default.delay=500

# The ending backoff delay for the last retry after a
# fetch error
blur.controller.retry.max.fetch.delay=2000

# The ending backoff delay for the last retry after a
# mutate error
blur.controller.retry.max.mutate.delay=2000

# The ending backoff delay for the last retry after
# any other request error
blur.controller.retry.max.default.delay=2000

# The http status page port for the controller server
blur.gui.controller.port=40080

blur-env.sh

# JAVA JVM OPTIONS for the controller servers, jvm tuning parameters are placed here.
# Consider adding the -XX:OnOutOfMemoryError="kill -9 %p" option to kill jvms that are failing due to memory issues.
export BLUR_CONTROLLER_JVM_OPTIONS="-Xmx1024m -Djava.net.preferIPv4Stack=true "

# Time to sleep between controller server commands.
export BLUR_CONTROLLER_SLEEP=0.1

# The number of controller servers to spawn per machine.
export BLUR_NUMBER_OF_CONTROLLER_SERVER_INSTANCES_PER_MACHINE=1

Minimum Settings to Configure

It is highly recommended that the ulimits are increased on the server, specifically (an example configuration follows this list):

  • open files
  • max user processes
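A minimal sketch of raising both limits in /etc/security/limits.conf, assuming the Blur processes run as a user named "blur" (the user name and values are illustrative; tune them for your workload):

# /etc/security/limits.conf
blur  soft  nofile  65535
blur  hard  nofile  65535
blur  soft  nproc   32768
blur  hard  nproc   32768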

In Hadoop, dfs.datanode.max.xcievers (set in hdfs-site.xml) should be increased to at least 4096, if not more.
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
</property>

In blur-env.sh set the cache memory for the shard processes. DO NOT over-allocate; this will likely crash your server.
-XX:MaxDirectMemorySize=13g
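For example, as part of BLUR_SHARD_JVM_OPTIONS in blur-env.sh (the heap size shown is illustrative; sizing of the direct memory is covered in the Block Cache Configuration section below):

export BLUR_SHARD_JVM_OPTIONS="-Xmx4g -Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=13g "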

Caution

Swap can kill Java performance; you may want to consider disabling swap.
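On Linux, swap can be disabled with standard tools (requires root; remove or comment out swap entries in /etc/fstab to make the change survive a reboot):

# Disable swap immediately
swapoff -a
# Or, less drastically, reduce the kernel's tendency to swap
sysctl vm.swappiness=1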

blur-site.properties

These are the default settings for the shard server that can be overridden in the blur-site.properties file. Consider increasing the various thread pool counts (*.thread.count). The blur.max.clause.count property sets the BooleanQuery maximum clause count for Lucene queries; an example override follows.
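As a concrete example, queries that Lucene rewrites into many clauses (large wildcard or range expansions) will fail with a BooleanQuery.TooManyClauses error once they exceed the limit; raising it is a one-line override (the value is illustrative):

blur.max.clause.count=4096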

# The hostname for the shard, if blank the hostname is automatically detected
blur.shard.hostname=

# The binding address of the shard
blur.shard.bind.address=0.0.0.0

# The default binding port of the shard server
blur.shard.bind.port=40020

# The number of fetcher threads
blur.shard.data.fetch.thread.count=8

# The number of thrift threads
blur.shard.server.thrift.thread.count=8

# The number of threads that are used for opening indexes
blur.shard.opener.thread.count=8

# The number of cached queries
blur.shard.cache.max.querycache.elements=128

# The time to live for cached queries
blur.shard.cache.max.timetolive=60000

# Default implementation of the blur cache filter, which is 
# a pass through filter that does nothing
blur.shard.filter.cache.class=org.apache.blur.manager.DefaultBlurFilterCache

# Default Blur index warmup class that warms the fields provided
# in the table descriptor
blur.shard.index.warmup.class=org.apache.blur.manager.indexserver.DefaultBlurIndexWarmup

# Throttles the warmup to 30MB/s across all the warmup threads
blur.shard.index.warmup.throttle=30000000

# By default the block cache using off heap memory
blur.shard.blockcache.direct.memory.allocation=true

# By default the experimental block cache is off
blur.shard.experimental.block.cache=false

# The slabs in the block cache are automatically configured by 
# default (-1); otherwise 1 slab equals 128MB. The auto config
# is detected through the MaxDirectMemorySize provided to 
# the JVM
blur.shard.blockcache.slab.count=-1

# The number of 1K byte buffers
blur.shard.buffercache.1024=8192

# The number of 8K byte buffers
blur.shard.buffercache.8192=8192

# The number of milliseconds to wait for the cluster to settle
# once changes have ceased
blur.shard.safemodedelay=5000

# The default time between index commits
blur.shard.time.between.commits=30000

# The default time between index refreshes
blur.shard.time.between.refreshs=3000

# The maximum number of clauses in a BooleanQuery
blur.max.clause.count=1024

# The number of threads used for parallel searching in the index manager
blur.indexmanager.search.thread.count=8

# The number of threads used for parallel searching in the index searchers
blur.shard.internal.search.thread.count=8

# Number of threads used for warming up the index
blur.shard.warmup.thread.count=8

# The fetch count per Lucene search, this fetches pointers to hits
blur.shard.fetchcount=100

# Heap limit on a row fetch; once this limit has been reached the
# request will return
blur.max.heap.per.row.fetch=10000000

# The maximum number of records in a single row fetch
blur.max.records.per.row.fetch.request=1000

# The http status page port for the shard server
blur.gui.shard.port=40090

blur-env.sh

# JAVA JVM OPTIONS for the shard servers, jvm tuning parameters are placed here.
export BLUR_SHARD_JVM_OPTIONS="-Xmx1024m -Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=256m "

# Time to sleep between shard server commands.
export BLUR_SHARD_SLEEP=0.1

# The number of shard servers to spawn per machine.
export BLUR_NUMBER_OF_SHARD_SERVER_INSTANCES_PER_MACHINE=1

Block Cache Configuration

Why

HDFS is a great filesystem for streaming large amounts of data across large scale clusters. However, the random access latency is typically the same as reading from a local drive when the data you are trying to access is not in the operating system's file cache. In other words, every access to HDFS is similar to a local read with a cache miss. There have been great performance boosts in HDFS over the past few years, but it still can't perform at the level that a search engine needs.

Now you might be thinking that Lucene reads from the local hard drive and performs great, so why wouldn't HDFS perform fairly well on its own? The difference is that most of the time the Lucene index files are cached by the operating system's file system cache. So Blur has its own cache that allows it to perform low-latency data look-ups against HDFS.

How

On shard server start-up Blur creates one or more block cache slabs (blur.shard.blockcache.slab.count) that are each 128 MB in size. These slabs can be allocated on or off the heap (blur.shard.blockcache.direct.memory.allocation). Each slab is broken up into 16,384 blocks, with each block being 8K in size. A concurrent LRU cache on the heap then tracks which blocks of which files are in which slab(s) at what offset. So the more slabs of cache you create, the more entries there will be in the LRU, and thus the more heap is used.

Configuration

Scenario: Say the shard server(s) that you are planning to run Blur on have 32G of RAM. These machines are probably also running HDFS data nodes with a very high xcievers setting (dfs.datanode.max.xcievers in hdfs-site.xml), say 8K. Even if the data nodes are configured with 1G of heap, they may consume up to 4G of memory due to the high thread count that the xcievers setting allows. Next, let's say you configure Blur to 4G of heap as well, and you want to use 12G of off-heap cache.

Auto Configuration

In the blur-env.sh file you would need to change BLUR_SHARD_JVM_OPTIONS to include "-XX:MaxDirectMemorySize=12g" and possibly "-XX:+UseLargePages", depending on your Linux setup. If you leave blur.shard.blockcache.slab.count at the default of -1, the shard server start-up will detect the -XX:MaxDirectMemorySize setting and automatically use almost all of that memory. By default the JVM keeps 64m of direct memory in reserve, so Blur leaves at least that amount available to the JVM.
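As a quick sanity check of that arithmetic (each slab is 128 MB, as described above):

# 12 GB of direct memory divided into 128 MB slabs
echo $(( 12 * 1024 / 128 ))   # prints 96, the slab count used in the custom configuration below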

Custom Configuration

Again, in the blur-env.sh file you would need to change BLUR_SHARD_JVM_OPTIONS to include "-XX:MaxDirectMemorySize=13g" and possibly "-XX:+UseLargePages", depending on your Linux setup. I set MaxDirectMemorySize to more than 12G to make sure we don't hit the maximum limit and cause an OOM exception; this does not reserve 13G, it is only an upper bound that prevents allocating more than that. Below is a working example, which also contains GC logging and GC configuration:

export BLUR_SHARD_JVM_OPTIONS="-XX:MaxDirectMemorySize=13g \
            -XX:+UseLargePages \
            -Xms4g \
            -Xmx4g \
            -Xmn512m \
            -XX:+UseCompressedOops \
            -XX:+UseConcMarkSweepGC \
            -XX:+CMSIncrementalMode \
            -XX:CMSIncrementalDutyCycleMin=10 \
            -XX:CMSIncrementalDutyCycle=50 \
            -XX:ParallelGCThreads=8 \
            -XX:+UseParNewGC \
            -XX:MaxGCPauseMillis=200 \
            -XX:GCTimeRatio=10 \
            -XX:+DisableExplicitGC \
            -verbose:gc \
            -XX:+PrintGCDetails \
            -XX:+PrintGCDateStamps \
            -Xloggc:$BLUR_HOME/logs/gc-blur-shard-server_`date +%Y%m%d_%H%M%S`.log"

Next you will need to set up blur-site.properties by changing blur.shard.blockcache.slab.count to 96. This tells Blur to allocate 96 slabs of 128MB each (12G in total) at shard server start-up. Note that the first time you do this the shard servers may take a long time to allocate the memory. This is because the OS could be using most of that memory for its own filesystem caching and will need to unload it, which may cause some IO as the cache syncs to disk.

Also, blur.shard.blockcache.direct.memory.allocation is set to true by default; this tells the JVM to try to allocate the memory off heap. If you want to run the slabs in the heap (which is not recommended) set this value to false.
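Putting those two settings together, the custom configuration in blur-site.properties looks like this:

blur.shard.blockcache.slab.count=96
blur.shard.blockcache.direct.memory.allocation=true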

Metrics

Internally Blur uses the Metrics library from Coda Hale (http://metrics.codahale.com/), so by default all metrics are available through JMX. Below is a screenshot of what is available in the shard server.

Shard Server - MBean Screenshot

Configuring Other Reporters

New reporters can be added and configured in the blur-site.properties file. Multiple reporters can be configured.

Example

blur.metrics.reporters=GangliaReporter
blur.metrics.reporter.ganglia.period=3
blur.metrics.reporter.ganglia.unit=SECONDS
blur.metrics.reporter.ganglia.host=ganglia1
blur.metrics.reporter.ganglia.port=8649

Reporters to Enable

blur.metrics.reporters=[ConsoleReporter,CsvReporter,GangliaReporter,GraphiteReporter]

ConsoleReporter

blur.metrics.reporter.console.period=[5]
blur.metrics.reporter.console.unit=[NANOSECONDS,MICROSECONDS,MILLISECONDS,SECONDS,MINUTES,HOURS,DAYS]

CsvReporter

blur.metrics.reporter.csv.period=[5]
blur.metrics.reporter.csv.unit=[NANOSECONDS,MICROSECONDS,MILLISECONDS,SECONDS,MINUTES,HOURS,DAYS]
blur.metrics.reporter.csv.outputDir=[.]

GangliaReporter

blur.metrics.reporter.ganglia.period=[5]
blur.metrics.reporter.ganglia.unit=[NANOSECONDS,MICROSECONDS,MILLISECONDS,SECONDS,MINUTES,HOURS,DAYS]
blur.metrics.reporter.ganglia.host=[localhost]
blur.metrics.reporter.ganglia.port=[-1]
blur.metrics.reporter.ganglia.prefix=[""]
blur.metrics.reporter.ganglia.compressPackageNames=[false]

GraphiteReporter

blur.metrics.reporter.graphite.period=[5]
blur.metrics.reporter.graphite.unit=[NANOSECONDS,MICROSECONDS,MILLISECONDS,SECONDS,MINUTES,HOURS,DAYS]
blur.metrics.reporter.graphite.host=[localhost]
blur.metrics.reporter.graphite.port=[-1]
blur.metrics.reporter.graphite.prefix=[""]