Copy config/live_config.sh (or another config file from the config folder) to config/config.sh.
Run the script start_cluster.sh to start a cluster.
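A minimal sketch of those two steps, assuming the scripts sit in the repository root and start_cluster.sh is executable:

```bash
# Use the live config as the active config (any other file from config/ works too)
cp config/live_config.sh config/config.sh

# Bring up the cluster with the project's start script
./start_cluster.sh
```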
Queries can be sent via HTTP directly to the nodes, but should preferably go through PySolr+Zookeeper and the solr_client.
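For completeness, a hedged sketch of the plain-HTTP route against a single node; the hostname, port and collection name are placeholders, and the preferred path remains PySolr+Zookeeper through the solr_client:

```bash
# Query one Solr node directly over HTTP (host, port and collection are assumptions)
curl "http://solr-node1:8983/solr/my_collection/select?q=*:*&rows=10&wt=json"
```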
Druid Knowledge
The whole project is a soup of bash scripts that work most of the time
If something does not work, double-check that you are restarting the Zookeeper ensemble completely. Those fuckers don't die easily.
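A rough sketch of how to verify that nothing from the old ensemble is still alive before bringing it back up; the container name filter and port are assumptions about this setup:

```bash
# Any Zookeeper processes still running on this host?
pgrep -af zookeeper

# Any containers whose name contains "zookeeper" still up? (name is an assumption)
docker ps --filter "name=zookeeper"

# Is anything still answering on the Zookeeper client port?
echo ruok | nc localhost 2181   # prints "imok" if a node is alive (4lw commands may need whitelisting)
```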
Restarting the docker container will/should preserve the index (data).
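For example, restarting a node container in place keeps its writable layer and volumes, so the index should survive (the container name is an assumption):

```bash
# Restart a single Solr node container without removing it
docker restart solr-node1
```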
If one node loses its data because of a hard crash and reset, it will copy the index from the remaining nodes. In that case, double-check the number of shards via web interface -> Collections and delete all unnecessary ones.
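The same check and cleanup can also be done from the shell via the Solr Collections API instead of the web UI; host, collection and shard names below are placeholders, and DELETESHARD only works on shards that are inactive or have no hash range:

```bash
# Inspect the cluster state and count the shards of each collection
curl "http://solr-node1:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json"

# Drop a leftover shard (collection/shard names are assumptions)
curl "http://solr-node1:8983/solr/admin/collections?action=DELETESHARD&collection=my_collection&shard=shard2"
```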