====== Solr Operations ======
===== Web Interfaces =====
* http://solr01.picalike.corpex-kunden.de:8985/solr/#/
* http://solr02.picalike.corpex-kunden.de:8985/solr/#/
===== Git Repo =====
[[https://git.picalike.corpex-kunden.de/picalike/solr_feature_search|solr_feature_search Git]]
===== Usage =====
* Copy //config/live_config.sh// (or another config file from the //config// folder) to //config/config.sh//
* Run script [[https://git.picalike.corpex-kunden.de/picalike/solr_feature_search/-/blob/master/start_cluster.sh|start_cluster.sh]] to start a cluster.
* Queries can be sent via HTTP directly to the nodes, but should go through PySolr + ZooKeeper and the [[https://git.picalike.corpex-kunden.de/picalike/picalike_v5/-/blob/master/src/picalike_v5/solr_client.py|solr_client]]
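For manual debugging against a node, a select URL can be built by hand. This is a minimal sketch: the collection name //feature_sim_search// is taken from the collections webinterface link below, and the query parameters are plain standard Solr //select// parameters; production code should use the solr_client instead.

```python
# Hedged sketch: build a Solr /select URL for a manual curl/browser check.
# Collection name "feature_sim_search" comes from the collections view;
# prefer PySolr + ZooKeeper via solr_client for real queries.
from urllib.parse import urlencode

SOLR_NODE = "http://solr01.picalike.corpex-kunden.de:8985/solr"
COLLECTION = "feature_sim_search"

def select_url(query: str, rows: int = 10, fields: str = "*") -> str:
    """Return the full /select URL for a quick manual check."""
    params = urlencode({"q": query, "rows": rows, "fl": fields, "wt": "json"})
    return f"{SOLR_NODE}/{COLLECTION}/select?{params}"

print(select_url("*:*", rows=5))
```

Pasting the printed URL into a browser (or curl) returns the first five documents as JSON, which is enough to confirm a node is serving the index.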
===== Druid Knowledge =====
* The whole project is a soup of bash scripts that work //**most of the time**//
* If something does not work, double-check that you are restarting the ZooKeeper ensemble completely. Those processes do not die easily.
* Restarting the Docker container should preserve the index (data)
* If a node loses its data because of a hard crash and reset, it will copy the index from the remaining nodes. In that case, double-check the number of shards via [[http://solr01.picalike.corpex-kunden.de:8985/solr/#/~collections/feature_sim_search|webinterface->collections]] and delete all unnecessary ones
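The shard check and cleanup can also be done via the Solr Collections API instead of the webinterface. A hedged sketch, building the admin URLs only: the shard name passed to the delete helper is a placeholder, so read the real shard names from CLUSTERSTATUS first.

```python
# Hedged sketch: Collections API URLs for inspecting cluster state and
# deleting a leftover shard after a node rebuild. The shard name used
# below is a placeholder; take the real names from CLUSTERSTATUS output.
from urllib.parse import urlencode

ADMIN = "http://solr01.picalike.corpex-kunden.de:8985/solr/admin/collections"

def cluster_status_url(collection: str) -> str:
    """URL that lists shards and replicas for the collection as JSON."""
    return f"{ADMIN}?{urlencode({'action': 'CLUSTERSTATUS', 'collection': collection})}"

def delete_shard_url(collection: str, shard: str) -> str:
    """URL that deletes one shard. Note: DELETESHARD refuses active shards;
    those have to be brought down first."""
    return f"{ADMIN}?{urlencode({'action': 'DELETESHARD', 'collection': collection, 'shard': shard})}"

print(cluster_status_url("feature_sim_search"))
print(delete_shard_url("feature_sim_search", "shard2"))  # placeholder shard name
```

Fetching the CLUSTERSTATUS URL first and comparing the shard list against the expected layout avoids deleting a shard that is still serving data.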