CGSpace Notes

Documenting day-to-day work on the CGSpace repository.

October, 2024

2024-10-03

  • I filtered for journal articles that were Creative Commons and missing abstracts:
$ csvcut -c 'id,dc.title[en_US],dcterms.abstract[en_US],cg.identifier.doi[en_US],dcterms.type[en_US],dcterms.language[en_US],dcterms.license[en_US]' ~/Downloads/2024-09-30-cgspace.csv | csvgrep -c 'dcterms.type[en_US]' -r '^Journal Article$' | csvgrep -c 'cg.identifier.doi[en_US]' -r '^.+$' | csvgrep -c 'dcterms.license[en_US]' -r '^CC-' | csvgrep -c 'dcterms.abstract[en_US]' -r '^$' | csvgrep -c 'dcterms.language[en_US]' -r '^en$' | grep -v "||" | grep -v -- '-ND' | grep -v -E 'https://doi.org/10.(2499|4160|17528)/' > /tmp/missing-abstracts.csv
  • Then I wrote a script to get them from OpenAlex (a sketch of the kind of request involved is below)
    • After inspecting and cleaning a few dozen up in OpenRefine (removing “Keywords:”, copyright statements, HTML entities, etc) I managed to get about 440
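    • Not the exact script, but a minimal sketch of the request it makes, assuming curl and jq and using a placeholder DOI; OpenAlex returns the abstract as an inverted index (abstract_inverted_index) that has to be reassembled into plain text:
$ curl -s 'https://api.openalex.org/works/doi:10.1234/example' | jq -r '.abstract_inverted_index | to_entries | map(.key as $w | .value[] | {pos: ., word: $w}) | sort_by(.pos) | map(.word) | join(" ")'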

2024-10-06

  • Since I increased Solr’s heap from 2 to 3G a few weeks ago it seems like Solr is always using 100% CPU
    • I don’t understand this because it was running well before, and I only increased it in anticipation of running the dspace-statistics-api-js, though I never got around to it
    • I just realized that this may be related to the JMX monitoring, as I’ve seen gaps in the Grafana dashboards and remember that it took surprisingly long to scrape the metrics
    • Maybe I need to change the scrape interval (see the quick check after this list)
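    • One way to check that suspicion is the built-in scrape_duration_seconds metric for the jmx_exporter job; just a sketch, assuming a single-node VictoriaMetrics listening on its default port 8428 and the job being the one called jvm_solr:
$ curl -s 'http://localhost:8428/api/v1/query' --data-urlencode 'query=scrape_duration_seconds{job="jvm_solr"}' | jq '.data.result[] | {instance: .metric.instance, seconds: .value[1]}'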

2024-10-08

  • I checked the VictoriaMetrics vmagent dashboard and saw that there were thousands of errors scraping the jvm_solr target from Solr
    • So it seems like I do need to change the scrape interval
    • I will increase it from 15s (global) to 20s for that job
    • Reading some documentation I found this reference from Brian Brazil that discusses this very problem
    • He recommends keeping a single scrape interval for all targets, but also checking the slow exporter (jmx_exporter in this case) and seeing if we can limit the data we scrape
    • To keep things simple for now I will increase the global scrape interval to 20s
    • Long term I should limit the metrics…
    • Oh wow, I found out that Solr ships with a Prometheus exporter! It even includes a Grafana dashboard
  • I’m trying to run the Solr prometheus-exporter as a one-off systemd unit to test it:
# cd /opt/solr-8.11.3/contrib/prometheus-exporter
# systemd-run --uid=victoriametrics --gid=victoriametrics --working-directory=/opt/solr-8.11.3/contrib/prometheus-exporter ./bin/solr-exporter -p 9854 -b http://localhost:8983/solr -f ./conf/solr-exporter-config.xml -s 20
  • The default scrape interval is 60 seconds, so if we scrape it more often than that the metrics will be stale
    • From what I’ve seen this returns in less than one second, so it should be safe to reduce the scrape interval (a rough sketch of the vmagent config changes is below)
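    • Roughly, the changes to the vmagent scrape configuration (a Prometheus-compatible YAML file; the exact path and job name are assumptions, though port 9854 matches the command above) would look like this:
global:
  scrape_interval: 20s              # raised from the previous 15s
scrape_configs:
  - job_name: solr                  # Solr's bundled prometheus-exporter (job name is an assumption)
    static_configs:
      - targets: ['localhost:9854']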